Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
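For a single active linear inequality, the hard-constraint method described above reduces to a closed-form projection of the unconstrained estimate: minimize the covariance-weighted distance to the estimate subject to the constraint. A minimal sketch, assuming one constraint a^T x <= b and the P-weighted QP objective (names and numbers are illustrative, not the paper's turbofan model):

```python
import numpy as np

def constrain_estimate(x_hat, P, a, b):
    """Project the unconstrained Kalman estimate x_hat onto the
    half-space a^T x <= b, minimizing the P^{-1}-weighted distance
    (the QP of the constrained filter, specialized to one inequality)."""
    slack = a @ x_hat - b
    if slack <= 0:          # constraint already satisfied: no change
        return x_hat
    # Closed-form QP solution when exactly this constraint is active
    Pa = P @ a
    return x_hat - Pa * (slack / (a @ Pa))

x_hat = np.array([1.5, -0.3])            # unconstrained estimate
P = np.array([[0.5, 0.1], [0.1, 0.2]])   # estimate error covariance
a = np.array([1.0, 0.0])                 # enforce x[0] <= 1.0
x_c = constrain_estimate(x_hat, P, a, 1.0)
```

With these numbers the first component is pulled back to the boundary while the second shifts slightly because of the covariance coupling.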
Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints
NASA Technical Reports Server (NTRS)
Calise, A. J.; Corban, J. E.
1990-01-01
The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.
Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2006-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).
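The truncation idea has a simple one-dimensional core: cut the Gaussian of the estimate at the constraint and take the mean of what remains. A minimal sketch using the standard inverse-Mills-ratio formula for a one-sided constraint (not the paper's full multivariate machinery):

```python
import math

def truncated_mean(mu, sigma, lower):
    """Mean of a N(mu, sigma^2) density truncated to [lower, inf)."""
    alpha = (lower - mu) / sigma
    pdf = math.exp(-0.5 * alpha * alpha) / math.sqrt(2.0 * math.pi)
    tail = 0.5 * math.erfc(alpha / math.sqrt(2.0))  # P(Z >= alpha)
    return mu + sigma * pdf / tail

# Estimate at 0 with unit variance, constrained to be nonnegative:
m = truncated_mean(0.0, 1.0, 0.0)
```

For a zero-mean unit-variance estimate constrained nonnegative, the constrained estimate is sqrt(2/pi), about 0.80, rather than 0.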
State-dependent rotations of spins by weak measurements
NASA Astrophysics Data System (ADS)
Miller, D. J.
2011-03-01
It is shown that a weak measurement of a quantum system produces a new state of the quantum system which depends on the prior state, as well as the (uncontrollable) measured position of the pointer variable of the weak-measurement apparatus. The result imposes a constraint on hidden-variable theories which assign a different state to a quantum system than standard quantum mechanics. The constraint means that a crypto-nonlocal hidden-variable theory can be ruled out in a more direct way than previously done.
Kalman Filter Constraint Tuning for Turbofan Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2005-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints are often neglected because they do not fit easily into the structure of the Kalman filter. Recently published work has shown a new method for incorporating state variable inequality constraints in the Kalman filter, which generally improves the filter's estimation accuracy. However, the incorporation of inequality constraints poses some risk to estimation accuracy, because the unconstrained Kalman filter is already theoretically optimal. This paper proposes a way to tune the filter constraints so that the state estimates follow the unconstrained (theoretically optimal) filter when confidence in the unconstrained filter is high. When confidence in the unconstrained filter is low, we use our heuristic knowledge to constrain the state estimates. The confidence measure is based on the agreement of the measurement residuals with their theoretical values. The algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate engine health.
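A toy version of residual-based tuning can be sketched as follows: compute the normalized innovation squared (NIS) and blend the constrained and unconstrained estimates according to how well the residuals match their theoretical covariance. The threshold and blending rule below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def constraint_weight(residual, S, threshold=3.0):
    """Confidence that the unconstrained filter is healthy, from the
    normalized innovation squared r^T S^{-1} r.  `threshold` is an
    illustrative tuning knob."""
    nis = float(residual @ np.linalg.solve(S, residual))
    # weight 1 -> trust the unconstrained estimate; toward 0 -> constrain
    return 1.0 if nis <= threshold else threshold / nis

def blended_estimate(x_unc, x_con, residual, S):
    w = constraint_weight(residual, S)
    return w * x_unc + (1.0 - w) * x_con

# Small residual: follow the unconstrained (theoretically optimal) filter
x = blended_estimate(np.array([2.0]), np.array([1.0]),
                     np.array([0.1]), np.array([[1.0]]))
```

When the residual is small relative to its theoretical covariance, the blend returns the unconstrained estimate unchanged; a large residual shifts weight toward the constrained estimate.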
Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering
NASA Technical Reports Server (NTRS)
Simon, Dan; Simon, Donald L.
2003-01-01
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
NASA Technical Reports Server (NTRS)
Markopoulos, N.; Calise, A. J.
1993-01-01
The class of all piecewise time-continuous controllers tracking a given hypersurface in the state space of a dynamical system can be split by the present transformation technique into two disjoint classes; while the first of these contains all controllers which track the hypersurface in finite time, the second contains all controllers that track the hypersurface asymptotically. On this basis, a reformulation is presented for optimal control problems involving state-variable inequality constraints. If the state constraint is regarded as 'soft', there may exist controllers which are asymptotic, two-sided, and able to yield the optimal value of the performance index.
On Matrices, Automata, and Double Counting
NASA Astrophysics Data System (ADS)
Beldiceanu, Nicolas; Carlsson, Mats; Flener, Pierre; Pearson, Justin
Matrix models are ubiquitous for constraint problems. Many such problems have a matrix of variables M, with the same constraint defined by a finite-state automaton A on each row of M and a global cardinality constraint gcc on each column of M. We give two methods for deriving, by double counting, necessary conditions on the cardinality variables of the gcc constraints from the automaton A. The first method yields linear necessary conditions and simple arithmetic constraints. The second method introduces the cardinality automaton, which abstracts the overall behaviour of all the row automata and can be encoded by a set of linear constraints. We evaluate the impact of our methods on a large set of nurse rostering problem instances.
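The simplest double-counting condition underlying the paper is that, for any value, the occurrences counted row-wise must equal the occurrences counted column-wise (and hence the sum of the column gcc cardinality variables). A toy check, not the full automaton-based derivation:

```python
def double_count(matrix, value):
    """Count occurrences of `value` two ways: summing per-row counts
    and summing per-column counts.  Any consistent assignment must make
    the two totals equal -- the basic necessary condition behind the
    derived linear constraints (illustrative toy)."""
    row_total = sum(row.count(value) for row in matrix)
    cols = [list(col) for col in zip(*matrix)]
    col_total = sum(col.count(value) for col in cols)
    return row_total, col_total

M = [[0, 1, 1],
     [1, 1, 0]]
r, c = double_count(M, 1)   # both totals count the same four 1s
```

In a constraint model, the column totals would be the gcc cardinality variables, so this equality becomes a linear constraint that propagation can exploit.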
Improved Sensitivity Relations in State Constrained Optimal Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bettiol, Piernicola, E-mail: piernicola.bettiol@univ-brest.fr; Frankowska, Hélène, E-mail: frankowska@math.jussieu.fr; Vinter, Richard B., E-mail: r.vinter@imperial.ac.uk
2015-04-15
Sensitivity relations in optimal control provide an interpretation of the costate trajectory and the Hamiltonian, evaluated along an optimal trajectory, in terms of gradients of the value function. While sensitivity relations are a straightforward consequence of standard transversality conditions for state constraint free optimal control problems formulated in terms of control-dependent differential equations with smooth data, their verification for problems with either pathwise state constraints, nonsmooth data, or for problems where the dynamic constraint takes the form of a differential inclusion, requires careful analysis. In this paper we establish validity of both ‘full’ and ‘partial’ sensitivity relations for an adjoint state of the maximum principle, for optimal control problems with pathwise state constraints, where the underlying control system is described by a differential inclusion. The partial sensitivity relation interprets the costate in terms of partial Clarke subgradients of the value function with respect to the state variable, while the full sensitivity relation interprets the couple, comprising the costate and Hamiltonian, as the Clarke subgradient of the value function with respect to both time and state variables. These relations are distinct because, for nonsmooth data, the partial Clarke subdifferential does not coincide with the projection of the (full) Clarke subdifferential on the relevant coordinate space. We show for the first time (even for problems without state constraints) that a costate trajectory can be chosen to satisfy the partial and full sensitivity relations simultaneously.
The partial sensitivity relation in this paper is new for state constraint problems, while the full sensitivity relation improves on earlier results in the literature (for optimal control problems formulated in terms of Lipschitz continuous multifunctions), because a less restrictive inward pointing hypothesis is invoked in the proof, and because it is validated for a stronger set of necessary conditions.
General constraints on sampling wildlife on FIA plots
Bailey, L.L.; Sauer, J.R.; Nichols, J.D.; Geissler, P.H.; McRoberts, Ronald E.; Reams, Gregory A.; Van Deusen, Paul C.; McWilliams, William H.; Cieszewski, Chris J.
2005-01-01
This paper reviews the constraints to sampling wildlife populations at FIA points. Wildlife sampling programs must have well-defined goals and provide information adequate to meet those goals. Investigators should choose a state variable based on information needs and the spatial sampling scale. We discuss estimation-based methods for three state variables: species richness, abundance, and patch occupancy. All methods incorporate two essential sources of variation: detectability estimation and spatial variation. FIA sampling imposes specific space and time criteria that may need to be adjusted to meet local wildlife objectives.
State-constrained booster trajectory solutions via finite elements and shooting
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans
1993-01-01
This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
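The dual root-finding step can be illustrated on a scalar toy problem where the risk of the primal minimizer is monotone in the dual variable, so bisection finds the dual value at which the risk bound is met exactly. The cost and risk functions below are invented for illustration, not taken from the paper:

```python
import math

def u_star(lam):
    """Minimizer of the dualized objective u + lam*exp(-5u), u in [0,1]
    (toy problem: u is 'effort', exp(-5u) is the failure risk)."""
    if 5.0 * lam <= 1.0:
        return 0.0
    return min(1.0, math.log(5.0 * lam) / 5.0)

def risk(lam):
    """Risk achieved by the primal minimizer; monotone in lam."""
    return math.exp(-5.0 * u_star(lam))

def solve_dual(delta, lo=1e-6, hi=1e6, iters=200):
    """Bisection on the dual variable so the achieved risk meets the
    chance bound delta."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if risk(mid) > delta:
            lo = mid        # risk too high: increase the penalty
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_dual(0.05)      # 5% risk bound
```

For this toy, the interior solution gives risk(lam) = 1/(5*lam), so a 5% bound is met exactly at lam = 4.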
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production-level structural analysis program, and user-supplied, problem-dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.
A Framework of Covariance Projection on Constraint Manifold for Data Fusion.
Bakr, Muhammad Abu; Lee, Sukhan
2018-05-17
A general framework of data fusion is presented based on projecting the probability distribution of true states and measurements around the predicted states and actual measurements onto the constraint manifold. The constraint manifold represents the constraints to be satisfied among true states and measurements, which is defined in the extended space with all the redundant sources of data such as state predictions and measurements considered as independent variables. By the general framework, we mean that it is able to fuse any correlated data sources while directly incorporating constraints and identifying inconsistent data without any prior information. The proposed method, referred to here as the Covariance Projection (CP) method, provides an unbiased and optimal solution in the sense of minimum mean square error (MMSE), if the projection is based on the minimum weighted distance on the constraint manifold. The proposed method not only offers a generalization of the conventional formula for handling constraints and data inconsistency, but also provides a new insight into data fusion in terms of a geometric-algebraic point of view. Simulation results are provided to show the effectiveness of the proposed method in handling constraints and data inconsistency.
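For a linear equality constraint among the fused states, the minimum-weighted-distance projection has a closed form identical in shape to a Kalman update with a noiseless pseudo-measurement. A sketch of this linear-Gaussian special case of the projection idea (the data are invented):

```python
import numpy as np

def project_on_constraint(x, Sigma, A, b):
    """Minimum Sigma^{-1}-weighted-distance projection of estimate x
    (covariance Sigma) onto the constraint manifold A x = b.  Sketch of
    the covariance-projection idea for linear equality constraints."""
    S = A @ Sigma @ A.T
    K = Sigma @ A.T @ np.linalg.inv(S)
    x_c = x + K @ (b - A @ x)            # constrained estimate
    Sigma_c = Sigma - K @ A @ Sigma      # reduced covariance
    return x_c, Sigma_c

# Two redundant sources whose true states must sum to 1 (toy constraint)
x = np.array([0.7, 0.6])
Sigma = np.diag([0.04, 0.09])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_c, Sigma_c = project_on_constraint(x, Sigma, A, b)
```

The projected estimate satisfies the constraint exactly, and the covariance collapses to zero along the constraint direction, reflecting that the constraint is certain.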
Guidance and flight control law development for hypersonic vehicles
NASA Technical Reports Server (NTRS)
Calise, A. J.; Markopoulos, N.
1993-01-01
During the third reporting period our efforts were focused on a reformulation of the optimal control problem involving active state-variable inequality constraints. In the reformulated problem the optimization is carried out not with respect to all controllers, but only with respect to asymptotic controllers leading to the state constraint boundary. Intimately connected with the traditional formulation is the fact that when the reduced solution for such problems lies on a state constraint boundary, the corresponding boundary layer transitions are of finite time in the stretched time scale. Thus, it has been impossible so far to apply the classical asymptotic boundary layer theory to such problems. Moreover, the traditional formulation leads to optimal controllers that are one-sided, that is, they break down when a disturbance throws the system on the prohibited side of the state constraint boundary.
Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables
NASA Technical Reports Server (NTRS)
Sedlak, Joseph E.; Harman, Richard
2004-01-01
There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.
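The mechanics of the reduced six-component error state can be sketched generically: build an orthonormal basis of the subspace orthogonal to the constraint gradient and express the (singular) seven-dimensional covariance there, where it becomes nonsingular. The unit-norm-style constraint below is an illustrative stand-in; Markley's actual constraint and Jacobian differ:

```python
import numpy as np

def reduced_covariance(x, P):
    """Given a 7-element state subject to one scalar constraint whose
    gradient is taken here (illustratively) along x itself, build an
    orthonormal basis B of the 6-D tangent space and return the
    nonsingular reduced error covariance B^T P B."""
    g = x / np.linalg.norm(x)            # unit constraint gradient
    # SVD of the 1x7 gradient: rows 1..6 of Vt span its orthogonal complement
    _, _, Vt = np.linalg.svd(g.reshape(1, -1))
    B = Vt[1:].T                         # 7 x 6, orthonormal columns
    return B, B.T @ P @ B

x = np.ones(7)
P = np.eye(7) - np.outer(x, x) / 7.0     # singular along the constraint
B, P6 = reduced_covariance(x, P)
```

The full covariance here has rank 6 (singular along the constraint direction), while the reduced 6x6 covariance is full rank, which is the property the filter's update step relies on.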
PlanWorks: A Debugging Environment for Constraint Based Planning Systems
NASA Technical Reports Server (NTRS)
Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will
2005-01-01
Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support queries for specialized views displaying different forms of data (e.g. constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA2) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA2. We then describe PlanWorks' principal components, discuss each component in detail, and describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.
On the use of internal state variables in thermoviscoplastic constitutive equations
NASA Technical Reports Server (NTRS)
Allen, D. H.; Beek, J. M.
1985-01-01
The general theory of internal state variables is reviewed with the aim of applying it to inelastic metals used in high-temperature environments. In this process, certain constraints and clarifications are made regarding internal state variables. It is shown that the Helmholtz free energy can be utilized to construct constitutive equations which are appropriate for metallic superalloys. Internal state variables are shown to represent locally averaged measures of dislocation arrangement, dislocation density, and intergranular fracture. The internal state variable model is demonstrated to be a suitable framework for comparison of several currently proposed models for metals and can therefore be used to exhibit history dependence, nonlinearity, and rate as well as temperature sensitivity.
Comparative study of flare control laws. [optimal control of b-737 aircraft approach and landing
NASA Technical Reports Server (NTRS)
Nadkarni, A. A.; Breedlove, W. J., Jr.
1979-01-01
A digital 3-D automatic control law was developed to achieve an optimal transition of a B-737 aircraft between various initial glide-slope conditions and the desired final touchdown condition. A discrete, time-invariant, optimal, closed-loop control law, presented for a linear regulator problem, was extended to include a system being acted upon by a constant disturbance. Two forms of control laws were derived to solve this problem. One method utilized the feedback of appropriately defined integral states, augmented with the original system equations. The second method formulated the problem as a control variable constraint, and the control variables were augmented with the original system. The control-variable-constraint law yielded better performance than the integral-state feedback law.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Water-resources optimization model for Santa Barbara, California
Nishikawa, Tracy
1998-01-01
A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carry over [the storage of water in one year for use in later years], head constraints, and capacity constraints was tested.
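For a single month and two sources, the structure of such a cost-minimizing supply LP is simple enough to sketch directly; with one demand constraint and per-source capacities, filling sources in cost order is optimal. All numbers below are invented, not the city's data:

```python
def plan_deliveries(demand, costs, capacities):
    """One-period toy of a water-supply LP: meet `demand` at minimum
    cost from sources with unit costs and capacities.  Greedy fill by
    cost, which is optimal for this single-constraint special case."""
    delivery = {}
    remaining = demand
    for name in sorted(costs, key=costs.get):   # cheapest source first
        take = min(remaining, capacities[name])
        delivery[name] = take
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("capacity constraints make demand infeasible")
    return delivery

plan = plan_deliveries(
    demand=120.0,
    costs={"surface": 1.0, "groundwater": 3.0},       # cost per unit
    capacities={"surface": 100.0, "groundwater": 50.0},
)
```

The real model adds head constraints from the groundwater simulation, which is what makes the linked simulation-optimization approach necessary.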
Transient times in linear metabolic pathways under constant affinity constraints.
Lloréns, M; Nuño, J C; Montero, F
1997-10-15
In the early seventies, Easterby began the analytical study of transition times for linear reaction schemes [Easterby (1973) Biochim. Biophys. Acta 293, 552-558]. In this pioneering work and in subsequent papers, a state function (the transient time) was used to measure the period before the stationary state was reached, for systems constrained to work under both constant and variable input flux. Despite the undoubted usefulness of this quantity for describing the time-dependent features of such systems, its application to the study of chemical reactions under other constraints is questionable. In the present work, a generalization of these magnitudes to linear metabolic pathways functioning under a constant-affinity constraint is carried out. It is proved that the classical definitions of transient times do not reflect the actual properties of the transition to the steady state in systems evolving under this restriction. Alternatively, a more adequate framework for interpreting the transient times of systems with both constant and variable input flux is suggested. Within this context, new definitions that reflect more accurately the transient characteristics of constant-affinity systems are stated. Finally, the meaning of these transient times is discussed.
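Easterby's transient time for a pathway under constant input flux is the steady-state intermediate pool divided by the steady-state flux. A minimal numerical check on a one-intermediate pathway S -> X -> P, where the result should be 1/k (parameters invented):

```python
def transient_time(v0=2.0, k=0.5, dt=1e-3, t_end=40.0):
    """Easterby transient time tau = X_ss / J_ss for the pathway
    S -> X -> P with constant input flux v0 and first-order removal
    rate k.  Forward-Euler integration to (near) steady state."""
    x = 0.0
    for _ in range(int(t_end / dt)):
        x += dt * (v0 - k * x)      # dX/dt = v0 - k*X
    return x / v0                   # steady-state pool / steady-state flux

tau = transient_time()              # analytically, tau = 1/k = 2.0
```

This reproduces the classical result; the paper's point is that under a constant-affinity constraint (rather than constant input flux) this classical definition no longer characterizes the transition correctly.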
Finite element solution of optimal control problems with state-control inequality constraints
NASA Technical Reports Server (NTRS)
Bless, Robert R.; Hodges, Dewey H.
1992-01-01
It is demonstrated that the weak Hamiltonian finite-element formulation is amenable to the solution of optimal control problems with inequality constraints which are functions of both state and control variables. Difficult problems can be treated on account of the ease with which algebraic equations can be generated before having to specify the problem. These equations yield very accurate solutions. Owing to the sparse structure of the resulting Jacobian, computer solutions can be obtained quickly when the sparsity is exploited.
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
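A simplified version of preemptive constraining might clip state estimates to known physical bounds and adjust the covariance of any clamped component so later corrections start from a constraint-consistent estimate. The covariance heuristic below is an assumption for illustration; the exact logic is not given in the abstract:

```python
import numpy as np

def preemptive_constrain(x, P, lower, upper):
    """Clamp state estimates to [lower, upper] and zero the covariance
    rows/columns of any clamped component, leaving a small diagonal
    proxy for the clamping distance.  Illustrative stand-in for a
    preemptive-constraining step."""
    x_c = np.clip(x, lower, upper)
    P_c = P.copy()
    for i in range(len(x)):
        if x[i] != x_c[i]:                   # component was clamped
            P_c[i, :] = 0.0
            P_c[:, i] = 0.0
            P_c[i, i] = (x[i] - x_c[i]) ** 2  # residual uncertainty proxy
    return x_c, P_c

x, P = np.array([-0.2, 1.1]), np.eye(2)
x_c, P_c = preemptive_constrain(x, P, lower=0.0, upper=2.0)
```

Only the violating component is touched; in the IGCC setting the bounds would come from physical limits on plant variables such as temperatures and flows.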
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are among the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving them. The formula for the state at the terminal time of each subdomain is derived, resulting in a linear combination of the state at the LG points in the subdomains that avoids complex nonlinear integrals. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail with the reported literature methods. The results show the effectiveness of the proposed method.
Decision theory for computing variable and value ordering decisions for scheduling problems
NASA Technical Reports Server (NTRS)
Linden, Theodore A.
1993-01-01
Heuristics that guide search are critical when solving large planning and scheduling problems, but most variable and value ordering heuristics are sensitive to only one feature of the search state. One wants to combine evidence from all features of the search state into a subjective probability that a value choice is best, but there has been no solid semantics for merging evidence when it is conceived in these terms. Instead, variable and value ordering decisions should be viewed as problems in decision theory. This led to two key insights: (1) The fundamental concept that allows heuristic evidence to be merged is the net incremental utility that will be achieved by assigning a value to a variable. Probability distributions about net incremental utility can merge evidence from the utility function, binary constraints, resource constraints, and other problem features. The subjective probability that a value is the best choice is then derived from probability distributions about net incremental utility. (2) The methods used for rumor control in Bayesian Networks are the primary way to prevent cycling in the computation of probable net incremental utility. These insights lead to semantically justifiable ways to compute heuristic variable and value ordering decisions that merge evidence from all available features of the search state.
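The core computation, turning per-value distributions of net incremental utility into a subjective probability that each value is the best choice, can be sketched by Monte Carlo. The Gaussian utility models and numbers below are illustrative assumptions, not the paper's evidence-merging machinery:

```python
import random

def prob_best(utility_params, n=20000, seed=0):
    """Subjective probability that each candidate value yields the
    highest net incremental utility, with each value's utility modeled
    as a Gaussian (mu, sigma).  Monte Carlo sketch."""
    rng = random.Random(seed)
    wins = [0] * len(utility_params)
    for _ in range(n):
        draws = [rng.gauss(mu, sigma) for mu, sigma in utility_params]
        wins[draws.index(max(draws))] += 1
    return [w / n for w in wins]

# Value A clearly dominates, B is dominated, C is a high-variance long shot
p = prob_best([(1.0, 0.1), (0.0, 0.1), (0.2, 0.5)])
```

A value-ordering heuristic would then try values in decreasing order of this probability, which is how evidence from several features can be merged into one ranking.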
Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems
NASA Technical Reports Server (NTRS)
Balling, R. J.; Wilkinson, C. A.
1997-01-01
A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.
Constrained model predictive control, state estimation and coordination
NASA Astrophysics Data System (ADS)
Yan, Jun
In this dissertation, we study the interaction between the control performance and the quality of the state estimation in a constrained Model Predictive Control (MPC) framework for systems with stochastic disturbances. This consists of three parts: (i) the development of a constrained MPC formulation that adapts to the quality of the state estimation via constraints; (ii) the application of such a control law in a multi-vehicle formation coordinated control problem in which each vehicle operates subject to a no-collision constraint posed by others' imperfect prediction computed from finite bit-rate, communicated data; (iii) the design of the predictors and the communication resource assignment problem that satisfy the performance requirement from Part (ii). Model Predictive Control (MPC) is of interest because it is one of the few control design methods which preserves standard design variables and yet handles constraints. MPC is normally posed as a full-state feedback control and is implemented in a certainty-equivalence fashion with best estimates of the states being used in place of the exact state. However, if the state constraints were handled in the same certainty-equivalence fashion, the resulting control law could drive the real state to violate the constraints frequently. Part (i) focuses on exploring the inclusion of state estimates into the constraints. It does this by applying constrained MPC to a system with stochastic disturbances. The stochastic nature of the problem requires re-posing the constraints in a probabilistic form. In Part (ii), we consider applying constrained MPC as a local control law in a coordinated control problem of a group of distributed autonomous systems. Interactions between the systems are captured via constraints. First, we inspect the application of constrained MPC to a completely deterministic case. 
Formation stability theorems are derived for the subsystems, and conditions on the local constraint set are derived that guarantee local stability or convergence to a target state. If these conditions are met for all subsystems, then this stability is inherited by the overall system. For the case when each subsystem suffers from disturbances in its dynamics, noise in its own measurements, and quantization errors on neighbors' information due to the finite-bit-rate channels, the constrained MPC strategy developed in Part (i) is appropriate. In Part (iii), we discuss the local predictor design and bandwidth assignment problem in a coordinated vehicle formation context. The MPC controller used in Part (ii) relates the formation control performance to the information quality in such a way that a large standoff implies conservative performance. We first develop an LMI (Linear Matrix Inequality) formulation for cross-estimator design in a simple two-vehicle scenario with non-standard information: one vehicle does not have access to the other's exact control value applied at each sampling time, but to its known, pre-computed, coupling linear feedback control law. Then a similar LMI problem is formulated for the bandwidth assignment problem that minimizes the total number of bits by adjusting the prediction gain matrices and the number of bits assigned to each variable. (Abstract shortened by UMI.)
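The probabilistic re-posing of state constraints described in Part (i) can be sketched in one line of arithmetic. Assuming Gaussian estimation error (an assumption of this illustration, not necessarily the dissertation's setting), enforcing a chance constraint on the true state amounts to tightening the bound applied to the estimate by a quantile-scaled back-off; all names below are illustrative.

```python
from statistics import NormalDist

# Chance-constraint sketch: with state x = x_hat + e, e ~ N(0, sigma^2),
# requiring P(x <= x_max) >= 1 - alpha is equivalent to the tightened
# deterministic constraint x_hat <= x_max - z_{1-alpha} * sigma.
def tightened_bound(x_max, sigma, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha)  # standard-normal quantile, ~1.645
    return x_max - z * sigma

# With x_max = 10 and estimation std 0.5, the MPC plans against roughly 9.18,
# so a poorer state estimate (larger sigma) forces more conservative control.
b = tightened_bound(10.0, 0.5)
```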
Ahlfeld, David P.; Barlow, Paul M.; Baker, Kristine M.
2011-01-01
Many groundwater-management problems are concerned with the control of one or more variables that reflect the state of a groundwater-flow system or a coupled groundwater/surface-water system. These system state variables include the distribution of heads within an aquifer, streamflow rates within a hydraulically connected stream, and flow rates into or out of aquifer storage. This report documents the new State Variables Package for the Groundwater-Management Process of MODFLOW-2005 (GWM-2005). The new package provides a means to explicitly represent heads, streamflows, and changes in aquifer storage as state variables in a GWM-2005 simulation. The availability of these state variables makes it possible to include system state in the objective function and enhances existing capabilities for constructing constraint sets for a groundwater-management formulation. The new package can be used to address groundwater-management problems such as the determination of withdrawal strategies that meet water-supply demands while simultaneously maximizing heads or streamflows, or minimizing changes in aquifer storage. Four sample problems are provided to demonstrate use of the new package for typical groundwater-management applications.
NASA Astrophysics Data System (ADS)
Truckenbrodt, Sina C.; Gómez-Dans, José; Stelmaszczuk-Górska, Martyna A.; Chernetskiy, Maxim; Schmullius, Christiane C.
2017-04-01
Throughout the past decades various satellite sensors have been launched that record reflectance in the optical domain and facilitate comprehensive monitoring of the vegetation-covered land surface from space. The interaction of photons with the canopy, leaves and soil that determines the spectrum of reflected sunlight can be simulated with radiative transfer models (RTMs). The inversion of RTMs permits the derivation of state variables such as leaf area index (LAI) and leaf chlorophyll content from top-of-canopy reflectance. Space-borne data are, however, insufficient for an unambiguous derivation of state variables and additional constraints are required to resolve this ill-posed problem. Data assimilation techniques permit the conflation of various information with due allowance for associated uncertainties. The Earth Observation Land Data Assimilation System (EO-LDAS) integrates RTMs into a dynamic process model that describes the temporal evolution of state variables. In addition, prior information is included to further constrain the inversion and enhance the state variable derivation. In previous studies on EO-LDAS, prior information was represented by temporally constant values for all investigated state variables, while information about their phenological evolution was neglected. Here, we examine to what extent the implementation of prior information reflecting the phenological variability improves the performance of EO-LDAS with respect to the monitoring of crops on the agricultural Gebesee test site (Central Germany). Various routines for the generation of prior information are tested. This involves the usage of data on state variables that was acquired in previous years as well as the application of phenological models. The performance of EO-LDAS with the newly implemented prior information is tested based on medium resolution satellite imagery (e.g., RapidEye REIS, Sentinel-2 MSI, Landsat-7 ETM+ and Landsat-8 OLI). 
The predicted state variables are validated against in situ data from the Gebesee test site that were acquired with a weekly to fortnightly resolution throughout the growing seasons of 2010, 2013, 2014 and 2016. Furthermore, the results are compared with the outcome of using constant values as prior information. In this presentation, the EO-LDAS scheme and results obtained from different prior information are presented.
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
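The constraint-aggregation step described above can be sketched concretely. The interface names and the tiny system below are assumptions for illustration: rows of A x <= b are partitioned into groups and each group is replaced by one convex combination of its rows, so any x feasible for the original rows remains feasible for the coarse rows (the coarse model is a relaxation).

```python
import numpy as np

# Aggregate inequality constraints A x <= b group-by-group with convex weights.
def aggregate(A, b, groups, weights):
    A_c = np.array([w @ A[g] for g, w in zip(groups, weights)])
    b_c = np.array([w @ b[g] for g, w in zip(groups, weights)])
    return A_c, b_c

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
groups = [[0, 1], [2]]                       # partition of the constraint rows
weights = [np.array([0.5, 0.5]), np.array([1.0])]  # each sums to 1
A_c, b_c = aggregate(A, b, groups, weights)  # 2 coarse rows instead of 3
```

In the paper's scheme, violated original constraints are added back to such a coarse model iteratively until the semi-coarse model's constraints are all satisfied.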
Optimization of Water Resources and Agricultural Activities for Economic Benefit in Colorado
NASA Astrophysics Data System (ADS)
LIM, J.; Lall, U.
2017-12-01
The limited water resources available for irrigation are a key constraint for the important agricultural sector of Colorado's economy. As climate change and groundwater depletion reshape these resources, it is essential to understand the economic potential of water resources under different agricultural production practices. This study uses a linear programming optimization at the county spatial scale and annual temporal scales to study the optimal allocation of water withdrawal and crop choices. The model, AWASH, reflects streamflow constraints between different extraction points, six field crops, and a distinct irrigation decision for maize and wheat. The optimized decision variables, under different environmental, social, economic, and physical constraints, provide long-term solutions for ground and surface water distribution and for land use decisions so that the state can generate the maximum net revenue. Colorado, one of the largest agricultural producers, is tested as a case study and the sensitivity on water price and on climate variability is explored.
Relaxation in control systems of subdifferential type
NASA Astrophysics Data System (ADS)
Tolstonogov, A. A.
2006-02-01
In a separable Hilbert space we consider a control system with evolution operators that are subdifferentials of a proper convex lower semicontinuous function depending on time. The constraint on the control is given by a multivalued function with non-convex values that is lower semicontinuous with respect to the variable states. Along with the original system we consider the system in which the constraint on the control is the upper semicontinuous convex-valued regularization of the original constraint. We study relations between the solution sets of these systems. As an application we consider a control variational inequality. We give an example of a control system of parabolic type with an obstacle.
ERIC Educational Resources Information Center
Del Razo, Parvati Heliana
2012-01-01
The purpose of this study was to find out if the demographic variables of country of origin, generation in the United States (immigration status), income and parental education had an impact on the financial aid packages of Hispanic undergraduate students. This dissertation asked: What is the relation between generation in the United States,…
Farias, Ariel A; Jaksic, Fabian M
2007-03-01
1. Within mainstream ecological literature, functional structure has been viewed as resulting from the interplay of species interactions, resource levels and environmental variability. Classical models state that interspecific competition generates species segregation and guild formation in stable saturated environments, whereas opportunism causes species aggregation on abundant resources in variable unsaturated situations. 2. Nevertheless, intrinsic functional constraints may result in species-specific differences in resource-use capabilities. This could force some degree of functional structure without assuming other putative causes. However, the influence of such constraints has rarely been tested, and their relative contribution to observed patterns has not been quantified. 3. We used a multiple null-model approach to quantify the magnitude and direction (non-random aggregation or divergence) of the functional structure of a vertebrate predator assemblage exposed to variable prey abundance over an 18-year period. Observed trends were contrasted with predictions from null-models designed in an orthogonal fashion to account independently for the effects of functional constraints and opportunism. Subsequently, the unexplained variation was regressed against environmental variables to search for evidence of interspecific competition. 4. Overall, null-models accounting for functional constraints showed the best fit to the observed data, and suggested an effect of this factor in modulating predator opportunistic responses. However, regression models on residual variation indicated that such an effect was dependent on both total and relative abundance of principal (small mammals) and alternative (arthropods, birds, reptiles) prey categories. 5. In addition, no clear evidence for interspecific competition was found, but differential delays in predator functional responses could explain some of the unaccounted variation. 
Thus, we call for caution when interpreting empirical data in the context of classical models assuming synchronous responses of consumers to resource levels.
A Framework for Dynamic Constraint Reasoning Using Procedural Constraints
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Frank, Jeremy D.
1999-01-01
Many complex real-world decision and control problems contain an underlying constraint reasoning problem. This is particularly evident in a recently developed approach to planning, where almost all planning decisions are represented by constrained variables. This translates a significant part of the planning problem into a constraint network whose consistency determines the validity of the plan candidate. Since higher-level choices about control actions can add or remove variables and constraints, the underlying constraint network is invariably highly dynamic. Arbitrary domain-dependent constraints may be added to the constraint network and the constraint reasoning mechanism must be able to handle such constraints effectively. Additionally, real problems often require handling constraints over continuous variables. These requirements present a number of significant challenges for a constraint reasoning mechanism. In this paper, we introduce a general framework for handling dynamic constraint networks with real-valued variables, by using procedures to represent and effectively reason about general constraints. The framework is based on a sound theoretical foundation, and can be proven to be sound and complete under well-defined conditions. Furthermore, the framework provides hybrid reasoning capabilities, as alternative solution methods like mathematical programming can be incorporated into the framework, in the form of procedures.
Landscape Encodings Enhance Optimization
Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.
2012-01-01
Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860
Centrifugal compressor fault diagnosis based on qualitative simulation and thermal parameters
NASA Astrophysics Data System (ADS)
Lu, Yunsong; Wang, Fuli; Jia, Mingxing; Qi, Yuanchen
2016-12-01
This paper concerns fault diagnosis of centrifugal compressors based on thermal parameters. An improved qualitative simulation (QSIM) based fault diagnosis method is proposed to diagnose faults of a centrifugal compressor in a gas-steam combined-cycle power plant (CCPP). Qualitative models under normal and two faulty conditions have been built through analysis of the operating principle of the centrifugal compressor. To address the problem of qualitatively describing the observations of system variables, a qualitative trend extraction algorithm is applied to extract the trends of the observations. For qualitative state matching, a sliding-window based matching strategy is proposed that combines constraints on the variables' operating ranges with qualitative constraints. The matching results are used to determine which QSIM model is most consistent with the running state of the system. The correct diagnosis of two typical faults, seal leakage and valve sticking, in the centrifugal compressor validates the targeted performance of the proposed method, showing the advantage of exploiting the fault root causes contained in thermal parameters.
NASA Astrophysics Data System (ADS)
Brauer, Uwe; Karp, Lavi
This paper deals with the construction of initial data for the coupled Einstein-Euler system. We consider the condition where the energy density might vanish or tend to zero at infinity, and where the pressure is a fractional power of the energy density. In order to achieve our goals we use a type of weighted Sobolev space of fractional order. The common Lichnerowicz-York scaling method (Choquet-Bruhat and York, 1980 [9]; Cantor, 1979 [7]) for solving the constraint equations cannot be applied here directly. The basic problem is that the matter sources are scaled conformally and the fluid variables have to be recovered from the conformally transformed matter sources. This problem has been addressed, although in a different context, by Dain and Nagy (2002) [11]. We show that if the matter variables are restricted to a certain region, then the Einstein constraint equations have a unique solution in the weighted Sobolev spaces of fractional order. The regularity depends upon the fractional power of the equation of state.
Reinterpreting maximum entropy in ecology: a null hypothesis constrained by ecological mechanism.
O'Dwyer, James P; Rominger, Andrew; Xiao, Xiao
2017-07-01
Simplified mechanistic models in ecology have been criticised for the fact that a good fit to data does not imply the mechanism is true: pattern does not equal process. In parallel, the maximum entropy principle (MaxEnt) has been applied in ecology to make predictions constrained by just a handful of state variables, like total abundance or species richness. But an outstanding question remains: what principle tells us which state variables to constrain? Here we attempt to solve both problems simultaneously, by translating a given set of mechanisms into the state variables to be used in MaxEnt, and then using this MaxEnt theory as a null model against which to compare mechanistic predictions. In particular, we identify the sufficient statistics needed to parametrise a given mechanistic model from data and use them as MaxEnt constraints. Our approach isolates exactly what mechanism is telling us over and above the state variables alone. © 2017 John Wiley & Sons Ltd/CNRS.
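The MaxEnt step the abstract builds on can be illustrated with the simplest possible constraint set. This toy sketch (the constraint choice, names, and ranges are illustrative, not the paper's parametrisation) constrains only the mean abundance over n = 1..N, which yields P(n) proportional to exp(-lam*n); the Lagrange multiplier is found by bisection on the monotone mean.

```python
import numpy as np

# MaxEnt over abundances n = 1..N with the single constraint <n> = mu.
def maxent_abundance(mu, N=1000):
    n = np.arange(1, N + 1)
    def mean(lam):                      # mean of P(n) ~ exp(-lam * n)
        p = np.exp(-lam * n)
        return (p / p.sum()) @ n
    lo, hi = 1e-6, 10.0                 # mean(lam) falls from ~N/2 toward 1
    for _ in range(100):                # bisection for mean(lam) = mu
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean(mid) > mu else (lo, mid)
    p = np.exp(-0.5 * (lo + hi) * n)
    return p / p.sum()

p = maxent_abundance(mu=10.0)           # close to a geometric distribution
```

Adding further sufficient statistics as constraints, as the paper proposes, changes only the exponent's form, which is what makes the resulting distribution a clean null model for the chosen mechanism.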
Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito
2016-01-01
Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474
NASA Astrophysics Data System (ADS)
Hobbs, J.; Turmon, M.; David, C. H.; Reager, J. T., II; Famiglietti, J. S.
2017-12-01
NASA's Western States Water Mission (WSWM) combines remote sensing of the terrestrial water cycle with hydrological models to provide high-resolution state estimates for multiple variables. The effort includes both land surface and river routing models that are subject to several sources of uncertainty, including errors in the model forcing and model structural uncertainty. Computational and storage constraints prohibit extensive ensemble simulations, so this work outlines efficient but flexible approaches for estimating and reporting uncertainty. Calibrated by remote sensing and in situ data where available, we illustrate the application of these techniques in producing state estimates with associated uncertainties at kilometer-scale resolution for key variables such as soil moisture, groundwater, and streamflow.
A heuristic constraint programmed planner for deep space exploration problems
NASA Astrophysics Data System (ADS)
Jiang, Xiao; Xu, Rui; Cui, Pingyuan
2017-10-01
In recent years, the increasing number of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In the planning procedure, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods, one for variables and one for values, and on this basis propose a graphplan-like constraint-programmed planner. In the planner we convert the traditional constraint satisfaction problem into a time-tagged form with different levels. Inspired by the most-constrained-first principle in constraint satisfaction problems (CSP), the variable heuristic is based on the number of unassigned variables in each constraint and the value heuristic on the completion degree of the support set. Simulation experiments show that the proposed planner is effective and that its performance is competitive with other kinds of planners.
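The most-constrained-first idea invoked above has a standard CSP form, sketched here with invented data structures (this is the generic heuristic, not the planner's actual time-tagged implementation): pick the variable with the fewest remaining legal values, and try its values ordered by how little they conflict with neighbouring domains.

```python
# Most-constrained-variable selection: smallest remaining domain first.
def select_variable(domains, assignment):
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

# Value ordering: values appearing in fewer neighbouring domains prune less.
def order_values(var, domains, neighbors):
    def conflicts(val):
        return sum(val in domains[n] for n in neighbors.get(var, ()))
    return sorted(domains[var], key=conflicts)

domains = {"x": {1, 2, 3}, "y": {2, 3}, "z": {1, 3, 4}}
neighbors = {"y": ["x", "z"]}
v = select_variable(domains, {})            # "y": the most constrained variable
vals = order_values(v, domains, neighbors)  # try 2 before 3 (fewer conflicts)
```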
Representativeness-based sampling network design for the State of Alaska
Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove
2013-01-01
Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...
META II Complex Systems Design and Analysis (CODA)
2011-08-01
[Front-matter extraction residue: table of contents and list of figures, including the sections "Variables, Parameters and Constraints" and "Objective"; and the figures "Inputs, States, Outputs and Parameters of System Requirements Specifications", "Design Rule Based on Device Parameter", and "AEE Device Design Rules (excerpt)".]
Ontology and modeling patterns for state-based behavior representation
NASA Technical Reports Server (NTRS)
Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.;
2015-01-01
This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints consists of the equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
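VMACO itself is FORTRAN 77, but the problem class it solves (least value of a function of n variables under equality constraints listed first, then inequalities) can be set up with SciPy's SLSQP as a rough modern analogue. The objective and constraints below are made-up illustrations, not VMACO test cases.

```python
from scipy.optimize import minimize

# Minimize f(x) subject to one equality and one inequality constraint.
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = (
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},  # equality first
    {"type": "ineq", "fun": lambda x: x[0]},               # then x0 >= 0
)
res = minimize(f, x0=[2.0, 0.0], method="SLSQP", constraints=cons)
# the analytic optimum is x = (0.75, 2.25): the projection of (1, 2.5)
# onto the line x0 + x1 = 3, with the inequality inactive
```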
Ahlfeld, David P.; Barlow, Paul M.; Mulligan, Anne E.
2005-01-01
GWM is a Ground-Water Management Process for the U.S. Geological Survey modular three-dimensional ground-water model, MODFLOW-2000. GWM uses a response-matrix approach to solve several types of linear, nonlinear, and mixed-binary linear ground-water management formulations. Each management formulation consists of a set of decision variables, an objective function, and a set of constraints. Three types of decision variables are supported by GWM: flow-rate decision variables, which are withdrawal or injection rates at well sites; external decision variables, which are sources or sinks of water that are external to the flow model and do not directly affect the state variables of the simulated ground-water system (heads, streamflows, and so forth); and binary variables, which have values of 0 or 1 and are used to define the status of flow-rate or external decision variables. Flow-rate decision variables can represent wells that extend over one or more model cells and be active during one or more model stress periods; external variables also can be active during one or more stress periods. A single objective function is supported by GWM, which can be specified to either minimize or maximize the weighted sum of the three types of decision variables. Four types of constraints can be specified in a GWM formulation: upper and lower bounds on the flow-rate and external decision variables; linear summations of the three types of decision variables; hydraulic-head based constraints, including drawdowns, head differences, and head gradients; and streamflow and streamflow-depletion constraints. The Response Matrix Solution (RMS) Package of GWM uses the Ground-Water Flow Process of MODFLOW to calculate the change in head at each constraint location that results from a perturbation of a flow-rate variable; these changes are used to calculate the response coefficients.
For linear management formulations, the resulting matrix of response coefficients is then combined with other components of the linear management formulation to form a complete linear formulation; the formulation is then solved by use of the simplex algorithm, which is incorporated into the RMS Package. Nonlinear formulations arise for simulated conditions that include water-table (unconfined) aquifers or head-dependent boundary conditions (such as streams, drains, or evapotranspiration from the water table). Nonlinear formulations are solved by sequential linear programming; that is, repeated linearization of the nonlinear features of the management problem. In this approach, response coefficients are recalculated for each iteration of the solution process. Mixed-binary linear (or mildly nonlinear) formulations are solved by use of the branch and bound algorithm, which is also incorporated into the RMS Package. Three sample problems are provided to demonstrate the use of GWM for typical ground-water flow management problems. These sample problems provide examples of how GWM input files are constructed to specify the decision variables, objective function, constraints, and solution process for a GWM run. The GWM Process runs with the MODFLOW-2000 Global and Ground-Water Flow Processes, but in its current form GWM cannot be used with the Observation, Sensitivity, Parameter-Estimation, or Ground-Water Transport Processes. The GWM Process is written with a modular structure so that new objective functions, constraint types, and solution algorithms can be added.
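Once the response coefficients are computed, a linear GWM formulation is an ordinary linear program. The following toy analogue (all numbers invented, not a GWM sample problem) maximizes total withdrawal subject to response-matrix drawdown limits at two head-constraint sites plus per-well rate bounds; `linprog` minimizes, so the objective is negated.

```python
from scipy.optimize import linprog

R = [[0.02, 0.01],   # assumed response coefficients: drawdown at each of two
     [0.01, 0.03]]   # constraint sites per unit withdrawal at each of two wells
res = linprog(c=[-1.0, -1.0],            # maximize q1 + q2
              A_ub=R, b_ub=[1.0, 1.0],   # drawdown limits: R @ q <= 1 at both sites
              bounds=[(0, 40), (0, 40)], # per-well pumping capacity
              method="highs")
q_opt = res.x                            # optimal withdrawal rates
```

Here the optimum sits where well 1 hits its capacity bound and both drawdown constraints bind, mirroring the bound, summation, and head-based constraint types GWM supports.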
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and that the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
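The squared-slack conversion discussed above is mechanical: the inequality g(x) <= 0 becomes the equality g(x) + s**2 = 0 in the augmented variables (x, s), adding one dimension but leaving only equality constraints. A minimal sketch with invented problem data:

```python
from scipy.optimize import minimize

# Minimize (x - 2)^2 subject to x <= 1, rewritten with a squared slack:
# the constraint x <= 1 becomes (x - 1) + s^2 = 0 over z = (x, s).
f = lambda z: (z[0] - 2.0) ** 2                 # objective ignores the slack
eq = {"type": "eq", "fun": lambda z: (z[0] - 1.0) + z[1] ** 2}
res = minimize(f, x0=[0.0, 1.0], method="SLSQP", constraints=[eq])
# at the optimum the original inequality is active: x -> 1, slack s -> 0
```

Note the constraint Jacobian's slack column is 2s, which vanishes when the constraint is active; this is the singularity concern the abstract argues need not be fatal in practice.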
Optimal control problems with mixed control-phase variable equality and inequality constraints
NASA Technical Reports Server (NTRS)
Makowski, K.; Neustadt, L. W.
1974-01-01
In this paper, necessary conditions are obtained for optimal control problems containing equality constraints defined in terms of functions of the control and phase variables. The control system is assumed to be characterized by an ordinary differential equation, and more conventional constraints, including phase inequality constraints, are also assumed to be present. Because the first-mentioned equality constraint must be satisfied for all t (the independent variable of the differential equation) belonging to an arbitrary (prescribed) measurable set, this problem gives rise to infinite-dimensional equality constraints. To obtain the necessary conditions, which are in the form of a maximum principle, an implicit-function-type theorem in Banach spaces is derived.
Preliminary constraints on variable w dark energy cosmologies from the SNLS
NASA Astrophysics Data System (ADS)
Carlberg, R. G.; Conley, A.; Howell, D. A.; Neill, J. D.; Perrett, K.; Pritchet, C. J.; Sullivan, M.
2005-12-01
The first 71 confirmed type Ia supernovae from the Supernova Legacy Survey, being conducted with CFHT imaging and Gemini, VLT and Keck spectroscopy, set limits on variable dark energy cosmological models. For a generalized Chaplygin gas, in which the dark energy content varies as (1-Ω_M)/ρ^a, we find that a is statistically consistent with zero, with a best fit a = -0.2 ± 0.3 (68% confidence). Further reduction of the statistical and systematic errors requires a further refinement of the photometric calibration and of the potential model biases. A variable dark energy equation of state with w = w_0 + w_1 z shows the expected degeneracy between increasingly positive w_0 and negative w_1. The existing data rule out the parameters of the Weller & Linder (2002) supergravity-inspired model cosmology, (w_0, w_1) = (-0.81, 0.31). The full ~700 SNe Ia of the completed survey will provide a statistical error limit on w_1 of about 0.2 and significant constraints on variable-w models. The Canadian NSERC provided funding for the scientific analysis. These results are based on observations obtained at the CFHT, Gemini, VLT and Keck observatories.
Estimate of Shock-Hugoniot Adiabat of Liquids from Hydrodynamics
NASA Astrophysics Data System (ADS)
Bouton, E.; Vidal, P.
2007-12-01
Shock states are generally obtained from shock velocity (D) and material velocity (u) measurements. In this paper, we propose a hydrodynamical method for estimating the (D-u) relation of Nitromethane from easily measured properties of the initial state. The method is based upon the differentiation of the Rankine-Hugoniot jump relations with the initial temperature considered as a variable and under the constraint of a unique nondimensional shock-Hugoniot. We then obtain an ordinary differential equation for the shock velocity D in the variable u. Upon integration, this method predicts the shock Hugoniot of liquid Nitromethane with a 5% accuracy for initial temperatures ranging from 250 K to 360 K.
Optimal orbit transfer suitable for large flexible structures
NASA Technical Reports Server (NTRS)
Chatterjee, Alok K.
1989-01-01
The problem of continuous low-thrust planar orbit transfer of large flexible structures is formulated as an optimal control problem with terminal state constraints. The dynamics of the spacecraft motion are treated as a point-mass central force field problem; the thrust-acceleration magnitude is treated as an additional state variable; and the rate of change of thrust-acceleration is treated as a control variable. To ensure smooth transfer, essential for flexible structures, an additional quadratic term is appended to the time cost functional. This term penalizes any abrupt change in acceleration. Numerical results are presented for the special case of a planar transfer.
Control Augmented Structural Synthesis
NASA Technical Reports Server (NTRS)
Lust, Robert V.; Schmit, Lucien A.
1988-01-01
A methodology for control augmented structural synthesis is proposed for a class of structures which can be modeled as an assemblage of frame and/or truss elements. It is assumed that both the plant (structure) and the active control system dynamics can be adequately represented with a linear model. The structural sizing variables, active control system feedback gains and nonstructural lumped masses are treated simultaneously as independent design variables. Design constraints are imposed on static and dynamic displacements, static stresses, actuator forces and natural frequencies to ensure acceptable system behavior. Multiple static and dynamic loading conditions are considered. Side constraints imposed on the design variables protect against the generation of unrealizable designs. While the proposed approach is fundamentally more general, here the methodology is developed and demonstrated for the case where: (1) the dynamic loading is harmonic and thus the steady state response is of primary interest; (2) direct output feedback is used for the control system model; and (3) the actuators and sensors are collocated.
Discrete optimal control approach to a four-dimensional guidance problem near terminal areas
NASA Technical Reports Server (NTRS)
Nagarajan, N.
1974-01-01
Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.
Variable spreading layer in 4U 1608-52 during thermonuclear X-ray bursts in the soft state
NASA Astrophysics Data System (ADS)
Kajava, J. J. E.; Koljonen, K. I. I.; Nättilä, J.; Suleimanov, V.; Poutanen, J.
2017-11-01
Thermonuclear (type-I) X-ray bursts, observed from neutron star (NS) low-mass X-ray binaries (LMXB), provide constraints on NS masses and radii and consequently the equation of state of NS cores. In such analyses, various assumptions are made without knowing if they are justified. We have analysed X-ray burst spectra from the LMXB 4U 1608-52, with the aim of studying how the different persistent emission components react to the bursts. During some bursts in the soft spectral state we find that there are two variable components: one corresponding to the burst blackbody component and another optically thick Comptonized component. We interpret the latter as the spreading layer between the NS surface and the accretion disc, which is not present during the hard-state bursts. We propose that the spectral changes during the soft-state bursts are driven by the spreading layer that could cover almost the entire NS in the brightest phases due to the enhanced radiation pressure support provided by the burst, and that the layer subsequently returns to its original state during the burst decay. When deriving the NS mass and radius using the soft-state bursts two assumptions are therefore not met: the NS is not entirely visible and the burst emission is reprocessed in the spreading layer, causing distortions of the emitted spectrum. For these reasons, the NS mass and radius constraints using the soft-state bursts are different compared to the ones derived using the hard-state bursts.
A programing system for research and applications in structural optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.
1981-01-01
The paper describes a computer programming system designed to be used for methodology research as well as applications in structural optimization. The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities existing in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: (1) variability of structural layout and overall shape geometry, (2) static strength and stiffness constraints, (3) local buckling failure, and (4) vibration constraints. The paper concludes with a review of the further development trends of this programing system.
NASA Astrophysics Data System (ADS)
Kumar, Suresh; Xu, Lixin
2014-10-01
In this paper, we study a cosmological model in general relativity within the framework of spatially flat Friedmann-Robertson-Walker space-time filled with ordinary (baryonic) matter, radiation, dark matter and dark energy, where the latter two components are described by the Chevallier-Polarski-Linder (CPL) equation-of-state parametrization. We utilize the observational data sets from SNLS3, BAO and Planck + WMAP9 + WiggleZ measurements of the matter power spectrum to constrain the model parameters. We find that the current observational data offer tight constraints on the equation-of-state parameter of dark matter. We consider the perturbations and study the behavior of dark matter by observing its effects on the CMB and matter power spectra. We find that the current observational data favor the cold dark matter scenario with cosmological-constant-type dark energy at the present epoch.
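The Chevallier-Polarski-Linder parametrization referred to above can be sketched in a few lines; the parameter values below are illustrative placeholders, not the constraints obtained in the paper:

```python
# CPL equation of state: w(a) = w0 + wa * (1 - a), with scale factor
# a = 1 / (1 + z). The values of w0 and wa here are illustrative.
def w_cpl(z, w0=-1.0, wa=0.3):
    a = 1.0 / (1.0 + z)
    return w0 + wa * (1.0 - a)

print(w_cpl(0.0))  # at the present epoch (z = 0) this reduces to w0
print(w_cpl(1.0))  # at z = 1, a = 0.5, so w = w0 + 0.5 * wa
```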
Ric, Angel; Torrents, Carlota; Gonçalves, Bruno; Torres-Ronda, Lorena; Sampaio, Jaime; Hristovski, Robert
2017-01-01
The analysis of positional data in association football allows the spatial distribution of players during matches to be described in order to improve the understanding of tactical-related constraints on the behavioural dynamics of players. The aim of this study was to identify how players' spatial restrictions affected the exploratory tactical behaviour and constrained the perceptual-motor workspace of players in possession of the ball, as well as inter-player passing interactions. Nineteen professional outfield male players were divided into two teams of 10 and 9 players, respectively. The game was played under three spatial constraints: a) players were not allowed to move out of their allocated zones, except for the player in possession of the ball; b) players were allowed to move to an adjacent zone, and; c) non-specific spatial constraints. Positional data was captured using a 5 Hz interpolated GPS tracking system and used to define the configuration states of players for each second in time. The configuration state comprised 37 categories derived from tactical actions, distance from the nearest opponent, distance from the target and movement speed. Notational analysis of players in possession of the ball allowed the mean time of ball possession and the probabilities of passing the ball between players to be calculated. The results revealed that the players' long-term exploratory behaviour decreased and their short-term exploration increased when restricting their space of interaction. Relaxing players' positional constraints seemed to increase the speed of ball flow dynamics. Allowing players to move to an adjacent sub-area increased the probabilities of interaction with the full-back during play build-up. The instability of the coordinative state defined by being free from opponents when players had the ball possession was an invariant feature under all three task constraints. 
By allowing players to move to adjacent sub-areas, the coordinative state became highly unstable when the distance from the target decreased. Ball location relative to the scoring zone and interpersonal distance constitute key environmental information that constrains the players' coordinative behaviour. Based on our results, dynamic overlap is presented as a good option to capture tactical performance. Moreover, the selected collective (i.e. relational) variables would allow coaches to identify the effects of training drills on teams' and players' behaviour. More research is needed considering these types of variables to understand how the manipulation of constraints induces a more stable or flexible dynamical structure of tactical behaviour.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the concerned navigation errors when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position are done on a postulate system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach for analytic descriptions of estimation behaviors of the concerned navigation errors, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
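The key ingredient of a linearly constrained KF is the minimum-variance projection of the unconstrained estimate onto the constraint surface D x = d. A short sketch of that projection step (the numbers below are invented; this is the generic constrained-KF building block, not the paper's ARIMU filter):

```python
import numpy as np

# Project an unconstrained Kalman estimate x_hat with covariance P onto
# the linear equality constraint D x = d using the minimum-variance
# projection: x~ = x_hat - P D^T (D P D^T)^{-1} (D x_hat - d).
def project_estimate(x_hat, P, D, d):
    S = D @ P @ D.T
    K = P @ D.T @ np.linalg.inv(S)
    return x_hat - K @ (D @ x_hat - d)

x_hat = np.array([1.2, 2.9])             # unconstrained estimate
P = np.array([[0.5, 0.1], [0.1, 0.4]])   # its covariance
D = np.array([[1.0, 1.0]])               # constraint: x1 + x2 = 4
d = np.array([4.0])

x_proj = project_estimate(x_hat, P, D, d)
print(x_proj, D @ x_proj)  # the projected estimate satisfies D x = 4
```

In a full filter this projection would be applied after each measurement update, which is the scheme the head abstract also proves improves estimation accuracy.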
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; CNR-INFM Coherentia , Naples; Grup d'Informacio Quantica, Universitat Autonoma de Barcelona, E-08193 Bellaterra
2007-08-15
Quantum mechanics imposes 'monogamy' constraints on the sharing of entanglement. We show that, despite these limitations, entanglement can be fully 'promiscuous', i.e., simultaneously present in unlimited two-body and many-body forms in states living in an infinite-dimensional Hilbert space. Monogamy just bounds the divergence rate of the various entanglement contributions. This is demonstrated in simple families of N-mode (N ≥ 4) Gaussian states of light fields or atomic ensembles, which therefore enable infinitely more freedom in the distribution of information, as opposed to systems of individual qubits. Such a finding is of importance for the quantification, understanding, and potential exploitation of shared quantum correlations in continuous variable systems. We discuss how promiscuity gradually arises when considering simple families of discrete variable states, with increasing Hilbert space dimension towards the continuous variable limit. Such models are somehow analogous to Gaussian states with asymptotically diverging, but finite, squeezing. In this respect, we find that non-Gaussian states (which in general are more entangled than Gaussian states) also exhibit the interesting feature that their entanglement is more shareable: in the non-Gaussian multipartite arena, unlimited promiscuity can already be achieved among three entangled parties, while this is impossible for Gaussian, even infinitely squeezed, states.
Pey, Jon; Rubio, Angel; Theodoropoulos, Constantinos; Cascante, Marta; Planes, Francisco J
2012-07-01
Constraints-based modeling is an emergent area in Systems Biology that includes an increasing set of methods for the analysis of metabolic networks. In order to refine its predictions, the development of novel methods integrating high-throughput experimental data is currently a key challenge in the field. In this paper, we present a novel set of constraints that integrate tracer-based metabolomics data from Isotope Labeling Experiments and metabolic fluxes in a linear fashion. These constraints are based on Elementary Carbon Modes (ECMs), a recently developed concept that generalizes Elementary Flux Modes at the carbon level. To illustrate the effect of our ECMs-based constraints, a Flux Variability Analysis approach was applied to a previously published metabolic network involving the main pathways in the metabolism of glucose. The addition of our ECMs-based constraints substantially reduced the under-determination resulting from a standard application of Flux Variability Analysis, which shows a clear progress over the state of the art. In addition, our approach is adjusted to deal with combinatorial explosion of ECMs in genome-scale metabolic networks. This extension was applied to infer the maximum biosynthetic capacity of non-essential amino acids in human metabolism. Finally, as linearity is the hallmark of our approach, its importance is discussed at a methodological, computational and theoretical level and illustrated with a practical application in the field of Isotope Labeling Experiments.
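Flux Variability Analysis itself is a sequence of linear programs: minimize and maximize each flux subject to the steady-state constraint S v = 0 and flux bounds. A hedged sketch on a made-up four-reaction network (not the glucose network of the paper, and without the ECM constraints, which would further shrink the reported ranges):

```python
from scipy.optimize import linprog

# Toy network: uptake v0 produces metabolite A; A -> B via parallel
# reactions v1 or v2; B is exported by v3. Rows of S are A and B, and
# the steady-state condition is S v = 0. All data are illustrative.
S = [[1, -1, -1, 0],    # A: produced by v0, consumed by v1 and v2
     [0, 1, 1, -1]]     # B: produced by v1 and v2, consumed by v3
bounds = [(0, 10)] * 4  # flux bounds for v0..v3

ranges = []
for i in range(4):
    c = [0.0] * 4
    c[i] = 1.0
    lo = linprog(c, A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs").fun
    hi = -linprog([-x for x in c], A_eq=S, b_eq=[0, 0], bounds=bounds,
                  method="highs").fun
    ranges.append((lo, hi))
print(ranges)  # each flux spans [0, 10]; extra constraints would narrow this
```

The under-determination the abstract mentions is exactly the width of these ranges; adding linear ECM-based constraints amounts to appending rows to the LP.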
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
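The central claim, that an irreversible linear chain with an equal signal fraction per state becomes more reliable as states are added, can be checked numerically. Assuming the holding time in each of n states is exponential with rate n (so the total mean transit time is 1), the coefficient of variation of the total time is 1/√n:

```python
import math
import random

# Monte Carlo check: total transit time of an irreversible n-state linear
# chain is a sum of n independent exponential holding times. With rate n
# per state, the coefficient of variation (CV) should be 1 / sqrt(n).
def simulated_cv(n, trials=20000, seed=1):
    rng = random.Random(seed)
    times = [sum(rng.expovariate(n) for _ in range(n)) for _ in range(trials)]
    mean = sum(times) / trials
    var = sum((t - mean) ** 2 for t in times) / trials
    return math.sqrt(var) / mean

for n in (1, 4, 16):
    print(n, simulated_cv(n), 1 / math.sqrt(n))  # simulated vs analytic CV
```

The monotone drop in CV with n illustrates why, absent an energy cost per irreversible transition, longer chains are always more reliable.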
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Serafini, Alessio; Illuminati, Fabrizio
2006-03-01
We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communications and defines a measure of genuine tripartite entanglements. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis yields that pure, symmetric states allow for a promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems because they are simultaneous continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.
Evaluating Classified MODIS Satellite Imagery as a Stratification Tool
Greg C. Liknes; Mark D. Nelson; Ronald E. McRoberts
2004-01-01
The Forest Inventory and Analysis (FIA) program of the USDA Forest Service collects forest attribute data on permanent plots arranged on a hexagonal network across all 50 states and Puerto Rico. Due to budget constraints, sample sizes sufficient to satisfy national FIA precision standards are seldom achieved for most inventory variables unless the estimation process is...
Diffusion Processes Satisfying a Conservation Law Constraint
Bakosi, J.; Ristorcelli, J. R.
2014-03-04
We investigate coupled stochastic differential equations governing N non-negative continuous random variables that satisfy a conservation principle. In various fields a conservation law requires that a set of fluctuating variables be non-negative and (if appropriately normalized) sum to one. As a result, any stochastic differential equation model to be realizable must not produce events outside of the allowed sample space. We develop a set of constraints on the drift and diffusion terms of such stochastic models to ensure that both the non-negativity and the unit-sum conservation law constraint are satisfied as the variables evolve in time. We investigate the consequences of the developed constraints on the Fokker-Planck equation, the associated system of stochastic differential equations, and the evolution equations of the first four moments of the probability density function. We show that random variables, satisfying a conservation law constraint, represented by stochastic diffusion processes, must have diffusion terms that are coupled and nonlinear. The set of constraints developed enables the development of statistical representations of fluctuating variables satisfying a conservation law. We exemplify the results with the bivariate beta process and the multivariate Wright-Fisher, Dirichlet, and Lochner's generalized Dirichlet processes.
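A minimal Euler-Maruyama sketch of one such constrained diffusion, a Wright-Fisher-type process for the pair (x, 1 - x): the diffusion coefficient √(b·x·(1 - x)) is coupled and nonlinear, vanishing at the boundaries so non-negativity and the unit sum are preserved. The drift, noise strength, and numerical clipping guard below are illustrative choices, not the constraints derived in the paper:

```python
import math
import random

# Euler-Maruyama integration of dx = a (1/2 - x) dt + sqrt(b x (1-x)) dW.
# The second variable is 1 - x, so the pair automatically sums to one;
# the boundary-vanishing diffusion keeps x in [0, 1] (the clip only
# guards against discrete-time overshoot). Parameters are illustrative.
def simulate(x0=0.5, a=1.0, b=0.05, dt=1e-3, steps=5000, seed=7):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        drift = a * (0.5 - x)                       # mean-reversion to 0.5
        diff = math.sqrt(max(b * x * (1.0 - x), 0.0))
        x += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x = min(max(x, 0.0), 1.0)                   # clip numerical overshoot
    return x, 1.0 - x

x, y = simulate()
print(x, y, x + y)  # the pair stays non-negative and sums to one
```

In continuous time the constraint is preserved exactly by the structure of the drift and diffusion; the clip is purely a discretization safeguard, in line with the realizability requirement stated above.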
Modeling of an Adjustable Beam Solid State Light Project
NASA Technical Reports Server (NTRS)
Clark, Toni
2015-01-01
This proposal is for the development of a computational model of a prototype variable beam light source using optical modeling software, Zemax Optics Studio. The variable beam light source would be designed to generate flood, spot, and directional beam patterns, while maintaining the same average power usage. The optical model would demonstrate the possibility of such a light source and its ability to address several issues: commonality of design, human task variability, and light source design process improvements. An adaptive lighting solution that utilizes the same electronics footprint and power constraints while addressing variability of lighting needed for the range of exploration tasks can save costs and allow for the development of common avionics for lighting controls.
Yurtkuran, Alkın; Emel, Erdal
2014-01-01
The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
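The penalty approach for time-window violations can be sketched as a tour-cost evaluation: lateness at each customer is added to the travel cost with a penalty weight. The distances, windows, and weight below are invented, and the EMA search itself is not reproduced:

```python
# Penalized TSPTW tour cost: travel along the route, wait when early,
# and accumulate lateness whenever a time window's upper bound is missed.
def tour_cost(route, dist, windows, penalty=100.0):
    t, cost, pen = 0.0, 0.0, 0.0
    for prev, cur in zip(route, route[1:]):
        cost += dist[prev][cur]
        t += dist[prev][cur]
        earliest, latest = windows[cur]
        t = max(t, earliest)             # wait if arriving early
        if t > latest:                   # late arrival is penalized
            pen += t - latest
    return cost + penalty * pen

dist = [[0, 4, 9], [4, 0, 3], [9, 3, 0]]
windows = {0: (0, 100), 1: (0, 5), 2: (6, 8)}
print(tour_cost([0, 1, 2], dist, windows))  # feasible: cost 7, no penalty
print(tour_cost([0, 2, 1], dist, windows))  # infeasible: lateness is penalized
```

Inside a metaheuristic such as the EMA, this penalized objective steers the population toward tours with no time-window violations.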
Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R
2006-12-01
The purposes of this study were to: (a) examine the differences within 11 specific kinematic variables and an outcome measure (ball velocity) associated with component developmental levels of humerus and forearm action (Roberton & Halverson, 1984), and (b) where differences in kinematic variables were significantly associated with differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Using multivariate analysis of variance, significant differences among component levels were identified in five of six humerus kinematic variables (p < .01) and in all five forearm kinematic variables (p < .01). These kinematic variables represent potential control parameters and, therefore, constraints on overarm throwing acquisition.
Robust model predictive control for optimal continuous drug administration.
Sopasakis, Pantelis; Patrinos, Panagiotis; Sarimveis, Haralambos
2014-10-01
In this paper the model predictive control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma concentrations are assumed to be measured on-line. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in presence of modelling errors and inaccurate measurements.
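The core MPC idea in this abstract, choosing each dose so that predicted concentrations never cross the toxicity limit, can be sketched with a deliberately tiny model. Everything below is an assumption for illustration: a scalar one-compartment model c[k+1] = a*c[k] + b*u[k] stands in for the paper's 7-compartment PBPK model, and the optimization is a brute-force search over a small discrete dose set rather than a QP solver.

```python
from itertools import product

def mpc_step(c, target, mtc, a=0.8, b=1.0, horizon=3,
             doses=(0.0, 0.5, 1.0, 1.5, 2.0)):
    """Return the first dose of the best feasible dose sequence.

    Exhaustive search over a short horizon; sequences that would push the
    predicted concentration above the minimum toxic concentration (MTC)
    are discarded, so the constraint is respected by construction.
    """
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(doses, repeat=horizon):
        x, cost, feasible = c, 0.0, True
        for u in seq:
            x = a * x + b * u            # one-compartment model prediction
            if x > mtc:                  # hard toxicity constraint
                feasible = False
                break
            cost += (x - target) ** 2    # track the desired concentration
        if feasible and cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0
```

Applying only the first dose and re-solving at the next sample is the receding-horizon principle; because the constraint is checked on every candidate prediction, the applied dose can never (under a perfect model) violate the MTC.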
Demonstration of Monogamy Relations for Einstein-Podolsky-Rosen Steering in Gaussian Cluster States.
Deng, Xiaowei; Xiang, Yu; Tian, Caixing; Adesso, Gerardo; He, Qiongyi; Gong, Qihuang; Su, Xiaolong; Xie, Changde; Peng, Kunchi
2017-06-09
Understanding how quantum resources can be quantified and distributed over many parties has profound applications in quantum communication. As one of the most intriguing features of quantum mechanics, Einstein-Podolsky-Rosen (EPR) steering is a useful resource for secure quantum networks. By reconstructing the covariance matrix of a continuous variable four-mode square Gaussian cluster state subject to asymmetric loss, we quantify the amount of bipartite steering with a variable number of modes per party, and verify recently introduced monogamy relations for Gaussian steerability, which establish quantitative constraints on the security of information shared among different parties. We observe a very rich structure for the steering distribution, and demonstrate one-way EPR steering of the cluster state under Gaussian measurements, as well as one-to-multimode steering. Our experiment paves the way for exploiting EPR steering in Gaussian cluster states as a valuable resource for multiparty quantum information tasks.
NASA Astrophysics Data System (ADS)
Hejazi, Mohamad I.; Cai, Ximing
2011-06-01
In this paper, we promote a novel approach to develop reservoir operation routines by learning from historical hydrologic information and reservoir operations. The proposed framework involves a knowledge discovery step to learn the real drivers of reservoir decision making and to subsequently build a more realistic (enhanced) model formulation using stochastic dynamic programming (SDP). The enhanced SDP model is compared to two classic SDP formulations using Lake Shelbyville, a reservoir on the Kaskaskia River in Illinois, as a case study. From a data mining procedure with monthly data, the past month's inflow (Qt-1), current month's inflow (Qt), past month's release (Rt-1), and past month's Palmer drought severity index (PDSIt-1) are identified as important state variables in the enhanced SDP model for Shelbyville Reservoir. When compared to a weekly enhanced SDP model of the same case study, a different set of state variables and constraints are extracted. Thus, different time scales for the model require different information. We demonstrate that adding state variables improves the solution by shifting the Pareto front as expected, while using new constraints and the correct objective function can significantly reduce the difference between derived policies and historical practices. The study indicates that the monthly enhanced SDP model resembles historical records more closely and yet provides lower expected average annual costs than either of the two classic formulations (25.4% and 4.5% reductions, respectively). The weekly enhanced SDP model is compared to the monthly enhanced SDP, and the comparison shows that choosing the correct temporal scale is crucial when modeling reservoir operation for particular objectives.
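The SDP backbone of the study above is a backward recursion over storage states and random inflows. The toy below is an illustrative sketch only: the three storage levels, two-point inflow distribution, demand-tracking cost, and spill rule are all invented, and the enhanced model's extra hydrologic state variables (Qt-1, Rt-1, PDSIt-1) are omitted.

```python
def sdp_policy(T=3, storages=(0, 1, 2), inflows=((0, 0.5), (1, 0.5)),
               demand=1, s_max=2):
    """Backward stochastic dynamic programming for a toy reservoir.

    Stage cost |release - demand|; inflows is a list of (value, probability).
    Returns the release policy per stage and the stage-0 cost-to-go.
    """
    V = {s: 0.0 for s in storages}          # terminal cost-to-go
    policy = {}
    for t in reversed(range(T)):
        V_new, pol = {}, {}
        for s in storages:
            best, best_r = float("inf"), 0
            for r in range(0, s + 1):       # can only release what is stored
                exp = 0.0
                for q, p in inflows:        # expectation over random inflow
                    s_next = min(s - r + q, s_max)   # spill above capacity
                    exp += p * (abs(r - demand) + V[s_next])
                if exp < best:
                    best, best_r = exp, r
            V_new[s], pol[s] = best, best_r
        V, policy[t] = V_new, pol
    return policy, V
```

Adding state variables, as the enhanced SDP does, amounts to enlarging the state key from storage alone to tuples such as (storage, previous inflow, drought index), at the cost of a larger recursion.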
NASA Astrophysics Data System (ADS)
Yang, Jin-Wei; Gao, Yi-Tian; Wang, Qi-Min; Su, Chuan-Qi; Feng, Yu-Jie; Yu, Xin
2016-01-01
In this paper, a fourth-order variable-coefficient nonlinear Schrödinger equation is studied, which might describe a one-dimensional continuum anisotropic Heisenberg ferromagnetic spin chain with the octuple-dipole interaction or an alpha helical protein with higher-order excitations and interactions under continuum approximation. With the aid of auxiliary function, we derive the bilinear forms and corresponding constraints on the variable coefficients. Via the symbolic computation, we obtain the Lax pair, infinitely many conservation laws, one-, two- and three-soliton solutions. We discuss the influence of the variable coefficients on the solitons. With different choices of the variable coefficients, we obtain the parabolic, cubic, and periodic solitons, respectively. We analyse the head-on and overtaking interactions between/among the two and three solitons. Interactions between a bound state and a single soliton are displayed with different choices of variable coefficients. We also derive the quasi-periodic formulae for the three cases of the bound states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adesso, Gerardo; Serafini, Alessio
2006-03-15
We present a complete analysis of the multipartite entanglement of three-mode Gaussian states of continuous-variable systems. We derive standard forms which characterize the covariance matrix of pure and mixed three-mode Gaussian states up to local unitary operations, showing that the local entropies of pure Gaussian states are bound to fulfill a relationship which is stricter than the general Araki-Lieb inequality. Quantum correlations can be quantified by a proper convex roof extension of the squared logarithmic negativity, the continuous-variable tangle, or contangle. We review and elucidate in detail the proof that in multimode Gaussian states the contangle satisfies a monogamy inequality constraint [G. Adesso and F. Illuminati, New J. Phys. 8, 15 (2006)]. The residual contangle, emerging from the monogamy inequality, is an entanglement monotone under Gaussian local operations and classical communication and defines a measure of genuine tripartite entanglement. We determine the analytical expression of the residual contangle for arbitrary pure three-mode Gaussian states and study in detail the distribution of quantum correlations in such states. This analysis yields that pure, symmetric states allow for promiscuous entanglement sharing, having both maximum tripartite entanglement and maximum couplewise entanglement between any pair of modes. We thus name these states GHZ/W states of continuous-variable systems, because they are simultaneously the continuous-variable counterparts of both the GHZ and the W states of three qubits. We finally consider the effect of decoherence on three-mode Gaussian states, studying the decay of the residual contangle. The GHZ/W states are shown to be maximally robust against losses and thermal noise.
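The monogamy constraint reviewed in the abstract above is of the CKW type; for three modes i, j, k it can be written as below. The notation is a paraphrase of the abstract and the precise form should be checked against Adesso and Illuminati (2006):

```latex
E_\tau^{\,i|(jk)} \;\geq\; E_\tau^{\,i|j} + E_\tau^{\,i|k},
\qquad
E_\tau^{\mathrm{res}} \;=\; E_\tau^{\,i|(jk)} - E_\tau^{\,i|j} - E_\tau^{\,i|k},
```

where E_τ denotes the contangle and the residual contangle E_τ^res (minimized over permutations of the probe mode i) is the tripartite entanglement measure discussed in the abstract.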
Zouari, Farouk; Ibeas, Asier; Boulkroune, Abdesselem; Cao, Jinde; Arefi, Mohammad Mehdi
2018-06-01
This study addresses the issue of the adaptive output tracking control for a category of uncertain nonstrict-feedback delayed incommensurate fractional-order systems in the presence of nonaffine structures, unmeasured pseudo-states, unknown control directions, unknown actuator nonlinearities and output constraints. Firstly, the mean value theorem and the Gaussian error function are introduced to eliminate the difficulties that arise from the nonaffine structures and the unknown actuator nonlinearities, respectively. Secondly, the immeasurable tracking error variables are suitably estimated by constructing a fractional-order linear observer. Thirdly, the neural network, the Razumikhin Lemma, the variable separation approach, and the smooth Nussbaum-type function are used to deal with the uncertain nonlinear dynamics, the unknown time-varying delays, the nonstrict feedback and the unknown control directions, respectively. Fourthly, asymmetric barrier Lyapunov functions are employed to overcome the violation of the output constraints and to tune online the parameters of the adaptive neural controller. Through rigorous analysis, it is proved that the boundedness of all variables in the closed-loop system and the semiglobal asymptotic tracking are ensured without transgression of the constraints. The principal contributions of this study can be summarized as follows: (1) based on Caputo's definitions and new lemmas, methods concerning the controllability, observability and stability analysis of integer-order systems are extended to fractional-order ones, (2) the output tracking objective for a relatively large class of uncertain systems is achieved with a simple controller and less tuning parameters. Finally, computer-simulation studies from the robotic field are given to demonstrate the effectiveness of the proposed controller.
A hybrid model for traffic flow and crowd dynamics with random individual properties.
Schleper, Veronika
2015-04-01
Based on an established mathematical model for the behavior of large crowds, a new model is derived that is able to take into account the statistical variation of individual maximum walking speeds. The same model is shown to be valid also in traffic flow situations, where for instance the statistical variation of preferred maximum speeds can be considered. The model involves explicit bounds on the state variables, such that a special Riemann solver is derived that is proved to respect the state constraints. Some care is devoted to a valid construction of random initial data, necessary for the use of the new model. The article also includes a numerical method that is shown to respect the bounds on the state variables and illustrative numerical examples, explaining the properties of the new model in comparison with established models.
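The bound-respecting numerics described in this abstract can be illustrated with a standard scalar analogue. The sketch below is not the paper's hybrid model or its special Riemann solver; it assumes the classical Greenshields traffic flux f(ρ) = ρ(1 − ρ) with ρ ∈ [0, 1] and a Godunov update, which is monotone under the CFL condition and therefore keeps the state within its bounds.

```python
def godunov_flux(ul, ur, f=lambda u: u * (1.0 - u)):
    """Godunov numerical flux for the concave Greenshields flux (vertex at 0.5)."""
    if ul <= ur:                 # minimum of f on [ul, ur] sits at an endpoint
        return min(f(ul), f(ur))
    if ur <= 0.5 <= ul:          # maximum of f on [ur, ul] hits the vertex
        return f(0.5)
    return max(f(ul), f(ur))

def step(rho, lam=0.5):
    """One conservative update rho_i -= lam*(F_{i+1/2} - F_{i-1/2}).

    lam = dt/dx; with max|f'| = 1 on [0, 1], lam <= 1 satisfies the CFL
    condition, so densities stay within the invariant region [0, 1].
    """
    ext = [rho[0]] + list(rho) + [rho[-1]]   # outflow (copy) boundary cells
    return [u - lam * (godunov_flux(u, ext[i + 2]) - godunov_flux(ext[i], u))
            for i, u in enumerate(rho)]
```

The same structure carries over to the paper's setting, where the Riemann solver is modified so that the explicit state bounds (e.g. maximum density or maximum walking speed) are provably respected.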
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. 
Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.
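Constraint handling in MOEAs, as studied above, is commonly built on a feasibility-first comparison between solutions. The snippet below shows Deb's widely used constraint-domination rule as an illustrative stand-in; the study's algorithms use restricted tournament selection, whose details differ.

```python
def constrained_dominates(a, b):
    """True if solution a dominates b under Deb's constraint-domination rule.

    Each solution is (objectives, total_constraint_violation); all objectives
    are minimized and violation 0.0 means feasible.
    """
    fa, va = a
    fb, vb = b
    if va == 0 and vb > 0:       # feasible always beats infeasible
        return True
    if va > 0 and vb == 0:
        return False
    if va > 0 and vb > 0:        # both infeasible: smaller violation wins
        return va < vb
    # both feasible: ordinary Pareto dominance
    return all(x <= y for x, y in zip(fa, fb)) and \
           any(x < y for x, y in zip(fa, fb))
```

Under such a rule, tightening or loosening the constraint thresholds changes which comparisons reduce to scalar violation contests versus true multiobjective trade-offs, which is one mechanism by which constraint configuration can help or hinder search.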
NASA Astrophysics Data System (ADS)
Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong
2016-09-01
Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha-regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of the system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. 
Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.
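The constraint transcription with local smoothing mentioned above is typically built on a smooth surrogate for max(·, 0): a continuous inequality g(t) ≤ 0 for all t is replaced by requiring the integral of max(g(t), 0) to vanish, with the kink smoothed on a small band of width ε. The sketch below follows that standard construction; the exact smoothing used in the paper may differ.

```python
def smoothed_plus(eta, eps=1e-2):
    """Smooth approximation of max(eta, 0):
    0 for eta < -eps, (eta + eps)^2 / (4*eps) on [-eps, eps], eta above eps.
    Continuous and differentiable at both joins."""
    if eta < -eps:
        return 0.0
    if eta <= eps:
        return (eta + eps) ** 2 / (4.0 * eps)
    return eta

def transcribed_violation(g, ts, eps=1e-2):
    """Approximate the integral of max(g(t), 0) over equally spaced samples ts.

    As eps -> 0 and the grid refines, this tends to 0 exactly when
    g(t) <= 0 holds for all t, turning the continuous inequality
    constraint into a single smooth scalar constraint for an NLP solver.
    """
    dt = ts[1] - ts[0]
    return dt * sum(smoothed_plus(g(t), eps) for t in ts)
```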
Merits and limitations of optimality criteria method for structural optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Guptill, James D.; Berke, Laszlo
1993-01-01
The merits and limitations of the optimality criteria (OC) method for the minimum weight design of structures subjected to multiple load conditions under stress, displacement, and frequency constraints were investigated by examining several numerical examples. The examples were solved utilizing the Optimality Criteria Design Code that was developed for this purpose at NASA Lewis Research Center. This OC code incorporates OC methods available in the literature with generalizations for stress constraints, fully utilized design concepts, and hybrid methods that combine both techniques. Salient features of the code include multiple choices for Lagrange multiplier and design variable update methods, design strategies for several constraint types, variable linking, displacement and integrated force method analyzers, and analytical and numerical sensitivities. The performance of the OC method, on the basis of the examples solved, was found to be satisfactory for problems with few active constraints or with small numbers of design variables. For problems with large numbers of behavior constraints and design variables, the OC method appears to follow a subset of active constraints that can result in a heavier design. The computational efficiency of OC methods appears to be similar to some mathematical programming techniques.
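The "fully utilized design" concept mentioned in this abstract has a classic closed form for stress constraints: resize each member area in proportion to its stress ratio. The sketch below assumes a statically determinate structure (member forces independent of areas), which is an idealization; real OC iterations also handle displacement and frequency constraints via Lagrange multipliers.

```python
def stress_ratio_resize(areas, forces, sigma_allow, iters=5):
    """Stress-ratio (fully stressed design) resizing: A_new = A * sigma / sigma_allow.

    With sigma = F / A and fixed member forces F, each member is driven to
    full stress A = F / sigma_allow; determinate cases converge in one pass.
    """
    for _ in range(iters):
        areas = [a * (f / a) / sigma_allow for a, f in zip(areas, forces)]
    return areas
```

In indeterminate structures the forces are re-analyzed after each resize, and, as the abstract notes, chasing a subset of active constraints this way can settle on a heavier design than a mathematical programming solution.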
Chen, Weisheng
2009-07-01
This paper focuses on the problem of adaptive neural network tracking control for a class of discrete-time pure-feedback systems with unknown control direction under amplitude and rate actuator constraints. Two novel state-feedback and output-feedback dynamic control laws are established where the function tanh(.) is employed to solve the saturation constraint problem. Implicit function theorem and mean value theorem are exploited to deal with non-affine variables that are used as actual control. Radial basis function neural networks are used to approximate the desired input function. Discrete Nussbaum gain is used to estimate the unknown sign of control gain. The uniform boundedness of all closed-loop signals is guaranteed. The tracking error is proved to converge to a small residual set around the origin. A simulation example is provided to illustrate the effectiveness of control schemes proposed in this paper.
Variability, Constraints, and Creativity: Shedding Light on Claude Monet.
ERIC Educational Resources Information Center
Stokes, Patricia D.
2001-01-01
Discusses how creative individuals maintain high levels of variability, examining how Claude Monet's habitually high level of variability in painting was acquired during his childhood and early apprenticeship and maintained throughout his adult career by a continuous series of task constraints imposed by the artist on his own work. For Monet,…
Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil
2007-01-01
Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533
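The constraint-aware estimation above derives a Kalman gain that keeps the state physiologically plausible; a crude but common stand-in, shown below for a scalar state, is to run the standard update and then project the estimate onto the constraint set (e.g. a concentration or radius that cannot be negative). This sketch is illustrative and is not the paper's gain derivation.

```python
def constrained_kf_update(x, P, z, H=1.0, R=1.0, lo=0.0):
    """Scalar Kalman measurement update followed by projection onto x >= lo.

    x, P -- prior state estimate and variance
    z    -- measurement, modeled as z = H*x + noise with variance R
    """
    K = P * H / (H * P * H + R)         # Kalman gain
    x = x + K * (z - H * x)             # measurement update
    P = (1.0 - K * H) * P               # covariance update
    return max(x, lo), P                # enforce the physical constraint
```

Projection ignores the covariance shape (a limitation the paper's tailored gain avoids), but it already prevents the estimator from reporting physically impossible values.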
Namiki, Ryo; Koashi, Masato; Imoto, Nobuyuki
2008-09-05
We generalize the experimental success criterion for quantum teleportation (memory) in continuous-variable quantum systems to be suitable for a non-unit-gain condition by considering attenuation (amplification) of the coherent-state amplitude. The new criterion can be used for a nonideal quantum memory and long distance quantum communication as well as quantum devices with amplification process. It is also shown that the framework to measure the average fidelity is capable of detecting all Gaussian channels in the quantum domain.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Emergent constraint on equilibrium climate sensitivity from global temperature variability.
Cox, Peter M; Huntingford, Chris; Williamson, Mark S
2018-01-17
Equilibrium climate sensitivity (ECS) remains one of the most important unknowns in climate change science. ECS is defined as the global mean warming that would occur if the atmospheric carbon dioxide (CO2) concentration were instantly doubled and the climate were then brought to equilibrium with that new level of CO2. Despite its rather idealized definition, ECS has continuing relevance for international climate change agreements, which are often framed in terms of stabilization of global warming relative to the pre-industrial climate. However, the 'likely' range of ECS as stated by the Intergovernmental Panel on Climate Change (IPCC) has remained at 1.5-4.5 degrees Celsius for more than 25 years. The possibility of a value of ECS towards the upper end of this range reduces the feasibility of avoiding 2 degrees Celsius of global warming, as required by the Paris Agreement. Here we present a new emergent constraint on ECS that yields a central estimate of 2.8 degrees Celsius with 66 per cent confidence limits (equivalent to the IPCC 'likely' range) of 2.2-3.4 degrees Celsius. Our approach is to focus on the variability of temperature about long-term historical warming, rather than on the warming trend itself. We use an ensemble of climate models to define an emergent relationship between ECS and a theoretically informed metric of global temperature variability. This metric of variability can also be calculated from observational records of global warming, which enables tighter constraints to be placed on ECS, reducing the probability of ECS being less than 1.5 degrees Celsius to less than 3 per cent, and the probability of ECS exceeding 4.5 degrees Celsius to less than 1 per cent.
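Mechanically, an emergent constraint is a regression across a model ensemble: each model supplies (observable metric, ECS) pairs, and the fitted relation is evaluated at the observed metric value. The sketch below shows only that central-estimate step with made-up numbers; the paper additionally propagates regression and observational uncertainty to obtain the confidence limits.

```python
def emergent_constraint(metric, ecs, observed):
    """Least-squares line through (metric, ecs) model pairs, read off at the
    observed metric value. Inputs are plain lists of equal length."""
    n = len(metric)
    mx = sum(metric) / n
    my = sum(ecs) / n
    sxx = sum((x - mx) ** 2 for x in metric)
    sxy = sum((x - mx) * (y - my) for x, y in zip(metric, ecs))
    slope = sxy / sxx
    return my + slope * (observed - mx)   # best-estimate ECS at the observation
```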
NASA Astrophysics Data System (ADS)
Ricciuto, Daniel M.; King, Anthony W.; Dragoni, D.; Post, Wilfred M.
2011-03-01
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are less than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
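The qualitative finding above, that parametric uncertainty shrinks with record length and with each added constraint type, has a simple Gaussian caricature. The sketch below is not LoTEC or its fusion machinery; it is the textbook posterior variance of a Gaussian parameter given n independent observations, which exhibits the same diminishing returns as n grows.

```python
def posterior_variance(prior_var, obs_var, n):
    """Posterior variance of a Gaussian parameter after n independent
    observations of it: 1/post = 1/prior + n/obs (precisions add)."""
    return 1.0 / (1.0 / prior_var + n / obs_var)
```

Adding an independent constraint type contributes its own precision term, which is why initial-biomass and biomass-increment constraints each cut prediction uncertainty most when the flux record is short.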
Probabilistic Reasoning for Plan Robustness
NASA Technical Reports Server (NTRS)
Schaffer, Steve R.; Clement, Bradley J.; Chien, Steve A.
2005-01-01
A planning system must reason about the uncertainty of continuous variables in order to accurately project the possible system state over time. A method is devised for directly reasoning about the uncertainty in continuous activity duration and resource usage for planning problems. By representing random variables as parametric distributions, computing projected system state can be simplified in some cases. Common approximation and novel methods are compared for over-constrained and lightly constrained domains within an iterative repair planner. Results show improvements in robustness over the conventional non-probabilistic representation by reducing the number of constraint violations witnessed by execution. The improvement is more significant for larger problems and problems with higher resource subscription levels, but diminishes as the system is allowed to accept higher risk levels.
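The parametric-distribution simplification mentioned above is concrete for Gaussian durations: means and variances of independent activities simply add, so the risk of overrunning a deadline has a closed form. The sketch below assumes independent normal durations, which is one of the simplest parametric choices a planner could adopt.

```python
import math

def deadline_risk(activities, deadline):
    """Probability that a chain of independent activities with normally
    distributed durations (mean, variance) overruns a deadline.

    The total duration is N(sum of means, sum of variances), so
    P(total > deadline) follows from the Gaussian tail via erf.
    """
    mu = sum(m for m, _ in activities)
    var = sum(v for _, v in activities)
    z = (deadline - mu) / math.sqrt(2.0 * var)
    return 0.5 * (1.0 - math.erf(z))   # P(total > deadline)
```

An iterative repair planner can treat this risk as the probabilistic analogue of a constraint violation: repairs are triggered whenever the risk of a timing or resource constraint exceeds the accepted risk level.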
Structural Brain Connectivity Constrains within-a-Day Variability of Direct Functional Connectivity
Park, Bumhee; Eo, Jinseok; Park, Hae-Jeong
2017-01-01
The idea that structural white matter connectivity constrains functional connectivity (interactions among brain regions) has widely been explored in studies of brain networks; studies have mostly focused on the “average” strength of functional connectivity. The question of how structural connectivity constrains the “variability” of functional connectivity remains unresolved. In this study, we investigated the variability of resting state functional connectivity that was acquired every 3 h within a single day from 12 participants (eight time sessions within a 24-h period, 165 scans per session). Three different types of functional connectivity (functional connectivity based on Pearson correlation, direct functional connectivity based on partial correlation, and the pseudo functional connectivity produced by their difference) were estimated from resting state functional magnetic resonance imaging data along with structural connectivity defined using fiber tractography of diffusion tensor imaging. Those types of functional connectivity were evaluated with regard to properties of structural connectivity (fiber streamline counts and lengths) and types of structural connectivity such as intra-/inter-hemispheric edges and topological edge types in the rich club organization. We observed that the structural connectivity constrained the variability of direct functional connectivity more than pseudo-functional connectivity and that the constraints depended strongly on structural connectivity types. The structural constraints were greater for intra-hemispheric and heterologous inter-hemispheric edges than homologous inter-hemispheric edges, and feeder and local edges than rich club edges in the rich club architecture. While each edge was highly variable, the multivariate patterns of edge involvement, especially the direct functional connectivity patterns among the rich club brain regions, showed low variability over time. 
This study suggests that structural connectivity not only constrains the strength of functional connectivity, but also the within-a-day variability of functional connectivity and connectivity patterns, particularly the direct functional connectivity among brain regions. PMID:28848416
State legislative staff influence in health policy making.
Weissert, C S; Weissert, W G
2000-12-01
State legislative staff may influence health policy by gathering intelligence, setting the agenda, and shaping the legislative proposals. But they may also be stymied in their roles by such institutional constraints as hiring practices and by turnover in committee leadership in the legislature. The intervening variable of trust between legislators and their support staff is also key to understanding influence and helps explain how staff-legislator relationships play an important role in designing state health policy. This study of legislative fiscal and health policy committee staff uses data from interviews with key actors in five states to model the factors important in explaining variation in the influence of committee staff on health policy.
GEWEX Water and Energy Budget Study
NASA Technical Reports Server (NTRS)
Roads, J.; Bainto, E.; Masuda, K.; Rodell, Matthew; Rossow, W. B.
2008-01-01
Closing the global water and energy budgets has been an elusive Global Energy and Water-cycle Experiment (GEWEX) goal. It has been difficult to gather many of the needed global water and energy variables and processes, although, because of GEWEX, we now have globally gridded observational estimates for precipitation and radiation and many other relevant variables such as clouds and aerosols. Still, constrained models are required to fill in many of the process and variable gaps. At least there are now several atmospheric reanalyses ranging from the early National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) and NCEP/Department of Energy (DOE) reanalyses to the more recent ERA40 and JRA-25 reanalyses. Atmospheric constraints include requirements that the models' state variables remain close to in situ observations or observed satellite radiances. This is usually done by making short-term forecasts from an analyzed initial state; these short-term forecasts provide the next guess, which is corrected by comparison to available observations. While this analysis procedure is likely to result in useful global descriptions of atmospheric temperature, wind and humidity, there is no guarantee that relevant hydroclimate processes like precipitation, which we can observe and evaluate, and evaporation over land, which we cannot, have similar verisimilitude. Alternatively, the Global Land Data Assimilation System (GLDAS) drives uncoupled land surface models with precipitation, surface solar radiation, and surface meteorology (from bias-corrected reanalyses during the study period) to simulate terrestrial states and surface fluxes. Further constraints are made when a tuned water balance model is used to characterize the global runoff observational estimates. 
We use this disparate mix of observational estimates, reanalyses, GLDAS and calibrated water balance simulations to try to characterize and close global and terrestrial atmospheric and surface water and energy budgets to within 10-20% for long term (1986-1995), large-scale global to regional annual means.
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and substantially increases the complexity of the optimization problems that can be handled efficiently.
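The sequential quadratic programming formulation can be sketched with an off-the-shelf SQP solver under the same constraint taxonomy (simple bounds, linear constraints). The two-variable "dose"/"cost" functions below are invented stand-ins for the report's shield models, and SciPy's SLSQP stands in for NPSOL.

```python
# Toy analogue of a "dose minimization at constant cost" search using
# SciPy's SLSQP, a sequential quadratic programming method.
import numpy as np
from scipy.optimize import minimize

dose = lambda x: x[0]**2 + x[1]**2        # objective: minimize "dose"
cost = lambda x: x[0] + x[1] - 1.0        # linear constraint: total "cost" >= 1

res = minimize(dose, x0=[0.9, 0.1],
               method="SLSQP",
               bounds=[(0.0, 1.0), (0.0, 1.0)],              # simple bounds
               constraints=[{"type": "ineq", "fun": cost}])  # x0 + x1 >= 1
print(res.x)  # -> approximately [0.5, 0.5]
```

The convex toy problem has the unique optimum (0.5, 0.5); an SQP solver reaches it in a handful of iterations regardless of the feasible or infeasible start.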
An Integrated Approach to Winds, Jets, and State Transitions
NASA Astrophysics Data System (ADS)
Neilsen, Joseph
2017-09-01
We propose a large multiwavelength campaign (120 ks Chandra HETGS, NuSTAR, INTEGRAL, JVLA/ATCA, Swift, XMM, Gemini) on a black hole transient to study the influence of ionized winds on relativistic jets and state transitions. With a reimagined observing strategy based on new results on integrated RMS variability and a decade of radio/X-ray monitoring, we will search for winds during and after the state transition to test their influence on and track their coevolution with the disk and the jet over the next 2-3 months. Our spectral and timing constraints will provide precise probes of the accretion geometry and accretion/ejection physics.
A projection gradient method for computing ground state of spin-2 Bose–Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hanquan, E-mail: hanquan.wang@gmail.com; Yunnan Tongchang Scientific Computing and Data Mining Research Center, Kunming, Yunnan Province, 650221
In this paper, a projection gradient method is presented for computing the ground state of spin-2 Bose–Einstein condensates (BEC). We first propose the general projection gradient method for solving an energy functional minimization problem under multiple constraints, in which the energy functional takes real functions as independent variables. We next extend the method to solve a similar problem, where the energy functional takes complex functions as independent variables. We finally employ the method to find the ground state of spin-2 BEC. The key of our method is that, by constructing continuous gradient flows (CGFs), the ground state of spin-2 BEC can be computed as the steady state solution of such CGFs. We discretize the CGFs by a conservative finite difference method along with a proper treatment of the nonlinear terms. We show that the numerical discretization is normalization and magnetization conservative and energy diminishing. Numerical results for the ground states of spin-2 BEC and their energies are reported to demonstrate the effectiveness of the numerical method.
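The gradient-flow-plus-projection idea can be sketched on a one-component toy functional (a 1D harmonic trap) instead of the full five-component spin-2 system: take an explicit gradient ("imaginary time") step, then re-project onto the normalization constraint. Grid sizes and step lengths below are illustrative, and the explicit Euler stepping is much cruder than the paper's conservative scheme.

```python
import numpy as np

n, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
V = 0.5 * x**2                        # harmonic trap potential
psi = np.exp(-x**2)                   # initial guess
psi /= np.sqrt(np.sum(psi**2) * dx)   # project onto the normalization constraint

dt = 1e-3
for _ in range(5000):
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    psi = psi - dt * (-0.5 * lap + V * psi)   # gradient (imaginary-time) step
    psi /= np.sqrt(np.sum(psi**2) * dx)       # re-project after every step

energy = np.sum(0.5 * np.gradient(psi, dx)**2 + V * psi**2) * dx
print(energy)   # approaches the harmonic-oscillator ground-state energy, ~0.5
```

The flow damps excited-state components exponentially, so the steady state of the projected flow is the constrained energy minimizer, exactly the mechanism the abstract describes for the CGFs.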
Bodin, Julie; Garlantézec, Ronan; Costet, Nathalie; Descatha, Alexis; Fouquet, Natacha; Caroly, Sandrine; Roquelaure, Yves
2017-03-01
The aim of this study was to identify forms of work organization in a French region and to study associations with the occurrence of symptomatic and clinically diagnosed shoulder disorders in workers. Workers were randomly included in this cross-sectional study from 2002 to 2005. Sixteen organizational variables were assessed by a self-administered questionnaire: i.e. shift work, job rotation, repetitiveness of tasks, paced work/automatic rate, work pace dependent on quantified targets, permanent controls or surveillance, colleagues' work and customer demand, and eight variables measuring decision latitude. Five forms of work organization were identified using hierarchical cluster analysis (HCA) of variables and HCA of workers: low decision latitude with pace constraints, medium decision latitude with pace constraints, low decision latitude with low pace constraints, high decision latitude with pace constraints and high decision latitude with low pace constraints. There were significant associations between forms of work organization and symptomatic and clinically-diagnosed shoulder disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.
New variables for classical and quantum gravity
NASA Technical Reports Server (NTRS)
Ashtekar, Abhay
1986-01-01
A Hamiltonian formulation of general relativity based on certain spinorial variables is introduced. These variables simplify the constraints of general relativity considerably and enable one to imbed the constraint surface in the phase space of Einstein's theory into that of Yang-Mills theory. The imbedding suggests new ways of attacking a number of problems in both classical and quantum gravity. Some illustrative applications are discussed.
Bayesian Optimization Under Mixed Constraints with A Slack-Variable Augmented Lagrangian
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picheny, Victor; Gramacy, Robert B.; Wild, Stefan M.
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack-variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show that our new slack “ALBO” compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed-constraint examples.
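The slack-variable reformulation can be sketched in one dimension: the inequality g(x) ≤ 0 becomes the equality g(x) + s = 0 with s ≥ 0, and for fixed x the optimal slack has a closed form. A plain gradient inner solver stands in for the paper's Bayesian optimization sub-solver, and the toy problem and step sizes are invented for illustration.

```python
# Slack-variable augmented Lagrangian sketch on a 1-D toy problem whose
# constraint is inactive at the optimum, so the slack absorbs it.
f = lambda x: (x - 2.0) ** 2   # objective, unconstrained minimum at x = 2
g = lambda x: x - 3.0          # inequality g(x) <= 0, inactive at the optimum

x, lam, rho = 0.0, 0.0, 10.0
for _ in range(30):                               # outer AL iterations
    for _ in range(200):                          # inner minimization over x
        s = max(0.0, -g(x) - lam / rho)           # closed-form optimal slack
        r = g(x) + s                              # equality residual
        grad = 2.0 * (x - 2.0) + (lam + rho * r)  # d/dx of the AL
        x -= 0.01 * grad
    s = max(0.0, -g(x) - lam / rho)
    lam += rho * (g(x) + s)                       # multiplier update
print(round(x, 4), round(s, 4), round(lam, 4))    # -> 2.0 1.0 0.0
```

Because the slack soaks up the inactive inequality, the equality residual stays at zero and the multiplier correctly converges to zero, with no special-casing of inequality versus equality constraints in the update rules.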
The artificial-free technique along the objective direction for the simplex algorithm
NASA Astrophysics Data System (ADS)
Boonperm, Aua-aree; Sinapiromsaran, Krung
2014-03-01
The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints then the simplex algorithm can be started directly; otherwise, artificial variables must be introduced to start it. If the simplex algorithm can be started without artificial variables, its iterations will require less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables with a nonzero objective coefficient is fixed in terms of another variable. The constraints can then be split into three groups: the positive-coefficient group, the negative-coefficient group and the zero-coefficient group. Along the objective direction, some constraints from the positive-coefficient group will form the optimal solution. If the positive-coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative-coefficient and zero-coefficient groups; we guarantee that the feasible region obtained from the positive-coefficient group is nonempty. The transformed problem is solved using the simplex algorithm. The constraints from the negative-coefficient and zero-coefficient groups are then added back to the solved problem, and the dual simplex method is used to determine the new optimal solution. An example shows the effectiveness of our algorithm.
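The constraint-splitting step can be sketched under a simplified reading in which each constraint row is grouped by the sign of its coefficient along the objective direction; the small system below is invented for illustration and the relax/dual-simplex machinery is omitted.

```python
import numpy as np

def split_constraints(A, c):
    """Group the rows of A (constraints a_i . x <= b_i) by the sign of
    their coefficient along the objective direction c: the positive
    group seeds the relaxed starting problem, and the other two groups
    are added back later and resolved with the dual simplex method."""
    proj = A @ c
    return (np.where(proj > 0)[0],   # positive-coefficient group
            np.where(proj < 0)[0],   # negative-coefficient group
            np.where(proj == 0)[0])  # zero-coefficient group

A = np.array([[1.0, 1.0], [-1.0, 2.0], [0.0, 3.0]])  # made-up constraint rows
c = np.array([1.0, 0.0])                             # objective direction
pos, neg, zero = split_constraints(A, c)
print(pos, neg, zero)   # -> [0] [1] [2]
```

Only the positive group constrains movement along the objective direction, which is why it alone can form a nonempty starting feasible region.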
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, P. T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H1 Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
Fractional Factorial Experiment Designs to Minimize Configuration Changes in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard; Cler, Daniel L.; Graham, Albert B.
2002-01-01
This paper serves as a tutorial to introduce the wind tunnel research community to configuration experiment designs that can satisfy resource constraints in a configuration study involving several variables, without arbitrarily eliminating any of them from the experiment initially. The special case of a configuration study featuring variables at two levels is examined in detail. This is the type of study in which each configuration variable has two natural states - 'on or off', 'deployed or not deployed', 'low or high', and so forth. The basic principles are illustrated by results obtained in configuration studies conducted in the Langley National Transonic Facility and in the ViGYAN Low Speed Tunnel in Hampton, Virginia. The crucial role of interactions among configuration variables is highlighted with an illustration of difficulties that can be encountered when they are not properly taken into account.
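A two-level half-fraction of the kind the tutorial discusses can be generated in a few lines: for three two-state configuration variables, run the full 2^2 design in factors A and B and alias the third factor to their interaction (defining relation C = AB), halving the number of configurations tested. The three-factor example is generic, not one of the wind tunnel studies cited.

```python
from itertools import product

# Full 2^2 factorial in factors A and B (levels coded -1 / +1),
# with the third factor aliased to the AB interaction: C = A*B.
full = list(product([-1, +1], repeat=2))
half_fraction = [(a, b, a * b) for a, b in full]   # 4 runs instead of 8
for run in half_fraction:
    print(run)
```

The price of the saving is that the main effect of C is confounded with the AB interaction, which is exactly why the paper stresses accounting for interactions among configuration variables.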
2013-09-30
productivity. Advanced variational methods for the assimilation of satellite and in situ observations to achieve improved state estimation and subsequent...South China Sea (SCS) using the Regional Ocean Modeling System (ROMS) with Incremental Strong Constraint 4-Dimensional Variational (IS4DVAR) data
2014-07-01
of models for variable conditions: – Use implicit models to eliminate constraint of sequence of fast time scales: c, ve, – Price to pay: lack...collisions: – Elastic – Braginskii terms – Inelastic – warning! Rates depend on both T and relative velocity – Multi-fluid CR model from...merge/split for particle management, efficient sampling, inelastic collisions … – Level grouping schemes of electronic states, for dynamical coarse
Exploring the Impact of Early Decisions in Variable Ordering for Constraint Satisfaction Problems.
Ortiz-Bayliss, José Carlos; Amaya, Ivan; Conant-Pablos, Santiago Enrique; Terashima-Marín, Hugo
2018-01-01
When solving constraint satisfaction problems (CSPs), it is a common practice to rely on heuristics to decide which variable should be instantiated at each stage of the search. But this ordering influences the search cost. Even so, and to the best of our knowledge, no earlier work has dealt with how first variable orderings affect the overall cost. In this paper, we explore the cost of finding high-quality orderings of variables within constraint satisfaction problems. We also study differences among the orderings produced by some commonly used heuristics and the way bad first decisions affect the search cost. One of the most important findings of this work confirms the paramount importance of first decisions. Another is the evidence that many of the existing variable ordering heuristics fail to appropriately select the first variable to instantiate. We propose a simple method to improve the early decisions of heuristics. By using it, the performance of heuristics increases.
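The effect of ordering decisions on search cost can be illustrated with a toy backtracking solver that counts visited nodes under a static order versus the minimum-remaining-values (MRV) heuristic. The not-equal network below is an invented instance, not one of the paper's benchmarks.

```python
def solve(domains, constraints, order_fn, stats):
    """Backtracking search; order_fn picks the next variable and
    stats['nodes'] counts every attempted assignment."""
    assignment = {}

    def backtrack():
        if len(assignment) == len(domains):
            return dict(assignment)
        var = order_fn(domains, assignment)
        for val in domains[var]:
            stats["nodes"] += 1
            assignment[var] = val
            if all(ok(assignment) for ok in constraints):
                result = backtrack()
                if result:
                    return result
            del assignment[var]
        return None

    return backtrack()

def static_order(domains, assignment):       # first unassigned, fixed order
    return next(v for v in domains if v not in assignment)

def mrv_order(domains, assignment):          # fewest remaining values first
    free = [v for v in domains if v not in assignment]
    return min(free, key=lambda v: len(domains[v]))

# constraint "a != b", trivially satisfied until both are assigned
neq = lambda a, b: lambda asg: a not in asg or b not in asg or asg[a] != asg[b]
domains = {"x": [1, 2, 3], "y": [1], "z": [1, 2]}
constraints = [neq("x", "y"), neq("y", "z")]

s1, s2 = {"nodes": 0}, {"nodes": 0}
r1 = solve(domains, constraints, static_order, s1)
r2 = solve(domains, constraints, mrv_order, s2)
print(s1["nodes"], s2["nodes"])   # -> 6 5
```

Even on this tiny instance, choosing the most constrained variable first prunes dead branches earlier; on larger problems the gap between a good and a bad first decision grows rapidly.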
Strong subadditivity for log-determinant of covariance matrices and its applications
NASA Astrophysics Data System (ADS)
Adesso, Gerardo; Simon, R.
2016-08-01
We prove that the log-determinant of the covariance matrix obeys the strong subadditivity inequality for arbitrary tripartite states of multimode continuous variable quantum systems. This establishes general limitations on the distribution of information encoded in the second moments of canonically conjugate operators. The inequality is shown to be stronger than the conventional strong subadditivity inequality for von Neumann entropy in a class of pure tripartite Gaussian states. We finally show that such an inequality implies a strict monogamy-type constraint for joint Einstein-Podolsky-Rosen steerability of single modes by Gaussian measurements performed on multiple groups of modes.
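The log-determinant inequality can be spot-checked numerically in its classical covariance form, assigning one variable per party; this illustrates the shape of the statement only, and does not substitute for the paper's proof for genuine multimode quantum states.

```python
import numpy as np

# Numeric spot-check of strong subadditivity for log det:
#   log det V_ABC + log det V_B  <=  log det V_AB + log det V_BC
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
V = M @ M.T + 3.0 * np.eye(3)        # random positive-definite covariance
ld = lambda idx: np.linalg.slogdet(V[np.ix_(idx, idx)])[1]

A, B, C = [0], [1], [2]
lhs = ld(A + B + C) + ld(B)
rhs = ld(A + B) + ld(B + C)
print(lhs <= rhs + 1e-12)   # -> True
```

For classical Gaussians this is just strong subadditivity of differential entropy, since h is an affine function of log det; the paper's contribution is establishing and sharpening the statement for quantum continuous-variable states.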
System and method for optimal load and source scheduling in context aware homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shetty, Pradeep; Foslien Graber, Wendy; Mangsuli, Purnaprajna R.
A controller for controlling energy consumption in a home includes a constraints engine to define variables for multiple appliances in the home corresponding to various home modes and persona of an occupant of the home. A modeling engine models multiple paths of energy utilization of the multiple appliances to place the home into a desired state from a current context. An optimal scheduler receives the multiple paths of energy utilization and generates a schedule as a function of the multiple paths and a selected persona to place the home in a desired state.
New evidence favoring multilevel decomposition and optimization
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Polignone, Debra A.
1990-01-01
The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.
Optimization of Car Body under Constraints of Noise, Vibration, and Harshness (NVH), and Crash
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yang, Ren-Jye; Sobieszczanski-Sobieski, Jaroslaw (Editor)
2000-01-01
To be competitive in today's market, cars have to be as light as possible while meeting the Noise, Vibration, and Harshness (NVH) requirements and conforming to Government-mandated crash survival regulations. The latter are difficult to meet because they involve very compute-intensive, nonlinear analysis; e.g., the code RADIOSS, capable of simulating the dynamics and the geometrical and material nonlinearities of a thin-walled car structure in crash, would require over 12 days of elapsed time for a single design of a 390K elastic-degrees-of-freedom model if executed on a single processor of the state-of-the-art SGI Origin2000 computer. Of course, in optimization that crash analysis would have to be invoked many times; needless to say, that has rendered such optimization intractable until now. The car finite element model is shown. The advent of computers that comprise large numbers of concurrently operating processors has created a new environment wherein the above optimization, and other engineering problems heretofore regarded as intractable, may be solved. The procedure, shown, is a piecewise-approximation-based method and involves using a sensitivity-based Taylor series approximation model for NVH and a polynomial response surface model for crash. In that method the NVH constraints are evaluated using a finite element code (MSC/NASTRAN) that yields the constraint values and their derivatives with respect to design variables. The crash constraints are evaluated using the explicit code RADIOSS on the Origin 2000, operating on 256 processors simultaneously, to generate data for a polynomial response surface in the design variable domain. The NVH constraints and their derivatives, combined with the response surface for the crash constraints, form an approximation to the system analysis (surrogate analysis) that enables a cycle of multidisciplinary optimization within move limits. 
In the inner loop, the NVH sensitivities are recomputed to update the NVH approximation model while keeping the crash response surface constant. In every outer loop, the crash response surface approximation is updated, including a gradual increase in the order of the response surface and the extension of the response surface in the direction of the search. In this optimization task, the NVH discipline has 30 design variables while the crash discipline has 20 design variables; a subset of these design variables (10) is common to both disciplines. In order to construct a linear response surface for the crash discipline constraints, a minimum of 21 design points would have to be analyzed using the RADIOSS code. On a single processor of the Origin 2000 that amount of computing would require over 9 months! In this work, these runs were carried out concurrently on the Origin 2000 using multiple processors, ranging from 8 to 16, for each crash (RADIOSS) analysis. Another figure shows the wall time required for a single RADIOSS analysis using a varying number of processors, and compares two common data placement procedures within the allotted memories for each analysis. The initial design is infeasible, with NVH-discipline static torsion constraint violations of over 10%. The final optimized design is feasible, with a weight reduction of 15 kg compared to the initial design. This work demonstrates how advanced optimization methodology combined with the technology of concurrent processing enables applications that until now were out of reach because of very long time-to-solution.
Wrinkle-free design of thin membrane structures using stress-based topology optimization
NASA Astrophysics Data System (ADS)
Luo, Yangjun; Xing, Jian; Niu, Yanzhuang; Li, Ming; Kang, Zhan
2017-05-01
Thin membrane structures experience wrinkling due to local buckling deformation when compressive stresses are induced in some regions. Using the stress criterion for membranes in wrinkled and taut states, this paper proposes a new stress-based topology optimization methodology to seek the optimal wrinkle-free design of macro-scale thin membrane structures under stretching. Based on the continuum model and the linearly elastic assumption in the taut state, the optimization problem is defined as maximizing the structural stiffness under membrane area and principal stress constraints. In order to make the problem computationally tractable, the stress constraints are reformulated into equivalent ones and relaxed by a cosine-type relaxation scheme. The reformulated optimization problem is solved by a standard gradient-based algorithm with adjoint-variable sensitivity analysis. Several examples with post-buckling simulations and experimental tests are given to demonstrate the effectiveness of the proposed optimization model for eliminating stress-related wrinkles in the novel design of thin membrane structures.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
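The polynomial root-finding idea can be sketched for one exponential synaptic current with the rational ratio tau_m = 2*tau_s: substituting u = exp(-t/tau_m) turns the closed-form membrane solution into a quadratic in u, so the next threshold crossing is a polynomial root. The parameter values are illustrative, not taken from the note.

```python
import numpy as np

# For  V' = -V/tau_m + I0*exp(-t/tau_s)  with tau_m = 2, tau_s = 1 and
# u = exp(-t/tau_m), the closed-form solution is
#   V(t) = -2*I0*u**2 + (V0 + 2*I0)*u .
def next_spike_time(v0, i0, theta, tau_m=2.0):
    # threshold crossing: -2*I0*u^2 + (V0 + 2*I0)*u - theta = 0
    roots = np.roots([-2.0 * i0, v0 + 2.0 * i0, -theta])
    real = roots[np.isreal(roots)].real
    valid = real[(real > 0.0) & (real <= 1.0)]   # u must map to some t >= 0
    if valid.size == 0:
        return None                              # no spike will be emitted
    return -tau_m * np.log(valid.max())          # largest u = earliest crossing

t = next_spike_time(v0=0.0, i0=1.0, theta=0.375)
print(t)   # earliest crossing at u = 0.75, i.e. t = -2*ln(0.75)
```

Because the spike time is an exact root rather than the result of time stepping, an event-driven simulator can jump directly from one spike to the next, which is the strategy the note generalizes to many synaptic time constants.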
State estimation with incomplete nonlinear constraint
NASA Astrophysics Data System (ADS)
Huang, Yuan; Wang, Xueying; An, Wei
2017-10-01
A problem of state estimation with a new type of constraint, termed an incomplete nonlinear constraint, is considered. Targets often move on curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the position of the sensor (e.g., a radar) is known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory, transforming the fitting problem into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, the typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte-Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
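The trajectory-fitting step can be sketched with the simpler algebraic (Kasa) circle fit, a stand-in for the paper's ellipse fitting that follows the same linear least-squares pattern: estimate the curve from track points, then use it as the state constraint. The road geometry below is made up for illustration.

```python
import numpy as np

def fit_circle(x, y):
    """Kasa circle fit: x^2 + y^2 + D*x + E*y + F = 0 is linear in
    (D, E, F), so center and radius come from ordinary least squares."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

theta = np.linspace(0.0, np.pi / 2, 50)   # a quarter-turn of "road"
x = 3.0 + 10.0 * np.cos(theta)            # true center (3, -4), radius 10
y = -4.0 + 10.0 * np.sin(theta)
cx, cy, r = fit_circle(x, y)
print(round(cx, 2), round(cy, 2), round(r, 2))   # -> 3.0 -4.0 10.0
```

Once the curve parameters are estimated, the fitted radius defines the nonlinear equality constraint that a constrained filter (e.g., projection-based) can then enforce on the target state.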
A survey of methods of feasible directions for the solution of optimal control problems
NASA Technical Reports Server (NTRS)
Polak, E.
1972-01-01
Three methods of feasible directions for optimal control are reviewed: an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems considered are: (1) fixed-time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed-time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free-time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed-time problems with inequality state-space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.
Advanced multivariable control of a turboexpander plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altena, D.; Howard, M.; Bullin, K.
1998-12-31
This paper describes an application of advanced multivariable control on a natural gas plant and compares its performance to the previous conventional feedback control. The control algorithm utilizes simple models built from existing plant data and/or plant tests to hold the process at the desired operating point in the presence of disturbances and changes in operating conditions. The control software is able to accomplish this through effective handling of process variable interaction, constraint avoidance, and feed-forward of measured disturbances. The economic benefit of improved control lies in operating closer to the process constraints while avoiding significant violations. The South Texas facility where this controller was implemented experienced reduced variability in process conditions, which increased liquids recovery because the plant was able to operate much closer to the customer-specified impurity constraint. An additional benefit of this implementation of multivariable control is the ability to set performance criteria beyond simple setpoints, including process variable constraints, relative variable merit, and optimized use of manipulated variables. The paper also details the control scheme applied to the complex turboexpander process and some of the safety features included to improve reliability.
Structure Constraints in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Pang, Wan-Lin; Golden, Keith
2004-01-01
In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.
Variability, constraints, and creativity. Shedding light on Claude Monet.
Stokes, P D
2001-04-01
Recent experimental research suggests 2 things. The first is that along with learning how to do something, people also learn how variably or differently to continue doing it. The second is that high variability is maintained by constraining, precluding a currently successful, often repetitive solution to a problem. In this view, Claude Monet's habitually high level of variability in painting was acquired during his childhood and early apprenticeship and was maintained throughout his adult career by a continuous series of task constraints imposed by the artist on his own work. For Monet, variability was rewarded and rewarding.
Distance and slope constraints: adaptation and variability in golf putting.
Dias, Gonçalo; Couceiro, Micael S; Barreiros, João; Clemente, Filipe M; Mendes, Rui; Martins, Fernando M
2014-07-01
The main objective of this study is to understand the adaptation to external constraints and the effects of variability in a golf putting task. We describe the adaptation of relevant variables of golf putting to the distance to the hole and to the addition of a slope. The sample consisted of 10 adult male volunteers (33.80 ± 11.89 years), right-handed and highly skilled golfers with an average handicap of 10.82. Each player performed 30 putts at distances of 2, 3 and 4 meters (90 trials in Condition 1). The participants also performed 90 trials, at the same distances, with a constraint imposed by a slope (Condition 2). The results indicate that the players change some parameters to adjust to the task constraints, namely the duration of the backswing phase, the speed of the club head and the acceleration at the moment of impact with the ball. The effects of different golf putting distances in the no-slope condition on different kinematic variables suggest a linear adjustment to distance variation that was not observed in the slope condition.
A multiple-alignment based primer design algorithm for genetically highly variable DNA targets
2013-01-01
Background: Primer design for highly variable DNA sequences is difficult, and experimental success requires attention to many interacting constraints. The advent of next-generation sequencing methods allows the investigation of rare variants otherwise hidden deep in large populations, but requires attention to population diversity and primer localization in relatively conserved regions, in addition to recognized constraints typically considered in primer design. Results: Design constraints include degenerate sites to maximize population coverage, matching of melting temperatures, optimizing de novo sequence length, finding optimal bio-barcodes to allow efficient downstream analyses, and minimizing risk of dimerization. To facilitate primer design addressing these and other constraints, we created a novel computer program (PrimerDesign) that automates this complex procedure. We show its powers and limitations and give examples of successful designs for the analysis of HIV-1 populations. Conclusions: PrimerDesign is useful for researchers who want to design DNA primers and probes for analyzing highly variable DNA populations. It can be used to design primers for PCR, RT-PCR, Sanger sequencing, next-generation sequencing, and other experimental protocols targeting highly variable DNA samples. PMID:23965160
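Two of the listed constraints, melting-temperature matching and population coverage of degenerate sites, can be illustrated with a toy calculation. The IUPAC degeneracy codes and the Wallace rule are standard; the example sequences are invented and the internals of PrimerDesign are not shown in the abstract:

```python
# IUPAC degenerate nucleotide codes: each letter matches a set of bases.
IUPAC = {
    "A": {"A"}, "C": {"C"}, "G": {"G"}, "T": {"T"},
    "R": {"A", "G"}, "Y": {"C", "T"}, "S": {"C", "G"}, "W": {"A", "T"},
    "K": {"G", "T"}, "M": {"A", "C"}, "N": {"A", "C", "G", "T"},
}

def wallace_tm(primer):
    """Rough melting temperature (Wallace rule): 2(A+T) + 4(G+C), in C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def population_coverage(degenerate_primer, variants):
    """Fraction of observed variant sequences matched at every position."""
    hits = sum(
        all(base in IUPAC[d] for d, base in zip(degenerate_primer, v))
        for v in variants
    )
    return hits / len(variants)

# A degenerate primer can cover all observed variants of a polymorphic site.
variants = ["ACGTT", "ACATT", "ACGTA"]
print(wallace_tm("ACGTT"))                       # 14
print(population_coverage("ACRTW", variants))    # 1.0 (all variants matched)
print(population_coverage("ACGTT", variants))    # only 1 of 3 matched
```

In a real design loop these two scores would be traded off jointly, since adding degenerate positions raises coverage but perturbs the melting temperature and dimerization risk.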
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low cost access to space inspired many recent developments in the methodology of trajectory optimization. Ref.1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref.2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref.3, where an initial analytical vacuum solution is taken and gradually atmospheric effects are introduced. Ref.4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve in the direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different objectives of optimization and also have different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for global iteration. After all the sub optimizations are completed, higher-level optimization iteration with all the coordination and main variables is done. This is followed by further sub system optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local sub system optimizations and which also removes the problem of non-linear sensitivity inherent in the single step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in a framework of an augmented Lagrangian.
Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref.5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints, and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile Using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of the constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. Then we obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model and it is shown that singular terms appearing in the improved Langevin equation cancel out the δ(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation covariant and vielbein-rotation invariant formalism.
Econometrics of exhaustible resource supply: a theory and an application. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epple, D.; Hansen, L.P.
1981-12-01
An econometric model of US oil and natural gas discoveries is developed in this study. The econometric model is explicitly derived as the solution to the problem of maximizing the expected discounted after tax present value of revenues net of exploration, development, and production costs. The model contains equations representing producers' formation of price expectations and separate equations giving producers' optimal exploration decisions contingent on expected prices. A procedure is developed for imposing resource base constraints (e.g., ultimate recovery estimates based on geological analysis) when estimating the econometric model. The model is estimated using aggregate post-war data for the United States. Production from a given addition to proved reserves is assumed to follow a negative exponential path, and additions of proved reserves from a given discovery are assumed to follow a negative exponential path. Annual discoveries of oil and natural gas are estimated as latent variables. These latent variables are the endogenous variables in the econometric model of oil and natural gas discoveries. The model is estimated without resource base constraints. The model is also estimated imposing the mean oil and natural gas ultimate recovery estimates of the US Geological Survey. Simulations through the year 2020 are reported for various future price regimes.
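The assumed negative-exponential production path can be sketched numerically; the decline rate and reserve-addition figures below are illustrative, not estimates from the study:

```python
import numpy as np

def production_path(additions, a):
    """Annual production implied by negative-exponential decline.

    A reserve addition R made in year k contributes
    R * (exp(-a*(t-k)) - exp(-a*(t-k+1))) to production in year t >= k,
    so cumulative production from R telescopes toward R as t grows.
    """
    n = len(additions)
    out = np.zeros(n)
    for k, R in enumerate(additions):
        ages = np.arange(n - k)
        out[k:] += R * (np.exp(-a * ages) - np.exp(-a * (ages + 1)))
    return out

# A single 100-unit discovery yields a declining production stream.
q = production_path([100.0, 0.0, 0.0, 0.0, 0.0], a=0.3)
```

Overlapping vintages of reserve additions simply superimpose such declining streams, which is what lets annual discoveries be inferred as latent variables from observed production.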
Comparing mechanistic and empirical approaches to modeling the thermal niche of almond
NASA Astrophysics Data System (ADS)
Parker, Lauren E.; Abatzoglou, John T.
2017-09-01
Delineating locations that are thermally viable for cultivating high-value crops can help to guide land use planning, agronomics, and water management. Three modeling approaches were used to identify the potential distribution and key thermal constraints on almond cultivation across the southwestern United States (US), including two empirical species distribution models (SDMs)—one using commonly used bioclimatic variables (traditional SDM) and the other using more physiologically relevant climate variables (nontraditional SDM)—and a mechanistic model (MM) developed using published thermal limitations from field studies. While models showed comparable results over the majority of the domain, including over existing croplands with high almond density, the MM suggested the greatest potential for the geographic expansion of almond cultivation, with frost susceptibility and insufficient heat accumulation being the primary thermal constraints in the southwestern US. The traditional SDM over-predicted almond suitability in locations shown by the MM to be limited by frost, whereas the nontraditional SDM showed greater agreement with the MM in these locations, indicating that incorporating physiologically relevant variables in SDMs can improve predictions. Finally, opportunities for geographic expansion of almond cultivation under current climatic conditions in the region may be limited, suggesting that increasing production may rely on agronomical advances and densifying current almond plantations in existing locations.
An Expert System for Managing Storage Space Constraints Aboard United States Naval Vessels
1991-12-01
This thesis concludes that the use of an expert system would provide valuable assistance to the afloat Supply Officer and recommends further research to establish the ... (the remainder of the scanned abstract is illegible OCR; recoverable table-of-contents headings include "Applicable Forecasting Models", "Applicable Operations Research Models", and "An Expert System: Variables to Consider").
Helmet-based physiological signal monitoring system.
Kim, Youn Sung; Baek, Hyun Jae; Kim, Jung Soo; Lee, Haet Bit; Choi, Jong Min; Park, Kwang Suk
2009-02-01
A helmet-based system that was able to monitor the drowsiness of a soldier was developed. The helmet system monitored the electrocardiogram, electrooculogram and electroencephalogram (alpha waves) without constraints. Six dry electrodes were mounted at five locations on the helmet: both temporal sides, forehead region and upper and lower jaw strips. The electrodes were connected to an amplifier that transferred signals to a laptop computer via Bluetooth wireless communication. The system was validated by comparing the signal quality with conventional recording methods. Data were acquired from three healthy male volunteers for 12 min twice a day whilst they were sitting in a chair wearing the sensor-installed helmet. Experimental results showed that physiological signals for the helmet user were measured with acceptable quality without any intrusions on physical activities. The helmet system discriminated between the alert and drowsy states by detecting blinking and heart rate variability (HRV) parameters extracted from the ECG. Blinking duration and eye reopening time were increased during the drowsy state compared to the alert state. Also, positive peak values in the drowsy state were much higher, and the negative peaks were much lower, than those of the alert state. The LF/HF ratio also decreased during drowsiness. This study shows the feasibility of using this helmet system: the subjects' health status and mental states could be monitored without constraints whilst they were working.
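The LF/HF ratio reported as decreasing during drowsiness is a standard spectral HRV index. A minimal reconstruction (the helmet system's actual signal chain is not described in detail, so the resampling rate and band edges below are the conventional choices, not the authors') resamples the RR-interval tachogram evenly and compares low- and high-frequency band powers:

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """LF/HF ratio from a sequence of RR intervals (milliseconds).

    Resamples the irregularly spaced tachogram at fs Hz, then integrates
    the FFT periodogram over LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz).
    """
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                  # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)
    tach = np.interp(grid, t, rr)               # evenly sampled tachogram
    tach -= tach.mean()                         # remove the DC component
    freqs = np.fft.rfftfreq(len(grid), d=1.0 / fs)
    power = np.abs(np.fft.rfft(tach)) ** 2
    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

With a synthetic RR series modulated at 0.1 Hz the ratio comes out well above 1, while 0.3 Hz (respiratory-band) modulation drives it below 1, matching the intended interpretation of the index.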
NASA Technical Reports Server (NTRS)
Newman, C. M.
1976-01-01
The constraints and limitations for STS Consumables Management are studied. Variables imposing constraints on the consumables-related subsystems are identified, and a method for determining constraint violations with the simplified consumables model in the Mission Planning Processor is presented.
A finite element solution algorithm for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Baker, A. J.
1974-01-01
A finite element solution algorithm is established for the two-dimensional Navier-Stokes equations governing the steady-state kinematics and thermodynamics of a variable viscosity, compressible multiple-species fluid. For an incompressible fluid, the motion may be transient as well. The primitive dependent variables are replaced by a vorticity-streamfunction description valid in domains spanned by rectangular, cylindrical and spherical coordinate systems. Use of derived variables provides a uniformly elliptic partial differential equation description for the Navier-Stokes system, and for which the finite element algorithm is established. Explicit non-linearity is accepted by the theory, since no pseudo-variational principles are employed, and there is no requirement for either computational mesh or solution domain closure regularity. Boundary condition constraints on the normal flux and tangential distribution of all computational variables, as well as velocity, are routinely piecewise enforceable on domain closure segments arbitrarily oriented with respect to a global reference frame.
Finite-time stabilisation of a class of switched nonlinear systems with state constraints
NASA Astrophysics Data System (ADS)
Huang, Shipei; Xiang, Zhengrong
2018-06-01
This paper investigates the finite-time stabilisation for a class of switched nonlinear systems with state constraints. Some power orders of the system are allowed to be ratios of positive even integers over odd integers. A Barrier Lyapunov function is introduced to guarantee that the state constraint is not violated at any time. Using the convex combination method and a recursive design approach, a state-dependent switching law and state feedback controllers of individual subsystems are constructed such that the closed-loop system is finite-time stable without violation of the state constraint. Two examples are provided to show the effectiveness of the proposed method.
Stationary properties of maximum-entropy random walks.
Dixit, Purushottam D
2015-10-01
Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
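For the special case with no extra constraints, the maximum-path-entropy transition probabilities have a closed form in terms of the adjacency matrix's dominant eigenpair; this is the standard MERW construction, shown here on a small path graph as an illustration (the paper's state- and path-dependent constraints are not included):

```python
import numpy as np

def merw(adj):
    """Maximum-entropy random walk on an undirected graph.

    With no extra constraints, the max-path-entropy transition matrix is
    P[i, j] = A[i, j] * psi[j] / (lam * psi[i]), where (lam, psi) is the
    dominant eigenpair of the adjacency matrix A, and the stationary
    distribution is psi**2 (normalized).
    """
    vals, vecs = np.linalg.eigh(adj)
    lam = vals[-1]                   # largest eigenvalue (Perron root)
    psi = np.abs(vecs[:, -1])        # Perron vector, made positive
    P = adj * psi[None, :] / (lam * psi[:, None])
    pi = psi**2 / np.sum(psi**2)
    return P, pi

# 4-node path graph 0-1-2-3: stationary mass concentrates on the
# interior nodes, unlike the degree-proportional ordinary random walk.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
P, pi = merw(A)
```

The contrast with the Boltzmann distribution noted in the abstract shows up already here: `pi` reflects path multiplicity (graph topology), not just a per-state energy.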
Uncovering state-dependent relationships in shallow lakes using Bayesian latent variable regression.
Vitense, Kelsey; Hanson, Mark A; Herwig, Brian R; Zimmer, Kyle D; Fieberg, John
2018-03-01
Ecosystems sometimes undergo dramatic shifts between contrasting regimes. Shallow lakes, for instance, can transition between two alternative stable states: a clear state dominated by submerged aquatic vegetation and a turbid state dominated by phytoplankton. Theoretical models suggest that critical nutrient thresholds differentiate three lake types: highly resilient clear lakes, lakes that may switch between clear and turbid states following perturbations, and highly resilient turbid lakes. For effective and efficient management of shallow lakes and other systems, managers need tools to identify critical thresholds and state-dependent relationships between driving variables and key system features. Using shallow lakes as a model system for which alternative stable states have been demonstrated, we developed an integrated framework using Bayesian latent variable regression (BLR) to classify lake states, identify critical total phosphorus (TP) thresholds, and estimate steady state relationships between TP and chlorophyll a (chl a) using cross-sectional data. We evaluated the method using data simulated from a stochastic differential equation model and compared its performance to k-means clustering with regression (KMR). We also applied the framework to data comprising 130 shallow lakes. For simulated data sets, BLR had high state classification rates (median/mean accuracy >97%) and accurately estimated TP thresholds and state-dependent TP-chl a relationships. Classification and estimation improved with increasing sample size and decreasing noise levels. Compared to KMR, BLR had higher classification rates and better approximated the TP-chl a steady state relationships and TP thresholds. We fit the BLR model to three different years of empirical shallow lake data, and managers can use the estimated bifurcation diagrams to prioritize lakes for management according to their proximity to thresholds and chance of successful rehabilitation. 
Our model improves upon previous methods for shallow lakes because it allows classification and regression to occur simultaneously and inform one another, directly estimates TP thresholds and the uncertainty associated with thresholds and state classifications, and enables meaningful constraints to be built into models. The BLR framework is broadly applicable to other ecosystems known to exhibit alternative stable states in which regression can be used to establish relationships between driving variables and state variables. © 2017 by the Ecological Society of America.
Optimal reactive planning with security constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, W.R.; Cheng, D.T.Y.; Dixon, A.M.
1995-12-31
The National Grid Company (NGC) of England and Wales has developed a computer program, SCORPION, to help system planners optimize the location and size of new reactive compensation plant on the transmission system. The reactive power requirements of the NGC system have risen as a result of increased power flows and the shorter timescale on which power stations are commissioned and withdrawn from service. In view of the high costs involved, it is important that reactive compensation be installed as economically as possible, without compromising security. Traditional methods based on iterative use of a load flow program are labor intensive and subjective. SCORPION determines a near-optimal pattern of new reactive sources which are required to satisfy voltage constraints for normal and contingent states of operation of the transmission system. The algorithm processes the system states sequentially, instead of optimizing all of them simultaneously. This allows a large number of system states to be considered with an acceptable run time and computer memory requirement. Installed reactive sources are treated as continuous, rather than discrete, variables. However, the program has a restart facility which enables the user to add realistically sized reactive sources explicitly and thereby work towards a realizable solution to the planning problem.
Stability and Variability in Aesthetic Experience: A Review
Jacobsen, Thomas; Beudt, Susan
2017-01-01
Based on psychophysics’ pragmatic dualism, we trace the cognitive neuroscience of stability and variability in aesthetic experience. With regard to different domains of aesthetic processing, we touch upon the relevance of cognitive schemata for aesthetic preference. Attitudes and preferences are explored in detail. Evolutionary constraints on attitude formation or schema generation are elucidated, just as the often seemingly arbitrary influences of social, societal, and cultural nature are. A particular focus is put on the concept of critical periods during an individual’s ontogenesis. The latter contrasting with changes of high frequency, such as fashion influences. Taken together, these analyses document the state of the art in the field and, potentially, highlight avenues for future research. PMID:28223955
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function- and subroutine-calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
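The multi-directional reading of an equation that constraint propagation provides, as opposed to a one-way assignment, can be sketched with a tiny network. The `Cell`/`Product` names and the lift-equation example are illustrative; Rubber Airplane's actual data structures are not shown in the abstract:

```python
class Cell:
    """A variable that notifies its constraints when it receives a value."""
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for c in self.constraints:
                c.propagate()

class Product:
    """Multi-directional constraint a = b * c: any two values fix the third."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.constraints.append(self)

    def propagate(self):
        a, b, c = self.a, self.b, self.c
        if a.value is None and None not in (b.value, c.value):
            a.set(b.value * c.value)
        elif b.value is None and None not in (a.value, c.value):
            b.set(a.value / c.value)
        elif c.value is None and None not in (a.value, b.value):
            c.set(a.value / b.value)

# Declaring L = q * S once lets any two known values determine the third.
L, q, S = Cell("lift"), Cell("dyn_pressure"), Cell("wing_area")
Product(L, q, S)
L.set(12000.0)
S.set(30.0)
print(q.value)   # 400.0
```

A conventional program would need three separate assignment statements for the three directions; here one declarative relationship serves all of them, which is the flexibility the abstract argues conceptual design requires.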
Constraints, Trade-offs and the Currency of Fitness.
Acerenza, Luis
2016-03-01
Understanding evolutionary trajectories remains a difficult task. This is because natural evolutionary processes are simultaneously affected by various types of constraints acting at the different levels of biological organization. Of particular importance are constraints where correlated changes occur in opposite directions, called trade-offs. Here we review and classify the main evolutionary constraints and trade-offs, operating at all levels of trait hierarchy. Special attention is given to life history trade-offs and the conflict between the survival and reproduction components of fitness. Cellular mechanisms underlying fitness trade-offs are described. At the metabolic level, a linear trade-off between growth and flux variability was found, employing bacterial genome-scale metabolic reconstructions. Its analysis indicates that flux variability can be considered as the currency of fitness. This currency is used for fitness transfer between fitness components during adaptations. Finally, a discussion is made regarding the constraints which limit the increase in the amount of fitness currency during evolution, suggesting that occupancy constraints are probably the main restrictions.
Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF)
2017-02-01
ARL-TR-7948 • FEB 2017. US Army Research Laboratory. Generalized Optimal-State-Constraint Extended Kalman Filter (OSC-EKF), by James M Maley, Kevin ..., Weapons and ...
Students' daily emotions in the classroom: intra-individual variability and appraisal correlates.
Ahmed, Wondimu; van der Werf, Greetje; Minnaert, Alexander; Kuyper, Hans
2010-12-01
Recent literature on emotions in education has shown that competence- and value-related beliefs are important sources of students' emotions; nevertheless, the role of these antecedents in students' daily functioning in the classroom is not yet well known. More importantly, to date we know little about intra-individual variability in students' daily emotions. The objectives of the study were (1) to examine within-student variability in emotional experiences and (2) to investigate how competence and value appraisals are associated with emotions. It was hypothesized that emotions would show substantial within-student variability and that there would be within-person associations between competence and value appraisals and the emotions. The sample consisted of 120 grade 7 students (52% girls) in 5 randomly selected classrooms in a secondary school. A diary method was used to acquire daily process variables of emotions and appraisals. Daily emotions and daily appraisals were assessed using items adapted from existing measures. Multi-level modelling was used to test the hypotheses. As predicted, the within-person variability in emotional states accounted for between 41% (for pride) and 70% (for anxiety) of total variability in the emotional states. Also as hypothesized, the appraisals were generally associated with the emotions. The within-student variability in emotions and appraisals clearly demonstrates the adaptability of students with respect to situational affordances and constraints in their everyday classroom experiences. The significant covariations between the appraisals and emotions suggest that within-student variability in emotions is systematic.
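The within-student share of variability reported (41% to 70%) comes from a multilevel variance decomposition. A simplified balanced-data version (not the authors' actual multi-level model, which handles unbalanced diary data) splits total variance into within- and between-student components:

```python
import numpy as np

def within_person_share(scores):
    """Share of total variance that is within-person.

    Rows are students, columns are repeated daily measurements; with
    balanced data, total variance = mean within-row variance + variance
    of row means (law of total variance), so this is roughly 1 - ICC.
    """
    scores = np.asarray(scores, dtype=float)
    within = np.var(scores, axis=1).mean()   # average within-student variance
    total = np.var(scores)                   # variance over all observations
    return within / total

# Hypothetical daily emotion ratings for three students over four days.
daily = np.array([[2, 3, 4, 3],
                  [5, 5, 6, 6],
                  [1, 2, 1, 2]])
share = within_person_share(daily)   # between 0 and 1
```

A share near 1 means day-to-day fluctuation dominates stable student differences, which is the pattern the study reports for emotions like anxiety.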
NASA Astrophysics Data System (ADS)
Simon, E.; Bertino, L.; Samuelsen, A.
2011-12-01
Combined state-parameter estimation in ocean biogeochemical models with ensemble-based Kalman filters is a challenging task due to the non-linearity of the models, the constraints of positiveness that apply to the variables and parameters, and the resulting non-Gaussian distributions of the variables. Furthermore, these models are sensitive to numerous parameters that are poorly known. Previous works [1] demonstrated that the Gaussian anamorphosis extensions of ensemble-based Kalman filters were relevant tools to perform combined state-parameter estimation in such a non-Gaussian framework. In this study, we focus on the estimation of the grazing preference parameters of zooplankton species. These parameters are introduced to model the diet of zooplankton species among phytoplankton species and detritus. They are positive values and their sum is equal to one. Because the sum-to-one constraint cannot be handled by ensemble-based Kalman filters, a reformulation of the parameterization is proposed. We investigate two types of changes of variables for the estimation of sum-to-one constrained parameters. The first one is based on Gelman [2] and leads to the estimation of normally distributed parameters. The second one is based on the representation of the unit sphere in spherical coordinates and leads to the estimation of parameters with bounded distributions (triangular or uniform). These formulations are illustrated and discussed in the framework of twin experiments realized in the 1D coupled model GOTM-NORWECOM with Gaussian anamorphosis extensions of the deterministic ensemble Kalman filter (DEnKF). [1] Simon E., Bertino L.: Gaussian anamorphosis extension of the DEnKF for combined state and parameter estimation: application to a 1D ocean ecosystem model. Journal of Marine Systems, 2011. doi:10.1016/j.jmarsys.2011.07.007 [2] Gelman A.: Method of Moments Using Monte Carlo Simulation. Journal of Computational and Graphical Statistics, 4, 1, 36-54, 1995.
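Both kinds of change of variables can be sketched for sum-to-one grazing preferences. The softmax form below is a common unconstrained-to-simplex map used as a stand-in for estimating unbounded normal parameters (whether it matches the Gelman-based construction exactly is an assumption); the spherical form maps bounded angles to the simplex through squared coordinates on the unit sphere:

```python
import numpy as np

def gaussian_to_simplex(z):
    """Map unconstrained (e.g. normally distributed) values to positive
    weights summing to one, via a softmax-style transformation."""
    e = np.exp(z - np.max(z))    # shift for numerical stability
    return e / e.sum()

def angles_to_simplex(theta):
    """Map n-1 bounded angles to n sum-to-one weights through a point on
    the unit sphere: squared spherical coordinates sum to one."""
    p = np.ones(len(theta) + 1)
    for i, th in enumerate(theta):
        p[i] *= np.cos(th) ** 2
        p[i + 1:] *= np.sin(th) ** 2
    return p

# Three grazing preferences (e.g. two phytoplankton groups and detritus).
prefs = angles_to_simplex(np.array([0.7, 1.2]))
```

Either way, the ensemble filter updates the transformed variables freely while the physical parameters always satisfy positivity and the sum-to-one constraint exactly.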
'Constraint consistency' at all orders in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2015-08-01
We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is quick and efficient for checking the consistency of any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint' inconsistent models have higher order Ostrogradsky's instabilities, but the reverse is not true. In other words, a model can have consistent constraints on the Lapse function and Shift vector and yet still have Ostrogradsky's instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.
NASA Technical Reports Server (NTRS)
Schmid, L. A.
1977-01-01
The first and second variations are calculated for the irreducible form of Hamilton's Principle that involves the minimum number of dependent variables necessary to describe the kinematics and thermodynamics of inviscid, compressible, baroclinic flow in a specified gravitational field. The form of the second variation shows that, in the neighborhood of a stationary point that corresponds to physically stable flow, the action integral is a complex saddle surface in parameter space. There exists a form of Hamilton's Principle for which a direct solution of a flow problem is possible. This second form is related to the first by a Friedrichs transformation of the thermodynamic variables. This introduces an extra dependent variable, but the first and second variations are shown to have direct physical significance: namely, they are equal to the free energy of fluctuations about the equilibrium flow that satisfies the equations of motion. If this equilibrium flow is physically stable, and if a very weak second-order integral constraint on the correlation between the fluctuations of otherwise independent variables is satisfied, then the second variation of the action integral for this free-energy form of Hamilton's Principle is positive-definite, so the action integral is a minimum and can serve as the basis for a direct trial-and-error solution. The second-order integral constraint states that the unavailable energy must be a maximum at equilibrium, i.e., the fluctuations must be so correlated as to produce a second-order decrease in the total unavailable energy.
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded-rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents are designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-SAT constraint satisfaction problems and for unconstrained minimization of NK functions.
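The annealing of a product (independent-agent) distribution toward the optimizing joint moves can be illustrated with a toy loop. This is a minimal sketch of the general idea, not the authors' algorithm: each agent relaxes to a Boltzmann distribution over the expected cost of its own moves while the others are held at their current distributions, and a decreasing temperature schedule plays the role of the Lagrange-parameter updates.

```python
import itertools
import math

def anneal(cost, domains, temps):
    """Toy probability-collectives loop: each agent keeps an independent
    distribution over its own variable; at each temperature every agent
    relaxes to a Boltzmann distribution over the expected cost of its
    moves, which focuses the product distribution on low-cost joint moves."""
    n = len(domains)
    dists = [[1.0 / len(d)] * len(d) for d in domains]
    for T in temps:
        for i in range(n):
            exp_cost = []
            for xi in range(len(domains[i])):
                total = 0.0
                # expectation over the other agents' current distributions
                for combo in itertools.product(*(range(len(d)) for d in domains)):
                    if combo[i] != xi:
                        continue
                    p = 1.0
                    for j in range(n):
                        if j != i:
                            p *= dists[j][combo[j]]
                    total += p * cost(*(domains[j][combo[j]] for j in range(n)))
                exp_cost.append(total)
            # Boltzmann update: lower expected cost -> higher probability
            w = [math.exp(-c / T) for c in exp_cost]
            s = sum(w)
            dists[i] = [x / s for x in w]
    # report each agent's most probable move
    return [domains[i][max(range(len(domains[i])), key=dists[i].__getitem__)]
            for i in range(n)]

# hypothetical problem: minimize (x + y - 3)^2 over small integer grids
best = anneal(lambda x, y: (x + y - 3) ** 2, [[0, 1, 2], [0, 1, 2]],
              temps=[2.0, 1.0, 0.5, 0.1])
print(best)  # a zero-cost joint move, i.e. a pair summing to 3
```

The exhaustive expectation makes this exponential in the number of agents; the full framework is designed precisely so that each agent needs only samples of its own conditional expected cost.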
Simplicity constraints: A 3D toy model for loop quantum gravity
NASA Astrophysics Data System (ADS)
Charles, Christoph
2018-05-01
In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions that are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory, completely equivalent to standard 3D Euclidean quantum gravity, can be defined from it by gauge unfixing. We discuss possible future explorations around this model as it could serve as a stepping stone toward defining full-fledged covariant loop quantum gravity.
Universal Quantification in a Constraint-Based Planner
NASA Technical Reports Server (NTRS)
Golden, Keith; Frank, Jeremy; Clancy, Daniel (Technical Monitor)
2002-01-01
Constraints and universal quantification are both useful in planning, but handling universally quantified constraints presents some novel challenges. We present a general approach to proving the validity of universally quantified constraints. The approach essentially consists of checking that the constraint is not violated for all members of the universe. We show that this approach can sometimes be applied even when variable domains are infinite, and we present some useful special cases where this can be done efficiently.
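For a finite universe, the core check described above reduces to verifying that the constraint is not violated for any member. The sketch below uses hypothetical names and data, not the paper's planner:

```python
def holds_universally(constraint, universe):
    """Prove a universally quantified constraint over a finite universe by
    checking that no member violates it."""
    return all(constraint(member) for member in universe)

# hypothetical planning check: every container in the plan stays within capacity
containers = [{"load": 3, "cap": 5}, {"load": 4, "cap": 4}]
ok = holds_universally(lambda c: c["load"] <= c["cap"], containers)
print(ok)  # True
```

For infinite variable domains this enumeration is impossible; as the abstract notes, validity can sometimes still be established, e.g. by reasoning over a finite set of representative cases rather than individual members.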
Permutation Entropy Applied to Movement Behaviors of Drosophila Melanogaster
NASA Astrophysics Data System (ADS)
Liu, Yuedan; Chon, Tae-Soo; Baek, Hunki; Do, Younghae; Choi, Jin Hee; Chung, Yun Doo
Movement of different strains of Drosophila melanogaster was continuously observed using computer-interfacing techniques and was analyzed by permutation entropy (PE) after exposure to the toxic chemicals toluene (0.1 mg/m3) and formaldehyde (0.01 mg/m3). The PE values based on one-dimensional time series of (vertical) position data varied according to internal constraints (i.e., strains) and increased accordingly in response to external constraints (i.e., chemicals), reflecting the diversity of movement patterns in both normal and intoxicated states. Cross-correlation functions revealed temporal associations between the PE values and between the component movement patterns for different chemicals and strains through the period of intoxication. The entropy based on the order of position data could be a useful means of measuring complexity in behavioral changes and of monitoring the impact of stressors in the environment.
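Permutation entropy itself is simple to compute: count the ordinal patterns of length m in the series and take the Shannon entropy of their frequencies. A minimal Bandt-Pompe-style sketch (not the authors' analysis code):

```python
import math
from collections import Counter

def permutation_entropy(series, order=3, normalize=True):
    """Shannon entropy of the ordinal patterns of length `order` found in
    the sliding windows of `series` (Bandt-Pompe permutation entropy).
    With normalize=True the result lies in [0, 1]."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern: the argsort of the window values
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    if normalize:
        h /= math.log(math.factorial(order))  # max entropy: order! patterns
    return h

print(permutation_entropy([4, 7, 9, 10, 6, 11, 3], order=3))
print(permutation_entropy(list(range(10)), order=3))  # monotone series -> 0.0
```

Higher values indicate more diverse ordering of successive positions, which is how the study distinguishes movement patterns between strains and chemical exposures.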
NASA Astrophysics Data System (ADS)
Liu, Chuang; Ye, Dong; Shi, Keke; Sun, Zhaowei
2017-07-01
A novel improved mixed H2/H∞ control technique combined with pole-assignment theory is presented to achieve attitude stabilization and vibration suppression simultaneously for flexible spacecraft. The flexible spacecraft dynamics are described and transformed into the corresponding state-space form. Based on a linear matrix inequality (LMI) scheme and pole-assignment theory, the improved mixed H2/H∞ controller does not restrict the two Lyapunov variables involved in the H2 and H∞ performance criteria to be equal, which reduces conservatism compared with the traditional mixed H2/H∞ controller. Moreover, it eliminates the coupling of Lyapunov matrix variables and system matrices by introducing a slack variable that provides an additional degree of freedom. Several simulations are performed to demonstrate the effectiveness and feasibility of the proposed method.
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time, multiplicity of steady-state solutions is avoided, which is otherwise problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
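The key idea of solving steady-state equations for kinetic parameters rather than for states can be seen in a one-reaction toy case. This hypothetical example is not the paper's graph-based algorithm; it only illustrates why such solutions are linear in the kinetic parameter and non-negative whenever the steady-state concentrations are positive.

```python
def kinetic_from_steady_state(k1, a_ss, b_ss):
    """Reversible reaction A <-> B with mass-action fluxes v_f = k1*A and
    v_b = k2*B.  The steady-state condition k1*A_ss - k2*B_ss = 0 is
    linear in k2, so solving for the kinetic parameter gives a unique
    closed form, non-negative for positive concentrations -- unlike
    solving the (possibly polynomial) equations for the states."""
    return k1 * a_ss / b_ss

k2 = kinetic_from_steady_state(k1=2.0, a_ss=1.5, b_ss=3.0)
print(k2)  # 1.0
# the net flux indeed vanishes at the prescribed steady state
assert abs(2.0 * 1.5 - k2 * 3.0) < 1e-12
```

The difficulty the paper addresses arises in larger networks, where such expressions can involve differences of fluxes and therefore turn negative for some parameter choices; the graph-based algorithm removes the cyclic dependencies that produce those differences.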
Changes in climate variability with reference to land quality and agriculture in Scotland.
Brown, Iain; Castellazzi, Marie
2015-06-01
Classification and mapping of land capability represents an established format for summarising spatial information on land quality and land-use potential. By convention, this information incorporates bioclimatic constraints through the use of a long-term average. However, climate change means that land capability classification should also have a dynamic temporal component. Using an analysis based upon Land Capability for Agriculture in Scotland, it is shown that this dynamism not only involves the long-term average but also shorter term spatiotemporal patterns, particularly through changes in interannual variability. Interannual and interdecadal variations occur both in the likelihood of land being in prime condition (top three capability class divisions) and in class volatility from year to year. These changing patterns are most apparent in relation to the west-east climatic gradient which is mainly a function of precipitation regime and soil moisture. Analysis is also extended into the future using climate results for the 2050s from a weather generator which show a complex interaction between climate interannual variability and different soil types for land quality. In some locations, variability of land capability is more likely to decrease because the variable climatic constraints are relaxed and the dominant constraint becomes intrinsic soil properties. Elsewhere, climatic constraints will continue to be influential. Changing climate variability has important implications for land-use planning and agricultural management because it modifies local risk profiles in combination with the current trend towards agricultural intensification and specialisation.
Direct handling of equality constraints in multilevel optimization
NASA Technical Reports Server (NTRS)
Renaud, John E.; Gabriele, Gary A.
1990-01-01
In recent years several hierarchic multilevel optimization algorithms have been proposed and implemented in design studies. Equality constraints are often imposed between levels in these multilevel optimizations to maintain system and subsystem variable continuity. Equality constraints of this nature will be referred to as coupling equality constraints. In many implementation studies these coupling equality constraints have been handled indirectly. This indirect handling has been accomplished by using the coupling equality constraints' explicit functional relations to eliminate design variables (generally at the subsystem level), with the resulting optimization taking place in a reduced design space. In one multilevel optimization study where the coupling equality constraints were handled directly, the researchers encountered numerical difficulties that prevented their multilevel optimization from reaching the same minimum found in conventional single-level solutions. The researchers did not explain the exact nature of the numerical difficulties other than to associate them with the direct handling of the coupling equality constraints. In the present study, the coupling equality constraints are handled directly by employing the Generalized Reduced Gradient (GRG) method as the optimizer within a multilevel linear decomposition scheme based on the Sobieski hierarchic algorithm. Two engineering design examples are solved using this approach. The results show that the direct handling of coupling equality constraints in a multilevel optimization does not introduce any problems when the GRG method is employed as the internal optimizer. The optimums achieved are comparable to those achieved in single-level solutions and in multilevel studies where the equality constraints have been handled indirectly.
Pricing of swing options: A Monte Carlo simulation approach
NASA Astrophysics Data System (ADS)
Leow, Kai-Siong
We study the problem of pricing swing options, a class of multiple-early-exercise options traded in energy markets, particularly the electricity and natural gas markets. These contracts permit the option holder to periodically exercise the right to trade a variable amount of energy with a counterparty, subject to local volumetric constraints. In addition, the total amount of energy traded with the counterparty from settlement to expiration is restricted by a global volumetric constraint. Violation of this global volumetric constraint is allowed but leads to a penalty settled at expiration. The pricing problem is formulated as a stochastic optimal control problem in discrete time and state space. We present a stochastic dynamic programming algorithm based on piecewise linear concave approximation of the value functions. This algorithm yields the value of the swing option under the assumption that the option holder applies the optimal exercise policy. We present a proof of almost sure convergence of the algorithm to the optimal exercise strategy as the number of iterations approaches infinity. Finally, we provide a numerical example of pricing a natural gas swing call option.
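A stripped-down version of such a backward-induction scheme can be sketched on a small price lattice. This toy (hypothetical numbers; a hard global constraint instead of a terminal penalty; the exact value function instead of a piecewise linear concave approximation) illustrates the (stage, price state, remaining rights) recursion:

```python
def swing_option_value(lattice, probs, strike, q_max, global_cap):
    """Backward induction for a toy swing call.  At each stage the holder
    buys q in {0..q_max} units at `strike` (local volumetric constraint),
    with at most `global_cap` units over the contract (global constraint,
    enforced hard here rather than via an expiration penalty).
    `lattice[t]` lists the possible prices at stage t; `probs[s][s2]` is
    the one-step transition probability between price states."""
    T, n = len(lattice), len(lattice[0])
    # value[s][r]: option value in price state s with r exercise rights left
    value = [[0.0] * (global_cap + 1) for _ in range(n)]
    for t in range(T - 1, -1, -1):
        new = [[0.0] * (global_cap + 1) for _ in range(n)]
        for s in range(n):
            cont = [0.0] * (global_cap + 1)
            if t < T - 1:  # expected continuation value per remaining rights
                for r in range(global_cap + 1):
                    cont[r] = sum(probs[s][s2] * value[s2][r] for s2 in range(n))
            for r in range(global_cap + 1):
                # optimal exercise: trade off immediate payoff against rights
                new[s][r] = max((lattice[t][s] - strike) * q + cont[r - q]
                                for q in range(min(q_max, r) + 1))
        value = new
    return value

# two price states (low, high) with symmetric switching, three stages
v = swing_option_value([[10.0, 14.0]] * 3, [[0.5, 0.5], [0.5, 0.5]],
                       strike=12.0, q_max=1, global_cap=2)
print(v[0][2], v[1][2])  # 2.0 3.5 (hand-checked for this tiny lattice)
```

The value function here is concave in the remaining rights r, which is the structure the paper's piecewise linear concave approximation exploits at realistic problem sizes.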
Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints
NASA Astrophysics Data System (ADS)
Kmet', Tibor; Kmet'ová, Mária
2009-09-01
A feed-forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is then solved with an adaptive critic neural network. The proposed simulation method is illustrated on the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive-critic-based systematic approach holds promise for obtaining optimal control with control and state constraints.
NASA Technical Reports Server (NTRS)
Vascik, Parker D.; Jung, Jaewoo
2016-01-01
An economic impact market analysis was conducted for 16 leading sectors of commercial Unmanned Aerial System (UAS) applications predicted to be enabled by 2020 through the NASA UAS Traffic Management (UTM) program. Subject matter experts from seven industries were interviewed to validate concept of operations (ConOps) and market adoption assumptions for each sector. The market analysis was used to estimate direct economic impacts for each sector, including serviceable addressable market, capital investment, revenue recovery potential, and operations cost savings. The resultant economic picture distinguishes the agricultural, pipeline and railroad inspection, construction, and maritime sectors of the nascent commercial UAS industry as providing the highest potential economic value in the United States. Sensitivity studies characterized the variability of selected UAS sectors' economic value with respect to key regulatory or UTM ConOps requirements, such as constraints on weight, altitude, and flight over populated areas. Takeaways from the analysis inform the validation of UTM requirements, technologies, and timetables from a commercial market need and value viewpoint. This work concluded in August 2015 and reflects the state of the UAS industry and market projections at that time.
Metastability and emergent performance of dynamic interceptive actions.
Pinder, Ross A; Davids, Keith; Renshaw, Ian
2012-09-01
Adaptive patterning of human movement is context specific and dependent on interacting constraints of the performer-environment relationship. Flexibility of skilled behaviour is predicated on the capacity of performers to move between different states of movement organisation to satisfy dynamic task constraints, previously demonstrated in studies of visual perception, bimanual coordination, and an interceptive combat task. Metastability is a movement system property that helps performers to remain in a state of relative coordination with their performance environments, poised between multiple co-existing states (stable and distinct movement patterns or responses). The aim of this study was to examine whether metastability could be exploited in externally paced interceptive actions in fast ball sports, such as cricket. Here we report data on metastability in performance of multi-articular hitting actions by skilled junior cricket batters (n=5). Participants' batting actions (key movement timings and performance outcomes) were analysed in four distinct performance regions varied by ball pitching (bounce) location. Results demonstrated that, at a pre-determined distance to the ball, participants were forced into a meta-stable region of performance where rich and varied patterns of functional movement behaviours emerged. Participants adapted the organisation of responses, resulting in higher levels of variability in movement timing in this performance region, without detrimental effects on the quality of interceptive performance outcomes. Findings provide evidence for the emergence of metastability in a dynamic interceptive action in cricket batting. Flexibility and diversity of movement responses were optimised using experiential knowledge and careful manipulation of key task constraints of the specific sport context. Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Modeling Real-Time Human-Automation Collaborative Scheduling of Unmanned Vehicles
2013-06-01
…that they can only take into account those quantifiable variables, parameters, objectives, and constraints identified in the design stages that were deemed to be critical. Previous… increased training and operating costs (Haddal & Gertler, 2010) and challenges in meeting the ever-increasing demand for more UV operations (U.S. Air…
Information geometry of Gaussian channels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monras, Alex; CNR-INFM Coherentia, Napoli; CNISM Unita di Salerno
2010-06-15
We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - besides locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes in terms of two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
Dynamic Constraint Satisfaction with Reasonable Global Constraints
NASA Technical Reports Server (NTRS)
Frank, Jeremy
2003-01-01
Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.
Li, Yang; Oku, Makito; He, Guoguang; Aihara, Kazuyuki
2017-04-01
In this study, a method is proposed that eliminates spiral waves in a locally connected chaotic neural network (CNN) under some simplified conditions, using a dynamic phase space constraint (DPSC) as a control method. In this method, a control signal is constructed from the feedback internal states of the neurons to detect phase singularities based on their amplitude reduction, before modulating a threshold value to truncate the refractory internal states of the neurons and terminate the spirals. Simulations showed that with appropriate parameter settings, the network was directed from a spiral wave state into either a plane wave (PW) state or a synchronized oscillation (SO) state, where the control vanished automatically and left the original CNN model unaltered. Each type of state had a characteristic oscillation frequency, where spiral wave states had the highest, and the intra-control dynamics was dominated by low-frequency components, thereby indicating slow adjustments to the state variables. In addition, the PW-inducing and SO-inducing control processes were distinct, where the former generally had longer durations but smaller average proportions of affected neurons in the network. Furthermore, variations in the control parameter allowed partial selectivity of the control results, which were accompanied by modulation of the control processes. The results of this study broaden the applicability of DPSC to chaos control and they may also facilitate the utilization of locally connected CNNs in memory retrieval and the exploration of traveling wave dynamics in biological neural networks. Copyright © 2017 Elsevier Ltd. All rights reserved.
Adaptive Aft Signature Shaping of a Low-Boom Supersonic Aircraft Using Off-Body Pressures
NASA Technical Reports Server (NTRS)
Ordaz, Irian; Li, Wu
2012-01-01
The design and optimization of a low-boom supersonic aircraft using state-of-the-art off-body aerodynamics and sonic boom analysis has long been a challenging problem. The focus of this paper is to demonstrate an effective geometry parameterization scheme and a numerical optimization approach for the aft shaping of a low-boom supersonic aircraft using off-body pressure calculations. A gradient-based numerical optimization algorithm that models the objective and constraints as response surface equations is used to drive the aft ground signature toward a ramp shape. The design objective is the minimization of the variation between the ground signature and the target signature, subject to several geometric and signature constraints. The target signature is computed by a least-squares regression of the aft portion of the ground signature. The parameterization and deformation of the geometry are performed with a NASA in-house shaping tool. The optimization algorithm uses the shaping tool to drive the geometric deformation of a horizontal tail with a parameterization scheme that consists of seven camber design variables and an additional design variable that describes the spanwise location of the midspan section. The demonstration cases show that numerical optimization using state-of-the-art off-body aerodynamic calculations is not only feasible and repeatable but also allows the exploration of complex design spaces for which a knowledge-based design method becomes less effective.
Chang, Su-Chao; Chou, Chi-Min
2012-11-01
The objective of this study was to determine empirically the role of constraint-based and dedication-based influences as drivers of the intention to continue using online shopping websites. Constraint-based influences consist of two variables: trust and perceived switching costs. Dedication-based influences consist of three variables: satisfaction, perceived usefulness, and trust. The current results indicate that both constraint-based and dedication-based influences are important drivers of the intention to continue using online shopping websites. The data also show that trust has the strongest total effect on online shoppers' intention to continue using online shopping websites. In addition, the results indicate that the antecedents of constraint-based influences, technical bonds (e.g., perceived operational competence and perceived website interactivity) and social bonds (e.g., perceived relationship investment, community building, and intimacy), have indirect positive effects on the intention to continue using online shopping websites. Based on these findings, this research suggests that online shopping websites should simultaneously build constraint-based and dedication-based influences to enhance users' continued online shopping behaviors.
Translating MAPGEN to ASPEN for MER
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Knight, Russell L.; Lenda, Matthew; Maldague, Pierre F.
2013-01-01
This software translates MAPGEN (Europa and APGEN) domains to ASPEN, and the resulting domain can be used to perform planning for the Mars Exploration Rover (MER). In other words, this is a conversion of two distinct planning languages (both declarative and procedural) to a third (declarative) planning language in order to solve the problem of faithful translation from mixed-domain representations into the ASPEN Modeling Language. The MAPGEN planning system is an example of a hybrid procedural/declarative system where the advantages of each are leveraged to produce an effective planner/scheduler for MER tactical planning. The adaptation of the planning system (ASPEN) was investigated, and, with some translation, much of the procedural knowledge encoding is amenable to declarative knowledge encoding. The approach was to compose translators from the core languages used for adapting MAPGEN, which consist of Europa and APGEN. Europa is a constraint-based planner/scheduler where domains are encoded using a declarative model. APGEN is also constraint-based, in that it tracks constraints on resources, states, and other variables. Domains are encoded in both constraints and code snippets that execute according to a forward sweep through the plan. Europa and APGEN communicate with each other using proxy activities in APGEN that represent constraints and/or tokens in Europa. The composition of a translator from Europa to ASPEN was fairly straightforward, as ASPEN is also a declarative planning system, and the specific uses of Europa for the MER domain matched ASPEN's native encoding fairly closely. On the other hand, translating from APGEN to ASPEN was considerably more involved. On the surface, the types of activities and resources one encodes in APGEN appear to match one-to-one to the activities, state variables, and resources in ASPEN.
But when one looks into the definitions of how resources are profiled and activities are expanded, one finds code snippets that access information available during planning, at the moment in time being planned, to decide on the spot what the appropriate profile or expansion is. APGEN is actually a forward (in time) sweeping discrete event simulator, where the model is composed of code snippets that are artfully interleaved by the engine to produce a plan/schedule. To solve this problem, representative code is simulated as a declarative series of task expansions. Predominantly, three types of procedural models were translated: loops, if statements, and code blocks. Loops and if statements were handled using controlled task expansion, and code blocks were handled using constraint networks that reproduce the results a procedural order of execution would have generated. One performance advantage of MAPGEN is its use of APGEN's GUI, which is written in C++ and Motif and performs very well for large plans.
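The controlled task expansion used for procedural loops can be pictured with a minimal sketch (Python purely for illustration; the task names and fields are hypothetical, not MAPGEN or ASPEN syntax): a bounded loop in the procedural model becomes a flat series of declaratively scheduled task instances.

```python
def expand_loop(task_name, n_iterations, start_time, duration):
    """Unroll a bounded procedural loop into declarative task instances,
    each with an explicit start time instead of an execution-order side effect."""
    return [
        {"task": f"{task_name}_{i}",
         "start": start_time + i * duration,
         "duration": duration}
        for i in range(n_iterations)
    ]

# A procedural "repeat drive 3 times" becomes three scheduled task instances.
plan = expand_loop("drive", 3, 0.0, 10.0)
```

If statements translate similarly, with the branch condition becoming a constraint that enables or disables the expanded tasks.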
Constraining the compressed spectrum of the top squark and chargino along the W corridor
NASA Astrophysics Data System (ADS)
Cheng, Hsin-Chia; Li, Lingfeng; Qin, Qin
2018-03-01
Studying superpartner production together with a hard initial state radiation jet has been a useful strategy for searches of supersymmetry with a compressed spectrum at the LHC. In the case of the top squark (stop), the ratio of the missing transverse momentum from the lightest neutralinos to the initial state radiation momentum, defined as R̄_M, turns out to be an effective variable to distinguish the signal from the backgrounds. It has helped to exclude stop masses below 590 GeV along the top corridor, where m_t̃ − m_χ̃₁⁰ ≈ m_t. On the other hand, the current experimental limit is still rather weak in the W corridor, where m_t̃ − m_χ̃₁⁰ ≈ m_W + m_b. In this work, we extend this strategy to the parameter region around the W corridor by considering the one-lepton final state. In this case, the kinematic constraints are insufficient to completely determine the neutrino momentum, which is required to calculate R̄_M. However, the minimum value of R̄_M consistent with the kinematic constraints still provides a useful discriminating variable, allowing the exclusion reach of the stop mass to be extended to ~550 GeV based on the current 36 fb⁻¹ LHC data. The same method can also be applied to the chargino search with m_χ̃₁± − m_χ̃₁⁰ ≈ m_W, because the analysis does not rely on b jets. If no excess is present in the current data, a chargino mass of 300 GeV along the W corridor can be excluded, beyond the limit obtained from the multilepton search.
NASA Astrophysics Data System (ADS)
Adem, Abdullahi Rashid; Moawad, Salah M.
2018-05-01
In this paper, the steady-state equations of ideal magnetohydrodynamic incompressible flows in axisymmetric domains are investigated. These flows are governed by a second-order elliptic partial differential equation as a type of generalized Grad-Shafranov equation. The problem of finding exact equilibria to the full governing equations in the presence of incompressible mass flows is considered. Two different types of constraints on position variables are presented to construct exact solution classes for several nonlinear cases of the governing equations. Some of the obtained results are checked for their applications to magnetic confinement plasma. Besides, they cover many previous configurations and include new considerations about the nonlinearity of magnetic flux stream variables.
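For orientation, the classical static Grad-Shafranov equation, to which the flow-generalized equation studied here reduces when the mass flows vanish, has the standard form (with poloidal flux ψ, pressure p(ψ), and poloidal current function F(ψ); the paper's generalized version adds flow terms not shown):

```latex
\Delta^{*}\psi \;\equiv\;
R\,\frac{\partial}{\partial R}\!\left(\frac{1}{R}\,\frac{\partial\psi}{\partial R}\right)
+ \frac{\partial^{2}\psi}{\partial z^{2}}
\;=\; -\,\mu_{0}R^{2}\,\frac{dp}{d\psi} \;-\; F\,\frac{dF}{d\psi}
```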
Ride comfort control in large flexible aircraft. M.S. Thesis
NASA Technical Reports Server (NTRS)
Warren, M. E.
1971-01-01
The problem of ameliorating the discomfort of passengers on a large air transport subject to flight disturbances is examined. The longitudinal dynamics of the aircraft, including effects of body flexing, are developed in terms of linear, constant coefficient differential equations in state variables. A cost functional, penalizing the rigid body displacements and flexure accelerations over the surface of the aircraft is formulated as a quadratic form. The resulting control problem, to minimize the cost subject to the state equation constraints, is of a class whose solutions are well known. The feedback gains for the optimal controller are calculated digitally, and the resulting autopilot is simulated on an analog computer and its performance evaluated.
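The control problem described, a quadratic cost minimized subject to linear state equations, is the classical LQR setting. A scalar sketch (toy numbers, not the aircraft model) shows how the optimal feedback gain is obtained from the algebraic Riccati equation:

```python
import math

def lqr_scalar(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*p - (b**2 / r) * p**2 + q = 0 for its positive root p, and
    return the optimal state-feedback gain k = b * p / r (u = -k * x)."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r, p

k, p = lqr_scalar(a=-1.0, b=1.0, q=1.0, r=1.0)
# closed-loop dynamics x' = (a - b*k) * x are stable since a - b*k < 0
```

For the multivariable aircraft problem the same structure holds with matrices, and the gains are computed numerically, as the abstract describes.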
Integrated stoichiometric, thermodynamic and kinetic modelling of steady state metabolism
Fleming, R.M.T.; Thiele, I.; Provan, G.; Nasheuer, H.P.
2010-01-01
The quantitative analysis of biochemical reactions and metabolites is at the frontier of the biological sciences. The recent availability of high-throughput technology data sets in biology has paved the way for new modelling approaches at various levels of complexity, including the metabolome of a cell or an organism. Understanding the metabolism of single-cell and multi-cellular organisms will provide the knowledge for the rational design of growth conditions to produce commercially valuable reagents in biotechnology. Here, we demonstrate how equations representing steady state mass conservation, energy conservation, the second law of thermodynamics, and reversible enzyme kinetics can be formulated as a single system of linear equalities and inequalities, in addition to linear equalities on exponential variables. Even though the feasible set is non-convex, the reformulation is exact and amenable to large-scale numerical analysis, a prerequisite for computationally feasible genome scale modelling. Integrating flux, concentration and kinetic variables in a unified constraint-based formulation is aimed at increasing the quantitative predictive capacity of flux balance analysis. Incorporation of experimental and theoretical bounds on thermodynamic and kinetic variables ensures that the predicted steady state fluxes are both thermodynamically and biochemically feasible. The resulting in silico predictions are tested against fluxomic data for central metabolism in E. coli and compare favourably with in silico predictions by flux balance analysis. PMID:20230840
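The second-law constraint that couples a reaction's net flux direction to its Gibbs energy can be illustrated with a small check (generic constants and a single reaction; this is not the paper's E. coli formulation):

```python
import math

R_GAS = 8.314  # gas constant, J/(mol*K)

def delta_g(dg0, temperature, reaction_quotient):
    """Gibbs energy of reaction at given concentrations:
    dG = dG0 + R * T * ln(Q), where Q is the reaction quotient."""
    return dg0 + R_GAS * temperature * math.log(reaction_quotient)

def flux_feasible(net_flux, dg):
    """Second law of thermodynamics: a nonzero net flux must run
    in the direction of decreasing Gibbs energy."""
    if net_flux == 0.0:
        return True
    return (net_flux > 0.0) == (dg < 0.0)
```

In the paper's formulation, constraints of this kind are imposed jointly with mass balance over the whole network rather than reaction by reaction.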
NASA Astrophysics Data System (ADS)
Yeh, G. T.; Tsai, C. H.
2015-12-01
This paper presents the development of a THMC (thermal-hydrology-mechanics-chemistry) process model in variably saturated media. The governing equations for variably saturated flow and reactive chemical transport are obtained based on the mass conservation principle of species transport, supplemented with Darcy's law, the constraint of species concentration, equations of state, and the constitutive law of K-S-P (Conductivity-Degree of Saturation-Capillary Pressure). The thermal transport equation is obtained based on the conservation of energy. The geo-mechanic displacement is obtained based on the assumption of equilibrium. Conventionally, these equations have been implicitly coupled via the calculation of secondary variables from primary variables, and the coupling mechanisms have not been obvious. In this paper, the governing equations are explicitly coupled for all primary variables. The coupling is accomplished via the storage coefficients, transporting velocities, and conduction-dispersion-diffusion coefficient tensor, one set each for every primary variable. With this new system of equations, the coupling mechanisms become clear. Physical interpretations of every term in the coupled equations are discussed, and examples are employed to demonstrate the clarity and advantages of this explicit coupling approach. Keywords: Variably Saturated Flow, Thermal Transport, Geo-mechanics, Reactive Transport.
Gregg, Robert D; Lenzi, Tommaso; Hargrove, Levi J; Sensinger, Jonathon W
2014-12-01
Recent powered (or robotic) prosthetic legs independently control different joints and time periods of the gait cycle, resulting in control parameters and switching rules that can be difficult to tune by clinicians. This challenge might be addressed by a unifying control model used by recent bipedal robots, in which virtual constraints define joint patterns as functions of a monotonic variable that continuously represents the gait cycle phase. In the first application of virtual constraints to amputee locomotion, this paper derives exact and approximate control laws for a partial feedback linearization to enforce virtual constraints on a prosthetic leg. We then encode a human-inspired invariance property called effective shape into virtual constraints for the stance period. After simulating the robustness of the partial feedback linearization to clinically meaningful conditions, we experimentally implement this control strategy on a powered transfemoral leg. We report the results of three amputee subjects walking overground and at variable cadences on a treadmill, demonstrating the clinical viability of this novel control approach.
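The core idea, joint patterns defined as functions of a monotonic phase variable rather than of time, can be sketched as follows (toy gait pattern and gains; the paper enforces the constraints with partial feedback linearization, not the simple PD output controller shown here):

```python
import math

def phase_variable(progression, start, end):
    """Map a monotonic mechanical quantity (e.g., forward hip progression)
    to a normalized gait phase s in [0, 1]."""
    s = (progression - start) / (end - start)
    return min(max(s, 0.0), 1.0)

def desired_knee_angle(s, amplitude_deg=60.0):
    """Virtual constraint: desired knee angle as a function of phase, not time."""
    return amplitude_deg * math.sin(math.pi * s)

def output_torque(knee_angle, knee_rate, s, kp=50.0, kd=5.0):
    """Drive the constraint output y = actual - desired toward zero."""
    y = knee_angle - desired_knee_angle(s)
    return -kp * y - kd * knee_rate
```

Because the controller is indexed by phase rather than by a clock, it adapts naturally to variable cadence, which is the clinical motivation stated in the abstract.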
Development of sensor augmented robotic weld systems for aerospace propulsion system fabrication
NASA Technical Reports Server (NTRS)
Jones, C. S.; Gangl, K. J.
1986-01-01
In order to meet stringent performance goals for power and reusability, the Space Shuttle Main Engine was designed with many complex, difficult welded joints that provide maximum strength and minimum weight. In all, the SSME requires 370 meters of welded joints. Automation of some welds has improved welding productivity significantly over manual welding. Application has previously been limited by accessibility constraints, requirements for complex process control, low production volumes, high part variability, and stringent quality requirements. Development of robots for welding in this application requires that a unique set of constraints be addressed. This paper shows how robotic welding can enhance production of aerospace components by addressing their specific requirements. A development program at the Marshall Space Flight Center combining industrial robots with state-of-the-art sensor systems and computer simulation is providing technology for the automation of welds in Space Shuttle Main Engine production.
Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
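A minimal continuous-time optimization network in this spirit, here a generic projection network for a convex quadratic with box constraints rather than the paper's one-layer pseudoconvex model, can be integrated with a forward-Euler sweep:

```python
def clip(value, lo, hi):
    return max(lo, min(hi, value))

def projection_network(grad, lower, upper, x0, alpha=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of the projection dynamics
    dx/dt = -x + P[x - alpha * grad_f(x)], where P clips onto the box.
    Equilibria of these dynamics satisfy the constrained optimality conditions."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        proj = [clip(x[i] - alpha * g[i], lower[i], upper[i]) for i in range(len(x))]
        x = [x[i] + dt * (proj[i] - x[i]) for i in range(len(x))]
    return x

# Minimize (x0 - 2)^2 + (x1 + 1)^2 over the box [0, 1] x [0, 1];
# the constrained minimizer is (1, 0).
solution = projection_network(
    grad=lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)],
    lower=[0.0, 0.0], upper=[1.0, 1.0], x0=[0.5, 0.5])
```

The state variables converge to the constrained optimum, mirroring the convergence guarantee the abstract describes for its more general pseudoconvex setting.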
NASA Astrophysics Data System (ADS)
Rowe, H. D.; Dunbar, R. B.
2004-09-01
A basin-scale hydrologic-energy balance model that integrates modern climatological, hydrological, and hypsographic observations was developed for the modern Lake Titicaca watershed (northern Altiplano, South America) and operated under variable conditions to understand controls on post-glacial changes in lake level. The model simulates changes in five environmental variables (air temperature, cloud fraction, precipitation, relative humidity, and land surface albedo). Relatively small changes in three meteorological variables (mean annual precipitation, temperature, and/or cloud fraction) explain the large mid-Holocene lake-level decrease (˜85 m) inferred from seismic reflection profiling and supported by sediment-based paleoproxies from lake sediments. Climatic controls that shape the present-day Altiplano and the sediment-based record of Holocene lake-level change are combined to interpret model-derived lake-level simulations in terms of changes in the mean state of ENSO and its impact on moisture transport to the Altiplano.
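The basin-scale water balance underlying such a model can be caricatured with a one-bucket step (hypothetical parameters and units; the paper's model is calibrated to the Titicaca watershed hypsography and adds an energy balance):

```python
def lake_level_step(volume, precip, evap, runoff_coeff, basin_area, lake_area, dt=1.0):
    """One step of a toy basin water balance:
    dV = [(P - E) * A_lake + runoff_coeff * P * (A_basin - A_lake)] * dt,
    i.e., direct precipitation minus lake evaporation plus land runoff."""
    dv = (precip - evap) * lake_area + runoff_coeff * precip * (basin_area - lake_area)
    return volume + dv * dt
```

Even this caricature shows the sensitivity the abstract highlights: modest changes in precipitation flip the balance between rising and falling lake volume.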
Behavioral Dynamics in Swimming: The Appropriate Use of Inertial Measurement Units.
Guignard, Brice; Rouard, Annie; Chollet, Didier; Seifert, Ludovic
2017-01-01
Motor control in swimming can be analyzed using low- and high-order parameters of behavior. Low-order parameters generally refer to the superficial aspects of movement (i.e., position, velocity, acceleration), whereas high-order parameters capture the dynamics of movement coordination. To assess human aquatic behavior, both types have usually been investigated with multi-camera systems, as they offer high three-dimensional spatial accuracy. Research in ecological dynamics has shown that movement system variability can be viewed as a functional property of skilled performers, helping them adapt their movements to the surrounding constraints. Yet to determine the variability of swimming behavior, a large number of stroke cycles (i.e., inter-cyclic variability) has to be analyzed, which is impossible with camera-based systems as they simply record behaviors over restricted volumes of water. Inertial measurement units (IMUs) were designed to explore the parameters and variability of coordination dynamics. These light, transportable and easy-to-use devices offer new perspectives for swimming research because they can record low- to high-order behavioral parameters over long periods. We first review how the low-order behavioral parameters (i.e., speed, stroke length, stroke rate) of human aquatic locomotion and their variability can be assessed using IMUs. We then review the way high-order parameters are assessed and the adaptive role of movement and coordination variability in swimming. We give special focus to the circumstances in which determining the variability between stroke cycles provides insight into how behavior oscillates between stable and flexible states to functionally respond to environmental and task constraints. The last section of the review is dedicated to practical recommendations for coaches on using IMUs to monitor swimming performance. We therefore highlight the need for rigor in dealing with these sensors appropriately in water. 
We explain the fundamental and mandatory steps to follow for accurate results with IMUs, from data acquisition (e.g., waterproofing procedures) to interpretation (e.g., drift correction).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempling, S.; Elefant, C.; Cory, K.
State legislatures and state utility commissions trying to attract renewable energy projects are considering feed-in tariffs, which obligate retail utilities to purchase electricity from renewable producers under standard arrangements specifying prices, terms, and conditions. The use of feed-in tariffs simplifies the purchase process, provides revenue certainty to generators, and reduces the cost of financing generating projects. However, some argue that federal law--including the Public Utility Regulatory Policies Act of 1978 (PURPA) and the Federal Power Act of 1935 (FPA)--constrains state-level feed-in tariffs. This report seeks to reduce the legal uncertainties for states contemplating feed-in tariffs by explaining the constraints imposed by federal statutes. It describes the federal constraints, identifies transaction categories that are free of those constraints, and offers ways for state and federal policymakers to interpret or modify existing law to remove or reduce these constraints. This report proposes ways to revise these federal statutes. It creates a broad working definition of a state-level feed-in tariff. Given this definition, this report concludes there are paths to non-preempted, state-level feed-in tariffs under current federal law.
Strong monogamy of bipartite and genuine multipartite entanglement: the Gaussian case.
Adesso, Gerardo; Illuminati, Fabrizio
2007-10-12
We demonstrate the existence of general constraints on distributed quantum correlations, which impose a trade-off on bipartite and multipartite entanglement at once. For all N-mode Gaussian states under permutation invariance, we establish exactly a monogamy inequality, stronger than the traditional one, that by recursion defines a proper measure of genuine N-partite entanglement. Strong monogamy holds as well for subsystems of arbitrary size, and the emerging multipartite entanglement measure is found to be scale invariant. We unveil its operational connection with the optimal fidelity of continuous variable teleportation networks.
Schur Complement Inequalities for Covariance Matrices and Monogamy of Quantum Correlations
NASA Astrophysics Data System (ADS)
Lami, Ludovico; Hirche, Christoph; Adesso, Gerardo; Winter, Andreas
2016-11-01
We derive fundamental constraints for the Schur complement of positive matrices, which provide an operator strengthening to recently established information inequalities for quantum covariance matrices, including strong subadditivity. This allows us to prove general results on the monogamy of entanglement and steering quantifiers in continuous variable systems with an arbitrary number of modes per party. A powerful hierarchical relation for correlation measures based on the log-determinant of covariance matrices is further established for all Gaussian states, which has no counterpart among quantities based on the conventional von Neumann entropy.
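The 2x2 scalar case makes the Schur-complement characterization of positive semidefiniteness concrete (a toy instance of the block-matrix statement, not the covariance-matrix inequalities themselves):

```python
def psd_2x2(a, b, c):
    """The symmetric matrix [[a, b], [b, c]] is positive semidefinite
    iff both diagonal entries and the determinant are nonnegative."""
    return a >= 0 and c >= 0 and a * c - b * b >= 0

def schur_psd_2x2(a, b, c):
    """Schur-complement test: with c > 0, the matrix is PSD iff the
    complement a - b*b/c is nonnegative; the c = 0 edge case forces b = 0."""
    if c <= 0:
        return c == 0 and b == 0 and a >= 0
    return a - b * b / c >= 0
```

The operator inequalities in the paper generalize exactly this reduction, with matrix blocks in place of the scalars a, b, c.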
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
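The squared-variable substitution can be sketched with a simplified gradient method (finite-difference gradients and sphere renormalization stand in for the paper's quasi-Newton treatment of the sum-to-one constraint):

```python
import math

def simplex_minimize(f, n, steps=2000, lr=0.05, h=1e-6):
    """Minimize f over the probability simplex using the substitution
    x_i = y_i**2, which removes the nonnegativity constraints; the
    sum-to-one constraint sum(y_i**2) = 1 is maintained here by
    renormalizing y onto the unit sphere after each gradient step."""
    y = [1.0 / math.sqrt(n)] * n
    for _ in range(steps):
        x = [yi * yi for yi in y]
        base = f(x)
        # finite-difference gradient in x, chain rule dx_i/dy_i = 2 * y_i
        g = []
        for i in range(n):
            x_pert = list(x)
            x_pert[i] += h
            g.append((f(x_pert) - base) / h * 2.0 * y[i])
        y = [y[i] - lr * g[i] for i in range(n)]
        norm = math.sqrt(sum(yi * yi for yi in y))
        y = [yi / norm for yi in y]
    return [yi * yi for yi in y]

# Closest point on the simplex to an interior target is the target itself.
target = [0.5, 0.3, 0.2]
x_opt = simplex_minimize(lambda x: sum((xi - ti) ** 2
                                       for xi, ti in zip(x, target)), 3)
```

Note that the iterates satisfy the nonnegativity and sum-to-one constraints exactly at every step, which is the point of the substitution.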
NASA Technical Reports Server (NTRS)
Broucke, R.; Lass, H.
1975-01-01
It is shown that it is possible to make a change of variables in a Lagrangian in such a way that the number of variables is increased. The Euler-Lagrange equations in the redundant variables are obtained in the standard way (without the use of Lagrange multipliers). These equations are not independent but they are all valid and consistent. In some cases they are simpler than if the minimum number of variables are used. The redundant variables are supposed to be related to each other by several constraints (not necessarily holonomic), but these constraints are not used in the derivation of the equations of motion. The method is illustrated with the well known Kustaanheimo-Stiefel regularization. Some interesting applications to perturbation theory are also described.
Structural tailoring of engine blades (STAEBL) user's manual
NASA Technical Reports Server (NTRS)
Brown, K. W.
1985-01-01
This User's Manual contains instructions and a demonstration case for preparing input data and for running and modifying the Structural Tailoring of Engine Blades (STAEBL) computer code. STAEBL was developed to perform engine fan and compressor blade numerical optimizations. This blade optimization seeks a minimum weight or cost design that satisfies realistic blade design constraints by tuning one to twenty design variables. The STAEBL constraint analyses include blade stresses, vibratory response, flutter, and foreign object damage. Blade design variables include airfoil thickness at several locations, blade chord, and construction variables: hole size for hollow blades, and composite material layup for composite blades.
Implications of water constraints for electricity capacity expansion in the United States
NASA Astrophysics Data System (ADS)
Liu, L.; Hejazi, M. I.; Iyer, G.; Forman, B. A.
2017-12-01
U.S. electricity generation is vulnerable to water supply since water is required for cooling. Constraints on the availability of water will therefore necessitate adaptive planning by the power generation sector. Hence, it is important to integrate restrictions in water availability into electricity capacity planning in order to better understand the economic viability of alternative capacity planning options. Previous studies of the implications of water constraints for the U.S. power generation system have been limited in scale and robustness. We extend previous studies by including physical water constraints in a state-level model of the U.S. energy system embedded within a global integrated assessment model (GCAM-USA). We focus on the implications of such constraints for U.S. electricity capacity expansion, integrating both supply and demand effects under a consistent framework. Constraints on the availability of water have two general effects across the U.S. First, water availability constraints increase the cost of electricity generation, resulting in reduced electrification of end-use sectors. Second, water availability constraints result in forced retirements of water-intensive technologies such as thermoelectric coal- and gas-fired technologies before the end of their natural lifetimes. The demand for electricity is then met by an increase in investments in less water-dependent technologies such as wind and solar photovoltaic. Our results show that the regional patterns of the above effects are heterogeneous across the U.S. In general, the impacts of water constraints on electricity capacity expansion are more pronounced in the West than in the East, largely because lower precipitation leaves the Western states with less available water. Constraints on the availability of water might also have important implications for U.S. electricity trade.
For example, under severe constraints on the availability of water, some states flip from being net exporters of electricity to becoming net importers and vice versa. Our study demonstrates the impacts of water availability constraints on electricity capacity expansion in the U.S. and highlights the need to integrate such constraints into decision-making so as to better understand state-level challenges.
The importance of environmental variability and management control error to optimal harvest policies
Hunter, C.M.; Runge, M.C.
2004-01-01
State-dependent strategies (SDSs) are the most general form of harvest policy because they allow the harvest rate to depend, without constraint, on the state of the system. State-dependent strategies that provide an optimal harvest rate for any system state can be calculated, and stochasticity can be appropriately accommodated in this optimization. Stochasticity poses 2 challenges to harvest policies: (1) the population will never be at the equilibrium state; and (2) stochasticity induces uncertainty about future states. We investigated the effects of 2 types of stochasticity, environmental variability and management control error, on SDS harvest policies for a white-tailed deer (Odocoileus virginianus) model, and contrasted these with a harvest policy based on maximum sustainable yield (MSY). Increasing stochasticity resulted in more conservative SDSs; that is, higher population densities were required to support the same harvest rate, but these effects were generally small. As stochastic effects increased, SDSs performed much better than MSY. Both deterministic and stochastic SDSs maintained maximum mean annual harvest yield (AHY) and optimal equilibrium population size (Neq) in a stochastic environment, whereas an MSY policy could not. We suggest 3 rules of thumb for harvest management of long-lived vertebrates in stochastic systems: (1) an SDS is advantageous over an MSY policy, (2) using an SDS rather than an MSY is more important than whether a deterministic or stochastic SDS is used, and (3) for SDSs, rankings of the variability in management outcomes (e.g., harvest yield) resulting from parameter stochasticity can be predicted by rankings of the deterministic elasticities.
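A state-dependent strategy of the kind described can be computed by stochastic dynamic programming; the deterministic toy below (value iteration on a discretized logistic population, hypothetical parameters) shows the mechanics, while the paper's model adds environmental variability and control error as stochastic transitions:

```python
def optimal_harvest_policy(r=0.3, K=100.0, n_states=41, n_actions=21,
                           gamma=0.95, iters=300):
    """Value iteration for a state-dependent harvest rate on a discretized
    logistic population: maximize discounted yield over harvest rates 0..1."""
    states = [K * i / (n_states - 1) for i in range(n_states)]
    actions = [j / (n_actions - 1) for j in range(n_actions)]

    def nearest(N):
        return min(range(n_states), key=lambda i: abs(states[i] - N))

    # Precompute (yield, next-state index) for every state/action pair.
    trans = []
    for N in states:
        row = []
        for h in actions:
            y = h * N
            N2 = (N - y) + r * (N - y) * (1.0 - (N - y) / K)
            row.append((y, nearest(min(max(N2, 0.0), K))))
        trans.append(row)

    V = [0.0] * n_states
    policy = [0.0] * n_states
    for _ in range(iters):
        V_new = []
        for i in range(n_states):
            best_val, best_h = float("-inf"), 0.0
            for j, (y, nxt) in enumerate(trans[i]):
                val = y + gamma * V[nxt]
                if val > best_val:
                    best_val, best_h = val, actions[j]
            V_new.append(best_val)
            policy[i] = best_h
        V = V_new
    return states, policy, V

states, policy, values = optimal_harvest_policy()
```

Stochasticity enters by replacing the single precomputed successor with an expectation over several, which, as the abstract reports, shifts the optimal policy toward more conservative harvest rates.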
Loop quantum cosmology with self-dual variables
NASA Astrophysics Data System (ADS)
Wilson-Ewing, Edward
2015-12-01
Using the complex-valued self-dual connection variables, the loop quantum cosmology of a closed Friedmann space-time coupled to a massless scalar field is studied. It is shown how the reality conditions can be imposed in the quantum theory by choosing a particular inner product for the kinematical Hilbert space. While holonomies of the self-dual Ashtekar connection are not well defined in the kinematical Hilbert space, it is possible to introduce a family of generalized holonomylike operators of which some are well defined; these operators in turn are used in the definition of the Hamiltonian constraint operator where the scalar field can be used as a relational clock. The resulting quantum theory is closely related, although not identical, to standard loop quantum cosmology constructed from the Ashtekar-Barbero variables with a real Immirzi parameter. Effective Friedmann equations are derived which provide a good approximation to the full quantum dynamics for sharply peaked states whose volume remains much larger than the Planck volume, and they show that for these states quantum gravity effects resolve the big-bang and big-crunch singularities and replace them by a nonsingular bounce. Finally, the loop quantization in self-dual variables of a flat Friedmann space-time is recovered in the limit of zero spatial curvature and is identical to the standard loop quantization in terms of the real-valued Ashtekar-Barbero variables.
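For orientation, in the flat limit recovered at zero spatial curvature, the effective Friedmann equation of standard loop quantum cosmology takes the familiar holonomy-corrected form (quoted for the real-variable theory; the closed, self-dual case studied here modifies it):

```latex
H^{2} \;=\; \frac{8\pi G}{3}\,\rho\left(1 - \frac{\rho}{\rho_{c}}\right)
```

Here ρ_c is the critical density at which H vanishes, so the contraction halts and the big-bang singularity is replaced by a bounce.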
A variable capacitance based modeling and power capability predicting method for ultracapacitor
NASA Astrophysics Data System (ADS)
Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang
2018-01-01
Accurate modeling and power-capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal-voltage simulation results under different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
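The state-of-charge calculation that follows from a voltage-dependent capacitance can be sketched directly (toy parameters, a single linear segment standing in for the paper's piecewise-linear main capacitance):

```python
def capacitance(v, c0=270.0, k=190.0):
    """Differential capacitance rising linearly with voltage (farads),
    a one-segment instance of the piecewise-linear idea."""
    return c0 + k * v

def stored_charge(v, c0=270.0, k=190.0):
    """Charge is the integral of C(v) dv: Q(v) = c0*v + k*v**2/2."""
    return c0 * v + 0.5 * k * v * v

def state_of_charge(v, v_max=2.7, c0=270.0, k=190.0):
    """State of charge as the fraction of the charge stored at rated voltage."""
    return stored_charge(v, c0, k) / stored_charge(v_max, c0, k)
```

With a constant-capacitance model the SOC would be linear in voltage; the quadratic term is exactly the correction the variable-capacitance model introduces.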
Multiwavelength Observations of Markarian 421 During a TeV/X-Ray Flare
NASA Technical Reports Server (NTRS)
Bertsch, D. L.; Bruhweiler, F.; Macomb, D. J.; Cheng, K.-P.; Carter-Lewis, D. A.; Akerlof, C. W.; Aller, H. D.; Aller, M. F.; Buckley, J. H.; Cawley, M. F.
1995-01-01
A TeV flare from the BL Lac object Mrk 421 was detected in May of 1994 by the Whipple Observatory air Cherenkov experiment during which the flux above 250 GeV increased by nearly an order of magnitude over a 2-day period. Contemporaneous observations by ASCA showed the X-ray flux to be in a very high state. We present these results, combined with the first ever simultaneous or nearly simultaneous observations at GeV gamma-ray, UV, IR, mm, and radio energies for this nearest BL Lac object. While the GeV gamma-ray flux increased slightly, there is little evidence for variability comparable to that seen at TeV and X-ray energies. Other wavelengths show even less variability. This provides important constraints on the emission mechanisms at work. We present the multiwavelength spectrum of this gamma-ray blazar for both quiescent and flaring states and discuss the data in terms of current models of blazar emission.
Obstacle avoidance handling and mixed integer predictive control for space robots
NASA Astrophysics Data System (ADS)
Zong, Lijun; Luo, Jianjun; Wang, Mingming; Yuan, Jianping
2018-04-01
This paper presents a novel obstacle avoidance constraint and a mixed integer predictive control (MIPC) method for space robots that must avoid obstacles and satisfy physical limits while performing tasks. First, a novel obstacle avoidance constraint for space robots, which assumes that the manipulator links and the obstacles can be represented by convex bodies, is proposed by limiting the relative velocity between the two closest points on the manipulator and the obstacle, respectively. Furthermore, logical variables are introduced into the obstacle avoidance constraint so that its form changes automatically to satisfy different obstacle avoidance requirements over different distance intervals between the space robot and the obstacle. The obstacle avoidance constraint and other physical limits of the system, such as joint angle ranges and the amplitude bounds of joint velocities and joint torques, are then described as inequality constraints of a quadratic programming (QP) problem by using the model predictive control (MPC) method. To guarantee the feasibility of the resulting multi-constraint QP problem, the constraints are treated as soft constraints and assigned priority levels based on propositional logic theory, so that constraints with lower priorities are always violated first in order to restore the feasibility of the QP problem. Since logical variables have been introduced, the optimization problem, which includes obstacle avoidance and system physical limits as prioritized inequality constraints, is termed the MIPC method for space robots; its computational complexity and possible strategies for reducing the computational burden are analyzed.
Simulations of the space robot unfolding its manipulator and tracking the end-effector's desired trajectories with the existence of obstacles and physical limits are presented to demonstrate the effectiveness of the proposed obstacle avoidance strategy and MIPC control method of space robots.
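The prioritized soft-constraint idea can be illustrated in one dimension: conflicting inequality constraints are softened with priority-weighted quadratic penalties, so the low-priority constraint is violated first when no feasible point exists. This is a toy sketch of the prioritization concept, not the paper's mixed-integer formulation.

```python
def soft_qp_1d(x_des, cons, lr=0.005, iters=5000):
    """Minimize (x - x_des)^2 plus priority-weighted quadratic penalties
    on violated inequality constraints, by gradient descent.
    cons: list of (a, b, w) encoding a*x - b <= 0 with priority weight w."""
    x = x_des
    for _ in range(iters):
        grad = 2.0 * (x - x_des)
        for a, b, w in cons:
            viol = a * x - b
            if viol > 0.0:            # penalize only violated constraints
                grad += 2.0 * w * viol * a
        x -= lr * grad
    return x
```

Example: with the infeasible pair x >= 1 (high priority, w = 100) and x <= 0.5 (low priority, w = 1), the solution lands near 1, sacrificing the low-priority constraint.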
Hydropower Modeling Challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Brady; Andrade, Juan; Cohen, Stuart
Hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with large amounts of variable generation. As variable renewable generation sources expand, understanding the capabilities and limitations of the flexibility from hydropower resources is important for grid planning. Appropriately modeling these resources, however, is difficult because of the wide variety of constraints these plants face that other generators do not. These constraints can be broadly categorized as environmental, operational, and regulatory. This report highlights several key issues involved in incorporating these constraints when modeling hydropower operations in terms of production cost and capacity expansion. Many of these challenges involve a lack of data to adequately represent the constraints or issues of model complexity and run time. We present several potential methods for improving the accuracy of hydropower representation in these models to allow for a better understanding of hydropower's capabilities.
In search of principles for a Theory of Organisms
Longo, Giuseppe; Montévil, Maël; Sonnenschein, Carlos; Soto, Ana M
2017-01-01
Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Karpel, Mordechay
1989-01-01
Various control analysis, design, and simulation techniques for aeroelastic applications require the equations of motion to be cast in a linear time-invariant state-space form. Unsteady aerodynamic forces have to be approximated as rational functions of the Laplace variable in order to fit them into this framework. For the minimum-state method, the number of augmenting aerodynamic states equals the number of denominator roots in the rational approximation. Results are shown of applying various approximation enhancements (including optimization, frequency-dependent weighting of the tabular data, and constraint selection) with the minimum-state formulation to the active flexible wing wind-tunnel model. The results demonstrate that good models can be developed which have an order of magnitude fewer augmenting aerodynamic equations than traditional approaches. This reduction facilitates the design of lower-order control systems, analysis of control system performance, and near real-time simulation of aeroservoelastic phenomena.
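The core fitting step can be sketched for a scalar aerodynamic coefficient: with the lag root fixed, a one-lag Roger-type rational approximation in the reduced Laplace variable ik is linear in its coefficients and solvable by least squares. This is a simplified illustration (single lag term, scalar data), not the minimum-state algorithm itself.

```python
import numpy as np

def fit_rational(ks, Q, b):
    """Least-squares fit of Q(ik) ~ A0 + A1*(ik) + A2*(ik)/(ik + b)
    to tabular data Q at reduced frequencies ks, for a fixed lag root b."""
    s = 1j * np.asarray(ks)
    M = np.column_stack([np.ones_like(s), s, s / (s + b)])
    # Stack real and imaginary parts so the fitted coefficients are real.
    A = np.vstack([M.real, M.imag])
    y = np.concatenate([np.asarray(Q).real, np.asarray(Q).imag])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # [A0, A1, A2]
```

Each lag term ik/(ik + b) contributes one augmenting aerodynamic state in the resulting state-space model, which is why minimizing the number of denominator roots reduces model order.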
Adaptive and neuroadaptive control for nonnegative and compartmental dynamical systems
NASA Astrophysics Data System (ADS)
Volyanskyy, Kostyantyn Y.
Neural networks have been extensively used for adaptive system identification as well as adaptive and neuroadaptive control of highly uncertain systems. The goal of adaptive and neuroadaptive control is to achieve system performance without excessive reliance on system models. To improve robustness and the speed of adaptation of adaptive and neuroadaptive controllers several controller architectures have been proposed in the literature. In this dissertation, we develop a new neuroadaptive control architecture for nonlinear uncertain dynamical systems. The proposed framework involves a novel controller architecture with additional terms in the update laws that are constructed using a moving window of the integrated system uncertainty. These terms can be used to identify the ideal system weights of the neural network as well as effectively suppress system uncertainty. Linear and nonlinear parameterizations of the system uncertainty are considered and state and output feedback neuroadaptive controllers are developed. Furthermore, we extend the developed framework to discrete-time dynamical systems. To illustrate the efficacy of the proposed approach we apply our results to an aircraft model with wing rock dynamics, a spacecraft model with unknown moment of inertia, and an unmanned combat aerial vehicle undergoing actuator failures, and compare our results with standard neuroadaptive control methods. Nonnegative systems are essential in capturing the behavior of a wide range of dynamical systems involving dynamic states whose values are nonnegative. A sub-class of nonnegative dynamical systems are compartmental systems. These systems are derived from mass and energy balance considerations and are comprised of homogeneous interconnected microscopic subsystems or compartments which exchange variable quantities of material via intercompartmental flow laws. 
In this dissertation, we develop direct adaptive and neuroadaptive control framework for stabilization, disturbance rejection and noise suppression for nonnegative and compartmental dynamical systems with noise and exogenous system disturbances. We then use the developed framework to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of continuing hemorrhage and hemodilution. Critical care patients, whether undergoing surgery or recovering in intensive care units, require drug administration to regulate physiological variables such as blood pressure, cardiac output, heart rate, and degree of consciousness. The rate of infusion of each administered drug is critical, requiring constant monitoring and frequent adjustments. In this dissertation, we develop a neuroadaptive output feedback control framework for nonlinear uncertain nonnegative and compartmental systems with nonnegative control inputs and noisy measurements. The proposed framework is Lyapunov-based and guarantees ultimate boundedness of the error signals. In addition, the neuroadaptive controller guarantees that the physical system states remain in the nonnegative orthant of the state space. Finally, the developed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of noisy electroencephalographic (EEG) measurements. Clinical trials demonstrate excellent regulation of unconsciousness allowing for a safe and effective administration of the anesthetic agent propofol. Furthermore, a neuroadaptive output feedback control architecture for nonlinear nonnegative dynamical systems with input amplitude and integral constraints is developed. 
Specifically, the neuroadaptive controller guarantees that the imposed amplitude and integral input constraints are satisfied and the physical system states remain in the nonnegative orthant of the state space. The proposed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for noncardiac surgery in the face of infusion rate constraints and a drug dosing constraint over a specified period. In addition, the aforementioned control architecture is used to control lung volume and minute ventilation with input pressure constraints that also accounts for spontaneous breathing by the patient. Specifically, we develop a pressure- and work-limited neuroadaptive controller for mechanical ventilation based on a nonlinear multi-compartmental lung model. The control framework does not rely on any averaged data and is designed to automatically adjust the input pressure to the patient's physiological characteristics capturing lung resistance and compliance modeling uncertainty. Moreover, the controller accounts for input pressure constraints as well as work of breathing constraints. The effect of spontaneous breathing is incorporated within the lung model and the control framework. Finally, a neural network hybrid adaptive control framework for nonlinear uncertain hybrid dynamical systems is developed. The proposed hybrid adaptive control framework is Lyapunov-based and guarantees partial asymptotic stability of the closed-loop hybrid system; that is, asymptotic stability with respect to part of the closed-loop system states associated with the hybrid plant states. A numerical example is provided to demonstrate the efficacy of the proposed hybrid adaptive stabilization approach.
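A toy sketch of the nonnegativity idea: in a one-compartment drug model with a proportional infusion law, clamping the control input at zero keeps both the infusion rate and the state in the nonnegative orthant. The model, gains, and target level are illustrative assumptions, not the dissertation's neuroadaptive controller.

```python
def simulate_compartment(x0=0.0, x_ref=1.0, a=0.5, k=5.0, dt=0.01, steps=2000):
    """One-compartment model dx/dt = -a*x + u with a nonnegative
    (clamped) proportional infusion law u = max(0, k*(x_ref - x))."""
    x, traj = x0, []
    for _ in range(steps):
        u = max(0.0, k * (x_ref - x))   # infusion rate cannot be negative
        x = x + dt * (-a * x + u)
        traj.append(x)
    return traj
```

The state settles where elimination balances infusion, k*(x_ref - x) = a*x, i.e. x = k*x_ref/(a + k), slightly below the target because drug clearance persists.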
Caballero Sánchez, Carla; Barbado Murillo, David; Davids, Keith; Moreno Hernández, Francisco J
2016-06-01
This study investigated the extent to which specific interacting constraints of performance might increase or decrease the emergent complexity in a movement system, and whether this could affect the relationship between observed movement variability and the central nervous system's capacity to adapt to perturbations during balancing. Fifty-two healthy volunteers performed eight trials where different performance constraints were manipulated: task difficulty (three levels) and visual biofeedback conditions (with and without the center of pressure (COP) displacement and a target displayed). Balance performance was assessed using COP-based measures: mean velocity magnitude (MVM) and bivariate variable error (BVE). To assess the complexity of COP, fuzzy entropy (FE) and detrended fluctuation analysis (DFA) were computed. ANOVAs showed that MVM and BVE increased when task difficulty increased. During biofeedback conditions, individuals showed higher MVM but lower BVE at the easiest level of task difficulty. Overall, higher FE and lower DFA values were observed when biofeedback was available. On the other hand, FE reduced and DFA increased as difficulty level increased, in the presence of biofeedback. However, when biofeedback was not available, the opposite trend in FE and DFA values was observed. Regardless of changes to task constraints and the variable investigated, balance performance was positively related to complexity in every condition. Data revealed how specificity of task constraints can result in an increase or decrease in complexity emerging in a neurobiological system during balance performance.
Tracking control of a marine surface vessel with full-state constraints
NASA Astrophysics Data System (ADS)
Yin, Zhao; He, Wei; Yang, Chenguang
2017-02-01
In this paper, a trajectory tracking control law is proposed for a class of marine surface vessels in the presence of full-state constraints and dynamic uncertainties. A barrier Lyapunov function (BLF) based control is employed to prevent the states from violating the constraints. Neural networks are used to approximate the system uncertainties in the control design, and the control law is designed by using the Moore-Penrose inverse. The proposed control is able to compensate for the effects of the full-state constraints. Meanwhile, the signals in the closed-loop system are guaranteed to be semiglobally uniformly bounded, and asymptotic tracking is achieved. Finally, the performance of the proposed control has been tested and verified by simulation studies.
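The BLF mechanism can be shown on a scalar error system: the feedback gain contains the barrier term 1/(kb^2 - e^2), which grows without bound as the error approaches the constraint kb, so the error never crosses it despite a bounded disturbance. The dynamics, gains, and disturbance below are illustrative assumptions, not the vessel model.

```python
import math

def simulate_blf_tracking(e0=0.8, kb=1.0, k=2.0, dt=0.001, steps=10000):
    """Scalar error dynamics e' = u + d(t) with the barrier-Lyapunov-
    function control u = -k*e/(kb**2 - e**2); the barrier term blows up
    as |e| -> kb, keeping the tracking error inside the constraint."""
    e, traj = e0, []
    for i in range(steps):
        d = 0.5 * math.sin(i * dt)          # bounded disturbance
        u = -k * e / (kb * kb - e * e)      # BLF-based feedback
        e = e + dt * (u + d)
        traj.append(e)
    return traj
```

A plain proportional law u = -k*e with the same gain would tolerate constraint violation under a large enough disturbance; the barrier gain makes the constraint invariant by construction.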
NASA Astrophysics Data System (ADS)
Li, Hong; Zhang, Li; Jiao, Yong-Chang
2016-07-01
This paper presents an interactive approach based on a discrete differential evolution algorithm for solving a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued (continuous) decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions of the lower-level programming problem, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and a smoothing technique is then applied to deal with the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations involving the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled by a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
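A standard way to smooth a complementarity condition (a >= 0, b >= 0, a*b = 0) is the smoothed Fischer-Burmeister function, which is a common choice for this step; the paper's specific smoothing function is not reproduced here, so this is an illustrative stand-in.

```python
import math

def fb_smooth(a, b, eps=1e-10):
    """Smoothed Fischer-Burmeister NCP function
    phi(a, b) = sqrt(a^2 + b^2 + 2*eps) - a - b.
    In the limit eps -> 0 it is zero exactly when a >= 0, b >= 0
    and a*b = 0, so the complementarity constraint can be replaced
    by the smooth equation phi(a, b) = 0."""
    return math.sqrt(a * a + b * b + 2.0 * eps) - a - b
```

Replacing each complementarity pair by phi = 0 yields a differentiable nonlinear system that ordinary Newton-type or evolutionary solvers can handle.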
Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A
2015-02-01
Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
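The majorize-minimize-plus-momentum recipe can be sketched for a generic l1-regularized least-squares problem: a proximal gradient step under a Lipschitz majorizer, Nesterov momentum, and an adaptive restart that resets the momentum when it points uphill. This uses a single scalar Lipschitz constant rather than the paper's matrix majorizers, so it illustrates the acceleration ingredients, not BARISTA itself.

```python
import numpy as np

def fista_restart(A, y, lam, iters=300):
    """Majorize-minimize (proximal gradient) with momentum and adaptive
    restart for 0.5*||Ax - y||^2 + lam*||x||_1, using the spectral norm
    of A^T A as a simple scalar majorizer."""
    L = np.linalg.norm(A.T @ A, 2)
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - y)
        step = z - g / L
        x_new = np.sign(step) * np.maximum(np.abs(step) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # Adaptive restart: drop the momentum when it opposes progress.
        if np.dot(z - x_new, x_new - x) > 0.0:
            t, t_new = 1.0, 1.0
            z = x_new
        else:
            z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For A = I the minimizer is the soft-threshold of y, which gives a quick sanity check on the implementation.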
Quantum anonymous voting with unweighted continuous-variable graph states
NASA Astrophysics Data System (ADS)
Guo, Ying; Feng, Yanyan; Zeng, Guihua
2016-08-01
Motivated by the revealing topological structures of continuous-variable graph states (CVGS), we investigate the design of a quantum voting scheme, which has significant advantages over conventional schemes in terms of efficiency and graphical description. Three phases are included, i.e., the preparing phase, the voting phase and the counting phase, together with three parties, i.e., the voters, the tallyman and the ballot agency. Two major voting operations are performed on the yielded CVGS in the voting process, namely the local rotation transformation and the displacement operation. The voting information is carried by the CVGS established beforehand, whose persistent entanglement is deployed to keep the privacy of votes and the anonymity of legal voters. For practical applications, two CVGS-based quantum ballots, i.e., comparative ballot and anonymous survey, are specially designed, followed by extended ballot schemes for binary-valued and multi-valued ballots under some constraints on the voting design. Security is ensured by the entanglement of the CVGS, the voting operations and the laws of quantum mechanics. Compared to discrete-variable quantum voting schemes, the proposed schemes can be implemented using standard off-the-shelf components, owing to the characteristics of CV-based quantum cryptography.
Statistical theory on the analytical form of cloud particle size distributions
NASA Astrophysics Data System (ADS)
Wu, Wei; McFarquhar, Greg
2017-11-01
Several analytical forms of cloud particle size distributions (PSDs) have been used in numerical modeling and remote sensing retrieval studies of clouds and precipitation, including exponential, gamma, lognormal, and Weibull distributions. However, there is no satisfying physical explanation as to why certain distribution forms preferentially occur instead of others. Theoretically, the analytical form of a PSD can be derived by directly solving the general dynamic equation, but no analytical solutions have been found yet. Instead of using a process-level approach, the use of the principle of maximum entropy (MaxEnt) for determining the analytical form of PSDs from a system-level perspective is examined here. The issue of variability under coordinate transformations that arises when using the Gibbs/Shannon definition of entropy is identified, and the use of the concept of relative entropy to avoid these problems is discussed. Focusing on cloud physics, the four-parameter generalized gamma distribution is proposed as the analytical form of a PSD using the principle of maximum (relative) entropy with assumptions on power-law relations between state variables, scale invariance and a further constraint on the expectation of one state variable (e.g. bulk water mass).
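The proposed form can be written down and checked directly: the four-parameter generalized gamma PSD N(D) = N0 * D^mu * exp(-lam * D^c) integrates in closed form via the gamma function, which is useful for verifying bulk moments against numerical integration. The parameter values below are arbitrary test values.

```python
import math

def gen_gamma(D, N0, mu, lam, c):
    """Four-parameter generalized gamma PSD:
    N(D) = N0 * D**mu * exp(-lam * D**c)."""
    return N0 * D ** mu * math.exp(-lam * D ** c)

def total_number(N0, mu, lam, c):
    """Closed-form integral of N(D) over D in (0, inf):
    N0 * Gamma((mu + 1)/c) / (c * lam**((mu + 1)/c))."""
    p = (mu + 1.0) / c
    return N0 * math.gamma(p) / (c * lam ** p)
```

Setting c = 1 recovers the familiar gamma PSD, and mu = 0, c = 1 the exponential (Marshall-Palmer) form, so the four-parameter family nests the common special cases.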
Optimal control of a variable spin speed CMG system for space vehicles. [Control Moment Gyros
NASA Technical Reports Server (NTRS)
Liu, T. C.; Chubb, W. B.; Seltzer, S. M.; Thompson, Z.
1973-01-01
Many future NASA programs require very accurate pointing stability, well beyond anything attempted to date. This paper suggests a control system which has the capability of meeting these requirements. An optimal control law for the suggested system is specified. However, since no direct method of solution is known for this complicated system, a computational technique using successive approximations is used to develop the required solution. The calculus of variations is applied to estimate the changes in the index of performance, as well as in the state-variable inequality constraints and terminal conditions. An algorithm is thus obtained by the steepest descent method and/or the conjugate gradient method. Numerical examples are given to show the optimal controls.
Bloom, A. Anthony; Exbrayat, Jean-François; van der Velde, Ivar R.; Feng, Liang; Williams, Mathew
2016-01-01
The terrestrial carbon cycle is currently the least constrained component of the global carbon budget. Large uncertainties stem from a poor understanding of plant carbon allocation, stocks, residence times, and carbon use efficiency. Imposing observational constraints on the terrestrial carbon cycle and its processes is, therefore, necessary to better understand its current state and predict its future state. We combine a diagnostic ecosystem carbon model with satellite observations of leaf area and biomass (where and when available) and soil carbon data to retrieve the first global estimates, to our knowledge, of carbon cycle state and process variables at a 1° × 1° resolution; retrieved variables are independent from the plant functional type and steady-state paradigms. Our results reveal global emergent relationships in the spatial distribution of key carbon cycle states and processes. Live biomass and dead organic carbon residence times exhibit contrasting spatial features (r = 0.3). Allocation to structural carbon is highest in the wet tropics (85–88%) in contrast to higher latitudes (73–82%), where allocation shifts toward photosynthetic carbon. Carbon use efficiency is lowest (0.42–0.44) in the wet tropics. We find an emergent global correlation between retrievals of leaf mass per leaf area and leaf lifespan (r = 0.64–0.80) that matches independent trait studies. We show that conventional land cover types cannot adequately describe the spatial variability of key carbon states and processes (multiple correlation median = 0.41). This mismatch has strong implications for the prediction of terrestrial carbon dynamics, which are currently based on globally applied parameters linked to land cover or plant functional types. PMID:26787856
Xu, Dan; King, Kevin F; Liang, Zhi-Pei
2007-10-01
A new class of spiral trajectories called variable slew-rate spirals is proposed. The governing differential equations for a variable slew-rate spiral are derived, and both numeric and analytic solutions to the equations are given. The primary application of variable slew-rate spirals is peak B1 amplitude reduction in 2D RF pulse design. The reduction of peak B1 amplitude is achieved by changing the gradient slew-rate profile, and gradient amplitude and slew-rate constraints are inherently satisfied by the design of variable slew-rate spiral gradient waveforms. A design example of 2D RF pulses is given, which shows that under the same hardware constraints the RF pulse using a properly chosen variable slew-rate spiral trajectory can be much shorter than that using a conventional constant slew-rate spiral trajectory, thus having greater immunity to resonance frequency offsets.
Regression analysis as a design optimization tool
NASA Technical Reports Server (NTRS)
Perley, R.
1984-01-01
The optimization concepts are described in relation to an overall design process as opposed to a detailed, part-design process where the requirements are firmly stated, the optimization criteria are well established, and a design is known to be feasible. The overall design process starts with the stated requirements. Some of the design criteria are derived directly from the requirements, but others are affected by the design concept. It is these design criteria that define the performance index, or objective function, that is to be minimized within some constraints. In general, there will be multiple objectives, some mutually exclusive, with no clear statement of their relative importance. The optimization loop that is given adjusts the design variables and analyzes the resulting design, in an iterative fashion, until the objective function is minimized within the constraints. This provides a solution, but it is only the beginning. In effect, the problem definition evolves as information is derived from the results. It becomes a learning process as we determine what the physics of the system can deliver in relation to the desirable system characteristics. As with any learning process, an interactive capability is a real attribute for investigating the many alternatives that will be suggested as learning progresses.
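One step of such a regression-driven design loop can be sketched as: sample the objective at a few designs, fit a regression surrogate, and move to the surrogate's minimizer. The quadratic surrogate and the sample points below are illustrative assumptions.

```python
import numpy as np

def surrogate_minimum(f, xs):
    """Fit a quadratic regression surrogate to sampled designs xs and
    return its minimizer: one iteration of a surrogate-based design loop."""
    ys = [f(x) for x in xs]
    c2, c1, c0 = np.polyfit(xs, ys, 2)   # least-squares quadratic fit
    return -c1 / (2.0 * c2)              # vertex of the fitted parabola
```

In practice the loop repeats: evaluate the true design analysis at the predicted minimizer, add that point to the sample set, refit, and watch how the objective and active constraints evolve, which is the "learning process" the abstract describes.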
Convex relaxations for gas expansion planning
Borraz-Sanchez, Conrado; Bent, Russell Whitford; Backhaus, Scott N.; ...
2016-01-01
Expansion of natural gas networks is a critical process involving substantial capital expenditures with complex decision-support requirements. Here, given the non-convex nature of gas transmission constraints, global optimality and infeasibility guarantees can only be offered by global optimisation approaches. Unfortunately, state-of-the-art global optimisation solvers are unable to scale up to real-world size instances. In this study, we present a convex mixed-integer second-order cone relaxation for the gas expansion planning problem under steady-state conditions. The underlying model offers tight lower bounds with high computational efficiency. In addition, the optimal solution of the relaxation can often be used to derive high-quality solutions to the original problem, leading to provably tight optimality gaps and, in some cases, global optimal solutions. The convex relaxation is based on a few key ideas, including the introduction of flux direction variables, exact McCormick relaxations, on/off constraints, and integer cuts. Numerical experiments are conducted on the traditional Belgian gas network, as well as other real larger networks. The results demonstrate both the accuracy and computational speed of the relaxation and its ability to produce high-quality solutions.
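The McCormick ingredient is easy to state concretely: for a bilinear term w = x*y on a box, four linear inequalities give the tightest convex lower and concave upper envelopes. This is the textbook construction the abstract refers to, shown here as a self-contained check.

```python
def mccormick_bounds(x, y, xl, xu, yl, yu):
    """McCormick convex/concave envelopes of the bilinear term w = x*y
    on the box [xl, xu] x [yl, yu]. Returns (lower, upper) with
    lower <= x*y <= upper everywhere on the box, tight at the corners."""
    lower = max(xl * y + x * yl - xl * yl,
                xu * y + x * yu - xu * yu)
    upper = min(xu * y + x * yl - xu * yl,
                xl * y + x * yu - xl * yu)
    return lower, upper
```

In the relaxation, w replaces x*y and the four inequalities become linear constraints, which is what turns each non-convex bilinear gas-flow term into a tractable convex piece.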
Analyses of heart rate variability in young soccer players: the effects of sport activity.
Bricout, Véronique-Aurélie; Dechenaud, Simon; Favre-Juvin, Anne
2010-04-19
The use of heart rate variability (HRV) in the management of sport training is a practice which tends to spread, especially in order to prevent the occurrence of states of fatigue. The aims were to estimate the HRV parameters obtained from heart rate recordings, according to different loads of sporting activity, and to establish a possible link with the appearance of fatigue. Eight young football players, aged 14.6 years +/- 2 months, playing at league level in Rhône-Alpes and training for 10 to 20 h per week, were followed over a period of 5 months, yielding 54 recordings of HRV in three different conditions: (i) after rest, (ii) after a day with training, and (iii) after a day with a competitive match. Under the effect of a competitive match, the HRV temporal indicators (heart rate, RR interval, and pNN50) were significantly altered compared to the rest day. The sympathovagal balance rose significantly as a result of the competitive constraint (0.72+/-0.17 vs. 0.90+/-0.20; p<0.05). The main results show that HRV provides an objective and non-invasive means of monitoring the training of young sportsmen. HRV analysis highlighted neurovegetative adjustments according to the physical loads. Thus, under the effect of an increase in the physical and psychological constraints that a football match represents, the LF/HF ratio rises significantly, reflecting increased sympathetic stimulation, which beyond certain limits could be relevant for preventing the emergence of a state of fatigue.
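The LF/HF ratio used as the sympathovagal index can be computed from an evenly resampled RR series by integrating spectral power over the standard low-frequency (0.04-0.15 Hz) and high-frequency (0.15-0.4 Hz) bands. This is a generic periodogram sketch with the conventional band edges, not the study's exact processing pipeline.

```python
import numpy as np

def lf_hf_ratio(x, fs):
    """Ratio of low-frequency (0.04-0.15 Hz) to high-frequency
    (0.15-0.4 Hz) spectral power of an evenly resampled RR series x
    sampled at fs Hz, via a simple FFT periodogram."""
    x = np.asarray(x) - np.mean(x)           # remove the DC component
    psd = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    lf = psd[(f >= 0.04) & (f < 0.15)].sum()
    hf = psd[(f >= 0.15) & (f < 0.40)].sum()
    return lf / hf
```

A shift of the balance toward sympathetic dominance, as after a match, shows up as an increase of this ratio; vagal (parasympathetic) predominance at rest pushes it down.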
Optimization techniques using MODFLOW-GWM
Grava, Anna; Feinstein, Daniel T.; Barlow, Paul M.; Bonomi, Tullia; Buarne, Fabiola; Dunning, Charles; Hunt, Randall J.
2015-01-01
An important application of optimization codes such as MODFLOW-GWM is to maximize water supply from unconfined aquifers subject to constraints involving surface-water depletion and drawdown. In optimizing pumping for a fish hatchery in a bedrock aquifer system overlain by glacial deposits in eastern Wisconsin, various features of the GWM-2000 code were used to overcome difficulties associated with: 1) Non-linear response matrices caused by unconfined conditions and head-dependent boundaries; 2) Efficient selection of candidate well and drawdown constraint locations; and 3) Optimizing against water-level constraints inside pumping wells. Features of GWM-2000 were harnessed to test the effects of systematically varying the decision variables and constraints on the optimized solution for managing withdrawals. An important lesson of the procedure, similar to lessons learned in model calibration, is that the optimized outcome is non-unique, and depends on a range of choices open to the user. The modeler must balance the complexity of the numerical flow model used to represent the groundwater-flow system against the range of options (decision variables, objective functions, constraints) available for optimizing the model.
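The core linear-programming structure behind such groundwater management problems can be sketched with a response-matrix model: maximize total pumping subject to linear drawdown responses at constraint locations. The two-well numbers below are hypothetical; a tiny vertex-enumeration solver stands in for the linear program that codes like MODFLOW-GWM solve at scale.

```python
import numpy as np
from itertools import combinations

def maximize_pumping(R, dmax):
    """Maximize total pumping sum(q) subject to R @ q <= dmax
    (linear drawdown responses at constraint sites) and q >= 0,
    by enumerating polytope vertices (fine for small problems)."""
    n = R.shape[1]
    A = np.vstack([R, -np.eye(n)])          # all constraints as A q <= b
    b = np.concatenate([dmax, np.zeros(n)])
    best, best_q = -np.inf, None
    for idx in combinations(range(len(A)), n):
        sub = A[list(idx)]
        if abs(np.linalg.det(sub)) < 1e-12:
            continue                         # constraints do not intersect
        q = np.linalg.solve(sub, b[list(idx)])
        if np.all(A @ q <= b + 1e-9) and q.sum() > best:
            best, best_q = q.sum(), q
    return best_q
```

The non-uniqueness the abstract stresses appears here too: changing which drawdown limits are imposed, or their values, moves the optimal vertex.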
NASA Astrophysics Data System (ADS)
Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing
2018-03-01
Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.
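The model-data weighting trade-off at the heart of this strategy can be shown in its simplest scalar form: the analysis blends the model background with the observation in proportion to their error variances, and inflating the observation-error variance (stricter fidelity to model dynamics) drives the gain toward zero. This is the generic scalar analysis step, not the ensemble system used in the paper.

```python
def analysis_update(x_b, y, var_b, var_o):
    """Scalar data-assimilation analysis step: blend a model background
    x_b with an observation y using their error variances. The gain K
    sets the relative weight of data versus model dynamics."""
    K = var_b / (var_b + var_o)       # K -> 0 as obs error grows
    return x_b + K * (y - x_b), K
```

Near the equator the paper effectively chooses small K (trusting model dynamics) to suppress spurious vertical velocities, while off-equatorial regions keep larger K to benefit from the data constraints.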
DOE Office of Scientific and Technical Information (OSTI.GOV)
Post, Wilfred M; King, Anthony Wayne; Dragoni, Danilo
Many parameters in terrestrial biogeochemical models are inherently uncertain, leading to uncertainty in predictions of key carbon cycle variables. At observation sites, this uncertainty can be quantified by applying model-data fusion techniques to estimate model parameters using eddy covariance observations and associated biometric data sets as constraints. Uncertainty is reduced as data records become longer and different types of observations are added. We estimate parametric and associated predictive uncertainty at the Morgan Monroe State Forest in Indiana, USA. Parameters in the Local Terrestrial Ecosystem Carbon (LoTEC) model are estimated using both synthetic and actual constraints. These model parameters and uncertainties are then used to make predictions of carbon flux for up to 20 years. We find a strong dependence of both parametric and prediction uncertainty on the length of the data record used in the model-data fusion. In this model framework, this dependence is strongly reduced as the data record length increases beyond 5 years. If synthetic initial biomass pool constraints with realistic uncertainties are included in the model-data fusion, prediction uncertainty is reduced by more than 25% when constraining flux records are shorter than 3 years. If synthetic annual aboveground woody biomass increment constraints are also included, uncertainty is similarly reduced by an additional 25%. When actual observed eddy covariance data are used as constraints, there is still a strong dependence of parameter and prediction uncertainty on data record length, but the results are harder to interpret because of the inability of LoTEC to reproduce observed interannual variations and the confounding effects of model structural error.
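The record-length effect described above can be illustrated with a toy model-data fusion experiment: fit a simple two-parameter flux model to short and long synthetic records and compare the resulting parameter uncertainties. The model, parameter names, and noise level below are illustrative assumptions, not LoTEC.

```python
# Hedged sketch: parameter uncertainty from model-data fusion shrinks as the
# flux record lengthens. The two-parameter model is a stand-in, not LoTEC;
# all names and numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def nee_model(day, base_resp, uptake):
    # Toy net-ecosystem-exchange model: baseline respiration minus a
    # seasonally varying uptake term.
    return base_resp - uptake * np.sin(2 * np.pi * day / 365.0) ** 2

rng = np.random.default_rng(0)
true_params = (2.0, 5.0)

def fit_record(n_years):
    days = np.arange(365 * n_years)
    obs = nee_model(days, *true_params) + rng.normal(0.0, 1.0, days.size)
    popt, pcov = curve_fit(nee_model, days, obs, p0=(1.0, 1.0))
    return np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties

sigma_short = fit_record(1)   # one year of daily fluxes
sigma_long = fit_record(8)    # eight years of daily fluxes
```

Because the parameter standard errors scale roughly as 1/sqrt(N), the longer record constrains both parameters more tightly, mirroring the abstract's finding.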
Method and apparatus for creating time-optimal commands for linear systems
NASA Technical Reports Server (NTRS)
Seering, Warren P. (Inventor); Tuttle, Timothy D. (Inventor)
2004-01-01
A system for and method of determining an input command profile for substantially any dynamic system that can be modeled as a linear system, the input command profile for transitioning an output of the dynamic system from one state to another state. The present invention involves identifying characteristics of the dynamic system, selecting a command profile which defines an input to the dynamic system based on the identified characteristics, wherein the command profile comprises one or more pulses which rise and fall at switch times, imposing a plurality of constraints on the dynamic system, at least one of the constraints being defined in terms of the switch times, and determining the switch times for the input to the dynamic system based on the command profile and the plurality of constraints. The characteristics may be related to poles and zeros of the dynamic system, and the plurality of constraints may include a dynamics cancellation constraint which specifies that the input moves the dynamic system from a first state to a second state such that the dynamic system remains substantially at the second state.
NASA Technical Reports Server (NTRS)
Lucas, S. H.; Scotti, S. J.
1989-01-01
The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, and stress and Euler buckling are treated as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
Determining on-fault earthquake magnitude distributions from integer programming
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Parsons, Tom
2018-02-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >106 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
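The formulation above can be sketched at miniature scale: binary variables decide where each synthetic earthquake goes, each event occupies at most one fault, and per-fault slip-rate bounds enter as explicit linear constraints. The catalog, slip contributions, and bounds below are made-up illustrations; `scipy.optimize.milp` (SciPy >= 1.9) stands in for the paper's general mixed-integer solver.

```python
# Hedged sketch: allocate a tiny synthetic earthquake catalog to faults with
# binary integer programming, with slip-rate bounds as explicit constraints.
# Numbers are illustrative, not from the California case study.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

slips = np.array([2.0, 3.0, 5.0])   # per-event slip contribution (mm/yr)
lo = np.array([4.0, 2.0])           # lower slip-rate bound for each fault
hi = np.array([6.0, 4.0])           # upper slip-rate bound for each fault
n_eq, n_f = len(slips), len(lo)

# x[e, f] = 1 if earthquake e is placed on fault f; maximize assigned slip.
c = -np.repeat(slips, n_f)

# Each earthquake occupies at most one fault location.
A_once = np.kron(np.eye(n_eq), np.ones(n_f))
# Slip accumulated on each fault must stay within its rate bounds.
A_slip = np.kron(slips[:, None], np.eye(n_f)).T

res = milp(c,
           constraints=[LinearConstraint(A_once, -np.inf, 1),
                        LinearConstraint(A_slip, lo, hi)],
           integrality=np.ones(n_eq * n_f),
           bounds=Bounds(0, 1))
assigned = res.x.reshape(n_eq, n_f).round().astype(int)
```

For these numbers the optimum places the 5 mm/yr event on the first fault and the 3 mm/yr event on the second, leaving the smallest event unassigned; the real problem is the same structure scaled to >10^6 variables.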
Copernicus observations of C I and CO in diffuse interstellar clouds
NASA Technical Reports Server (NTRS)
Jenkins, E. B.; Jura, M.; Loewenstein, M.
1980-01-01
Copernicus was used to observe absorption lines of C I in its ground state and excited fine-structure levels, and of CO, toward 29 stars. We use the C I data to infer densities and pressures within the observed clouds; because our results are of higher precision than previous work, they yield much tighter estimates of the physical conditions in the clouds. In agreement with previous work, the interstellar thermal pressure appears to be variable, with most clouds having values of p/k between 1000 and 10,000 cm^-3 K, but some clouds have p/k as high as 100,000 cm^-3 K. Our results are consistent with the view that the interstellar thermal pressure is so variable that the gas undergoes continuous dynamic evolution. Our observations provide useful constraints on the physical processes on the surfaces of grains. In particular, we find that grains are efficient catalysts of interstellar H2, in the sense that at least half of the hydrogen atoms that strike grains come off as part of H2. The results place strong constraints on models for the formation and destruction of interstellar CO. In many clouds, an order of magnitude less CO was found than predicted by some models.
BOOK REVIEW: Modern Canonical Quantum General Relativity
NASA Astrophysics Data System (ADS)
Kiefer, Claus
2008-06-01
The open problem of constructing a consistent and experimentally tested quantum theory of the gravitational field has its place at the heart of fundamental physics. The main approaches can be roughly divided into two classes: either one seeks a unified quantum framework of all interactions or one starts with a direct quantization of general relativity. In the first class, string theory (M-theory) is the only known example. In the second class, one can make an additional methodological distinction: while covariant approaches such as path-integral quantization use the four-dimensional metric as an essential ingredient of their formalism, canonical approaches start with a foliation of spacetime into spacelike hypersurfaces in order to arrive at a Hamiltonian formulation. The present book is devoted to one of the canonical approaches—loop quantum gravity. It is named modern canonical quantum general relativity by the author because it uses connections and holonomies as central variables, which are analogous to the variables used in Yang-Mills theories. In fact, the canonically conjugate variables are a holonomy of a connection and the flux of a non-Abelian electric field. This has to be contrasted with the older geometrodynamical approach in which the metric of three-dimensional space and the second fundamental form are the fundamental entities, an approach which is still actively being pursued. It is the author's ambition to present loop quantum gravity in a way in which every step is formulated in a mathematically rigorous form. In his own words: 'loop quantum gravity is an attempt to construct a mathematically rigorous, background-independent, non-perturbative quantum field theory of Lorentzian general relativity and all known matter in four spacetime dimensions, not more and not less'. The formal Leitmotiv of loop quantum gravity is background independence. Non-gravitational theories are usually quantized on a given non-dynamical background. 
In contrast, due to the geometrical nature of gravity, no such background exists in quantum gravity. Instead, the notion of a background is supposed to emerge a posteriori as an approximate notion from quantum states of geometry. As a consequence, the standard ultraviolet divergences of quantum field theory do not show up because there is no limit of Δx → 0 to be taken in a given spacetime. On the other hand, it is open whether the theory is free of any type of divergences and anomalies. A central feature of any canonical approach, independent of the choice of variables, is the existence of constraints. In geometrodynamics, these are the Hamiltonian and diffeomorphism constraints. They also hold in loop quantum gravity, but are supplemented there by the Gauss constraint, which emerges due to the use of triads in the formalism. These constraints capture all the physics of the quantum theory because no spacetime is present anymore (analogous to the absence of trajectories in quantum mechanics), so no additional equations of motion are needed. This book presents a careful and comprehensive discussion of these constraints. In particular, the constraint algebra is calculated in a transparent and explicit way. The author makes the important assumption that a Hilbert-space structure is still needed on the fundamental level of quantum gravity. In ordinary quantum theory, such a structure is needed for the probability interpretation, in particular for the conservation of probability with respect to external time. It is thus interesting to see how far this concept can be extrapolated into the timeless realm of quantum gravity. On the kinematical level, that is, before the constraints are imposed, an essentially unique Hilbert space can be constructed in terms of spin-network states. Potentially problematic features are the implementation of the diffeomorphism and Hamiltonian constraints. 
The diffeomorphism constraint can throw states out of the kinematical Hilbert space, so the Hilbert space Hdiff defined on the space of diffeomorphism-invariant states is not contained in it. Moreover, the Hamiltonian constraint does not seem to preserve Hdiff, so its implementation remains open. To avoid some of these problems, the author proposes his 'master constraint programme' in which the infinitely many local Hamiltonian constraints are combined into one master constraint. This is a subject of his current research. With regard to this situation, it is not surprising that the main results in loop quantum gravity are found on the kinematical level. An especially important feature is the discrete spectra of geometric operators such as the area operator. This quantifies the earlier heuristic ideas about a discreteness at the Planck scale. The hope is that these results survive the consistent implementation of all constraints. The status of loop quantum gravity is concisely and competently summarized in this volume, whose author is himself one of the pioneers of this approach. What is the relation of this book to the other monograph on loop quantum gravity, written by Carlo Rovelli and published in 2004 under the title Quantum Gravity by the same publisher? In the words of the present author: 'the two books are complementary in the sense that they can be regarded almost as volume I ('introduction and conceptual framework') and volume II ('mathematical framework and applications') of a general presentation of quantum general relativity in general and loop quantum gravity in particular'. In fact, the present volume gives a complete and self-contained presentation of the required mathematics, especially on the approximately 200 pages of chapters 18-33. As for the physical applications, the main topic is the microscopic derivation of the black-hole entropy. This is presented in a clear and detailed form. 
Employing the concept of an isolated horizon (a local generalization of an event horizon), the counting of surface states gives an entropy proportional to the horizon area. It also contains the Barbero-Immirzi parameter β, which is a free parameter of the theory. Demanding, on the other hand, that the entropy be equal to the Bekenstein-Hawking entropy would fix this parameter. Other applications such as loop quantum cosmology are only briefly touched upon. Since loop quantum gravity is a very active field of research, the author warns that the present book can at best be seen as a snapshot. Part of the overall picture may thus in the future be subject to modifications. For example, recent work by the author using a concept of dust time is not yet covered here. Nevertheless, I expect that this volume will continue to serve as a valuable introduction and reference book. It is essential reading for everyone working on loop quantum gravity.
NASA Astrophysics Data System (ADS)
Champion, Billy Ray
Energy Conservation Measure (ECM) project selection is made difficult given real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of ECMs available, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal relative to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment over traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Any public buildings in regions with similar energy conservation goals in the United States or internationally can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). 
An additional model presented leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.
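The exact linearization cited above (McCormick, 1976) can be checked directly: for binary x and y, an auxiliary variable z together with three linear inequalities reproduces the product x*y at every integer point, so no bilinear term is needed in the model. This is a textbook sketch of the device, not the dissertation's full formulation.

```python
# Hedged sketch: McCormick's exact linearization of a product of binary
# variables. For binary x, y, the constraints z <= x, z <= y, z >= x + y - 1,
# z >= 0 admit exactly z = x * y, replacing the bilinear term.
from itertools import product

def mccormick_feasible_z(x, y):
    # All binary z satisfying the three McCormick inequalities plus z >= 0.
    return [z for z in (0, 1) if z <= x and z <= y and z >= x + y - 1]

# At every binary point, the only feasible z equals the product x * y.
for x, y in product((0, 1), repeat=2):
    assert mccormick_feasible_z(x, y) == [x * y]
```

Because the feasible set is a single point at each binary assignment, the relaxation is the convex hull of the product constraint, which is what makes the reformulation exact rather than approximate.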
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries, wherein at least one battery is connected to a battery charger and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each controller monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both, using an algorithm that accounts for each of the preferred charge rates of the controllers and/or that does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
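The negotiation described above can be sketched as follows: each controller watches its own inputs and publishes a charging constraint and a preferred rate, and the actual rate honours every constraint. The controller names, limits, and min-based aggregation rule are illustrative assumptions, not the patent's algorithm.

```python
# Hedged sketch: several controllers publish charge-rate constraints and
# preferences; the actual charge rate violates no constraint. Names and
# numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    max_rate_a: float    # charging constraint derived from this controller's inputs
    preferred_a: float   # charge rate this controller would prefer

def negotiate(controllers):
    # Never exceed any controller's constraint; otherwise charge at the most
    # conservative preference (one simple aggregation rule among many).
    cap = min(c.max_rate_a for c in controllers)
    want = min(c.preferred_a for c in controllers)
    return min(cap, want)

fleet = [
    Controller("thermal", max_rate_a=30.0, preferred_a=25.0),
    Controller("grid", max_rate_a=40.0, preferred_a=35.0),
    Controller("battery-health", max_rate_a=20.0, preferred_a=18.0),
]
rate = negotiate(fleet)   # the battery-health controller is most conservative
```

Other aggregation rules (e.g., weighted objectives) fit the same shape; the essential property is that the returned rate satisfies every published constraint.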
Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints
NASA Technical Reports Server (NTRS)
Burken, John J.; Frost, Susan A.; Taylor, Brian R.
2011-01-01
When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared to the three controlled variables (such as angle of attack, roll rate, and angle of sideslip). To distribute the control effort among the redundant set of actuators, either a fixed mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas the fixed mixer cannot. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can produce ill-defined loads on the aircraft. The load uncertainty and complexity have prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load limit constraint is shown and compared to the same maneuver without load limits.
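The allocation problem above can be sketched as a bounded least-squares fit: distribute three commanded moments over five redundant surfaces, with per-surface deflection limits standing in for saturation and structural-load constraints. The effectiveness matrix and limits below are invented illustrations, not the generic transport model's data.

```python
# Hedged sketch of online control allocation: find surface deflections u that
# achieve the commanded moments B @ u = v_cmd while respecting per-surface
# limits. All matrices and limits are made-up illustrations.
import numpy as np
from scipy.optimize import lsq_linear

B = np.array([            # moment effectiveness: rows = roll, pitch, yaw
    [1.0, -1.0, 0.2, -0.2, 0.0],
    [0.3,  0.3, 1.0,  1.0, 0.0],
    [0.1, -0.1, 0.0,  0.0, 1.0],
])
v_cmd = np.array([0.4, 0.6, 0.2])              # desired moments
limits = np.array([0.5, 0.5, 0.4, 0.4, 0.3])   # |deflection| bounds per surface

sol = lsq_linear(B, v_cmd, bounds=(-limits, limits))
achieved = B @ sol.x
```

Unlike a fixed mixer, tightening an entry of `limits` (e.g., a real-time load limit) changes the allocation automatically, which is exactly the property that motivates online allocators in the abstract.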
The 12-foot pressure wind tunnel restoration project model support systems
NASA Technical Reports Server (NTRS)
Sasaki, Glen E.
1992-01-01
The 12-Foot Pressure Wind Tunnel is a variable density, low turbulence wind tunnel that operates at subsonic speeds and at up to six atmospheres total pressure. The restoration of this facility is of critical importance to the future of the U.S. aerospace industry. As part of this project, several state-of-the-art model support systems are furnished to provide an optimal balance between aerodynamic and operational efficiency parameters. Two model support systems, the Rear Strut Model Support and the High Angle of Attack Model Support, are discussed. This paper covers design parameters, constraints, development, description, and component selection.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, P.; Bhattacharyya, D.; Turton, R.
2012-01-01
Future integrated gasification combined cycle (IGCC) power plants with CO{sub 2} capture will face stricter operational and environmental constraints. Accurate values of relevant states/outputs/disturbances are needed to satisfy these constraints and to maximize the operational efficiency. Unfortunately, a number of these process variables cannot be measured, while a number of them can be measured but have low precision, reliability, or signal-to-noise ratio. In this work, a sensor placement (SP) algorithm is developed for optimal selection of sensor location, number, and type that can maximize the plant efficiency and result in a desired precision of the relevant measured/unmeasured states. The SP algorithm is developed for a selective, dual-stage Selexol-based acid gas removal (AGR) unit for an IGCC plant with pre-combustion CO{sub 2} capture. A comprehensive nonlinear dynamic model of the AGR unit is developed in Aspen Plus Dynamics® (APD) and used to generate a linear state-space model that is used in the SP algorithm. The SP algorithm is developed with the assumption that an optimal Kalman filter will be implemented in the plant for state and disturbance estimation. The algorithm assumes steady-state Kalman filtering and steady-state operation of the plant. The control system is considered to operate based on the estimated states and thereby captures the effects of the SP algorithm on the overall plant efficiency. The optimization problem is solved by a Genetic Algorithm (GA) considering both linear and nonlinear equality and inequality constraints. Due to the very large number of candidate sets available for sensor placement, and because of the long time that it takes to solve the constrained optimization problem that includes more than 1000 states, solution of this problem is computationally expensive. 
To reduce the computation time, parallel computing is performed using the Distributed Computing Server (DCS®) and the Parallel Computing® toolbox from Mathworks®. In this presentation, we will share our experience in setting up parallel computing using GA in the MATLAB® environment and present the overall approach for achieving higher computational efficiency in this framework.
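The core scoring step in such a sensor-placement search can be sketched as follows: each candidate sensor set is scored by the steady-state Kalman-filter error covariance it yields, and the best set is kept. Exhaustive search over a 3-state toy system stands in for the genetic algorithm, and the matrices are illustrative assumptions, not the AGR model.

```python
# Hedged sketch: score candidate sensor sets by the trace of the steady-state
# Kalman error covariance (solved via the discrete algebraic Riccati
# equation) and pick the best. Toy 3-state system; exhaustive search stands
# in for the paper's GA.
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.1],
              [0.0, 0.0, 0.95]])
Q = 0.01 * np.eye(3)       # process noise covariance
C_all = np.eye(3)          # one candidate sensor per state
R_one = 0.1                # measurement noise variance per sensor

def estimation_cost(sensor_idx):
    C = C_all[list(sensor_idx)]
    R = R_one * np.eye(len(sensor_idx))
    # Steady-state filtering Riccati equation via duality with the control ARE.
    P = solve_discrete_are(A.T, C.T, Q, R)
    return np.trace(P)

# Best 2-sensor subset under the estimation-accuracy criterion.
best = min(itertools.combinations(range(3), 2), key=estimation_cost)
```

The GA in the paper explores the same objective over a far larger candidate space (plus efficiency terms and constraints), where exhaustive enumeration is infeasible.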
Global constraints on Z2 fluxes in two different anisotropic limits of a hypernonagon Kitaev model
NASA Astrophysics Data System (ADS)
Kato, Yasuyuki; Kamiya, Yoshitomo; Nasu, Joji; Motome, Yukitoshi
2018-05-01
The Kitaev model is an exactly soluble quantum spin model, whose ground state provides a canonical example of a quantum spin liquid. Spin excitations from the ground state are fractionalized into emergent matter fermions and Z2 fluxes. The Z2 flux excitation is pointlike in two dimensions, while it comprises a closed loop in three dimensions because of the local constraint for each closed volume. In addition, the fluxes obey global constraints involving a (semi)macroscopic number of fluxes. We here investigate such global constraints in the Kitaev model on a three-dimensional lattice composed of nine-site elementary loops, dubbed the hypernonagon lattice, whose ground state is a chiral spin liquid. We consider two different anisotropic limits of the hypernonagon Kitaev model where the low-energy effective models are described solely by the Z2 fluxes. We show that there are two kinds of global constraints in the model defined on a three-dimensional torus, namely, surface and volume constraints: the surface constraint is imposed on the even-odd parity of the total number of fluxes threading a two-dimensional slice of the system, while the volume constraint is on the even-odd parity of the number of fluxes through specific plaquettes whose total number is proportional to the system volume. In the two anisotropic limits, therefore, the elementary excitation of Z2 fluxes occurs in pairs of closed loops so as to satisfy both global constraints as well as the local constraints.
Constraining the ensemble Kalman filter for improved streamflow forecasting
NASA Astrophysics Data System (ADS)
Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James
2018-05-01
Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. 
We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
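The mass- and flux-constrained update described above can be sketched for a single storage state: after the standard ensemble Kalman update, each member is limited in how far it may move per step (flux constraint) and then clipped to physically realizable storage (mass constraint). All numbers are illustrative assumptions, not values from the study catchment.

```python
# Hedged sketch of a mass- and flux-constrained EnKF update for one storage
# state. The observation deliberately implies an unphysical state so the
# constraints visibly bite. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
capacity, max_flux = 100.0, 15.0        # store capacity (mm), max change per step (mm)

ensemble = rng.normal(60.0, 8.0, 50)    # prior storage ensemble
obs, obs_err = 120.0, 5.0               # observation beyond physical capacity

def enkf_update(ens, y, r):
    # Scalar EnKF with perturbed observations (state observed directly).
    gain = np.var(ens) / (np.var(ens) + r ** 2)
    return ens + gain * (y + rng.normal(0.0, r, ens.size) - ens)

updated = enkf_update(ensemble, obs, obs_err)
# Flux constraint: bound the per-step adjustment of each member.
constrained = ensemble + np.clip(updated - ensemble, -max_flux, max_flux)
# Mass constraint: keep states non-negative and below capacity.
constrained = np.clip(constrained, 0.0, capacity)
```

The unconstrained update would push many members far past what a single assimilation step could physically deliver; the constrained ensemble moves toward the observation only as fast as the imposed flux limit allows, which is the behavior the paper credits for the improved forecasts.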
Contextual Variability and Exemplar Strength in Phonotactic Learning
ERIC Educational Resources Information Center
Denby, Thomas; Schecter, Jeffrey; Arn, Sean; Dimov, Svetlin; Goldrick, Matthew
2018-01-01
Phonotactics--constraints on the position and combination of speech sounds within syllables--are subject to statistical differences that gradiently affect speaker and listener behavior (e.g., Vitevitch & Luce, 1999). What statistical properties drive the acquisition of such constraints? Because they are naturally highly correlated, previous…
Three dimensional elements with Lagrange multipliers for the modified couple stress theory
NASA Astrophysics Data System (ADS)
Kwon, Young-Rok; Lee, Byung-Chai
2018-07-01
Three dimensional mixed elements for the modified couple stress theory are proposed. The C1 continuity for the displacement field, which is required because of the curvature term in the variational form of the theory, is satisfied weakly by introducing a supplementary rotation as an independent variable and constraining the relation between the rotation and the displacement with a Lagrange multiplier vector. An additional constraint about the deviatoric curvature is also considered for three dimensional problems. Weak forms with one constraint and two constraints are derived, and four elements satisfying convergence criteria are developed by applying different approximations to each field of independent variables. The elements pass a patch test for three dimensional problems. Numerical examples show that the additional constraint could be considered essential for the three dimensional elements, and one of the elements is recommended for practical applications via the comparison of the performances of the elements. In addition, all the proposed elements can represent the size effect well.
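The constraint structure described above can be sketched, with assumed notation (not the authors'), as a functional in which a Lagrange multiplier vector ties the independent rotation to the displacement, so only C0 interpolation of the displacement is needed:

```latex
% Hedged sketch; symbols are assumptions, not the paper's notation:
% u = displacement, \theta = independent rotation, \lambda = multiplier,
% \sigma : \varepsilon = strain energy, m : \chi = couple-stress energy.
\Pi(\mathbf{u},\boldsymbol{\theta},\boldsymbol{\lambda})
  = \int_\Omega \tfrac{1}{2}\,\boldsymbol{\sigma}(\mathbf{u}) : \boldsymbol{\varepsilon}(\mathbf{u})\,d\Omega
  + \int_\Omega \tfrac{1}{2}\,\mathbf{m}(\boldsymbol{\theta}) : \boldsymbol{\chi}(\boldsymbol{\theta})\,d\Omega
  + \int_\Omega \boldsymbol{\lambda}\cdot\Bigl(\boldsymbol{\theta}
      - \tfrac{1}{2}\,\nabla\times\mathbf{u}\Bigr)\,d\Omega
  - W_{\mathrm{ext}}
```

Stationarity with respect to the multiplier enforces the rotation-displacement relation weakly, which is how the weak C1 requirement is met; the paper's second weak form adds a further multiplier for the deviatoric-curvature constraint.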
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models accumulate no errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put another way, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
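The key property described above, that a conservative (flux-form) scheme accumulates no error in the conserved total, can be illustrated with a minimal sketch. This toy 1-D upwind advection step is not the paper's scheme; the grid, speed, and time step are arbitrary choices, and the advected field merely stands in for moist entropy.

```python
# Illustrative sketch (not the paper's scheme): a flux-form (conservative)
# 1-D upwind advection step on a periodic domain. Because each face flux is
# added to one cell and subtracted from its neighbour, the domain total of
# the prognostic variable is conserved to round-off, so no accumulative
# error builds up in the conserved quantity.

def upwind_step(q, u, dx, dt):
    """One flux-form upwind step on a periodic domain; u > 0 assumed."""
    n = len(q)
    flux = [u * q[i] for i in range(n)]  # upwind flux leaving each cell
    return [q[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

# periodic initial blob
q = [1.0 if 4 <= i < 8 else 0.0 for i in range(32)]
total0 = sum(q)
for _ in range(1000):
    q = upwind_step(q, u=1.0, dx=1.0, dt=0.5)  # CFL = 0.5, stable
print(abs(sum(q) - total0))  # conserved total: zero up to round-off
```

The same telescoping-flux argument is what a discrete entropy-conservation constraint enforces in the full model.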
A dual method for optimal control problems with initial and final boundary constraints.
NASA Technical Reports Server (NTRS)
Pironneau, O.; Polak, E.
1973-01-01
This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.
Chang, Wen-Jer; Huang, Bo-Jyun
2014-11-01
The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. Finally, a numerical example for the control of a perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Infinite horizon problems on stratifiable state-constraints sets
NASA Astrophysics Data System (ADS)
Hermosilla, C.; Zidani, H.
2015-02-01
This paper deals with a state-constrained control problem. It is well known that, unless some compatibility condition between constraints and dynamics holds, the Value Function does not have enough regularity, or can fail to be the unique constrained viscosity solution of a Hamilton-Jacobi-Bellman (HJB) equation. Here, we consider the case of a set of constraints having a stratified structure. Under this circumstance, the interior of this set may be empty or disconnected, and the admissible trajectories may have no option but to stay on the boundary, without possible approximation in the interior of the constraints. In such situations, the classical pointing qualification hypothesis is not relevant. The discontinuous Value Function is then characterized by means of a system of HJB equations on each stratum that composes the state-constraint set. This result is obtained under a local controllability assumption which is required only on the strata where some chattering phenomena could occur.
NASA Astrophysics Data System (ADS)
Noe Dobrea, E. Z.; Bell, J. F., III
2002-03-01
We investigate the spectral variability of Acidalia Planitia using MGS/TES. Atmospheric removal is done by constraining our observations to EPFs. Preliminary analyses show variability of the 6-micron feature attributed to water/OH-bearing minerals.
The Anomalous Low State of LMC X-3
NASA Technical Reports Server (NTRS)
Smale, A. P.; Boyd, P. T.; Markwardt, C. B.
2009-01-01
Archival RXTE ASM and PCA observations of the black hole binary LMC X-3 reveal a dramatic and extended low state lasting from December 8, 2003 until March 18, 2004, unprecedented both in its low luminosity (L_x(2-10 keV) = 4.2 × 10^35 erg s^-1, approximately 4 times fainter than ever before seen from LMC X-3 in its low/hard state, and representing 0.15% of its X-ray luminosity during the high/soft state) and its long duration (approximately 100 days, as compared with 5-20 days for 'normal' low/hard state excursions). During this anomalous low state no significant variability is observed on timescales of days to weeks, and the spectrum is well described by a simple power law with index 1.7 plus or minus 0.2. We examine the variability characteristics of LMC X-3 before and after this event using conventional and topological methods, and show that, with the exception of the anomalous low state itself, the long-term behavior of the source in topological phase space can be completely described in terms of a well-understood nonlinear dynamical system known as the Duffing oscillator, implying that the accretion disk in LMC X-3 is a driven, dissipative system with two solutions competing for control of its time evolution. This work shows that dynamical information and constraints revealed by topological analysis methods can provide a valuable addition to traditional studies of accretion disk behavior.
Condensation with two constraints and disorder
NASA Astrophysics Data System (ADS)
Barré, J.; Mangeolle, L.
2018-04-01
We consider a set of positive random variables obeying two additive constraints, a linear and a quadratic one; these constraints mimic the conservation laws of a dynamical system. In the simplest setting, without disorder, it is known that such a system may undergo a ‘condensation’ transition, whereby one random variable becomes much larger than the others; this transition has been related to the spontaneous appearance of nonlinear localized excitations, called breathers, in certain nonlinear chains. Motivated by the study of breathers in a disordered discrete nonlinear Schrödinger equation, we study different instances of this problem in the presence of a quenched disorder. Unless the disorder is too strong, the phase diagram looks like the one without disorder, with a transition separating a fluid phase, where all variables have the same order of magnitude, and a condensed phase, where one variable is much larger than the others. We then show that the condensed phase exhibits various degrees of ‘intermediate symmetry breaking’: the site hosting the condensate is chosen neither uniformly at random, nor is it fixed by the disorder realization. Throughout the article, our heuristic arguments are complemented with direct Monte Carlo simulations.
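A Monte Carlo move compatible with both conservation laws above can be sketched as follows. This is a heuristic illustration, not the authors' code: three sites are resampled uniformly on the circle where their linear and quadratic sums are both preserved, with rejection to keep the variables positive; all parameters are made up.

```python
import math
import random

# Sketch of a microcanonical move for positive variables x_1..x_N with a
# fixed linear sum S and quadratic sum Q (both conserved exactly by each
# move). On the plane a+b+c = s, the surface a^2+b^2+c^2 = q is a circle,
# which we parametrize directly.

def three_site_move(x, rng):
    i, j, k = rng.sample(range(len(x)), 3)
    s = x[i] + x[j] + x[k]
    q = x[i] ** 2 + x[j] ** 2 + x[k] ** 2
    r2 = q - s * s / 3.0                      # squared radius of the circle
    if r2 <= 0:
        return
    r = math.sqrt(r2)
    # orthonormal basis of the zero-sum plane a+b+c = 0
    u = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]
    v = [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]
    t = rng.uniform(0, 2 * math.pi)
    new = [s / 3.0 + r * (math.cos(t) * u[m] + math.sin(t) * v[m])
           for m in range(3)]
    if all(c >= 0 for c in new):              # keep variables positive
        x[i], x[j], x[k] = new

rng = random.Random(1)
N = 50
x = [1.0] * N
x[0] = 6.0                                    # excess quadratic sum
S, Q = sum(x), sum(c * c for c in x)
for _ in range(20000):
    three_site_move(x, rng)
print(sum(x) - S, sum(c * c for c in x) - Q)  # both sums conserved up to round-off
```

Two-site moves cannot explore the surface (the two constraints pin a pair up to a swap), which is why three sites are updated at a time.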
Multi-objective optimization for model predictive control.
Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry
2007-06-01
This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics must outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated Variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
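The priority structure described above can be sketched with a one-variable toy objective. All numbers (gain, limits, weights) are hypothetical, and a grid search stands in for the industrial QP solver; the point is only the weight ordering: constraint slack dominates economics, which dominates control.

```python
# Minimal sketch of a three-level prioritized objective: a slack variable
# softens the output limit, the constraint weight dwarfs the economic
# weight, and the economic weight dwarfs the control weight.

def cost(u, y_max=1.0, u_econ=2.0, y_target=0.9, gain=0.6,
         w_con=1e6, w_econ=1e2, w_ctl=1.0):
    y = gain * u                          # one-step prediction model
    slack = max(0.0, y - y_max)           # constraint violation (slack)
    return (w_con * slack ** 2            # priority 1: handle constraints
            + w_econ * (u - u_econ) ** 2  # priority 2: economics pushes u up
            + w_ctl * (y - y_target) ** 2)  # priority 3: maintain control

# crude grid search stands in for the QP solver
best_u = min((u / 1000.0 for u in range(0, 4001)), key=cost)
y = 0.6 * best_u
print(best_u, y)  # economics raises u until y sits at its limit
```

Economics alone would push u to 2.0 (y = 1.2), but the heavily weighted slack pins the predicted output essentially at y_max = 1.0.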
Variation in Plant Defense Suppresses Herbivore Performance.
Pearse, Ian S; Paul, Ryan; Ode, Paul J
2018-06-18
Defensive variability of crops and natural systems can alter herbivore communities and reduce herbivory [1, 2]. However, it is still unknown how defense variability translates into herbivore suppression. Nonlinear averaging and constraints in physiological tracking (also more generally called time-dependent effects) are the two mechanisms by which defense variability might impact herbivores [3, 4]. We conducted a set of experiments manipulating the mean and variability of a plant defense, showing that defense variability does suppress herbivore performance and that it does so through physiological tracking effects that cannot be explained by nonlinear averaging. While nonlinear averaging predicted higher or the same herbivore performance on a variable defense than on an invariable defense, we show that variability actually decreased herbivore performance and population growth rate. Defense variability reduces herbivore performance in a way that is more than the average of its parts. This is consistent with constraints in physiological matching of detoxification systems for herbivores experiencing variable toxin levels in their diet and represents a more generalizable way of understanding the impacts of variability on herbivory [5]. Increasing defense variability in croplands at a scale encountered by individual herbivores can suppress herbivory, even if that is not anticipated by nonlinear averaging. Published by Elsevier Ltd.
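The nonlinear-averaging baseline invoked above is Jensen's inequality, which a tiny numeric sketch makes concrete. The dose-response function and toxin levels here are made up for illustration; the abstract's point is that the observed suppression goes in the opposite direction from this prediction.

```python
# Sketch of the nonlinear-averaging logic: if herbivore performance f is a
# convex function of toxin dose, then averaging over a variable diet with
# the same mean predicts performance at least as high as on the constant
# diet (Jensen's inequality). An observed drop under variability therefore
# implicates time-dependent physiological tracking, not averaging.

def performance(toxin):
    return 1.0 / (1.0 + toxin) ** 2  # hypothetical convex dose response

constant_diet = performance(1.0)                             # mean toxin 1.0
variable_diet = 0.5 * (performance(0.2) + performance(1.8))  # same mean, high variance
print(variable_diet > constant_diet)  # True: averaging alone predicts no suppression
```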
Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip; Li, Dong-Juan
2017-11-01
A neural network (NN) adaptive control design problem is addressed for a class of uncertain multi-input-multi-output (MIMO) nonlinear systems in block-triangular form. The considered systems contain uncertain dynamics, their states are subject to bounded constraints, and couplings among the various inputs and outputs appear in each subsystem. To stabilize this class of systems, a novel adaptive control strategy is constructively framed by using the backstepping design technique and NNs. Novel integral barrier Lyapunov functionals (BLFs) are employed to prevent violation of the full state constraints. The proposed strategy not only guarantees the boundedness of the closed-loop system and drives the outputs to follow the reference signals, but also ensures that all the states remain in the predefined compact sets. Traditional BLF-based controls for full state constraints use transformed constraints on the errors, which requires the bounds of the virtual controllers to be determined explicitly; the proposed approach relaxes this conservative limitation, and it is the first to control this class of MIMO systems with full state constraints. The performance of the proposed control strategy is verified through a simulation example.
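The barrier-functional mechanism can be illustrated on a scalar toy system (this is not the paper's MIMO backstepping design; the plant, gain, and bound are all hypothetical). The feedback includes the gradient of a log-barrier Lyapunov function, which grows without bound as the state approaches its constraint, so the trajectory never reaches the boundary.

```python
# Toy barrier-Lyapunov sketch: keep the state of an unstable scalar plant
# dx/dt = x + u inside |x| < k_b. The control adds the gradient of
# V = 0.5 * log(k_b**2 / (k_b**2 - x**2)), which blows up near the bound.

k_b = 1.0            # state constraint |x| < k_b
x, dt = 0.9, 0.001   # start near the boundary
trajectory = []
for _ in range(10000):
    u = -2.0 * x - x / (k_b ** 2 - x ** 2)  # feedback + barrier gradient term
    x += dt * (x + u)                       # unstable open-loop pole at +1
    trajectory.append(x)

print(max(abs(v) for v in trajectory) < k_b)  # True: constraint never violated
```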
Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R
2006-12-01
The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Results indicated stride length (69.3%) and time from stride foot contact to ball release (39.7%) provided substantial contributions to ball velocity (p < .001). All trunk kinematic measures increased significantly with increasing component levels (p < .001). Results suggest that trunk linear and rotational velocities, degree of trunk tilt, time from stride foot contact to ball release, and ball velocity represented potential control parameters and, therefore, constraints on overarm throwing acquisition.
Lu-Hf constraints on the evolution of lunar basalts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujimaki, H.; Tatsumoto, M.
1984-02-15
Very low Ti basalts and green glass samples from the moon show high Lu/Hf ratios and low Hf concentrations. Low-Ti lunar basalts show high and variable Lu/Hf ratios and higher Hf concentrations, whereas high-Ti lunar basalts show low Lu/Hf ratios and high Hf concentrations. KREEP basalts have constant Lu/Hf ratios and high but variable Hf concentrations. Using the Lu-Hf behavior as a constraint, we propose a model for mare basalt evolution. This constraint requires extensive crystallization of the primary lunar magma ocean prior to formation of the lunar mare basalt sources and the KREEP basalts. Mare basalts are produced by the melting of the cumulate rocks, and KREEP basalts represent the residual liquid of the magma ocean.
An LMI approach for the Integral Sliding Mode and H∞ State Feedback Control Problem
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Henry, David
2015-11-01
This paper deals with the state feedback control problem for linear uncertain systems subject to both matched and unmatched perturbations. The proposed control law is based on the Integral Sliding Mode Control (ISMC) approach to tackle matched perturbations, as well as the H∞ paradigm for robustness against unmatched perturbations. The proposed method parallels the work presented in [1], which addressed the same problem and proposed a solution involving an Algebraic Riccati Equation (ARE)-based formulation. The contribution of this paper concerns the establishment of a Linear Matrix Inequality (LMI)-based solution, which offers the possibility to consider other types of constraints such as 𝓓-stability constraints (pole assignment-like constraints). The proposed methodology is applied to a pilot three-tank system, and experimental results illustrate its feasibility. Note that real experiments using SMC have rarely been reported in the past, owing to the highly energetic behaviour of the control signal. It is important to outline that the paper does not aim at proposing an LMI formulation of an ARE. That has been done since 1971 [2] and further discussed in [3], where the link between AREs and ARIs (algebraic Riccati inequalities) is established for the H∞ control problem. The main contribution of this paper is to establish the adequate LMI-based methodology (changes of matrix variables) so that the ARE corresponding to the particular structure of the mixed ISMC/H∞ scheme proposed in [1] can be re-formulated within the LMI paradigm.
X-RAY SOURCES IN THE DWARF SPHEROIDAL GALAXY DRACO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonbas, E.; Rangelov, B.; Kargaltsev, O.
2016-04-10
We present the spectral analysis of an 87 ks XMM-Newton observation of Draco, a nearby dwarf spheroidal galaxy. Of the approximately 35 robust X-ray source detections, we focus our attention on the brightest of these sources, for which we report X-ray and multiwavelength parameters. While most of the sources exhibit properties consistent with active galactic nuclei, a few possess the characteristics of low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs). Our analysis places constraints on the population of X-ray sources with L_X > 3 × 10^33 erg s^-1 in Draco, suggesting that there are no actively accreting black hole and neutron star binaries. However, we find four sources that could be quiescent-state LMXBs/CVs associated with Draco. We also place constraints on the central black hole luminosity and on a dark matter decay signal around 3.5 keV.
Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression
Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...
2017-01-18
Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.
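The double-then-higher-precision strategy behind DQQ can be sketched on a tiny system. This is not MINOS or an ME model: Python's exact Fraction arithmetic stands in for quad precision, the 2x2 system is an ill-conditioned toy, and one step of iterative refinement is shown.

```python
from fractions import Fraction

# Sketch of precision escalation: solve once in double precision, then
# compute the residual and a correction in exact arithmetic (our stand-in
# for quad precision). One exact refinement step makes the solution satisfy
# the stored system exactly.

def solve2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] @ [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

A = [[1.0, 1.0], [1.0, 1.0 + 1e-12]]   # nearly singular coefficient matrix
rhs = [2.0, 2.0 + 1e-12]

# pass 1: plain double precision
x, y = solve2(A[0][0], A[0][1], A[1][0], A[1][1], rhs[0], rhs[1])

# pass 2: residual and correction in exact arithmetic
AF = [[Fraction(v) for v in row] for row in A]
rF = [Fraction(v) for v in rhs]
res = [rF[i] - AF[i][0] * Fraction(x) - AF[i][1] * Fraction(y) for i in range(2)]
dx, dy = solve2(AF[0][0], AF[0][1], AF[1][0], AF[1][1], res[0], res[1])
x2, y2 = Fraction(x) + dx, Fraction(y) + dy

# the refined solution satisfies the stored system exactly
res2 = [rF[i] - AF[i][0] * x2 - AF[i][1] * y2 for i in range(2)]
print(res2 == [0, 0])  # True
```

The warm-start role mentioned in the abstract is analogous: the cheap double-precision pass lands close enough that the expensive exact pass has little work to do.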
Development of an adaptive hp-version finite element method for computational optimal control
NASA Technical Reports Server (NTRS)
Hodges, Dewey H.; Warner, Michael S.
1994-01-01
In this research effort, the usefulness of hp-version finite elements and adaptive solution-refinement techniques in generating numerical solutions to optimal control problems has been investigated. Under NAG-939, a general FORTRAN code was developed which approximated solutions to optimal control problems with control constraints and state constraints. Within that methodology, to get high-order accuracy in solutions, the finite element mesh would have to be refined repeatedly through bisection of the entire mesh in a given phase. In the current research effort, the order of the shape functions in each element has been made a variable, giving more flexibility in error reduction and smoothing. Similarly, individual elements can each be subdivided into many pieces, depending on the local error indicator, while other parts of the mesh remain coarsely discretized. The problem remains to reduce and smooth the error while still keeping computational effort reasonable enough to calculate time histories in a short enough time for on-board applications.
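The local-refinement idea above, refining only where the error indicator is large rather than bisecting the whole mesh, can be sketched in one dimension. This is an illustrative toy, not the NAG-939 code: the target function, indicator (linear-interpolation error), and refinement count are all made up, and p-refinement is omitted.

```python
import math

# Sketch of adaptive h-refinement: repeatedly bisect only the element whose
# local error indicator is largest, leaving the rest of the mesh coarse.

def interp_error(f, a, b, samples=20):
    """Max error of linear interpolation of f on [a, b]: the error indicator."""
    fa, fb = f(a), f(b)
    return max(abs(f(a + (b - a) * t / samples)
                   - (fa + (fb - fa) * t / samples))
               for t in range(samples + 1))

def f(x):
    return math.exp(-40 * (x - 0.3) ** 2)   # sharp feature near x = 0.3

mesh = [0.0, 0.5, 1.0]
for _ in range(12):                          # adaptive refinement loop
    errs = [interp_error(f, mesh[i], mesh[i + 1]) for i in range(len(mesh) - 1)]
    worst = errs.index(max(errs))
    mesh.insert(worst + 1, 0.5 * (mesh[worst] + mesh[worst + 1]))  # bisect worst

errs = [interp_error(f, mesh[i], mesh[i + 1]) for i in range(len(mesh) - 1)]
print(len(mesh) - 1, max(errs))  # 14 elements, error far below the 2-element mesh
```

The elements cluster around x = 0.3 where the feature lives, which is exactly the economy the text describes for on-board use.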
Tchamna, Rodrigue; Lee, Moonyong
2018-01-01
This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
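The three operational constraints named above can be monitored in a minimal closed-loop simulation. This is an illustrative sketch, not the paper's analytical design: the plant, PI gains, and limit values are all hypothetical, and constraints are only checked after the fact rather than enforced by the design.

```python
# Toy simulation: a PI controller regulating an unstable first-order process
# dy/dt = y + u, while the three operational constraints from the abstract
# (process variable, manipulated variable, and its rate of change) are
# monitored along the trajectory. All numbers are assumed for illustration.

dt, T = 0.01, 20.0
Kp, Ki = 3.0, 2.0                      # hypothetical PI gains
setpoint = 1.0
y_max, u_max, du_max = 1.6, 4.0, 10.0  # assumed operational limits
y, integ, u_prev = 0.0, 0.0, 0.0
ok = True
for k in range(int(T / dt)):
    e = setpoint - y
    integ += e * dt
    u = Kp * e + Ki * integ
    if k > 0:                          # skip the initial control jump
        ok = (ok and abs(y) <= y_max and abs(u) <= u_max
              and abs(u - u_prev) / dt <= du_max)
    u_prev = u
    y += dt * (y + u)                  # unstable open-loop pole at +1
print(ok, y)  # constraints respected along the trajectory; y settles near 1
```

With these gains the closed loop has poles at -1 ± i, so the response is a damped oscillation whose overshoot stays inside the assumed process-variable limit.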
DOE Office of Scientific and Technical Information (OSTI.GOV)
T. H. Brikowski; D. L. Norton; D. D. Blackwell
Final project report of the natural state modeling effort for The Geysers geothermal field, California. Initial models examined the liquid-dominated state of the system, based on geologic constraints and calibrated to match observed whole-rock delta-O18 isotope alteration. These models demonstrated that the early system was of generally low permeability (around 10^-12 m^2), with good hydraulic connectivity at depth (along the intrusive contact) and an intact caprock. Later effort in the project was directed at development of a two-phase, supercritical flow simulation package (EOS1sc) to accompany the Tough2 flow simulator. Geysers models made using this package show that 'simmering', or the transient migration of vapor bubbles through the hydrothermal system, is the dominant transition state as the system progresses to vapor-dominated. Such a system is highly variable in space and time, making the rock record more difficult to interpret, since pressure-temperature indicators likely reflect only local, short-duration conditions.
The quantum holonomy-diffeomorphism algebra and quantum gravity
NASA Astrophysics Data System (ADS)
Aastrup, Johannes; Grimstrup, Jesper Møller
2016-03-01
We introduce the quantum holonomy-diffeomorphism ∗-algebra, which is generated by holonomy-diffeomorphisms on a three-dimensional manifold and translations on a space of SU(2)-connections. We show that this algebra encodes the canonical commutation relations of canonical quantum gravity formulated in terms of Ashtekar variables. Furthermore, we show that semiclassical states exist on the holonomy-diffeomorphism part of the algebra but that these states cannot be extended to the full algebra. Via a Dirac-type operator we derive a certain class of unbounded operators that act in the GNS construction of the semiclassical states. These unbounded operators are the type of operators, which we have previously shown to entail the spatial three-dimensional Dirac operator and Dirac-Hamiltonian in a semiclassical limit. Finally, we show that the structure of the Hamilton constraint emerges from a Yang-Mills-type operator over the space of SU(2)-connections.
Tran, Tri; Ha, Q P
2018-01-01
A perturbed cooperative-state feedback (PSF) strategy is presented for the control of interconnected systems in this paper. The subsystems of an interconnected system can exchange data via a communication network that has multiple connection topologies. The PSF strategy can resolve both issues, sensor data losses and communication network breaks, thanks to the two components of the control: a cooperative-state feedback and a perturbation variable, e.g., u_i = K_ij x_j + w_i. The PSF is implemented in a decentralized model predictive control scheme with a stability constraint and a non-monotonic storage function (ΔV(x(k)) ≥ 0), derived from dissipative systems theory. A numerical simulation of the automatic generation control problem in power systems is studied to illustrate the effectiveness of the presented PSF strategy. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
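The feedback law quoted in the abstract, u_i = K_ij x_j + w_i, can be sketched for two coupled scalar subsystems. All numbers here are hypothetical (this is not the paper's MPC scheme), and the perturbation variable w_i is simply set to zero; the cooperative term uses the neighbour's state to cancel the interconnection.

```python
# Toy sketch of cooperative-state feedback for two coupled, locally unstable
# scalar subsystems x_i(k+1) = a*x_i + c*x_j + u_i, with
# u_i = K_self*x_i + K_coop*x_j + w_i. Here K_coop cancels the coupling and
# K_self stabilizes the local dynamics; w_i = 0 (no perturbation injected).

a, c = 1.1, 0.1            # unstable local dynamics, weak coupling
K_self, K_coop = -1.0, -c  # closed loop becomes x_i(k+1) = 0.1 * x_i
x = [1.0, -0.5]
w = [0.0, 0.0]
for k in range(60):
    u = [K_self * x[0] + K_coop * x[1] + w[0],
         K_self * x[1] + K_coop * x[0] + w[1]]
    x = [a * x[0] + c * x[1] + u[0],
         a * x[1] + c * x[0] + u[1]]
print(abs(x[0]) + abs(x[1]))  # contracts by a factor of 10 per step: ~0
```

Losing the neighbour's state (a communication break) would leave only the local term, which is the situation the perturbation variable w_i is introduced to handle.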
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. 
These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
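For a single linear constraint under Gaussian uncertainty, the standard conversion underlying such chance-constrained formulations has a closed form, which a short sketch can verify by simulation. The numbers below are toy values, not mission parameters.

```python
import random
from statistics import NormalDist

# Sketch of the standard single chance-constraint conversion: for Gaussian
# noise with std sigma, requiring P(x > x_max) <= risk is equivalent to
# backing the nominal state off the limit by sigma * Phi^{-1}(1 - risk).

sigma, x_max, risk = 0.5, 10.0, 0.05
margin = sigma * NormalDist().inv_cdf(1 - risk)  # deterministic tightening
x_nominal = x_max - margin                       # most aggressive feasible plan

rng = random.Random(0)
trials = 20000
violations = sum(1 for _ in range(trials)
                 if x_nominal + rng.gauss(0, sigma) > x_max)
print(violations / trials)  # empirical failure rate close to the 0.05 risk bound
```

Decomposing a joint chance constraint over many such individual constraints (and bounding their sum) is the step that makes the branch-and-bound formulation in the abstract tractable.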
Reinforcement, Behavior Constraint, and the Overjustification Effect.
ERIC Educational Resources Information Center
Williams, Bruce W.
1980-01-01
Four levels of the behavior constraint-reinforcement variable were manipulated: attractive reward, unattractive reward, request to perform, and a no-reward control. Only the unattractive reward and request groups showed the performance decrements that suggest the overjustification effect. It is concluded that reinforcement does not cause the…
Transoptr — A second order beam transport design code with optimization and constraints
NASA Astrophysics Data System (ADS)
Heighway, E. A.; Hutcheon, R. M.
1981-08-01
This code was written initially to design an achromatic and isochronous reflecting magnet and has been extended to compete in capability (for constrained problems) with TRANSPORT. Its advantage is its flexibility in that the user writes a routine to describe his transport system. The routine allows the definition of general variables from which the system parameters can be derived. Further, the user can write any constraints he requires as algebraic equations relating the parameters. All variables may be used in either a first or second order optimization.
REDUCTION OF CONSTRAINTS FOR COUPLED OPERATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raszewski, F.; Edwards, T.
2009-12-15
The homogeneity constraint was implemented in the Defense Waste Processing Facility (DWPF) Product Composition Control System (PCCS) to help ensure that the current durability models would be applicable to the glass compositions being processed during DWPF operations. While the homogeneity constraint is typically an issue at lower waste loadings (WLs), it may impact the operating windows for DWPF operations, where the glass forming systems may be limited to lower waste loadings based on fissile or heat load limits. In the sludge batch 1b (SB1b) variability study, application of the homogeneity constraint at the measurement acceptability region (MAR) limit eliminated muchmore » of the potential operating window for DWPF. As a result, Edwards and Brown developed criteria that allowed DWPF to relax the homogeneity constraint from the MAR to the property acceptance region (PAR) criterion, which opened up the operating window for DWPF operations. These criteria are defined as: (1) use the alumina constraint as currently implemented in PCCS (Al{sub 2}O{sub 3} {ge} 3 wt%) and add a sum of alkali constraint with an upper limit of 19.3 wt% ({Sigma}M{sub 2}O < 19.3 wt%), or (2) adjust the lower limit on the Al{sub 2}O{sub 3} constraint to 4 wt% (Al{sub 2}O{sub 3} {ge} 4 wt%). Herman et al. previously demonstrated that these criteria could be used to replace the homogeneity constraint for future sludge-only batches. The compositional region encompassing coupled operations flowsheets could not be bounded as these flowsheets were unknown at the time. With the initiation of coupled operations at DWPF in 2008, the need to revisit the homogeneity constraint was realized. This constraint was specifically addressed through the variability study for SB5 where it was shown that the homogeneity constraint could be ignored if the alumina and alkali constraints were imposed. 
Additional benefit could be gained if the homogeneity constraint could be replaced by the Al2O3 and sum of alkali constraints for future coupled operations processing based on projections from Revision 14 of the High Level Waste (HLW) System Plan. As with the first phase of testing for sludge-only operations, replacement of the homogeneity constraint with the alumina and sum of alkali constraints will ensure acceptable product durability over the compositional region evaluated. Although these study glasses only provide limited data in a large compositional region, the approach and results are consistent with previous studies that challenged the homogeneity constraint for sludge-only operations. That is, minimal benefit is gained by imposing the homogeneity constraint if the other PCCS constraints are satisfied. The normalized boron releases of all of the glasses are well below the Environmental Assessment (EA) glass results, regardless of thermal history. Although one of the glasses had a normalized boron release of approximately 10 g/L and was not predictable, the glass is still considered acceptable. This particular glass has a low Al2O3 concentration, which may have contributed to the anomalous behavior. Given that poor durability has previously been observed in other glasses with low Al2O3 and Fe2O3 concentrations, including the sludge-only reduction of constraints study, further investigation appears to be warranted. Based on the results of this study, it is recommended that the homogeneity constraint (in its entirety, with the associated low frit/high frit constraints) be eliminated for coupled operations as defined by Revision 14 of the HLW System Plan with up to 2 wt% TiO2. The use of the alumina and sum of alkali constraints should be continued, along with the variability study, to determine the predictability of the current durability models and/or that the glasses are acceptable with respect to durability.
The use of a variability study for each batch is consistent with the glass product control program, and it will help to assess new streams or compositional changes. It is also recommended that the influence of alumina and alkali on durability be studied in greater detail. Limited data suggest that there may be a need to adjust the lower Al2O3 limit and/or the upper alkali limit in order to prevent the fabrication of unacceptable glasses. An in-depth evaluation of all previous data, as well as any new data, would help to better define an alumina and alkali combination that would avoid potential phase separation and ensure glass durability.
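The two alternative acceptance criteria above reduce to a simple composition check. A minimal Python sketch, assuming oxide values in wt%; the function name and interface are hypothetical, not part of PCCS:

```python
def meets_alternative_criteria(al2o3_wt, sum_alkali_wt):
    """True if either criterion replacing the homogeneity constraint holds.

    Criterion 1: Al2O3 >= 3 wt% and sum of alkali (sum M2O) < 19.3 wt%.
    Criterion 2: Al2O3 >= 4 wt%.
    """
    criterion_1 = al2o3_wt >= 3.0 and sum_alkali_wt < 19.3
    criterion_2 = al2o3_wt >= 4.0
    return criterion_1 or criterion_2
```

A glass with 4.2 wt% Al2O3 passes on criterion 2 regardless of alkali content, which is exactly the relaxation that widens the operating window.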
NASA Astrophysics Data System (ADS)
Brigatti, M. F.; Elmi, C.; Laurora, A.; Malferrari, D.; Medici, L.
2009-04-01
The management of polluted sediments removed from drainage and irrigation canals is a severe problem from both an environmental and an economic viewpoint. To retain their functionality over time, canals need their beds periodically cleared of the sediments that accumulate there. Managing the removed sediments is extremely demanding, also economically, if they must be treated as hazardous waste materials, as required by numerous international standards. Furthermore, the disposal of such a large amount of material may itself have a significant environmental impact. An appealing alternative is the recovery or reuse of these materials, for example in the brick and tile industry, after the application of appropriate techniques and protocols that render them no longer a threat to human health. Assessing the actual danger that sediments pose to human health and the ecosystem, before and after treatment, requires a careful chemical and mineralogical characterization and, even if not always considered in international standards, the determination of the coordination shell of the heavy metals present. Some of these metals are dangerous to human health as a function of their oxidation state and coordination (e.g. Cr and Pb), while others introduce technological constraints or affect the features of the end products. Fe is a good representative of this second category, as features of the end product such as color depend strongly not only on Fe concentration but also on its oxidation state, speciation, and coordination. This work first provides a mineralogical characterization of sediments from various sampling points along irrigation and drainage canals of the Po river region in north-eastern Italy. Samples were investigated with various approaches, including X-ray powder diffraction under non-ambient conditions, thermal analysis, and EXAFS spectroscopy.
The results obtained, and in particular the EXAFS spectra, were used to define and optimize the technological variables of the recovery process.
Manning, Kathryn Y; Menon, Ravi S; Gorter, Jan Willem; Mesterman, Ronit; Campbell, Craig; Switzer, Lauren; Fehlings, Darcy
2016-02-01
Using resting state functional magnetic resonance imaging (MRI), we aim to understand the neurologic basis of improved function in children with hemiplegic cerebral palsy treated with constraint-induced movement therapy. Eleven children including 4 untreated comparison subjects diagnosed with hemiplegic cerebral palsy were recruited from 3 clinical centers. MRI and clinical data were gathered at baseline and 1 month for both groups, and 6 months later for the case group only. After constraint therapy, the sensorimotor resting state network became more bilateral, with balanced contributions from each hemisphere, which was sustained 6 months later. Sensorimotor resting state network reorganization after therapy was correlated with a change in the Quality of Upper Extremity Skills Test score at 1 month (r = 0.79, P = .06), and Canadian Occupational Performance Measure scores at 6 months (r = 0.82, P = .05). This clinically correlated resting state network reorganization provides further evidence of the neuroplastic mechanisms underlying constraint-induced movement therapy. © The Author(s) 2015.
Pinning of fermionic occupation numbers.
Schilling, Christian; Gross, David; Christandl, Matthias
2013-01-25
The Pauli exclusion principle is a constraint on the natural occupation numbers of fermionic states. It has been suspected since at least the 1970s, and only proved very recently, that there is a multitude of further constraints on these numbers, generalizing the Pauli principle. Here, we provide the first analytic analysis of the physical relevance of these constraints. We compute the natural occupation numbers for the ground states of a family of interacting fermions in a harmonic potential. Intriguingly, we find that the occupation numbers are almost, but not exactly, pinned to the boundary of the allowed region (quasipinned). The result suggests that the physics behind the phenomenon is richer than previously appreciated. In particular, it shows that for some models, the generalized Pauli constraints play a role for the ground state, even though they do not limit the ground-state energy. Our findings suggest a generalization of the Hartree-Fock approximation.
Magnetospheric Gamma-Ray Emission in Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Katsoulakos, Grigorios; Rieger, Frank M.
2018-01-01
The rapidly variable, very high-energy (VHE) gamma-ray emission from active galactic nuclei (AGNs) has been frequently associated with non-thermal processes occurring in the magnetospheres of their supermassive black holes. The present work aims to explore the adequacy of different gap-type (unscreened electric field) models to account for the observed characteristics. Based on a phenomenological description of the gap potential, we estimate the maximum extractable gap power L_gap for different magnetospheric setups, and study its dependence on the accretion state of the source. L_gap is found in general to be proportional to the Blandford–Znajek jet power L_BZ and to be a sensitive function of the gap size h, i.e., L_gap ∼ L_BZ (h/r_g)^β, where the power index β ≥ 1 depends on the respective gap setup. The transparency of the vicinity of the black hole to VHE photons generally requires a radiatively inefficient accretion environment and thereby imposes constraints on possible accretion rates, and correspondingly on L_BZ. Similarly, rapid variability, if observed, may allow one to constrain the gap size h ∼ cΔt. Combining these constraints, we provide a general classification to assess the likelihood that the VHE gamma-ray emission observed from an AGN can be attributed to a magnetospheric origin. When applied to prominent candidate sources, these considerations suggest that the variable (day-scale) VHE activity seen in the radio galaxy M87 could be compatible with a magnetospheric origin, while such an origin appears less likely for the (minute-scale) VHE activity in IC 310.
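The quoted scaling lends itself to a one-line estimate. A minimal sketch of the relation L_gap ∼ L_BZ (h/r_g)^β; units and example values are illustrative, not taken from the paper:

```python
def gap_power(L_BZ, h_over_rg, beta=1.0):
    """Maximum extractable gap power, L_gap ~ L_BZ * (h / r_g)**beta,
    with the power index beta >= 1 depending on the gap setup."""
    return L_BZ * h_over_rg ** beta
```

With β = 2, shrinking the gap to a tenth of the gravitational radius cuts the extractable power by a factor of 100, which is why rapid variability (small h ∼ cΔt) constrains magnetospheric emission models so strongly.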
Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach
NASA Astrophysics Data System (ADS)
Geist, E. L.; Parsons, T.
2016-12-01
Statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data, and include estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been developed recently, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip rates >1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.
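The structure of the optimization can be seen at toy scale: with a handful of synthetic events and two faults, the binary assignment minimizing the L1 slip-rate misfit, subject to each fault's maximum-magnitude constraint, can be found by exhaustive search. All numbers below are made up for illustration; real instances need a BIP solver:

```python
import itertools

# Toy version of the binary assignment: each synthetic earthquake is
# placed on exactly one fault; choose the placement minimizing the
# L1 misfit to target slip rates. Values are illustrative, not UCERF3.
quake_slips = [2.0, 1.0, 1.0]   # slip contribution of each event
quake_mags = [7.0, 6.2, 6.4]
target = {"A": 3.0, "B": 1.0}   # target slip rate per fault
max_mag = {"A": 7.5, "B": 6.5}  # implicit constraint from fault length

def best_assignment():
    best, best_misfit = None, float("inf")
    for assign in itertools.product(target, repeat=len(quake_slips)):
        # feasibility: an event's magnitude must not exceed the fault's max
        if any(quake_mags[i] > max_mag[f] for i, f in enumerate(assign)):
            continue
        accrued = {f: 0.0 for f in target}
        for i, f in enumerate(assign):
            accrued[f] += quake_slips[i]
        misfit = sum(abs(accrued[f] - target[f]) for f in target)
        if misfit < best_misfit:
            best, best_misfit = assign, misfit
    return best, best_misfit
```

Here the M7.0 event is forced onto fault A by the magnitude constraint, and the optimizer splits the remaining events to hit both slip-rate targets exactly. At >10^5 binary variables this brute force is hopeless, which is why branch-and-bound and the other techniques listed above are needed.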
Systems and methods for energy cost optimization in a building system
Turney, Robert D.; Wenzel, Michael J.
2016-09-06
Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
A differentiable reformulation for E-optimal design of experiments in nonlinear dynamic biosystems.
Telen, Dries; Van Riet, Nick; Logist, Flip; Van Impe, Jan
2015-06-01
Informative experiments are highly valuable for estimating parameters in nonlinear dynamic bioprocesses. Techniques for optimal experiment design ensure the systematic design of such informative experiments. The E-criterion, which can be used as the objective function in optimal experiment design, requires the maximization of the smallest eigenvalue of the Fisher information matrix. However, one problem with the minimal eigenvalue function is that it can be nondifferentiable. In addition, no closed form expression exists for the computation of eigenvalues of a matrix larger than a 4 by 4 one. As eigenvalues are normally computed with iterative methods, state-of-the-art optimal control solvers are not able to exploit automatic differentiation to compute the derivatives with respect to the decision variables. In the current paper a reformulation strategy from the field of convex optimization is suggested to circumvent these difficulties. This reformulation requires the inclusion of a matrix inequality constraint involving positive semidefiniteness. In this paper, this positive semidefiniteness constraint is imposed via Sylvester's criterion. As a result the maximization of the minimum eigenvalue function can be formulated in standard optimal control solvers through the addition of nonlinear constraints. The presented methodology is successfully illustrated with a case study from the field of predictive microbiology. Copyright © 2015. Published by Elsevier Inc.
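The reformulation can be illustrated on a 2-by-2 Fisher information matrix: maximizing a scalar t subject to F − tI being positive definite, with definiteness expressed through Sylvester's criterion on leading principal minors, recovers the smallest eigenvalue via purely scalar, differentiable constraints. A minimal sketch with made-up numbers, not the paper's optimal-control formulation:

```python
# Toy 2x2 symmetric "Fisher information" matrix; values are illustrative.
F = [[4.0, 1.0],
     [1.0, 3.0]]

def minors_positive(t):
    """Sylvester's criterion for F - t*I: both leading principal
    minors must be positive for positive definiteness."""
    a, b = F[0][0] - t, F[1][1] - t
    c = F[0][1]
    return a > 0 and a * b - c * c > 0

def max_t(lo=0.0, hi=10.0, iters=60):
    """Bisection on t: the largest t keeping F - t*I positive definite
    equals the smallest eigenvalue of F."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if minors_positive(mid) else (lo, mid)
    return lo
```

For this F the smallest eigenvalue is (7 − √5)/2 ≈ 2.382, and the minor-based search recovers it without ever calling an iterative eigensolver, which is the point of the reformulation.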
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.
1971-01-01
An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show that this method yields better solutions (in terms of resolution) to the particular problem than those of a standard analog program, and demonstrate the flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Examples of slit usage and a limited comparison of predicted results with actual results obtained with a 600 MeV cyclotron are given.
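The creeping-random-search idea, perturb one parameter at a time, keep only improvements, and clip to parameter bounds so no infeasible solution can arise, can be sketched as follows. The objective below is illustrative, not the beam-transport model:

```python
import random

def creeping_random_search(f, x0, bounds, sigma=0.1, iters=2000, seed=0):
    """Minimize f by sequential random perturbation ('creeping random
    search'): perturb one parameter at a time and keep only improvements.
    Parameter constraints are enforced by clipping to bounds, which
    rules out infeasible solutions by construction."""
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    for _ in range(iters):
        i = rng.randrange(len(x))           # pick one parameter
        trial = list(x)
        step = rng.gauss(0.0, sigma)
        trial[i] = min(max(trial[i] + step, bounds[i][0]), bounds[i][1])
        ft = f(trial)
        if ft < fx:                         # accept only improvements
            x, fx = trial, ft
    return x, fx
```

The acceptance rule makes every intermediate design feasible and no worse than its predecessor, which suits the interactive, man-in-the-loop use described above.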
NASA Astrophysics Data System (ADS)
Cabral, Mariza Castanheira De Moura Da Costa
In the fifty-two years since Robert Horton's 1945 pioneering quantitative description of channel network planform (or plan view morphology), no conclusive findings have been presented that permit inference of geomorphological processes from any measures of network planform. All measures of network planform studied exhibit limited geographic variability across different environments. Horton (1945), Langbein et al. (1947), Schumm (1956), Hack (1957), Melton (1958), and Gray (1961) established various "laws" of network planform, that is, statistical relationships between different variables which have limited variability. A wide variety of models which have been proposed to simulate the growth of channel networks in time over a landsurface are generally also in agreement with the above planform laws. An explanation is proposed for the generality of the channel network planform laws. Channel networks must be space filling, that is, they must extend over the landscape to drain every hillslope, leaving no large undrained areas, and with no crossing of channels, often achieving a roughly uniform drainage density in a given environment. It is shown that the space-filling constraint can reduce the sensitivity of planform variables to different network growth models, and it is proposed that this constraint may determine the planform laws. The "Q model" of network growth of Van Pelt and Verwer (1985) is used to generate samples of networks. Sensitivity to the model parameter Q is markedly reduced when the networks generated are required to be space filling. For a wide variety of Q values, the space-filling networks are in approximate agreement with the various channel network planform laws. Additional constraints, including of energy efficiency, were not studied but may further reduce the variability of planform laws. Inference of model parameter Q from network topology is successful only in networks not subject to spatial constraints. 
In space-filling networks, for a wide range of Q values, the maximal-likelihood Q parameter value is generally in the vicinity of 1/2, which yields topological randomness. It is proposed that space filling originates the appearance of randomness in channel network topology, and may cause difficulties to geomorphological inference from network planform.
A motion-constraint logic for moving-base simulators based on variable filter parameters
NASA Technical Reports Server (NTRS)
Miller, G. K., Jr.
1974-01-01
A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
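The core idea, filter parameters scheduled on instantaneous motion-base position, can be sketched with a second-order filter whose natural frequency grows toward the positional limit. The linear interpolation law, gains, and limits below are illustrative assumptions, not the constraint logic of the study:

```python
def variable_omega(pos, pos_limit, omega_min=0.5, omega_max=4.0):
    """Schedule the filter natural frequency on instantaneous position:
    the closer the base is to its positional limit, the stiffer the
    constraint (larger omega). Interpolation law is an assumption."""
    frac = min(abs(pos) / pos_limit, 1.0)
    return omega_min + (omega_max - omega_min) * frac

def constraint_step(state, accel_cmd, pos_limit, dt=0.01, zeta=0.7):
    """One Euler step of x'' + 2*zeta*omega(x)*x' + omega(x)**2 * x = a_cmd,
    i.e. a second-order filter with position-dependent parameters."""
    x, v = state
    omega = variable_omega(x, pos_limit)
    a = accel_cmd - 2.0 * zeta * omega * v - omega ** 2 * x
    return (x + dt * v, v + dt * a)
```

Driving this filter with a sustained acceleration command leaves the position bounded inside the limit: large inputs are progressively attenuated instead of driving the base to an automatic shutdown, as happens with fixed-parameter linear filters.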
Quantized Average Consensus on Gossip Digraphs with Reduced Computation
NASA Astrophysics Data System (ADS)
Cai, Kai; Ishii, Hideaki
The authors have recently proposed a class of randomized gossip algorithms which solve the distributed averaging problem on directed graphs, with the constraint that each node has an integer-valued state. The essence of this algorithm is to maintain local records, called “surplus”, of individual state updates, thereby achieving quantized average consensus even though the state sum of all nodes is not preserved. In this paper we study a modified version of this algorithm, whose feature is primarily in reducing both computation and communication effort. Concretely, each node needs to update fewer local variables, and can transmit surplus by requiring only one bit. Under this modified algorithm we prove that reaching the average is ensured for arbitrary strongly connected graphs. The condition of arbitrary strong connection is less restrictive than those known in the literature for either real-valued or quantized states; in particular, it does not require the special structure on the network called balanced. Finally, we provide numerical examples to illustrate the convergence result, with emphasis on convergence time analysis.
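For contrast with the surplus-based scheme, the baseline it generalizes is classic sum-preserving quantized gossip on undirected edges, where a random pair replaces its integer states with the floor and ceiling of their mean. A minimal sketch of that baseline (not the paper's directed-graph algorithm, which must track surpluses precisely because the state sum alone is not preserved):

```python
import random

def quantized_gossip(x, edges, steps=20000, seed=1):
    """Sum-preserving quantized gossip on undirected edges: a randomly
    chosen pair averages to floor/ceil, keeping all states integer.
    Integer states converge to within one unit of the average."""
    rng = random.Random(seed)
    x = list(x)
    for _ in range(steps):
        i, j = rng.choice(edges)
        total = x[i] + x[j]
        x[i], x[j] = total // 2, total - total // 2  # sum is preserved
    return x
```

On directed graphs a node cannot symmetrically update its neighbor, so this invariant breaks; the surplus records in the paper restore an equivalent conserved quantity with only one extra transmitted bit.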
Ławryńczuk, Maciej
2017-03-01
This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
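The successive-linearisation idea can be shown on a scalar toy model: relinearise the nonlinear prediction at the current operating point at every step, so the quadratic cost has a closed-form minimiser. The model, horizon of one step, and weights below are illustrative, not the boiler-turbine unit:

```python
def sl_mpc_step(x, ref, dt=0.1, r=0.01):
    """One step of successively linearised MPC for the toy model
    x+ = x + dt * (-x**3 + u).

    Linearising about the current state gives the affine prediction
    x+ ≈ a + b*u; minimising (x+ - ref)**2 + r*u**2 is then a scalar
    quadratic programme with a closed-form solution."""
    a = x + dt * (-x ** 3)   # free response at the current operating point
    b = dt                   # input sensitivity of the prediction
    u = b * (ref - a) / (b * b + r)
    return u, a + b * u
```

Iterating this controller drives the state close to the set-point while solving only a trivial quadratic problem per sample, which is the computational advantage the paper reports over repeated nonlinear optimisation.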
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yuping; Zheng, Qipeng P.; Wang, Jianhui
2014-11-01
This paper presents a two-stage stochastic unit commitment (UC) model, which integrates non-generation resources such as demand response (DR) and energy storage (ES) while including risk constraints to balance cost and system reliability under the fluctuation of variable generation such as wind and solar power. The paper uses conditional value-at-risk (CVaR) measures to model risks associated with the decisions in a stochastic environment. In contrast to chance-constrained models requiring extra binary variables, risk constraints based on CVaR involve only linear constraints and continuous variables, making them more computationally attractive. The proposed models with risk constraints are able to avoid over-conservative solutions but still ensure system reliability as represented by loss of load. Numerical experiments are conducted to study the effects of non-generation resources on generator schedules and the difference in total expected generation costs with risk consideration. Sensitivity analysis based on reliability parameters is also performed to test the decision preferences of confidence levels and load-shedding loss allowances on generation cost reduction.
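The risk measure itself is easy to state: empirical CVaR at level α is the mean of the worst (1 − α) fraction of scenario losses, which is why it can be expressed with linear constraints and continuous variables in a stochastic programme. A minimal sketch of the empirical estimator, not the paper's UC model:

```python
def cvar(losses, alpha=0.95):
    """Empirical conditional value-at-risk: the average of the worst
    (1 - alpha) fraction of scenario losses."""
    worst = sorted(losses, reverse=True)
    k = max(1, round(len(losses) * (1 - alpha)))
    return sum(worst[:k]) / k
```

For 100 equally likely loss scenarios and α = 0.95, CVaR is the mean of the five largest losses; a chance constraint, by contrast, would need a binary indicator per scenario.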
NASA Astrophysics Data System (ADS)
Jung, Sang-Young
Design procedures for aircraft wing structures with control surfaces are presented using multidisciplinary design optimization. Several disciplines such as stress analysis, structural vibration, aerodynamics, and controls are considered simultaneously and combined for design optimization. Vibration data and aerodynamic data including those in the transonic regime are calculated by existing codes. Flutter analyses are performed using those data. A flutter suppression method is studied using control laws in the closed-loop flutter equation. For the design optimization, optimization techniques such as approximation, design variable linking, temporary constraint deletion, and optimality criteria are used. Sensitivity derivatives of stresses and displacements for static loads, natural frequency, flutter characteristics, and control characteristics with respect to design variables are calculated for an approximate optimization. The objective function is the structural weight. The design variables are the section properties of the structural elements and the control gain factors. Existing multidisciplinary optimization codes (ASTROS* and MSC/NASTRAN) are used to perform single and multiple constraint optimizations of fully built up finite element wing structures. Three benchmark wing models are developed and/or modified for this purpose. The models are tested extensively.
Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni
This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.
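A virtual holonomic constraint makes every joint angle a geometric function of a single scalar variable, so stabilizing the constraint manifold reduces path following to controlling that one variable. A minimal sketch of a lateral-undulation-style constraint; the amplitude, phase lag, and function name are hypothetical, not the paper's design:

```python
import math

def gait_constraint(s, n_joints=5, amplitude=0.5, phase_lag=0.8):
    """Desired joint angles as functions of a single scalar variable s:
    theta_i = amplitude * sin(s + i * phase_lag). Feedback that enforces
    these relations renders the constraint manifold invariant."""
    return [amplitude * math.sin(s + i * phase_lag) for i in range(n_joints)]
```

The phase lag between successive joints produces a travelling body wave as s advances, which is what propels a planar snake robot along its path.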
Determining on-fault earthquake magnitude distributions from integer programming
Geist, Eric L.; Parsons, Thomas E.
2018-01-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
Scheduling Results for the THEMIS Observation Scheduling Tool
NASA Technical Reports Server (NTRS)
Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip
2011-01-01
We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
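Once polygon overlaps are rasterised to grid cells, target coverage becomes a set-covering problem amenable to greedy selection. A minimal sketch of that greedy step; the data structures are assumptions, not the tool's actual API:

```python
def greedy_cover(universe, observations):
    """Greedily pick the observation covering the most still-uncovered
    grid cells; stop when everything is covered or no observation helps.
    `observations` maps an observation name to its set of covered cells."""
    uncovered = set(universe)
    schedule = []
    while uncovered:
        name = max(observations, key=lambda n: len(observations[n] & uncovered))
        if not observations[name] & uncovered:
            break  # remaining cells are unreachable by any observation
        schedule.append(name)
        uncovered -= observations[name]
    return schedule, uncovered
```

In the real tool the operations constraints (data volume, timing, lighting, priorities) would enter as feasibility checks inside the selection loop, pruning observations that the spacecraft cannot actually execute.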
Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing
NASA Technical Reports Server (NTRS)
Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.
2003-01-01
The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic PathFinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
NASA Astrophysics Data System (ADS)
Ben Achour, Jibril; Brahma, Suddhasattwa
2018-06-01
When applying the techniques of loop quantum gravity (LQG) to symmetry-reduced gravitational systems, one first regularizes the scalar constraint using holonomy corrections, prior to quantization. In inhomogeneous systems, where a residual spatial diffeomorphism symmetry survives, such a modification of the gauge generator that generates time reparametrization can potentially lead to deformations or anomalies in the modified algebra of first-class constraints. When working with self-dual variables, it has already been shown that, for spherically symmetric geometry coupled to a scalar field, the holonomy-modified constraints do not generate any modifications to general covariance, as one faces in the real-variables formulation, and can thus accommodate local degrees of freedom in such inhomogeneous models. In this paper, we extend this result to Gowdy cosmologies in the self-dual Ashtekar formulation. Furthermore, we show that the introduction of a μ̄-scheme in midisuperspace models, as is required in the "improved dynamics" of LQG, is possible in the self-dual formalism while being out of reach in the current effective models using real-valued Ashtekar-Barbero variables. Our results indicate the advantages of using the self-dual variables to obtain a covariant loop regularization prior to quantization in inhomogeneous symmetry-reduced polymer models, additionally implementing the crucial μ̄-scheme, and thus a consistent semiclassical limit.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of the hybrid algorithm (CEL-EM) is its applicability to constraint-based models, such as the HMM, that have many constraints and large numbers of parameters and are conventionally estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies are proposed, using a constraint-based EA and EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA; the other transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies of the CEL-EM follow a staged approach, in which EM is invoked periodically after the EA has run for a specific period of time, so as to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) based on variable segmentation is proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than both the traditional EM algorithm and a strengthened baseline EM (VIA-EM, constructed by applying the VIA to EM).
Nesvizhevsky, V V; Protasov, K V
2005-01-01
An upper limit to non-Newtonian attractive forces is obtained from the measurement of quantum states of neutrons in the Earth's gravitational field. This limit improves the existing constraints in the nanometer range.
NASA Astrophysics Data System (ADS)
Martínez, Sonia; Cortés, Jorge; de León, Manuel
2000-04-01
A vakonomic mechanical system can be alternatively described by an extended Lagrangian using the Lagrange multipliers as new variables. Since this extended Lagrangian is singular, the constraint algorithm can be applied and a Dirac bracket giving the evolution of the observables can be constructed.
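The extended-Lagrangian construction mentioned above takes a standard form, sketched here in generic notation with the constraints written as Φ^a(q, q̇) = 0 (the paper's own notation may differ):

```latex
% Extended Lagrangian with the Lagrange multipliers \lambda_a as new variables:
\tilde{L}(q, \dot{q}, \lambda) = L(q, \dot{q}) + \lambda_a \, \Phi^a(q, \dot{q})
% \tilde{L} is singular: it contains no velocities \dot{\lambda}_a, so the
% momenta conjugate to \lambda_a vanish identically, the constraint (Dirac)
% algorithm applies, and a Dirac bracket can be constructed.
```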
ERIC Educational Resources Information Center
Mokhtarian, Patricia L.; Bagley, Michael N.; Salomon, Ilan
1998-01-01
Discussion of telecommuting motivations and constraints focuses on a study that analyzed differences in variables due to gender, occupation, and presence of children for 583 employees of the city of San Diego. Research hypotheses are discussed, and implications for forming policies to support telecommuting are suggested. (Author/LRW)
Using neutral models to identify constraints on low-severity fire regimes.
Donald McKenzie; Amy E. Hessl; Lara-Karena B. Kellogg
2006-01-01
Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire is modeled as a stochastic process, for which each fire history is only one realization, a simulation approach is necessary to understand baseline variability, thereby identifying constraints, or forcing functions, that affect fire regimes...
Creativity from Constraints: What Can We Learn from Motherwell? From Mondrian? From Klee?
ERIC Educational Resources Information Center
Stokes, Patricia D.
2008-01-01
This article presents a problem-solving model of variability and creativity built on the classic Reitman and Simon analyses of musical composition and architectural design. The model focuses on paired constraints: one precluding (or limiting search among) reliable, existing solutions, the other promoting (or directing search to) novel, often…
Flux Imbalance Analysis and the Sensitivity of Cellular Growth to Changes in Metabolite Pools
Reznik, Ed; Mehta, Pankaj; Segrè, Daniel
2013-01-01
Stoichiometric models of metabolism, such as flux balance analysis (FBA), are classically applied to predicting steady state rates - or fluxes - of metabolic reactions in genome-scale metabolic networks. Here we revisit the central assumption of FBA, i.e. that intracellular metabolites are at steady state, and show that deviations from flux balance (i.e. flux imbalances) are informative of some features of in vivo metabolite concentrations. Mathematically, the sensitivity of FBA to these flux imbalances is captured by a native feature of linear optimization, the dual problem, and its corresponding variables, known as shadow prices. First, using recently published data on chemostat growth of Saccharomyces cerevisiae under different nutrient limitations, we show that shadow prices anticorrelate with experimentally measured degrees of growth limitation of intracellular metabolites. We next hypothesize that metabolites which are limiting for growth (and thus have a very negative shadow price) cannot vary dramatically in an uncontrolled way, and must respond rapidly to perturbations. Using a collection of published datasets monitoring the time-dependent metabolomic response of Escherichia coli to carbon and nitrogen perturbations, we test this hypothesis and find that metabolites with a negative shadow price indeed show lower temporal variation following a perturbation than metabolites with a zero shadow price. Finally, we illustrate the broader applicability of flux imbalance analysis to other constraint-based methods. In particular, we explore the biological significance of shadow prices in a constraint-based method for integrating gene expression data with a stoichiometric model. In this case, shadow prices point to metabolites that should rise or drop in concentration in order to increase consistency between flux predictions and gene expression data.
In general, these results suggest that the sensitivity of metabolic optima to violations of the steady state constraints carries biologically significant information on the processes that control intracellular metabolites in the cell. PMID:24009492
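The link between shadow prices and the LP dual can be illustrated on a two-flux toy problem. This is a minimal sketch under invented numbers (a "growth" objective with carbon and nitrogen uptake bounds), solved by brute-force vertex enumeration rather than a real LP solver; note also that sign conventions for shadow prices differ between formulations, so the positive values here correspond to the negative values discussed in the abstract under its convention.

```python
import numpy as np
from itertools import combinations

# Toy "flux balance" LP: maximize growth = c.v subject to A v <= b, v >= 0,
# solved by enumerating vertices (fine for two fluxes).  Shadow prices are
# then estimated by perturbing each resource bound, mirroring the duality
# argument above.  All numbers are made up for illustration.

c = np.array([1.0, 3.0])          # growth contributions of fluxes v1, v2
A = np.array([[1.0, 1.0],         # carbon uptake:   v1 + v2 <= b[0]
              [0.0, 1.0]])        # nitrogen uptake:      v2 <= b[1]

def solve(b):
    # Candidate vertices: intersections of all pairs among the constraint
    # lines and the axes v1 = 0, v2 = 0.
    lines = [(A[0], b[0]), (A[1], b[1]),
             (np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]
    best = -np.inf
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        M = np.array([a1, a2])
        if abs(np.linalg.det(M)) < 1e-12:
            continue                      # parallel lines, no vertex
        v = np.linalg.solve(M, np.array([b1, b2]))
        if np.all(v >= -1e-9) and np.all(A @ v <= b + 1e-9):
            best = max(best, c @ v)
    return best

b = np.array([10.0, 4.0])
eps = 1e-6
# Shadow price of constraint i: change in optimal growth per unit of b[i].
shadow = [(solve(b + eps * e) - solve(b)) / eps for e in np.eye(2)]
```

Here `shadow` comes out near `[1.0, 2.0]`: relaxing the nitrogen bound is worth twice as much growth as relaxing the carbon bound, which is the kind of limitation information FBA's dual variables carry.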
Nonparametric instrumental regression with non-convex constraints
NASA Astrophysics Data System (ADS)
Grasmair, M.; Scherzer, O.; Vanhems, A.
2013-03-01
This paper considers the nonparametric regression model with an additive error that is dependent on the explanatory variables. As is common in empirical studies in epidemiology and economics, it also supposes that valid instrumental variables are observed. A classical example in microeconomics considers the consumer demand function as a function of the price of goods and the income, both variables often considered as endogenous. In this framework, the economic theory also imposes shape restrictions on the demand function, such as integrability conditions. Motivated by this illustration in microeconomics, we study an estimator of a nonparametric constrained regression function using instrumental variables by means of Tikhonov regularization. We derive rates of convergence for the regularized model both in a deterministic and stochastic setting under the assumption that the true regression function satisfies a projected source condition including, because of the non-convexity of the imposed constraints, an additional smallness condition.
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
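The costate trick described above can be shown on a linear toy problem in place of a flow solver. This is an illustrative sketch, not the paper's method: the "state equation" is a linear system A x = u with the design u as its right-hand side, and a single adjoint solve yields the gradient of the cost with respect to every design variable at once.

```python
import numpy as np

# Minimal adjoint-gradient sketch.  State equation: A x = u (design u is
# the right-hand side).  Cost: J(u) = 0.5 * ||x(u) - x_t||^2.
# Since dx/du = A^{-1}, the gradient is dJ/du = A^{-T} (x - x_t) = lam,
# obtained from one costate (adjoint) solve A^T lam = x - x_t.

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned state operator
x_t = rng.normal(size=n)                       # target state

def cost(u):
    x = np.linalg.solve(A, u)                  # one state solve
    return 0.5 * np.sum((x - x_t) ** 2)

def gradient(u):
    x = np.linalg.solve(A, u)                  # state solve
    lam = np.linalg.solve(A.T, x - x_t)        # costate (adjoint) solve
    return lam                                 # full gradient dJ/du

u = rng.normal(size=n)
g = gradient(u)

# Finite-difference check of one gradient component.
eps = 1e-6
e0 = np.zeros(n); e0[0] = eps
fd = (cost(u + e0) - cost(u - e0)) / (2 * eps)
```

The finite-difference value agrees with `g[0]`; the cost of the gradient is one extra linear solve regardless of the number of design variables, which is what makes the costate approach attractive for design optimization.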
Airfoil optimization by the one-shot method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1994-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Coupled Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
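The Pareto-sorting step can be sketched directly. The subensemble names and error triples below are invented; the paper scores subensembles on historical SST, zonal-wind, and precipitation climatologies and searches with an evolutionary algorithm rather than exhaustively.

```python
# Sketch of Pareto-optimal identification: each subensemble is scored by its
# error in three historical fields, and a subensemble is Pareto-optimal if no
# other subensemble dominates it (<= in every measure, < in at least one).
# All numbers are illustrative, not from the paper.

errors = {                      # (SST err, zonal-wind err, precip err)
    "sub1": (1.0, 2.0, 3.0),
    "sub2": (2.0, 1.0, 3.0),
    "sub3": (1.5, 1.5, 3.5),
    "sub4": (2.0, 2.0, 4.0),    # dominated by sub1 in all three measures
}

def dominates(a, b):
    # a dominates b: no worse everywhere, strictly better somewhere.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

pareto = {name for name, errs in errors.items()
          if not any(dominates(errors[other], errs)
                     for other in errors if other != name)}
```

Here `pareto` keeps `sub1`, `sub2`, and `sub3` (each trades off one measure against another) and drops `sub4`, mirroring how the multiobjective constraint discards subensembles that are uniformly worse.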
Montreuil, Sylvie; Laflamme, Lucie; Brisson, Chantal; Teiger, Catherine
2006-01-01
The goal of this article is to better understand how preventive measures are undertaken after training. It examines how certain variables, such as musculoskeletal pain, participant age, and workstation and work content characteristics, influence the reduction of postural constraints after office employees working with a computer have received ergonomics training. A pre-test/post-test design was used. The 207 female office workers were given 6 hours of ergonomics training. The variables were determined using a self-administered questionnaire and an observation grid filled out 2 weeks before and 6 months after the training session. The FAC and HAC were used in the data processing. The presence or absence of musculoskeletal pain had no statistically significant influence on whether or not postural constraints were eliminated. The age of the participants and the possibility of adjusting the workstation characteristics and work content produced differentiated results with regard to postural constraint reduction. We concluded that trained people succeed in taking relevant and effective measures to reduce the postural constraints found in VDU work. However, measures other than workstation adjustments also contribute to this prevention, and such training must be strongly supported by the various hierarchical levels of an enterprise or institution.
NASA Technical Reports Server (NTRS)
Jaunky, N.; Ambur, D. R.; Knight, N. F., Jr.
1998-01-01
A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints was developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory was used for the global analysis. Local buckling of skin segments was assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments was also assessed. Constraints on the axial membrane strain in the skin and stiffener segments were imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study were the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process was adapted to identify the best suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.
NASA Technical Reports Server (NTRS)
Jaunky, Navin; Knight, Norman F., Jr.; Ambur, Damodar R.
1998-01-01
A design strategy for optimal design of composite grid-stiffened cylinders subjected to global and local buckling constraints and strength constraints is developed using a discrete optimizer based on a genetic algorithm. An improved smeared stiffener theory is used for the global analysis. Local buckling of skin segments is assessed using a Rayleigh-Ritz method that accounts for material anisotropy. The local buckling of stiffener segments is also assessed. Constraints on the axial membrane strain in the skin and stiffener segments are imposed to include strength criteria in the grid-stiffened cylinder design. Design variables used in this study are the axial and transverse stiffener spacings, stiffener height and thickness, skin laminate stacking sequence, and stiffening configuration, where stiffening configuration is a design variable that indicates the combination of axial, transverse, and diagonal stiffeners in the grid-stiffened cylinder. The design optimization process is adapted to identify the best suited stiffening configurations and stiffener spacings for grid-stiffened composite cylinders with the length and radius of the cylinder, the design in-plane loads, and material properties as inputs. The effect of having axial membrane strain constraints in the skin and stiffener segments in the optimization process is also studied for selected stiffening configurations.
Sachs' free data in real connection variables
NASA Astrophysics Data System (ADS)
De Paoli, Elena; Speziale, Simone
2017-11-01
We discuss the Hamiltonian dynamics of general relativity with real connection variables on a null foliation, and use the Newman-Penrose formalism to shed light on the geometric meaning of the various constraints. We identify the equivalent of Sachs' constraint-free initial data as projections of connection components related to null rotations, i.e. the translational part of the ISO(2) group stabilising the internal null direction soldered to the hypersurface. A pair of second-class constraints reduces these connection components to the shear of a null geodesic congruence, thus establishing equivalence with the second-order formalism, which we show in detail at the level of symplectic potentials. A special feature of the first-order formulation is that Sachs' propagating equations for the shear, away from the initial hypersurface, are turned into tertiary constraints; their role is to preserve the relation between connection and shear under retarded time evolution. The conversion of wave-like propagating equations into constraints is possible thanks to an algebraic Bianchi identity; the same one that allows one to describe the radiative data at future null infinity in terms of a shear of a (non-geodesic) asymptotic null vector field in the physical spacetime. Finally, we compute the modification to the spin coefficients and the null congruence in the presence of torsion.
Constrained State Estimation for Individual Localization in Wireless Body Sensor Networks
Feng, Xiaoxue; Snoussi, Hichem; Liang, Yan; Jiao, Lianmeng
2014-01-01
Wireless body sensor networks based on ultra-wideband radio have recently received much research attention due to their wide applications in health care, security, sports, and entertainment. Accurate localization is a fundamental problem in realizing effective location-aware applications. In this paper the problem of constrained state estimation for individual localization in wireless body sensor networks is addressed. Prior knowledge about the geometry among the on-body nodes is incorporated into the traditional filtering system as an additional constraint. The analytical expression of the state estimate under a linear constraint, which exploits this additional information, is derived. Furthermore, for nonlinear constraints, first-order and second-order linearizations via Taylor series expansion are proposed to transform the nonlinear constraint to the linear case. Examples comparing the first-order and second-order nonlinear constrained filters based on the interacting multiple model extended Kalman filter (IMM-EKF) show that the second-order solution for higher-order nonlinearity, as presented in this paper, outperforms the first-order solution, and that the constrained IMM-EKF obtains superior estimation to the IMM-EKF without constraints. Another Brownian-motion individual localization example also illustrates the effectiveness of constrained nonlinear iterative least squares (NILS), which achieves better filtering performance than NILS without constraints. PMID:25390408
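For the linear case, the constrained estimate takes a standard projection form, sketched below with invented numbers: given an unconstrained estimate x with covariance P and a linear constraint D x = d (for example, a linearized known distance between two on-body nodes), the projected estimate is x_c = x - P D^T (D P D^T)^{-1} (D x - d).

```python
import numpy as np

# Covariance-weighted projection of an unconstrained state estimate onto a
# linear constraint D x = d.  This is the standard form of such estimators;
# the numbers below are illustrative, not from the paper.

x = np.array([1.0, 2.0, 3.0])              # unconstrained estimate
P = np.diag([0.5, 1.0, 2.0])               # its covariance
D = np.array([[1.0, 1.0, 0.0]])            # constraint: x1 + x2 = 2.5
d = np.array([2.5])

S = D @ P @ D.T                            # constraint-space covariance
x_c = x - P @ D.T @ np.linalg.solve(S, D @ x - d)
```

The correction is distributed according to P: components the filter is less certain about absorb more of the constraint residual, and `x_c` satisfies the constraint exactly.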
Quasivariational Solutions for First Order Quasilinear Equations with Gradient Constraint
NASA Astrophysics Data System (ADS)
Rodrigues, José Francisco; Santos, Lisa
2012-08-01
We prove the existence of solutions for a quasi-variational inequality of evolution with a first order quasilinear operator and a variable convex set which is characterized by a constraint on the absolute value of the gradient that depends on the solution itself. The only required assumption on the nonlinearity of this constraint is its continuity and positivity. The method relies on an appropriate parabolic regularization and suitable a priori estimates. We also obtain the existence of stationary solutions by studying the asymptotic behaviour in time. In the variational case, corresponding to a constraint independent of the solution, we also give uniqueness results.
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
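The KS function referred to above aggregates many constraints g_i(x) <= 0 into the single smooth constraint KS(g) = (1/rho) * ln(sum_i exp(rho * g_i)) <= 0. A short sketch (with illustrative constraint values) shows its defining envelope property: KS never underestimates the worst constraint, and overestimates it by at most ln(m)/rho for m constraints.

```python
import numpy as np

# Kreisselmeier-Steinhauser (KS) constraint aggregation, in the shifted form
# commonly used to avoid overflow for large rho.  Constraint values are
# illustrative.

def ks(g, rho=50.0):
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    # Equivalent to log(sum(exp(rho*g)))/rho, computed stably.
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

g = np.array([-1.0, -0.2, -0.5])   # three constraint values g_i(x)
val = ks(g)                        # single aggregated constraint value
```

Because max(g) <= KS(g) <= max(g) + ln(m)/rho, enforcing KS(g) <= 0 is conservative, and increasing rho tightens the envelope at the cost of a stiffer function, which is the trade-off an optimizer using this replacement faces.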
A lattice approach to spinorial quantum gravity
NASA Technical Reports Server (NTRS)
Renteln, Paul; Smolin, Lee
1989-01-01
A new lattice regularization of quantum general relativity based on Ashtekar's reformulation of Hamiltonian general relativity is presented. In this form, quantum states of the gravitational field are represented within the physical Hilbert space of a Kogut-Susskind lattice gauge theory. The gauge field of the theory is a complexified SU(2) connection which is the gravitational connection for left-handed spinor fields. The physical states of the gravitational field are those which are annihilated by additional constraints which correspond to the four constraints of general relativity. Lattice versions of these constraints are constructed. Those corresponding to the three-dimensional diffeomorphism generators move states associated with Wilson loops around on the lattice. The lattice Hamiltonian constraint has a simple form, and a correspondingly simple interpretation: it is an operator which cuts and joins Wilson loops at points of intersection.
Li, Da-Peng; Li, Dong-Juan; Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip
2017-10-01
This paper deals with the tracking control problem for a class of nonlinear multiple-input multiple-output unknown time-varying delay systems with full state constraints. To overcome the challenges caused by the simultaneous presence of unknown time-varying delays and full state constraints in the systems, an adaptive control method is presented for such systems for the first time. Appropriate Lyapunov-Krasovskii functionals and a separation technique are employed to eliminate the effect of the unknown time-varying delays. Barrier Lyapunov functions are employed to prevent violation of the full state constraints. The singularity problems are dealt with by introducing the sign function. Finally, it is proven that, with appropriately chosen design parameters, the proposed method guarantees good tracking performance of the system output, keeps all states within the constrained intervals, and ensures that all closed-loop signals are bounded. The practicability of the proposed control technique is demonstrated by a simulation study.
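A log-type barrier Lyapunov function of the kind commonly used for state constraints can be written as follows (a standard form, not necessarily the paper's exact choice):

```latex
% Barrier Lyapunov function for a tracking error z constrained by |z| < k_b:
V(z) = \frac{1}{2} \ln \frac{k_b^2}{k_b^2 - z^2}
% V(0) = 0 and V grows without bound as |z| \to k_b, so a control law that
% keeps V bounded along closed-loop trajectories keeps z strictly inside
% the constraint set for all time.
```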
Development of a nearshore oscillating surge wave energy converter with variable geometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom, N. M.; Lawson, M. J.; Yu, Y. H.
This paper presents an analysis of a novel wave energy converter concept that combines an oscillating surge wave energy converter (OSWEC) with control surfaces. The control surfaces allow for a variable device geometry that enables the hydrodynamic properties to be adapted with respect to structural loading, absorption range, and power-take-off capability. The device geometry is adjusted on a sea-state-to-sea-state time scale and combined with wave-to-wave manipulation of the power take-off (PTO) to provide greater control over the capture efficiency, capacity factor, and design loads. This work begins with a sensitivity study of the hydrodynamic coefficients with respect to device width, support structure thickness, and geometry. A linear frequency-domain analysis is used to evaluate device performance in terms of absorbed power, foundation loads, and PTO torque. Because previous OSWEC studies highlighted the importance of nonlinear hydrodynamics, a nonlinear model that includes a quadratic viscous damping torque was developed and linearized via the Lorentz linearization. Inclusion of the quadratic viscous torque led to the construction of an optimization problem that incorporates motion and PTO constraints. Results from this study found that, when transitioning from moderate to large sea states, the novel OSWEC was capable of reducing structural loads while providing a near-constant power output.
The Effects of Word Frequency and Context Variability in Cued Recall
ERIC Educational Resources Information Center
Criss, Amy H.; Aue, William R.; Smith, Larissa
2011-01-01
Normative word frequency and context variability affect memory in a range of episodic memory tasks and place constraints on theoretical development. In four experiments, we independently manipulated the word frequency and context variability of the targets (to-be-generated items) and cues in a cued recall paradigm. We found that high frequency…
Examining Parallelism of Sets of Psychometric Measures Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Patelis, Thanos; Marcoulides, George A.
2011-01-01
A latent variable modeling approach that can be used to examine whether several psychometric tests are parallel is discussed. The method consists of sequentially testing the properties of parallel measures via a corresponding relaxation of parameter constraints in a saturated model or an appropriately constructed latent variable model. The…
Reducing Cognitive Biases in Probabilistic Reasoning by the Use of Logarithm Formats
ERIC Educational Resources Information Center
Juslin, Peter; Nilsson, Hakan; Winman, Anders; Lindskog, Marcus
2011-01-01
Research on probability judgment has traditionally emphasized that people are susceptible to biases because they rely on "variable substitution": the assessment of normative variables is replaced by assessment of heuristic, subjective variables. A recent proposal is that many of these biases may rather derive from constraints on cognitive…
A comparative approach to the principal mechanisms of different memory systems
NASA Astrophysics Data System (ADS)
Rensing, Ludger; Koch, Michael; Becker, Annette
2009-12-01
The term “memory” applies not only to the preservation of information in neuronal and immune systems but also to phenomena observed, for example, in plants, single cells, and RNA viruses. We here compare the different forms of information storage with respect to possible common features. The latter may be characterized by (1) selection of pre-existing information, (2) activation of memory systems, often including transcriptional and translational, as well as epigenetic and genetic, mechanisms, (3) subsequent consolidation of the activated state in a latent form (standby mode), and (4) reactivation of the latent state of memory systems when the organism is exposed to the same (or conditioned) signal or to previous selective constraints. These features apparently also exist in the “evolutionary memory,” i.e., in evolving populations which have highly variable mutant spectra.
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the state couplings in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into the local optimizations. To guarantee closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite-horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, constraint satisfaction problem (CSP) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods that allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, developed on the basis of the dynamic backtracking algorithm, that allows the previous solution and reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
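The reuse idea can be sketched with a plain backtracking solver that tries each variable's previous value first, so the new solution stays close to the old one when constraints change. This only illustrates solution reuse as a value-ordering heuristic; it is not the paper's dynamic-backtracking method, and the variables and constraints below are invented.

```python
# Dynamic CSP sketch: re-solve after a constraint change, biasing search
# toward the previous solution via value ordering.

def neq(u, w):
    # Inequality constraint that tolerates partial assignments.
    return lambda a: u not in a or w not in a or a[u] != a[w]

def solve(variables, domains, constraints, previous=None):
    previous = previous or {}
    assignment = {}

    def consistent(var, val):
        assignment[var] = val
        ok = all(c(assignment) for c in constraints)
        del assignment[var]
        return ok

    def backtrack(i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        # Try the value from the previous solution first.
        order = sorted(domains[var], key=lambda v: v != previous.get(var))
        for val in order:
            if consistent(var, val):
                assignment[var] = val
                res = backtrack(i + 1)
                if res is not None:
                    return res
                del assignment[var]
        return None

    return backtrack(0)

variables = ["x", "y", "z"]
domains = {v: [1, 2, 3] for v in variables}
first = solve(variables, domains, [neq("x", "y"), neq("y", "z")])
# The problem changes: a constraint is added.  Reusing `first` keeps the
# new solution as close as possible to the old one.
second = solve(variables, domains,
               [neq("x", "y"), neq("y", "z"), neq("x", "z")], previous=first)
```

Only the variable actually affected by the new constraint changes value; the rest of the previous assignment is preserved, which is the behavior sought for dynamic scheduling problems.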
New dynamic variables for rotating spacecraft
NASA Technical Reports Server (NTRS)
Markley, F. Landis
1993-01-01
This paper introduces two new seven-parameter representations for spacecraft attitude dynamics modeling. The seven parameters are the three components of the total system angular momentum in the spacecraft body frame; the three components of the angular momentum in the inertial reference frame; and an angle variable. These obey a single constraint as do parameterizations that include a quaternion; in this case the constraint is the equality of the sum of the squares of the angular momentum components in the two frames. The two representations are nonsingular if the system angular momentum is non-zero and obeys certain orientation constraints. The new parameterizations of the attitude matrix, the equations of motion, and the relation of the solution of these equations to Euler angles for torque-free motion are developed and analyzed. The superiority of the new parameterizations for numerical integration is shown in a specific example.
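The single constraint tying the seven parameters together can be written out (notation assumed here: L^B and L^I denote the components of the system angular momentum in the body frame and the inertial frame, respectively):

```latex
% Equality of the squared magnitudes of the angular momentum in the two frames:
(L^B_1)^2 + (L^B_2)^2 + (L^B_3)^2 = (L^I_1)^2 + (L^I_2)^2 + (L^I_3)^2
% Analogous to the unit-norm constraint on a quaternion: seven parameters,
% one constraint, six degrees of freedom.
```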
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability density functions, it appears hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the part of the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
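An ensemble analogue of the proposed posterior makes the delta distribution concrete: members that violate the constraint after an update are moved onto the bound, so a finite fraction of the ensemble sits exactly at the constraint. The numbers are illustrative, and this clipping step is only a sketch of the idea, not the full filter.

```python
import numpy as np

# Ensemble analogue of the truncated-Gaussian-plus-delta posterior: after a
# Kalman-type update, members violating an inequality constraint are moved
# to the bound, placing finite probability mass exactly at the constraint
# (here, sea-ice concentration in [0, 1]).  Numbers are illustrative.

rng = np.random.default_rng(1)
analysis = rng.normal(loc=0.1, scale=0.15, size=1000)   # may leave [0, 1]

posterior = np.clip(analysis, 0.0, 1.0)                 # violators -> bound

# Unlike the equal-redistribution scheme criticized above, the event
# "concentration is exactly zero" now has nonzero probability.
mass_at_zero = np.mean(posterior == 0.0)
```

With these illustrative numbers a sizable fraction of members ends up exactly at zero, so the posterior ensemble represents the "ice-free" event with nonzero probability, which is precisely what pdf truncation with redistribution cannot do.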
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables, and the vagueness of these fuzzy random variables is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails first finding the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent the linear attenuation coefficient in the patient body. Our results are simple yet appear to be unreported; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
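The denoise-then-threshold claim can be checked numerically on a toy two-pixel instance of problem 1. The data, penalty weight, and interval below are assumed values, and brute-force grid search stands in for a real TV solver:

```python
import numpy as np

def objective(x1, x2, y, lam):
    """Anisotropic TV denoising objective for a two-pixel signal."""
    return 0.5 * ((x1 - y[0])**2 + (x2 - y[1])**2) + lam * np.abs(x1 - x2)

y, lam, lo, hi = np.array([2.0, -1.0]), 0.5, 0.0, 1.0

# unconstrained minimizer via grid search over a wide box
g = np.arange(-3.0, 3.0, 0.01)
X1, X2 = np.meshgrid(g, g, indexing="ij")
i, j = np.unravel_index(np.argmin(objective(X1, X2, y, lam)), (g.size, g.size))
x_unc = np.array([g[i], g[j]])

# constrained minimizer via grid search restricted to [lo, hi]^2
gc = np.arange(lo, hi + 1e-9, 0.01)
C1, C2 = np.meshgrid(gc, gc, indexing="ij")
ic, jc = np.unravel_index(np.argmin(objective(C1, C2, y, lam)), (gc.size, gc.size))
x_con = np.array([gc[ic], gc[jc]])

# sequential solution: denoise first, then clip to the (uniform) interval
x_seq = np.clip(x_unc, lo, hi)
```

Here the unconstrained solution is (1.5, -0.5), and clipping it to [0, 1] reproduces the constrained minimizer (1, 0), consistent with the uniform-interval case of the result.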
Spherically symmetric Einstein-aether perfect fluid models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coley, Alan A.; Latta, Joey; Leon, Genly
We investigate spherically symmetric cosmological models in Einstein-aether theory with a tilted (non-comoving) perfect fluid source. We use a 1+3 frame formalism and adopt the comoving aether gauge to derive the evolution equations, which form a well-posed system of first order partial differential equations in two variables. We then introduce normalized variables. The formalism is particularly well-suited for numerical computations and the study of the qualitative properties of the models, which are also solutions of Horava gravity. We study the local stability of the equilibrium points of the resulting dynamical system corresponding to physically realistic inhomogeneous cosmological models and astrophysical objects with values for the parameters which are consistent with current constraints. In particular, we consider dust models in (β−) normalized variables and derive a reduced (closed) evolution system, and we obtain the general evolution equations for the spatially homogeneous Kantowski-Sachs models using appropriate bounded normalized variables. We then analyse these models, with special emphasis on the future asymptotic behaviour for different values of the parameters. Finally, we investigate static models for a mixture of a (necessarily non-tilted) perfect fluid with a barotropic equation of state and a scalar field.
A Partial Test of Agnew's General Theory of Crime and Delinquency
ERIC Educational Resources Information Center
Zhang, Yan; Day, George; Cao, Liqun
2012-01-01
In 2005, Agnew introduced a new integrated theory, which he labels a general theory of crime and delinquency. He proposes that delinquency is more likely to occur when constraints against delinquency are low and motivations for delinquency are high. In addition, he argues that constraints and motivations are influenced by variables in five life…
Weak constrained localized ensemble transform Kalman filter for radar data assimilation
NASA Astrophysics Data System (ADS)
Janjic, Tijana; Lange, Heiner
2015-04-01
Applications on convective scales require data assimilation with a numerical model of single-digit horizontal resolution in kilometers and time-evolving error covariances. The ensemble Kalman filter (EnKF) algorithm incorporates these two requirements. However, some challenges remain unresolved for convective-scale applications of the EnKF approach. These include the need on convective scales to estimate fields that are non-negative (such as rain, graupel, and snow) and the use of data sets, such as radar reflectivity or cloud products, that have the same property. What underlies these examples are errors that are non-Gaussian in nature, causing a problem for the EnKF, which uses Gaussian error assumptions to produce estimates from the previous forecast and the incoming data. Since proper estimates of hydrometeors are crucial for prediction on convective scales, the question arises whether the EnKF method can be modified to improve these estimates, and whether there is a way of optimizing the use of radar observations to initialize NWP models, given the importance of this data set for the prediction of convective storms. In order to deal with non-Gaussian errors, different approaches can be taken within the EnKF framework. For example, variables can be transformed by assuming the relevant state variables follow an appropriate pre-specified non-Gaussian distribution, such as the lognormal or truncated Gaussian distribution, or, more generally, by carrying out a parameterized change of state variables known as Gaussian anamorphosis. In recent work by Janjic et al. 2014, it was shown on a simple example how conservation of mass can be beneficial for the assimilation of positive variables. The method developed in that paper outperformed the EnKF as well as the EnKF with the lognormal change of variables. As argued in the paper, the reason is that each of these methods preserves mass (EnKF) or positivity (lognormal EnKF), but not both.
Only once both positivity and mass were preserved in a new algorithm were good estimates of the fields obtained. The alternative to the strong-constraint formulation of Janjic et al. 2014 is to modify the LETKF algorithm to take physical properties into account only approximately. In this work we include weak constraints in the LETKF algorithm for the estimation of hydrometeors. The benefit for prediction is illustrated in an idealized setup (Lange and Craig, 2013). This setup uses the nonhydrostatic COSMO model with a 2 km horizontal resolution, and the LETKF as implemented in the KENDA (Km-scale Ensemble Data Assimilation) system of the German Weather Service (Reich et al. 2011). Due to the Gaussian assumptions that underlie the LETKF algorithm, the analyses of water species become negative at some grid points of the COSMO model. These values are currently set to zero in KENDA after the LETKF analysis step. Tests done within this setup show that such a procedure introduces a bias in the analysis ensemble with respect to the truth, which increases in time due to the cycled data assimilation. The benefits of including the constraints in the LETKF are illustrated on the bias values during assimilation and prediction.
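The bias mechanism described above (zeroing negative analysis values of a non-negative field shifts the ensemble mean upward) is easy to demonstrate. This is a minimal numpy sketch with an assumed Gaussian analysis ensemble, not the KENDA/COSMO system itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# assumed Gaussian analysis ensemble for a non-negative field (e.g. a
# hydrometeor mixing ratio): part of its mass sits at unphysical negatives
ensemble = rng.normal(loc=0.1, scale=0.5, size=100_000)

# the "set negative analysis values to zero" post-processing step
clipped = np.maximum(ensemble, 0.0)

# clipping only moves mass upward, so the ensemble mean acquires a
# positive bias; cycled assimilation accumulates this shift over time
bias = clipped.mean() - ensemble.mean()
```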
Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan
2016-08-22
Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The information that the initial condition is a steady state of the unperturbed process is valuable, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data, while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower.
In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to methods which are currently employed in systems and computational biology.
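The continuous-analogue idea, i.e. turning a constrained optimization problem into an ODE whose equilibria are the constrained optima, can be illustrated on a toy problem. The sketch below uses a projected gradient flow for a linear equality constraint as a generic stand-in, not the authors' retraction-based method; the objective and constraint are assumed values:

```python
import numpy as np

# toy problem: minimize f(x) = (x1 - 2)^2 + (x2 - 1)^2  subject to x1 + x2 = 1
def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

n = np.array([1.0, 1.0])                  # normal of the constraint x1 + x2 = 1
P = np.eye(2) - np.outer(n, n) / (n @ n)  # projector onto the constraint's tangent space

# integrate dx/dt = -P grad f(x) with forward Euler from a feasible start;
# equilibria satisfy P grad f = 0, i.e. the constrained first-order conditions
x = np.array([0.0, 1.0])
for _ in range(500):
    x = x - 0.05 * (P @ grad_f(x))
```

The flow stays on the constraint set and its equilibrium is the constrained minimizer (1, 0); adaptive ODE solvers could replace the fixed-step Euler loop, which is the point the abstract makes.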
Generalized Pauli constraints in small atoms
NASA Astrophysics Data System (ADS)
Schilling, Christian; Altunbulak, Murat; Knecht, Stefan; Lopes, Alexandre; Whitfield, James D.; Christandl, Matthias; Gross, David; Reiher, Markus
2018-05-01
The natural occupation numbers of fermionic systems are subject to nontrivial constraints, which include and extend the original Pauli principle. A recent mathematical breakthrough has clarified their mathematical structure and has opened up the possibility of a systematic analysis. Early investigations have found evidence that these constraints are exactly saturated in several physically relevant systems, e.g., in a certain electronic state of the beryllium atom. It has been suggested that, in such cases, the constraints, rather than the details of the Hamiltonian, dictate the system's qualitative behavior. Here, we revisit this question with state-of-the-art numerical methods for small atoms. We find that the constraints are, in fact, not exactly saturated, but that they lie much closer to the surface defined by the constraints than the geometry of the problem would suggest. While the results seem incompatible with the statement that the generalized Pauli constraints drive the behavior of these systems, they suggest that the qualitatively correct wave-function expansions can in some systems already be obtained on the basis of a limited number of Slater determinants, which is in line with numerical evidence from quantum chemistry.
A new implementation of the programming system for structural synthesis (PROSSS-2)
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.
1984-01-01
This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The result is flexibility of the optimization procedure and organization, and versatility in the formulation of constraints and design variables. The analysis-optimization process results in a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, need only be done once, and the results are saved for future use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output from the optimizer into a format suitable for input to the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints, and optionally their gradients. These quantities are output in a format suitable for input to the optimizer. These user-supplied programs are problem-dependent because they depend primarily upon which finite elements are being used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and the front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase the portability of the system, and to take advantage of the data handling features found in the finite element program.
Time is an affliction: Why ecology cannot be as predictive as physics and why it needs time series
NASA Astrophysics Data System (ADS)
Boero, F.; Kraberg, A. C.; Krause, G.; Wiltshire, K. H.
2015-07-01
Ecological systems depend on both constraints and historical contingencies, both of which shape their present observable system state. In contrast to ahistorical systems, which are governed solely by constraints (i.e. laws), historical systems and their dynamics can be understood only if properly described in the course of time. Describing these dynamics and understanding long-term variability can be seen as the mission of long time series, measuring not only simple abiotic features but also complex biological variables, such as species diversity and abundances, allowing deep insights into the functioning of food webs and ecosystems in general. Long time series are irreplaceable for understanding change and, crucially, inherent system variability, and thus for envisaging future scenarios. Notwithstanding this, current policies in funding and evaluating scientific research discourage the maintenance of long-term series, despite a clear need for long-term strategies to cope with climate change. Time series are crucial for the pursuit of the much-invoked Ecosystem Approach and for the passage from simple monitoring programs to large-scale and long-term Earth observatories, thus promoting a better understanding of the causes and effects of change in ecosystems. The few ongoing long time series in European waters must be integrated and networked so as to facilitate the formation of nodes of a series of observatories which, together, should allow the long-term management of the features and characteristics of European waters. Human capacity building in this area of expertise and a stronger societal involvement are also urgently needed, since the expertise in recognizing and describing species, and therefore recording them reliably in the context of time series, is rapidly vanishing from the European scientific community.
How players exploit variability and regularity of game actions in female volleyball teams.
Ramos, Ana; Coutinho, Patrícia; Silva, Pedro; Davids, Keith; Mesquita, Isabel
2017-05-01
Variability analysis has been used to understand how competitive constraints shape different behaviours in team sports. In this study, we analysed and compared variability of tactical performance indices in players within complex I at two different competitive levels in volleyball. We also examined whether variability was influenced by set type and period. Eight matches from the 2012 Olympics competition and from the Portuguese national league in the 2014-2015 season were analysed (1496 rallies). Variability of setting conditions, attack zone, attack tempo and block opposition was assessed using Shannon entropy measures. Magnitude-based inferences were used to analyse the practical significance of compared values of selected variables. Results showed differences between elite and national teams for all variables, which were co-adapted to the competitive constraints of set type and set periods. Elite teams exploited system stability in setting conditions and block opposition, but greater unpredictability in zone and tempo of attack. These findings suggest that uncertainty in attacking actions was a key factor that could only be achieved with greater performance stability in other game actions. Data suggested how coaches could help setters develop the capacity to play at faster tempos, diversifying attack zones, especially at critical moments in competition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hempling, Scott; Elefant, Carolyn; Cory, Karlynn
2010-01-01
This report details how state feed-in tariff (FIT) programs can be legally implemented and how they can comply with federal requirements. The report describes the federal constraints on FIT programs and identifies legal methods that are free of those constraints.
The state of the "state" debate in hypnosis: a view from the cognitive-behavioral perspective.
Chaves, J F
1997-07-01
For most of the past 50 years, hypnosis research has been driven by a debate about whether hypnotic phenomena can be best described and understood as the product of an altered state of consciousness. The meanings of some of the pivotal concepts in this debate and the nature of the phenomena that gave rise to them were ambiguous at the outset and led to misconceptions and surplus meanings that have obscured the debate through most of its history. The nature of the posited hypnotic state and its assumed consequences have changed during this period, reflecting the abandonment of untenable versions of hypnotic state theory. Carefully conducted studies in laboratories around the world have refined our understanding of hypnotic phenomena and helped identify the critical variables that interact to elicit them. With the maturation of the cognitive-behavioral perspective and the growing refinement of state conceptions of hypnosis, questions arise whether the state debate is still the axis about which hypnosis research and theory pivots. Although the heuristic value of this debate has been enormous, we must guard against the cognitive constraints of our own metaphors and conceptual frameworks.
Juth, Vanessa; Smyth, Joshua M; Carey, Michael P; Lepore, Stephen J
2015-07-01
Losing a loved one is a normative life event, yet there is great variability in subsequent interpersonal experiences and adjustment. The Social-Cognitive Processing (SCP) model suggests that social constraints (i.e. limited opportunities to disclose thoughts and feelings in a supportive context) impede emotional and cognitive processing of stressful life events, which may lead to maladjustment. This study investigates personal and loss-related correlates of social constraints during bereavement, the links between social constraints and post-loss adjustment, and whether social constraints moderate the relations between loss-related intrusive thoughts and adjustment. A community sample of bereaved individuals (n = 238) provided demographic and loss-related information and reported on their social constraints, loss-related intrusions, and psychological and physical adjustment. Women, younger people, and those with greater financial concerns reported more social constraints. Social constraints were significantly associated with more depressive symptoms, perceived stress, somatic symptoms, and worse global health. Individuals with high social constraints and high loss-related intrusions had the highest depressive symptoms and perceived life stress. Consistent with the SCP model, loss-related social constraints are associated with poorer adjustment, especially psychological adjustment. In particular, experiencing social constraints in conjunction with loss-related intrusions may heighten the risk for poor psychological health. © 2015 The International Association of Applied Psychology.
The free energy of a reaction coordinate at multiple constraints: a concise formulation
NASA Astrophysics Data System (ADS)
Schlitter, Jürgen; Klähn, Marco
The free energy as a function of the reaction coordinate (rc) is the key quantity for the computation of equilibrium and kinetic quantities. When it is considered as the potential of mean force, the problem is the calculation of the mean force for given values of the rc. We reinvestigate the PMCF (potential of mean constraint force) method, which applies a constraint to the rc to compute the mean force as the mean negative constraint force plus a metric tensor correction. The latter accounts for the constraint imposed on the rc and for possible artefacts due to multiple constraints of other variables, which for practical reasons are often used in numerical simulations. Two main results are obtained that are of theoretical and practical interest. First, the correction term is given a very concise and simple form, which facilitates its interpretation and evaluation. Secondly, a theorem describes various rcs and possible combinations with constraints that can be used without introducing any correction to the constraint force. The results facilitate the computation of free energy by molecular dynamics simulations.
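For context, a commonly quoted "blue-moon" form of the mean force that constrained simulations evaluate is reproduced below. This is the textbook expression (with ξ the rc, λ_ξ the Lagrange multiplier of the constraint, m_i the particle masses, and Z the mass-metric factor), not a quotation of the paper's own concise result, and sign conventions for λ_ξ vary between references:

```latex
\frac{dF}{d\xi} =
\frac{\left\langle Z^{-1/2}\left[\lambda_\xi + k_B T\, G\right]\right\rangle_{\xi}}
     {\left\langle Z^{-1/2}\right\rangle_{\xi}},
\qquad
Z = \sum_i \frac{1}{m_i}\left(\frac{\partial \xi}{\partial x_i}\right)^{\!2},
\qquad
G = \frac{1}{Z^{2}} \sum_{i,j} \frac{1}{m_i m_j}
    \frac{\partial \xi}{\partial x_i}
    \frac{\partial^2 \xi}{\partial x_i \partial x_j}
    \frac{\partial \xi}{\partial x_j}.
```

The k_B T G term is the metric tensor correction the abstract refers to; the paper's contribution is a more concise formulation of such corrections when multiple constraints are present.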
Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame
NASA Technical Reports Server (NTRS)
Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.
2014-01-01
Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.
Weeden, Clare; Lester, Jo-Anne; Jarvis, Nigel
2016-08-01
This study explores the push-pull vacation motivations of gay male and lesbian consumers and examines how these underpin their perceptions and purchase constraints of a mainstream and LGBT(1) cruise. Findings highlight a complex vacation market. Although lesbians and gay men share many of the same travel motivations as their heterosexual counterparts, the study reveals sexuality is a significant variable in their perception of cruise vacations, which further influences purchase constraints and destination choice. Gay men have more favorable perceptions than lesbians of both mainstream and LGBT cruises. The article recommends further inquiry into the multifaceted nature of motivations, perception, and constraints within the LGBT market in relation to cruise vacations.
Safety analysis of discrete event systems using a simplified Petri net controller.
Zareiee, Meysam; Dideban, Abbas; Asghar Orouji, Ali
2014-01-01
This paper deals with the problem of forbidden states in discrete event systems based on Petri net models. A method is presented to prevent the system from entering these states by constructing a small number of generalized mutual exclusion constraints. This goal is achieved by solving three types of integer linear programming problems. The problems are designed to verify the constraints: some of them are related to preserving the authorized states, and the others are related to avoiding the forbidden states. The obtained constraints can be enforced on the system using a small number of control places. Moreover, the number of arcs related to these places is small, and the resulting controller is maximally permissive. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
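A generalized mutual exclusion constraint has the form w·m ≤ k over a marking m. The separation task the abstract solves with integer linear programming can be mimicked on a toy example by brute-force search; the markings below are assumed values, and a real implementation would use an ILP solver:

```python
from itertools import product

# assumed reachable markings of a tiny 2-place net
authorized = [(0, 0), (1, 0), (0, 1)]
forbidden = [(1, 1)]

def dot(w, m):
    return sum(wi * mi for wi, mi in zip(w, m))

def find_gmec(authorized, forbidden, wmax=2, kmax=5):
    """Search for weights w and a bound k such that every authorized marking
    satisfies w . m <= k while every forbidden marking violates it."""
    n = len(authorized[0])
    for w in product(range(wmax + 1), repeat=n):
        for k in range(kmax + 1):
            if all(dot(w, m) <= k for m in authorized) and \
               all(dot(w, m) > k for m in forbidden):
                return w, k
    return None

gmec = find_gmec(authorized, forbidden)
```

On this example the search returns w = (1, 1), k = 1, i.e. the two places are mutually exclusive; a control place enforcing m1 + m2 ≤ 1 keeps all authorized markings reachable, which is the permissiveness property the paper targets.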
Optimization of an Aeroservoelastic Wing with Distributed Multiple Control Surfaces
NASA Technical Reports Server (NTRS)
Stanford, Bret K.
2015-01-01
This paper considers the aeroelastic optimization of a subsonic transport wingbox under a variety of static and dynamic aeroelastic constraints. Three types of design variables are utilized: structural variables (skin thickness, stiffener details), the quasi-steady deflection scheduling of a series of control surfaces distributed along the trailing edge for maneuver load alleviation and trim attainment, and the design details of an LQR controller, which commands oscillatory hinge moments into those same control surfaces. Optimization problems are solved where a closed loop flutter constraint is forced to satisfy the required flight margin, and mass reduction benefits are realized by relaxing the open loop flutter requirements.
NASA Technical Reports Server (NTRS)
Tumer, Irem; Mehr, Ali Farhang
2005-01-01
In this paper, a two-level multidisciplinary design approach is described to optimize the effectiveness of ISHM systems. At the top level, the overall safety of the mission consists of system-level variables, parameters, objectives, and constraints that are shared throughout the system and by all subsystems. Each subsystem level then comprises these shared values in addition to subsystem-specific variables, parameters, objectives, and constraints. A hierarchical structure is established to pass shared values up or down between the two levels with system-level and subsystem-level optimization routines.
Reliability Based Design for a Raked Wing Tip of an Airframe
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2011-01-01
A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner, made of composite and metallic materials. The design is formulated for an accepted level of risk or reliability. The design variables, weight, and constraints become functions of reliability. Uncertainties in the load, strength, and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as normal, Weibull, or Gumbel functions. The objective function and constraints, or failure modes, became derived functions of the risk level. Solution of the problem produced the optimum design with weight, variables, and constraints as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design can be selected depending on the level of risk acceptable for a situation. The optimization process achieved up to a 20-percent reduction in weight over traditional design.
Integrated Control Using the SOFFT Control Structure
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1996-01-01
The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems, such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT), thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law needed to understand the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant-gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.
Digital robust control law synthesis using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivekananda
1989-01-01
Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses, and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity study and may be integrated into a simultaneous structure and control optimization scheme.
Flight control with adaptive critic neural network
NASA Astrophysics Data System (ADS)
Han, Dongchen
2001-10-01
In this dissertation, the adaptive critic neural network technique is applied to solve complex nonlinear system control problems. Based on dynamic programming, the adaptive critic neural network can embed the optimal solution into a neural network. Though trained off-line, the neural network forms a real-time feedback controller. Because of its general interpolation properties, the neurocontroller has inherent robustness. The problems solved here are an agile missile control for the U.S. Air Force and a midcourse guidance law for the U.S. Navy. In the first three papers, the neural network was used to control an air-to-air agile missile to implement a minimum-time heading reverse in a vertical plane under the following conditions: a system without constraint, a system with a control inequality constraint, and a system with a state inequality constraint. While the agile missile is a one-dimensional problem, the midcourse guidance law is the first test bed for a multi-dimensional problem. In the fourth paper, the neurocontroller is synthesized to guide a surface-to-air missile to a fixed final condition, and to a flexible final condition from a variable initial condition. In order to evaluate the adaptive critic neural network approach, the numerical solutions for these cases are also obtained by solving a two-point boundary value problem with a shooting method. All of the results showed that the adaptive critic neural network can solve complex nonlinear system control problems.
Airborne Laser Altimetry Mapping of the Greenland Ice Sheet: Application to Mass Balance Assessment
NASA Technical Reports Server (NTRS)
Abdalati, W.; Krabill, W.; Frederick, E.; Manizade, S.; Martin, C.; Sonntag, J.; Swift, R.; Thomas, R.; Wright, W.; Yungel, J.
2000-01-01
In 1998 and 1999, the Arctic Ice Mapping (AIM) program completed resurveys of lines occupied 5 years earlier, revealing elevation changes of the Greenland ice sheet and identifying areas of significant thinning, thickening, and balance. In planning these surveys, consideration had to be given to the spatial constraints associated with aircraft operation, the spatial nature of ice sheet behavior, and limited resources, as well as temporal issues, such as seasonal and interannual variability in the context of measurement accuracy. This paper examines the extent to which the sampling and survey strategy is valid for drawing conclusions on the current state of balance of the Greenland ice sheet. The surveys covered the entire ice sheet with an average distance of 21.4 km between each location on the ice sheet and the nearest flight line. For most of the ice sheet, the elevation changes show relatively little spatial variability, and this variability is significantly smaller in magnitude than the observed elevation-change signal. As a result, we conclude that the density of the sampling and the accuracy of the measurements are sufficient to draw meaningful conclusions on the state of balance of the entire ice sheet over the five-year survey period. Outlet glaciers, however, show far more spatial and temporal variability, and each of the major ones is likely to require individual surveys in order to determine its balance.
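The 21.4 km figure is a mean nearest-neighbor distance between ice-sheet locations and the flight lines. A planar sketch of how such a sampling-density metric can be computed (hypothetical coordinates; a real ice-sheet analysis would use geodesic distances):

```python
import math

def mean_nearest_distance(sites, track_points):
    # For each site, find the distance to the closest surveyed point, then average.
    # Brute force is fine for small inputs; large surveys would use a spatial index.
    total = 0.0
    for sx, sy in sites:
        total += min(math.hypot(sx - tx, sy - ty) for tx, ty in track_points)
    return total / len(sites)

# Toy example: a grid of ice-sheet locations (km) and one flight line along x = 0.
sites = [(x, y) for x in range(0, 50, 10) for y in range(0, 50, 10)]
track = [(0, y) for y in range(0, 50)]
print(mean_nearest_distance(sites, track))  # 20.0, the mean |x| over the grid
```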
Static Analysis Numerical Algorithms
2016-04-01
represented by a collection of intervals (one for each variable) or a convex polyhedron (each dimension of the affine space representing a program variable) ... Another common abstract domain uses a set of linear constraints (i.e., an enclosing polyhedron) to over-approximate the joint values of several
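The interval domain mentioned in the excerpt keeps one range per program variable and soundly over-approximates the set of concrete values. A minimal sketch of such an abstract domain (illustrative only, not any particular analyzer's API):

```python
class Interval:
    # One abstract value per program variable: every concrete value lies in [lo, hi].
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sound abstract addition: the result interval contains every concrete sum.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def join(self, other):
        # Least upper bound, used to merge abstract states at control-flow joins.
        return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 10)   # e.g. a loop counter known to stay in [0, 10]
y = Interval(-5, 5)
print(x + y)          # [-5, 15]: over-approximates all possible values of x + y
print(x.join(y))      # [-5, 10]: abstract state after an if/else assigning either range
```

The polyhedral domain the snippet contrasts this with is strictly more precise, since it can express relations between variables (e.g. x - y <= 2) that per-variable intervals cannot.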
Air emissions due to wind and solar power.
Katzenstein, Warren; Apt, Jay
2009-01-15
Renewables portfolio standards (RPS) encourage large-scale deployment of wind and solar electric power. The power output of these sources varies rapidly, even when several sites are aggregated. In many locations, natural gas generators are the lowest-cost resource available to compensate for this variability, and must ramp up and down quickly to keep the grid stable, affecting their emissions of NOx and CO2. We model a wind or solar photovoltaic plus gas system using measured 1-min time-resolved emissions and heat rate data from two types of natural gas generators, and power data from four wind plants and one solar plant. Over a wide range of renewable penetration, we find CO2 emissions achieve approximately 80% of the emissions reductions expected if the power fluctuations caused no additional emissions. Using steam injection, gas generators achieve only 30-50% of expected NOx emissions reductions, and with dry control NOx emissions increase substantially. We quantify the interaction between state RPSs and NOx constraints, finding that states with substantial RPSs could see significant upward pressure on NOx permit prices, if the gas turbines we modeled are representative of the plants used to mitigate wind and solar power variability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji Zhengfeng; Feng Yuan; Ying Mingsheng
Local quantum operations and classical communication (LOCC) put considerable constraints on many quantum information processing tasks such as cloning and discrimination. Surprisingly, however, discrimination of any two pure states survives such constraints in some sense. We show that cloning is not that lucky; namely, probabilistic LOCC cloning of two product states is strictly less efficient than global cloning. We prove our result by giving explicitly the efficiency formula of local cloning of any two product states.
Active Tension Network model reveals an exotic mechanical state realized in epithelial tissues
NASA Astrophysics Data System (ADS)
Noll, Nicholas; Mani, Madhav; Heemskerk, Idse; Streichan, Sebastian; Shraiman, Boris
Mechanical interactions play a crucial role in epithelial morphogenesis, yet understanding the complex mechanisms through which stress and deformation affect cell behavior remains an open problem. Here we formulate and analyze the Active Tension Network (ATN) model, which assumes that the mechanical balance of cells is dominated by cortical tension and introduces tension-dependent active remodeling of the cortex. We find that ATNs exhibit unusual mechanical properties: i) an ATN behaves as a fluid at short times, but at long times it supports external tension, like a solid; ii) its mechanical equilibrium state has an extensive degeneracy associated with discrete conformal, or "isogonal," deformations of cells. The ATN model predicts a constraint on equilibrium cell geometry, which we demonstrate to hold in certain epithelial tissues. We further show that isogonal modes are observed in the fruit fly embryo, accounting for the striking variability of the apical area of ventral cells and helping to explain the early phase of gastrulation. Living matter thus realizes new and exotic mechanical states, and understanding them sheds light on biological phenomena.
Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance
NASA Technical Reports Server (NTRS)
Markopoulos, Nikos; Calise, Anthony J.
1995-01-01
A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to a/c unconstrained energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed 2-time-scale behavior of a/c during energy climbs is identified and explained. With regard to constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs for such problems that result from the traditional formulations, a complete reformulation is proposed for these problems which, in contrast to the old formulation, will presumably lead to practically useful controllers that can track an inequality constraint boundary asymptotically, and even in the presence of 2-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of 2-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. The thrust program can be used not only with power-limited propulsion systems, but also with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation and is thus uniquely suitable for real-time feedback guidance implementations.
The Ostrogradsky Prescription for BFV Formalism
NASA Astrophysics Data System (ADS)
Nirov, Khazret S.
Gauge-invariant systems of a general form with higher order time derivatives of gauge parameters are investigated within the framework of the BFV formalism. Higher order terms of the BRST charge and BRST-invariant Hamiltonian are obtained. It is shown that the identification rules for Lagrangian and Hamiltonian BRST ghost variables depend on the choice of the extension of constraints from the primary constraint surface.
Quantifying the Hydrologic Effect of Climate Variability in the Lower Colorado Basin
NASA Astrophysics Data System (ADS)
Switanek, M.; Troch, P. A.
2007-12-01
Regional climate patterns are driven in large part by ocean states and associated atmospheric circulations, but are modified through feedbacks from land surface conditions. The latter defines the climate elasticity of a river basin. Many regions that lie between semi-arid and semi-humid zones with seasonal rainfall, for instance, experience prolonged periods of wet and dry spells. Understanding the triggers that bring a river basin abruptly from one state (e.g., the wet period of the late 1990s in the Colorado basin) to another (the multi-year drought from 2001 to the present) is what motivates the present study. Our research methodology investigates the causes of regional climate variability and its effect on hydrologic response. By correlating sea surface temperatures (SST) and sea level pressures (SLP) with basin-averaged precipitation and surface temperature at different monthly time lags, we determine the most influential regions of the Pacific Ocean on lower Colorado climate variability. Using the most correlated data for each month, we derive precipitation and temperature distributions under conditions similar to those of the El Niño Southern Oscillation (ENSO). We compare the distributions of the climatic data, given ENSO constraints on SST and SLP, to the distributions for non-ENSO years. Finally, we use observed stream flows and climatic data to determine the basin's climate elasticity. This allows us to quantitatively translate the predicted regional climate effects of ENSO into hydrologic response. Our presentation will use data for the Little Colorado as an example to demonstrate the procedure and present preliminary results.
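The lag-correlation screening described here can be sketched in a few lines: correlate an SST index against basin-averaged precipitation at each monthly lag and keep the most influential lag. The series below are synthetic stand-ins for the real SST/precipitation data:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient of two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(sst, precip, lag):
    # Correlate SST leading precipitation by `lag` months.
    return pearson(sst[:len(sst) - lag], precip[lag:])

# Synthetic check: precipitation echoes the SST index three months later.
sst = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1, 0, -1]
precip = [5] * 3 + [s + 5 for s in sst[:-3]]
best = max(range(6), key=lambda lag: lagged_correlation(sst, precip, lag))
print(best)  # 3: the 3-month lag shows the strongest correlation
```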
New Techniques in Numerical Analysis and Their Application to Aerospace Systems.
1979-01-01
employment of the sequential gradient-restoration algorithm and the modified quasilinearization algorithm in some problems of structural analysis (Refs. 6 ... and a state inequality constraint. The state inequality constraint is of a special type, namely, it is linear in some or all of the components of
NASA Technical Reports Server (NTRS)
Thareja, R.; Haftka, R. T.
1986-01-01
There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever-increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications, these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible, and these equality constraints may have to be retained in the optimization process. In this paper, the impact of such retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.
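The trade-off the abstract describes, eliminating an equality constraint by reducing design variables versus retaining it in the optimization, can be seen on a toy quadratic. The penalty treatment below is one common way to retain such a constraint (a sketch, not the paper's algorithm), and it exhibits the ill-conditioning that grows with the penalty weight:

```python
# Minimize f(x, y) = (x - 3)^2 + (y - 1)^2 subject to x + y = 2 (optimum: x = 2, y = 0).

# (a) Elimination: substitute y = 2 - x, reducing the number of design variables.
#     d/dx [(x - 3)^2 + (1 - x)^2] = 2(x - 3) + 2(x - 1) = 0  =>  x = 2.
x_elim = 2.0
y_elim = 2.0 - x_elim

# (b) Retention: keep both variables and penalize violation of x + y = 2.
def grad_descent(mu, steps=3000, lr=4e-3):
    # Gradient descent on (x - 3)^2 + (y - 1)^2 + mu * (x + y - 2)^2.
    x = y = 0.0
    for _ in range(steps):
        r = x + y - 2.0                       # constraint residual
        gx = 2.0 * (x - 3.0) + 2.0 * mu * r
        gy = 2.0 * (y - 1.0) + 2.0 * mu * r
        x, y = x - lr * gx, y - lr * gy
    return x, y

x_pen, y_pen = grad_descent(mu=100.0)
# The penalty solution violates the constraint by O(1/mu), and raising mu worsens
# the conditioning (Hessian eigenvalues 2 and 2 + 4*mu) -- the kind of numerical
# sensitivity the paper reports when equality constraints are retained.
print(round(x_pen, 3), round(y_pen, 3))  # close to (2, 0)
```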
Route constraints model based on polychromatic sets
NASA Astrophysics Data System (ADS)
Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu
2018-03-01
With the development of unmanned aerial vehicle (UAV) technology, its fields of application are constantly expanding. The mission planning of a UAV is especially important, and the planning result directly determines whether the UAV can accomplish its task. In order to make mission planning results for UAVs more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, these constraints are complex, and the equipment has strong diversity and variability, which makes the constraints difficult to describe. To solve this problem, this paper draws on polychromatic sets theory, which is used in the advanced manufacturing field to describe complex systems, and presents a mission constraint model for UAVs based on polychromatic sets.
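One simplified reading of the polychromatic-sets idea: equipment items are elements, capabilities are unified "colors," and a Boolean contour matrix records which element carries which color; a mission constraint then becomes a covering test. The equipment and capability names below are hypothetical, purely for illustration:

```python
# Boolean contour: which unified colors (capabilities) each piece of UAV
# equipment possesses.  All names here are made up for the example.
equipment = {
    "eo_camera": {"imaging"},
    "ir_camera": {"imaging", "night_ops"},
    "datalink":  {"relay"},
}

def feasible(selected, required):
    # A mission is feasible if the selected equipment jointly covers
    # every required color.
    covered = set().union(*(equipment[e] for e in selected))
    return required <= covered

print(feasible({"eo_camera", "datalink"}, {"imaging", "relay"}))      # True
print(feasible({"eo_camera", "datalink"}, {"imaging", "night_ops"}))  # False: no night capability
```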
Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices
NASA Astrophysics Data System (ADS)
Garcia Bertrand, Raquel
In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent, resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity limits and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework.
This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.
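For a single period and a single bus without losses, the welfare-maximizing market clearing the system operator performs reduces to merit-order matching of offer and bid blocks, with the uniform price set by the marginal accepted offer. A sketch of that special case (not the dissertation's mixed linear complementarity formulation):

```python
# Generator offers and consumer bids as (quantity in MW, price in $/MWh) blocks.
offers = [(50, 10), (50, 25), (50, 40)]   # ascending offer prices
bids   = [(60, 50), (60, 30), (60, 15)]   # descending bid prices

def clear(offers, bids):
    # Match cheapest offers against highest bids while the bid price covers the offer.
    offers = sorted(offers, key=lambda blk: blk[1])
    bids = sorted(bids, key=lambda blk: -blk[1])
    qty, price = 0.0, 0.0
    oi = bi = 0
    oq, op = offers[0]
    bq, bp = bids[0]
    while oi < len(offers) and bi < len(bids) and bp >= op:
        traded = min(oq, bq)
        qty += traded
        price = op            # uniform price set by the marginal accepted offer
        oq -= traded
        bq -= traded
        if oq == 0:
            oi += 1
            if oi < len(offers):
                oq, op = offers[oi]
        if bq == 0:
            bi += 1
            if bi < len(bids):
                bq, bp = bids[bi]
    return qty, price

qty, price = clear(offers, bids)
print(qty, price)  # 100 MW clears at $25/MWh
```

With network constraints and losses, prices become location-dependent (locational marginal prices) and the clearing must be posed as the optimization/complementarity problem the abstract describes.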
Robust synergetic control design under inputs and states constraints
NASA Astrophysics Data System (ADS)
Rastegar, Saeid; Araújo, Rui; Sadati, Jalil
2018-03-01
In this paper, a novel robust-constrained control methodology for discrete-time linear parameter-varying (DT-LPV) systems is proposed based on a synergetic control theory (SCT) approach. It is shown that in DT-LPV systems without uncertainty, and for any unmeasured bounded additive disturbance, the proposed controller accomplishes the goal of stabilising the system by asymptotically driving the error of the controlled variable to a bounded set containing the origin and then maintaining it there. Moreover, given an uncertain DT-LPV system jointly subject to unmeasured and constrained additive disturbances, and constraints in states, input commands and reference signals (set points), then invariant set theory is used to find an appropriate polyhedral robust invariant region in which the proposed control framework is guaranteed to robustly stabilise the closed-loop system. Furthermore, this is achieved even for the case of varying non-zero control set points in such uncertain DT-LPV systems. The controller is characterised to have a simple structure leading to an easy implementation, and a non-complex design process. The effectiveness of the proposed method and the implications of the controller design on feasibility and closed-loop performance are demonstrated through application examples on the temperature control on a continuous-stirred tank reactor plant, on the control of a real-coupled DC motor plant, and on an open-loop unstable system example.
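The notion of driving the error into a bounded invariant set containing the origin can be illustrated on scalar error dynamics e+ = a*e + d with |d| <= D: for |a| < 1, the interval |e| <= D/(1 - a) is robustly invariant and attractive. A generic ultimate-boundedness sketch (not the paper's SCT controller or its polyhedral invariant-set computation):

```python
import random

def simulate(a, D, steps, x0, seed=1):
    # Stable scalar error dynamics e+ = a*e + d with bounded disturbance |d| <= D.
    random.seed(seed)
    x, traj = x0, []
    for _ in range(steps):
        x = a * x + random.uniform(-D, D)
        traj.append(x)
    return traj

a, D = 0.8, 0.5
bound = D / (1.0 - a)                      # |e| <= 2.5 is robustly invariant
traj = simulate(a, D, steps=200, x0=10.0)  # start well outside the set
tail = traj[100:]                          # after the transient decays
print(all(abs(x) <= bound + 1e-6 for x in tail))  # True: the error enters and stays
```

The guarantee is worst-case: |e_k| <= a^k * |e_0| + D * (1 - a^k)/(1 - a), so the trajectory converges to the invariant interval regardless of the disturbance realization.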
The Variable Hard X-Ray Emission of NGC4945 as Observed by NuSTAR
NASA Technical Reports Server (NTRS)
Puccetti, Simonetta; Comastri, Andrea; Fiore, Fabrizio; Arevalo, Patricia; Risaliti, Guido; Bauer, Franz E.; Brandt, William N.; Stern, Daniel; Harrison, Fiona A.; Alexander, David M.;
2014-01-01
We present a broadband (approx. 0.5 - 79 keV) spectral and temporal analysis of multiple NuSTAR observations combined with archival Suzaku and Chandra data of NGC4945, the brightest extragalactic source at 100 keV. We observe hard X-ray (> 10 keV) flux and spectral variability, with flux variations of a factor of 2 on timescales of 20 ksec. A variable primary continuum dominates the high energy spectrum (> 10 keV) in all the states, while the reflected/scattered flux which dominates at E < 10 keV stays approximately constant. From modelling the complex reflection/transmission spectrum we derive a Compton depth along the line of sight of tau(sub Thomson) approx. 2.9, and a global covering factor for the circumnuclear gas of approx. 0.15. This agrees with the constraints derived from the high energy variability, which implies that most of the high energy flux is transmitted, rather than Compton-scattered. This demonstrates the effectiveness of spectral analysis in constraining the geometric properties of the circumnuclear gas, and validates similar methods used for analyzing the spectra of other bright, Compton-thick AGN. The lower limits on the e-folding energy are between 200 and 300 keV, consistent with previous BeppoSAX, Suzaku and Swift BAT observations. The accretion rate, estimated from the X-ray luminosity and assuming a bolometric correction typical of type 2 AGN, is in the range approx. 0.1 - 0.3 lambda(sub Edd) depending on the flux state. The substantial observed X-ray luminosity variability of NGC4945 implies that large errors can arise from using single-epoch X-ray data to derive L/L(sub Edd) values for obscured AGNs.
The Variable Hard X-Ray Emission of NGC 4945 as Observed by NUSTAR
Puccetti, Simonetta; Comastri, Andrea; Fiore, Fabrizio; ...
2014-09-02
Here, we present a broadband (~0.5-79 keV) spectral and temporal analysis of multiple NuSTAR observations combined with archival Suzaku and Chandra data of NGC 4945, the brightest extragalactic source at 100 keV. We observe hard X-ray (>10 keV) flux and spectral variability, with flux variations of a factor of two on timescales of 20 ks. A variable primary continuum dominates the high-energy spectrum (>10 keV) in all states, while the reflected/scattered flux that dominates at E <10 keV stays approximately constant. From modeling the complex reflection/transmission spectrum, we derive a Compton depth along the line of sight of τ_Thomson ~ 2.9, and a global covering factor for the circumnuclear gas of ~0.15. This agrees with the constraints derived from the high-energy variability, which implies that most of the high-energy flux is transmitted rather than Compton-scattered. This demonstrates the effectiveness of spectral analysis at constraining the geometric properties of the circumnuclear gas, and validates similar methods used for analyzing the spectra of other bright, Compton-thick active galactic nuclei (AGNs). The lower limits on the e-folding energy are between 200 and 300 keV, consistent with previous BeppoSAX, Suzaku, and Swift Burst Alert Telescope observations. The accretion rate, estimated from the X-ray luminosity and assuming a bolometric correction typical of type 2 AGN, is in the range ~0.1-0.3 λEdd depending on the flux state. As a result, the substantial observed X-ray luminosity variability of NGC 4945 implies that large errors can arise from using single-epoch X-ray data to derive L/L Edd values for obscured AGNs.
The Soft State of Cygnus X-1 Observed with NuSTAR: A Variable Corona and a Stable Inner Disk
NASA Technical Reports Server (NTRS)
Walton, D. J.; Tomsick, J. A.; Madsen, K. K.; Grinberg, V.; Barret, D.; Boggs, S. E.; Christensen, F. E.; Clavel, M.; Craig, W. W.; Fabian, A. C.;
2016-01-01
We present a multi-epoch hard X-ray analysis of Cygnus X-1 in its soft state based on four observations with the Nuclear Spectroscopic Telescope Array (NuSTAR). Despite the basic similarity of the observed spectra, there is clear spectral variability between epochs. To investigate this variability, we construct a model incorporating both the standard disk-corona continuum and relativistic reflection from the accretion disk, based on prior work on Cygnus X-1, and apply this model to each epoch independently. We find excellent consistency for the black hole spin and the iron abundance of the accretion disk, which are expected to remain constant on observational timescales. In particular, we confirm that Cygnus X-1 hosts a rapidly rotating black hole, 0.93 < approx. a* < approx. 0.96, in broad agreement with the majority of prior studies of the relativistic disk reflection and constraints on the spin obtained through studies of the thermal accretion disk continuum. Our work also confirms the apparent misalignment between the inner disk and the orbital plane of the binary system reported previously, finding the magnitude of this warp to be approx. 10 deg - 15 deg. This level of misalignment does not significantly change (and may even improve) the agreement between our reflection results and the thermal continuum results regarding the black hole spin. The spectral variability observed by NuSTAR is dominated by the primary continuum, implying variability in the temperature of the scattering electron plasma. Finally, we consistently observe absorption from ionized iron at approx. 6.7 keV, which varies in strength as a function of orbital phase in a manner consistent with the absorbing material being an ionized phase of the focused stellar wind from the supergiant companion star.
The Soft State of Cygnus X-1 Observed with NuSTAR: A Variable Corona and a Stable Inner Disk
NASA Astrophysics Data System (ADS)
Walton, D. J.; Tomsick, J. A.; Madsen, K. K.; Grinberg, V.; Barret, D.; Boggs, S. E.; Christensen, F. E.; Clavel, M.; Craig, W. W.; Fabian, A. C.; Fuerst, F.; Hailey, C. J.; Harrison, F. A.; Miller, J. M.; Parker, M. L.; Rahoui, F.; Stern, D.; Tao, L.; Wilms, J.; Zhang, W.
2016-07-01
We present a multi-epoch hard X-ray analysis of Cygnus X-1 in its soft state based on four observations with the Nuclear Spectroscopic Telescope Array (NuSTAR). Despite the basic similarity of the observed spectra, there is clear spectral variability between epochs. To investigate this variability, we construct a model incorporating both the standard disk-corona continuum and relativistic reflection from the accretion disk, based on prior work on Cygnus X-1, and apply this model to each epoch independently. We find excellent consistency for the black hole spin and the iron abundance of the accretion disk, which are expected to remain constant on observational timescales. In particular, we confirm that Cygnus X-1 hosts a rapidly rotating black hole, 0.93 ≲ a* ≲ 0.96, in broad agreement with the majority of prior studies of the relativistic disk reflection and constraints on the spin obtained through studies of the thermal accretion disk continuum. Our work also confirms the apparent misalignment between the inner disk and the orbital plane of the binary system reported previously, finding the magnitude of this warp to be ˜10°-15°. This level of misalignment does not significantly change (and may even improve) the agreement between our reflection results and the thermal continuum results regarding the black hole spin. The spectral variability observed by NuSTAR is dominated by the primary continuum, implying variability in the temperature of the scattering electron plasma. Finally, we consistently observe absorption from ionized iron at ˜6.7 keV, which varies in strength as a function of orbital phase in a manner consistent with the absorbing material being an ionized phase of the focused stellar wind from the supergiant companion star.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gohar, Y.; Nuclear Engineering Division
2005-05-01
In fusion reactors, the blanket design and its characteristics have a major impact on the reactor performance, size, and economics. The selection and arrangement of the blanket materials, the dimensions of the different blanket zones, and the different requirements of the selected materials for a satisfactory performance are the main parameters which define the blanket performance. These parameters translate to a large number of variables and design constraints, which need to be simultaneously considered in the blanket design process. This represents a major design challenge because of the lack of a comprehensive design tool capable of considering all these variables to define the optimum blanket design and satisfy all the design constraints for the adopted figure of merit and the blanket design criteria. The blanket design capabilities of the First Wall/Blanket/Shield Design and Optimization System (BSDOS) have been developed to overcome this difficulty and to provide a state-of-the-art research and design tool for performing blanket design analyses. This paper describes some of the BSDOS capabilities and demonstrates their use. In addition, the use of the optimization capability of the BSDOS can result in a significant blanket performance enhancement and cost saving for the reactor design under consideration. In this paper, examples are presented which utilize an earlier version of the ITER solid breeder blanket design and a high power density self-cooled lithium blanket design to demonstrate some of the BSDOS blanket design capabilities.
Sulfate burial constraints on the Phanerozoic sulfur cycle.
Halevy, Itay; Peters, Shanan E; Fischer, Woodward W
2012-07-20
The sulfur cycle influences the respiration of sedimentary organic matter, the oxidation state of the atmosphere and oceans, and the composition of seawater. However, the factors governing the major sulfur fluxes between seawater and sedimentary reservoirs remain incompletely understood. Using macrostratigraphic data, we quantified sulfate evaporite burial fluxes through Phanerozoic time. Approximately half of the modern riverine sulfate flux comes from weathering of recently deposited evaporites. Rates of sulfate burial are unsteady and linked to changes in the area of marine environments suitable for evaporite formation and preservation. By contrast, rates of pyrite burial and weathering are higher, less variable, and largely balanced, highlighting a greater role of the sulfur cycle in regulating atmospheric oxygen.
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2005-01-01
We demonstrate a new framework for analyzing and controlling distributed systems by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded-rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-SAT constraint satisfaction problem and for unconstrained minimization of NK functions.
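The annealing idea, lowering a temperature so that the joint-state probability distribution concentrates on optimal configurations, can be seen exactly on a tiny constraint-satisfaction instance by enumerating the Boltzmann distribution (a brute-force illustration, not the paper's Lagrangian-based multi-agent algorithm):

```python
import itertools
import math

# A tiny satisfiability instance over x0..x3.  A literal (i, True) means x_i,
# (i, False) means NOT x_i; the "energy" of an assignment is its number of
# unsatisfied clauses, so energy minima are solutions of the constraint problem.
clauses = [
    [(0, True), (1, False)],
    [(1, True), (2, True)],
    [(2, False), (3, True)],
    [(0, False), (3, False)],
]

def energy(assign):
    return sum(1 for clause in clauses
               if not any(assign[i] == want for i, want in clause))

def mass_on_optima(T):
    # Boltzmann distribution p(x) ~ exp(-E(x)/T) over the joint state of all
    # variables; annealing means lowering T so mass concentrates on the optima.
    states = list(itertools.product([False, True], repeat=4))
    weights = {s: math.exp(-energy(s) / T) for s in states}
    Z = sum(weights.values())
    emin = min(map(energy, states))
    return sum(w for s, w in weights.items() if energy(s) == emin) / Z

print(mass_on_optima(5.0) < mass_on_optima(0.1))  # True: cooling concentrates mass
```

At high temperature the distribution is nearly uniform over all 16 joint states; at low temperature essentially all probability sits on the satisfying assignments.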
Constraint algebra in Smolin's G → 0 limit of 4D Euclidean gravity
NASA Astrophysics Data System (ADS)
Varadarajan, Madhavan
2018-05-01
Smolin's generally covariant G_Newton → 0 limit of 4d Euclidean gravity is a useful toy model for the study of the constraint algebra in loop quantum gravity (LQG). In particular, the commutator between its Hamiltonian constraints has a metric-dependent structure function. While a prior LQG-like construction of nontrivial anomaly free constraint commutators for the model exists, that work suffers from two defects. First, Smolin's remarks on the inability of the quantum dynamics to generate propagation effects apply. Second, the construction only yields the action of a single Hamiltonian constraint together with the action of its commutator through a continuum limit of corresponding discrete approximants; the continuum limit of a product of two or more constraints does not exist. Here, we incorporate changes in the quantum dynamics through structural modifications in the choice of discrete approximants to the quantum Hamiltonian constraint. The new structure is motivated by that responsible for propagation in an LQG-like quantization of parametrized field theory and significantly alters the space of physical states. We study the off-shell constraint algebra of the model in the context of these structural changes and show that the continuum limit action of multiple products of Hamiltonian constraints is (a) supported on an appropriate domain of states, (b) yields anomaly free commutators between pairs of Hamiltonian constraints, and (c) is diffeomorphism covariant. Many of our considerations seem robust enough to be applied to the setting of 4d Euclidean gravity.
Dynamic mortar finite element method for modeling of shear rupture on frictional rough surfaces
NASA Astrophysics Data System (ADS)
Tal, Yuval; Hager, Bradford H.
2017-09-01
This paper presents a mortar-based finite element formulation for modeling the dynamics of shear rupture on rough interfaces governed by slip-weakening and rate and state (RS) friction laws, focusing on the dynamics of earthquakes. The method utilizes the dual Lagrange multipliers and the primal-dual active set strategy concepts, together with a consistent discretization and linearization of the contact forces and constraints, and the friction laws to obtain a semi-smooth Newton method. The discretization of the RS friction law involves a procedure to condense out the state variables, thus eliminating the addition of another set of unknowns into the system. Several numerical examples of shear rupture on frictional rough interfaces demonstrate the efficiency of the method and examine the effects of the different time discretization schemes on the convergence, energy conservation, and the time evolution of shear traction and slip rate.
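A zero-dimensional sketch of the rate-and-state (RS) friction law (aging form) referenced above: after a velocity step, the state variable relaxes to the steady state where friction equals mu0 + (a - b) * ln(v/v0). Parameter values are illustrative, and this omits the paper's mortar finite element machinery entirely:

```python
import math

# Rate-and-state friction, aging law -- a minimal point model, not the paper's
# finite element formulation.  Parameters are illustrative lab-scale values.
mu0, a, b = 0.6, 0.010, 0.015   # reference friction and direct/evolution effects
v0, Dc = 1e-6, 1e-5             # reference slip rate (m/s), characteristic slip (m)

def friction(v, theta):
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / Dc)

def evolve_state(v, theta, dt, steps):
    # Aging law: d(theta)/dt = 1 - v*theta/Dc, so theta relaxes toward Dc/v.
    for _ in range(steps):
        theta += dt * (1.0 - v * theta / Dc)
    return theta

v = 1e-5                    # step the slip rate up by a factor of 10
theta = Dc / v0             # state variable starts at the old steady state
theta = evolve_state(v, theta, dt=1e-3, steps=100000)
mu_ss = mu0 + (a - b) * math.log(v / v0)       # analytic steady-state friction
print(abs(friction(v, theta) - mu_ss) < 1e-6)  # True: velocity-weakening, (a - b) < 0
```

Because a - b < 0 here, steady-state friction drops with slip rate, the velocity-weakening regime in which such models produce stick-slip (earthquake-like) behavior.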
Low-noise encoding of active touch by layer 4 in the somatosensory cortex.
Hires, Samuel Andrew; Gutnisky, Diego A; Yu, Jianing; O'Connor, Daniel H; Svoboda, Karel
2015-08-06
Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% of spikes occurring in millisecond-timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.
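The "variance smaller than expected from Poisson processes" claim is usually quantified by the Fano factor, the across-trial variance of spike counts divided by their mean: 1 for a Poisson process, below 1 for sub-Poisson responses. A minimal sketch with invented spike counts:

```python
# Fano factor sketch; the spike counts below are made up for illustration.
def fano(counts):
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean

spread_counts  = [3, 5, 4, 2, 6, 4, 3, 5]   # counts scattered around the mean
reliable_counts = [4, 4, 5, 4, 4, 4, 5, 4]  # tightly clustered, strongly sub-Poisson
print(fano(spread_counts) > fano(reliable_counts))  # True
```

The more reliable the response across repetitions, the smaller the ratio, with zero as the theoretical minimum for a perfectly repeatable count.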
NASA Astrophysics Data System (ADS)
Hoheisel, C.
1988-09-01
Equilibrium molecular dynamics calculations with constraints have been performed for model liquids SF6 and CF4. The computations were carried out with four- and six-center Lennard-Jones potentials and up to 2×10^5 integration steps. Shear viscosity, bulk viscosity and thermal conductivity have been calculated using Green-Kubo relations in the formulation of "molecule variables." Various thermodynamic states were investigated. For SF6, a detailed comparison with experimental data was possible. For CF4, the MD results could only be compared with experiment for one liquid state. For the latter liquid, a complementary comparison was performed using MD results obtained with a one-center Lennard-Jones potential. A limited test of the particle-number dependence of the results is presented. Partial and total correlation functions are shown and discussed with respect to findings obtained for the one-center Lennard-Jones liquid.
What physicians need to know about dreams and dreaming.
Pagel, James F
2012-11-01
An overview of the current status of dream science is given, designed to provide a basic background in this field for the sleep-interested physician. No cognitive state has been more extensively studied, yet remains more misunderstood, than dreaming. Much older work is methodologically limited by lack of definitions, small sample sizes, and constraints of theoretical perspective, with evidence equivocal as to whether any special relationship exists between rapid eye movement (REM) sleep and dreaming. As the relationship between dreams and REM sleep is so poorly defined, evidence-based studies of dreaming require a dream report. The different aspects of dreaming that can be studied include dream and nightmare recall frequency, dream content, the effect of dreaming on waking behaviors, dream- and nightmare-associated medications, and pathophysiology affecting dreaming. Whether studied from behavioral, neuroanatomical, neurochemical, pathophysiological or electrophysiological perspectives, dreaming reveals itself to be a complex cognitive state affected by a wide variety of medical, psychological, sleep and social variables.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
NASA Astrophysics Data System (ADS)
Hand, J. L.; Schichtel, B. A.; Malm, W. C.; Pitchford, M.; Frank, N. H.
2014-11-01
Monthly, seasonal, and annual mean estimates of urban influence on regional concentrations of major aerosol species were computed using speciated aerosol data from the rural IMPROVE network (Interagency Monitoring of Protected Visual Environments) and the United States Environmental Protection Agency's urban Chemical Speciation Network for the 2008 through 2011 period. Aggregated for sites across the continental United States, the annual mean and one standard error in urban excess (defined as the ratio of urban to nearby rural concentrations) was highest for elemental carbon (3.3 ± 0.2), followed by ammonium nitrate (2.5 ± 0.2), particulate organic matter (1.78 ± 0.08), and ammonium sulfate (1.23 ± 0.03). The seasonal variability in urban excess was significant for carbonaceous aerosols and ammonium nitrate in the West, in contrast to the low seasonal variability in the urban influence of ammonium sulfate. Generally for all species, higher excess values in the West were associated with localized urban sources while in the East excess was more regional in extent. In addition, higher excess values in the western United States in winter were likely influenced not only by differences in sources but also by combined meteorological and topographic effects. This work has implications for understanding the spatial heterogeneity of major aerosol species near the interface of urban and rural regions and therefore for designing appropriate air quality management strategies. In addition, the spatial patterns in speciated mass concentrations provide constraints for regional and global models.
Hard and Soft Constraints in Reliability-Based Design Optimization
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows one (i) to determine whether a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an ℓ∞ formulation for the efficient manipulation of hyper-rectangular sets is proposed.
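The soft-constraint side can be illustrated by the simplest possible estimator. A hedged sketch using plain Monte Carlo with a hypothetical constraint function g and a componentwise-bounded uniform uncertainty model, not the paper's closed-form bounds or hybrid method:

```python
# Monte Carlo estimate of P(soft constraint violated); g and the bounds are invented.
import random

random.seed(0)

def g(p1, p2):
    # hypothetical design requirement: g <= 0 means "satisfied"
    return p1 ** 2 + p2 - 1.0

def violation_probability(n=100_000):
    hits = 0
    for _ in range(n):
        p1 = random.uniform(-1.0, 1.0)   # componentwise bounds on the parameters
        p2 = random.uniform(-1.0, 1.0)
        if g(p1, p2) > 0:
            hits += 1
    return hits / n

print(round(violation_probability(), 3))
```

For this particular g the exact value works out to 1/6, so the sample estimate lands near 0.167; a scheme like the paper's would replace the raw sampling with analytic upper bounds plus conditional sampling.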
NASA Technical Reports Server (NTRS)
Pendergrass, J. R.; Walsh, R. L.
1975-01-01
An examination of the factors which modify the simulation of a constraint in the motion of the aft attach points of the orbiter and external tank during separation has been made. The factors considered were both internal (spring and damper constants) and external (friction coefficient and dynamic pressure). The results show that an acceptable choice of spring/damper constant combinations exist over the expected range of the external factors and that the choice is consistent with a practical integration interval. The constraint model is shown to produce about a 10 percent increase in the relative body pitch angles over the unconstrained case whereas the MDC-STL constraint model is shown to produce about a 38 percent increase.
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus
2016-03-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representations were achieved for low-flow statistics in particular. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low-flow statistics.
CONSTRAINTS ON VARIABLES IN SYNTAX.
ERIC Educational Resources Information Center
ROSS, JOHN ROBERT
IN ATTEMPTING TO DEFINE "SYNTACTIC VARIABLE," THE AUTHOR BASES HIS DISCUSSION ON THE ASSUMPTION THAT SYNTACTIC FACTS ARE A COLLECTION OF TWO TYPES OF RULES--CONTEXT-FREE PHRASE STRUCTURE RULES (GENERATING UNDERLYING OR DEEP PHRASE MARKERS) AND GRAMMATICAL TRANSFORMATIONS, WHICH MAP UNDERLYING PHRASE MARKERS ONTO SUPERFICIAL (OR SURFACE) PHRASE…
Data Combination and Instrumental Variables in Linear Models
ERIC Educational Resources Information Center
Khawand, Christopher
2012-01-01
Instrumental variables (IV) methods allow for consistent estimation of causal effects, but suffer from poor finite-sample properties and data availability constraints. IV estimates also tend to have relatively large standard errors, often inhibiting the interpretability of differences between IV and non-IV point estimates. Lastly, instrumental…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Kyri; Toomey, Bridget
Evolving power systems with increasing levels of stochasticity call for solving optimal power flow problems with large numbers of random variables. Weather forecasts, electricity prices, and shifting load patterns introduce higher levels of uncertainty and can yield optimization problems that are difficult to solve in an efficient manner. Solution methods for single chance constraints in optimal power flow problems have been considered in the literature, ensuring single constraints are satisfied with a prescribed probability; however, joint chance constraints, ensuring multiple constraints are simultaneously satisfied, have predominantly been solved via scenario-based approaches or by utilizing Boole's inequality as an upper bound. In this paper, joint chance constraints are used to solve an AC optimal power flow problem while preventing overvoltages in distribution grids under high penetrations of photovoltaic systems. A tighter version of Boole's inequality is derived and used to provide a new upper bound on the joint chance constraint, and simulation results are shown demonstrating the benefit of the proposed upper bound. The new framework allows for a less conservative and more computationally efficient solution to considering joint chance constraints, specifically regarding preventing overvoltages.
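For context, the standard Boole (union) bound that the paper tightens works as follows: if each of m individual constraints is allowed violation probability eps_i and the eps_i sum to at most eps, then all m constraints hold jointly with probability at least 1 - eps. A sketch of the usual uniform risk allocation (the paper's tightened bound is not reproduced here):

```python
# Boole's inequality as a conservative treatment of a joint chance constraint.
def boole_budget(eps, m):
    """Uniform risk allocation: each of m constraints gets eps/m."""
    return [eps / m] * m

def joint_satisfaction_lower_bound(individual_risks):
    # P(all constraints hold) >= 1 - sum of individual violation probabilities
    return 1.0 - sum(individual_risks)

risks = boole_budget(0.05, 5)   # 5 constraints sharing a 5% joint risk budget
print(round(joint_satisfaction_lower_bound(risks), 10))  # 0.95
```

The conservatism the paper attacks is visible here: the bound charges the full risk of every constraint even when violations overlap.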
Panel flutter optimization by gradient projection
NASA Technical Reports Server (NTRS)
Pierson, B. L.
1975-01-01
A gradient projection optimal control algorithm incorporating conjugate gradient directions of search is described and applied to several minimum weight panel design problems subject to a flutter speed constraint. New numerical solutions are obtained for both simply-supported and clamped homogeneous panels of infinite span for various levels of inplane loading and minimum thickness. The minimum thickness inequality constraint is enforced by a simple transformation of variables.
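The "simple transformation of variables" for the minimum-thickness constraint is presumably the squared-slack substitution t = t_min + s^2, which makes every value of the unconstrained variable s feasible; a sketch under that assumption (the exact transformation in the paper is not stated in the abstract):

```python
# Enforcing t >= t_min by optimizing an unconstrained variable s, with
# t = t_min + s**2; the numeric value of t_min is invented.
def thickness(s, t_min=0.002):
    return t_min + s * s   # nonnegative offset guarantees t >= t_min for any s

print(all(thickness(s) >= 0.002 for s in [-0.1, 0.0, 0.05]))  # True
```

This removes the inequality from the problem entirely, at the cost of a curved design space in s.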
NASA Astrophysics Data System (ADS)
Stephens, G. L.; Webster, P. J.; OBrien, D. M.
2013-12-01
We currently lack a quantitative understanding of how the Earth's energy balance and the poleward energy transport adjust to the different forcings that determine climate change; at present, no constraints guide this understanding. We will demonstrate that the Earth's energy balance exhibits a remarkable symmetry about the equator, and that this symmetry is a necessary condition of a steady-state climate. Our analysis points to clouds as the principal agent that regulates this symmetry and sets the steady state. The existence of this thermodynamic steady-state constraint on climate, and the symmetry required to sustain it, leads to important inferences about the synchronous nature of climate changes between hemispheres, offering, for example, insights on mechanisms that can sustain global ice ages forced by asymmetric hemispheric solar radiation variations, or on how climate may respond to increases in greenhouse gas concentration. Further inferences regarding cloud effects on climate can also be deduced without resorting to the complex and intricate processes of cloud formation, whose representation continues to challenge the climate modeling community. The constraint suggests that cloud feedbacks must be negative, buffering the system against change. We will show that this constraint does not exist in the current CMIP5 model experiments, and the lack of such a constraint suggests there is insufficient buffering in models in response to external forcings.
Manning, Kathryn Y; Fehlings, Darcy; Mesterman, Ronit; Gorter, Jan Willem; Switzer, Lauren; Campbell, Craig; Menon, Ravi S
2015-10-01
The aim was to identify neuroimaging predictors of clinical improvements following constraint-induced movement therapy. Resting state functional magnetic resonance and diffusion tensor imaging data were acquired in 7 children with hemiplegic cerebral palsy. Clinical and magnetic resonance imaging (MRI) data were acquired at baseline and 1 month later following a 3-week constraint therapy regimen. A more negative baseline laterality index characterizing an atypical unilateral sensorimotor resting state network significantly correlated with an improvement in the Canadian Occupational Performance Measure score (r = -0.81, P = .03). A more unilateral network with decreased activity in the affected hemisphere was associated with greater improvements in clinical scores. Higher mean diffusivity in the posterior limb of the internal capsule of the affected tract correlated significantly with improvements in the Jebsen-Taylor score (r = -0.83, P = .02). Children with more compromised networks and tracts improved the most following constraint therapy. © The Author(s) 2015.
Integrated optimization of nonlinear R/C frames with reliability constraints
NASA Technical Reports Server (NTRS)
Soeiro, Alfredo; Hoit, Marc
1989-01-01
A structural optimization algorithm was developed that includes global displacements as decision variables. The algorithm was applied to planar reinforced concrete frames with nonlinear material behavior subjected to static loading. The flexural performance of the elements was evaluated as a function of the actual stress-strain diagrams of the materials. Formation of rotational hinges with strain hardening was allowed, and the equilibrium constraints were updated accordingly. The adequacy of the frames was guaranteed by imposing as constraints required reliability indices for the members, maximum global displacements for the structure, and a maximum system probability of failure.
On Reformulating Planning as Dynamic Constraint Satisfaction
NASA Technical Reports Server (NTRS)
Frank, Jeremy; Jonsson, Ari K.; Morris, Paul; Koga, Dennis (Technical Monitor)
2000-01-01
In recent years, researchers have reformulated STRIPS planning problems as SAT problems or CSPs. In this paper, we discuss the Constraint-Based Interval Planning (CBIP) paradigm, which can represent planning problems incorporating interval time and resources. We describe how to reformulate mutual exclusion constraints for a CBIP-based system, the Extendible Uniform Remote Operations Planner Architecture (EUROPA). We show that reformulations involving dynamic variable domains restrict the algorithms which can be used to solve the resulting DCSP. We present an alternative formulation which does not employ dynamic domains, and describe the relative merits of the different reformulations.
Scheduling the resident 80-hour work week: an operations research algorithm.
Day, T Eugene; Napoli, Joseph T; Kuo, Paul C
2006-01-01
The resident 80-hour work week requires that programs now schedule duty hours. Typically, scheduling is performed in an empirical "trial-and-error" fashion. However, this is a classic "scheduling" problem from the field of operations research (OR). It is similar to scheduling issues that airlines must face with pilots and planes routing through various airports at various times. The authors hypothesized that an OR approach using iterative computer algorithms could provide a rational scheduling solution. Institution-specific constraints of the residency problem were formulated. A total of 56 residents are rotating through 4 hospitals. Additional constraints were dictated by the Residency Review Committee (RRC) rules or the specific surgical service. For example, at Hospital 1, during the weekday hours between 6 am and 6 pm, there will be a PGY4 or PGY5 and a PGY2 or PGY3 on-duty to cover Service "A." A series of equations and logic statements was generated to satisfy all constraints and requirements. These were restated in the Optimization Programming Language used by the ILOG software suite for solving mixed integer programming problems. An integer programming solution was generated to this resource-constrained assignment problem. A total of 30,900 variables and 12,443 constraints were required. A total of man-hours of programming were used; computer run-time was 25.9 hours. A weekly schedule was generated for each resident that satisfied the RRC regulations while fulfilling all stated surgical service requirements. Each required between 64 and 80 weekly resident duty hours. The authors conclude that OR is a viable approach to schedule resident work hours. This technique is sufficiently robust to accommodate changes in resident numbers, service requirements, and service and hospital rotations.
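The flavor of the formulation can be shown on a toy instance. A brute-force sketch with invented numbers (3 residents, one 12-hour shift per day for a week, a 3-shift weekly cap standing in for the 80-hour rule), not the authors' ILOG mixed-integer model:

```python
# Toy resident scheduling as constrained search: choose who covers each day,
# subject to a per-resident weekly shift cap. All numbers are made up.
from itertools import product

RESIDENTS, DAYS, MAX_SHIFTS = 3, 7, 3

def feasible_schedules():
    out = []
    for assign in product(range(RESIDENTS), repeat=DAYS):  # assign[d] covers day d
        if all(assign.count(r) <= MAX_SHIFTS for r in range(RESIDENTS)):
            out.append(assign)
    return out

schedules = feasible_schedules()
print(len(schedules) > 0)  # True: e.g. (0, 0, 0, 1, 1, 1, 2) is feasible
```

With 30,900 variables and 12,443 constraints, exhaustive enumeration like this is hopeless, which is exactly why the authors turn to integer programming; an ILP solver would additionally optimize an objective over the feasible set rather than merely enumerate it.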
Direct SQP-methods for solving optimal control problems with delays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goellmann, L.; Bueskens, C.; Maurer, H.
The maximum principle for optimal control problems with delays leads to a boundary value problem (BVP) which is retarded in the state and advanced in the costate function. Based on shooting techniques, solution methods for this type of BVP have been proposed. In recent years, direct optimization methods have been favored for solving control problems without delays. Direct methods approximate the control and the state over a fixed mesh and solve the resulting NLP problem with SQP methods. These methods dispense with the costate function and have been shown to be robust and efficient. In this paper, we propose a direct SQP method for retarded control problems. In contrast to conventional direct methods, only the control variable is approximated, e.g. by spline functions. The state is computed via a high-order Runge-Kutta type algorithm and does not enter the NLP problem explicitly through an equation. This approach reduces the number of optimization variables considerably and is implementable even on a PC. Our method is illustrated by the numerical solution of retarded control problems with constraints. In particular, we consider the control of a continuous stirred tank reactor which has been solved by dynamic programming. This example illustrates the robustness and efficiency of the proposed method. Open questions concerning sufficient conditions and convergence of discretized NLP problems are discussed.
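The core idea, discretizing only the control and recovering the state by forward simulation so that it never appears as an optimization variable, can be sketched on a delay-free toy problem. This sketch uses Euler integration and finite-difference gradient descent in place of the paper's Runge-Kutta/SQP machinery, and all problem data are invented:

```python
# Direct "control-only" parameterization sketch: piecewise-constant u, state by
# forward simulation of x' = -x + u, quadratic cost. Everything here is a toy.
def simulate_cost(u, x0=1.0, dt=0.1):
    x, cost = x0, 0.0
    for uk in u:
        cost += (x * x + uk * uk) * dt
        x += dt * (-x + uk)          # Euler step of x' = -x + u
    return cost

def optimize(u, iters=200, step=0.05, h=1e-6):
    u = list(u)
    for _ in range(iters):
        grad = []
        for k in range(len(u)):      # finite-difference gradient in u only
            up = u[:]
            up[k] += h
            grad.append((simulate_cost(up) - simulate_cost(u)) / h)
        u = [uk - step * g for uk, g in zip(u, grad)]
    return u

u0 = [0.0] * 20
u_opt = optimize(u0)
print(simulate_cost(u_opt) < simulate_cost(u0))  # True
```

Note that the decision vector has only 20 entries (the control mesh); a method discretizing both state and control would roughly double it, which is the saving the abstract refers to.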
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
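A single chance constraint of the kind described can be checked by simulation. A sketch with invented leg times (Gaussian means and standard deviations, in hours) and a 90% on-time requirement, not the paper's binary integer model or heuristic algorithm:

```python
# Monte Carlo check of an on-time-delivery chance constraint for one route;
# the legs, deadline, and probability threshold are all invented.
import random

random.seed(1)

def on_time_probability(legs, deadline, n=50_000):
    hits = 0
    for _ in range(n):
        total = sum(random.gauss(mu, sigma) for mu, sigma in legs)
        if total <= deadline:
            hits += 1
    return hits / n

legs = [(30.0, 3.0), (12.0, 2.0), (20.0, 4.0)]   # (mean, std) per sea/rail leg
p = on_time_probability(legs, deadline=70.0)
print(p >= 0.9)  # True: this route satisfies the chance constraint
```

In the routing model such a check would gate whether a candidate route is admissible before its expected cost is compared against alternatives.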
A generalized simplest equation method and its application to the Boussinesq-Burgers equation.
Sudao, Bilige; Wang, Xiaomin
2015-01-01
In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In the method, we choose a solution expression with a variable coefficient and a variable-coefficient ordinary differential auxiliary equation. This method can yield a Bäcklund transformation between NLEEs and a related constraint equation. By dealing with the constraint equation, we can derive an infinite number of exact solutions for NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As applications, we obtained wide classes of exact solutions for the Boussinesq-Burgers equation by using the generalized simplest equation method.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Bhat, R. B.
1979-01-01
A finite element program is linked with a general purpose optimization program in a 'programming system' which includes user-supplied codes that contain problem-dependent formulations of the design variables, objective function and constraints. The result is a system adaptable to a wide spectrum of structural optimization problems. In a sample of numerical examples, the design variables are the cross-sectional dimensions and the parameters of overall shape geometry, constraints are applied to stresses, displacements, buckling and vibration characteristics, and structural mass is the objective function. Thin-walled, built-up structures and frameworks are included in the sample. Details of the system organization and characteristics of the component programs are given.
2011-01-01
Background: A gene's position in regulatory, protein interaction or metabolic networks can be predictive of the strength of purifying selection acting on it, but these relationships are neither universal nor invariably strong. Following work in bacteria, fungi and invertebrate animals, we explore the relationship between selective constraint and metabolic function in mammals. Results: We measure the association between selective constraint, estimated by the ratio of nonsynonymous (Ka) to synonymous (Ks) substitutions, and several, primarily metabolic, measures of gene function. We find significant differences between the selective constraints acting on enzyme-coding genes from different cellular compartments, with the nucleus showing higher constraint than genes from either the cytoplasm or the mitochondria. Among metabolic genes, the centrality of an enzyme in the metabolic network is significantly correlated with Ka/Ks. In contrast to yeasts, gene expression magnitude does not appear to be the primary predictor of selective constraint in these organisms. Conclusions: Our results imply that the relationship between selective constraint and enzyme centrality is complex: the strength of selective constraint acting on mammalian genes is quite variable and does not appear to exclusively follow patterns seen in other organisms. PMID:21470417
Acceleration constraints in modeling and control of nonholonomic systems
NASA Astrophysics Data System (ADS)
Bajodah, Abdulrahman H.
2003-10-01
Acceleration constraints are used to enhance modeling techniques for dynamical systems. In particular, Kane's equations of motion subjected to bilateral constraints, unilateral constraints, and servo-constraints are modified by utilizing acceleration constraints for the purpose of simplifying the equations and increasing their applicability. The tangential properties of Kane's method provide relationships between the holonomic and the nonholonomic partial velocities, and hence allow one to describe nonholonomic generalized active and inertia forces in terms of their holonomic counterparts, i.e., those which correspond to the system without constraints. Therefore, based on the modeling process objectives, the holonomic and the nonholonomic vector entities in Kane's approach are used interchangeably to model holonomic and nonholonomic systems. When the holonomic partial velocities are used to model nonholonomic systems, the resulting models are full-order (also called nonminimal or unreduced) and separated in accelerations. As a consequence, they are readily integrable and can be used for generic system analysis. Other related topics are constraint forces, numerical stability of the nonminimal equations of motion, and numerical constraint stabilization. The two types of unilateral constraints considered are impulsive and friction constraints. Impulsive constraints are modeled by means of continuous-in-velocities and impulse-momentum approaches. In controlled motion, the acceleration form of constraints is utilized with the Moore-Penrose generalized inverse of the corresponding constraint matrix to solve for the inverse dynamics of servo-constraints, and for the redundancy resolution of overactuated manipulators. If control variables are involved in the algebraic constraint equations, then these tools are used to modify the controlled equations of motion in order to facilitate control system design. An illustrative example of spacecraft stabilization is presented.
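The Moore-Penrose step mentioned for servo-constraints can be sketched directly: given the acceleration form A(q, q̇) q̈ = b of the constraints, the pseudoinverse returns the minimum-norm accelerations consistent with them. The matrices below are made up for illustration:

```python
# Minimum-norm resolution of an acceleration-form constraint via the
# Moore-Penrose pseudoinverse; A and b are invented, not from the thesis.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])     # hypothetical: 2 constraints on 3 coordinates
b = np.array([1.0, 2.0])

qddot = np.linalg.pinv(A) @ b        # minimum-norm particular solution
print(np.allclose(A @ qddot, b))     # True: constraints satisfied exactly
```

Because A has more columns than rows, infinitely many accelerations satisfy the constraint; the pseudoinverse picks the one of smallest Euclidean norm, which is the property exploited for redundancy resolution in overactuated manipulators.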
Constraint reasoning in deep biomedical models.
Cruz, Jorge; Barahona, Pedro
2005-05-01
Deep biomedical models are often expressed by means of differential equations. Despite their expressive power, they are difficult to reason about and to base decisions on, given their non-linearity and the important effects that uncertainty in the data may cause. The objective of this work is to propose a constraint reasoning framework to support safe decisions based on deep biomedical models. The methods used in our approach include generic constraint propagation techniques for reducing the bounds of uncertainty of the numerical variables, complemented with new constraint reasoning techniques that we developed to handle differential equations. The results of our approach are illustrated in biomedical models for the diagnosis of diabetes, tuning of drug design and epidemiology, where it was a valuable decision-supporting tool notwithstanding the uncertainty in the data. The main conclusion that follows from the results is that, in biomedical decision support, constraint reasoning may be a worthwhile alternative to traditional simulation methods, especially when safe decisions are required.
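The generic bound-narrowing step such frameworks build on can be sketched for a single sum constraint; the differential-equation extensions described in the abstract are not reproduced, and the intervals are invented:

```python
# One interval-propagation step for the constraint x + y == c: each variable's
# bounds are narrowed using the other's. Intervals are (lo, hi) tuples.
def propagate_sum(xb, yb, c):
    xlo, xhi = xb
    ylo, yhi = yb
    # x = c - y, so x is confined to [c - yhi, c - ylo]; then narrow y from x.
    xlo, xhi = max(xlo, c - yhi), min(xhi, c - ylo)
    ylo, yhi = max(ylo, c - xhi), min(yhi, c - xlo)
    return (xlo, xhi), (ylo, yhi)

xb, yb = propagate_sum((0.0, 8.0), (0.0, 5.0), 10.0)
print(xb, yb)  # (5.0, 8.0) (2.0, 5.0)
```

Iterating such narrowing steps over all constraints until a fixed point is the basic propagation loop; safety comes from the fact that no value consistent with the constraints is ever discarded.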
A Graph Based Backtracking Algorithm for Solving General CSPs
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Goodwin, Scott D.
2003-01-01
Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts contributed to the development of a class of CSP-solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits of both decomposition and search approaches while overcoming their weaknesses.
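Chronological backtracking, the baseline search method that omega-CDBT improves on, can be sketched as follows. This is a generic solver, not the omega-CDBT algorithm itself, and the graph-colouring instance is invented for illustration:

```python
def consistent(var, value, assignment, constraints):
    """Check value against all constraints touching already-assigned variables."""
    for (u, v), pred in constraints.items():
        if u == var and v in assignment and not pred(value, assignment[v]):
            return False
        if v == var and u in assignment and not pred(assignment[u], value):
            return False
    return True

def backtrack(variables, domains, constraints, assignment=None):
    """Plain chronological backtracking for a binary CSP.
    constraints maps (u, v) to a predicate on (value_u, value_v)."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment, constraints):
            assignment[var] = value
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]   # undo and try the next value
    return None

# 3-colour a small graph (map colouring as a CSP)
variables = ["A", "B", "C", "D"]
domains = {v: ["r", "g", "b"] for v in variables}
neq = lambda a, b: a != b
constraints = {("A", "B"): neq, ("B", "C"): neq, ("A", "C"): neq, ("C", "D"): neq}
print(backtrack(variables, domains, constraints))
```

Decomposition methods exploit the sparsity of the constraint graph that this brute-force search ignores.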
Interrelations between different canonical descriptions of dissipative systems
NASA Astrophysics Data System (ADS)
Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.
2015-04-01
There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.
Phonological Constraints on Children's Production of English Third Person Singular -S
ERIC Educational Resources Information Center
Song, Jae Yung; Sundara, Megha; Demuth, Katherine
2009-01-01
Purpose: Children variably produce grammatical morphemes at early stages of development, often omitting inflectional morphemes in obligatory contexts. This has typically been attributed to immature syntactic or semantic representations. In this study, the authors investigated the hypothesis that children's variable production of the 3rd person…
Comparison of Animal, Action and Phonemic Fluency in Aphasia
ERIC Educational Resources Information Center
Faroqi-Shah, Yasmeen; Milman, Lisa
2018-01-01
Background: The ability to generate words that follow certain constraints, or verbal fluency, is a sensitive indicator of neurocognitive impairment, and is impacted by a variety of variables. Aims: To investigate the effect of post-stroke aphasia, elicitation category and linguistic variables on verbal fluency performance. Methods &…
Piazza, Bryan P.; LaPeyre, Megan K.; Keim, B.D.
2010-01-01
Climate creates environmental constraints (filters) that affect the abundance and distribution of species. In estuaries, these constraints often result from variability in water flow properties and environmental conditions (i.e. water flow, salinity, water temperature) and can have significant effects on the abundance and distribution of commercially important nekton species. We investigated links between large-scale climate variability and juvenile brown shrimp Farfantepenaeus aztecus abundance in Breton Sound estuary, Louisiana (USA). Our goals were to (1) determine if a teleconnection exists between local juvenile brown shrimp abundance and the El Niño Southern Oscillation (ENSO) and (2) relate that linkage to environmental constraints that may affect juvenile brown shrimp recruitment to, and survival in, the estuary. Our results identified a teleconnection between winter ENSO conditions and juvenile brown shrimp abundance in Breton Sound estuary the following spring. The physical connection results from the impact of ENSO on winter weather conditions in Breton Sound (air pressure, temperature, and precipitation). The effect on juvenile brown shrimp abundance lagged ENSO by 3 mo: lower than average abundances of juvenile brown shrimp were caught in springs following winter El Niño events, and higher than average abundances were caught in springs following La Niña winters. Salinity was the dominant ENSO-forced environmental filter for juvenile brown shrimp. Spring salinity was cumulatively forced by winter river discharge, winter wind forcing, and spring precipitation. Thus, predicting brown shrimp abundance requires incorporating climate variability into models.
$L^1$ penalization of volumetric dose objectives in optimal control of PDEs
Barnard, Richard C.; Clason, Christian
2017-02-11
This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
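The textbook way to fold a linear equality constraint into a QUBO objective is a quadratic penalty, which for binary variables places the linear terms on the diagonal. This is the generic penalty route, not the paper's constructive mapping that eliminates the continuous PDE variables, but it shows the constrained-to-unconstrained reduction on a toy instance:

```python
import itertools
import numpy as np

def to_qubo(Q, A, b, lam):
    """Fold A x = b into the objective: x^T Q x + lam * ||A x - b||^2.
    For binary x the linear penalty term -2 b^T A x lands on the
    diagonal, since x_i^2 = x_i; the constant b^T b is dropped."""
    return Q + lam * (A.T @ A - 2.0 * np.diag(b @ A))

def brute_force(Qp):
    """Exhaustive minimization over binary vectors (toy sizes only)."""
    n = Qp.shape[0]
    return min((np.array(bits) for bits in itertools.product([0, 1], repeat=n)),
               key=lambda x: x @ Qp @ x)

# Pick the two cheapest of three binary controls subject to x1 + x2 + x3 = 2
Q = np.diag([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])
x_star = brute_force(to_qubo(Q, A, b, lam=10.0))
print(x_star)  # [1 1 0]: satisfies the constraint at minimum cost
```

An AQO would replace the brute-force step; the paper's contribution is keeping the QUBO size tied to the number of discrete controls rather than to the PDE discretization.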
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Fang, Yilin; Scheibe, Timothy D; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E; Lovley, Derek R
2011-03-25
The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with IBM ILOG CPLEX linear optimizer base system (C library). The models were integrated with BABEL, a language interoperability tool. 
The modeling system is designed in such a way that constraint-based models targeting different microorganisms or competing organism communities can be easily plugged into the system. Constraint-based modeling is very costly given the size of a genome-scale reaction network. To save computation time, a binary tree is traversed to examine the concentration and solution pool generated during the simulation in order to decide whether the constraint-based model should be called. We also show preliminary results from the integrated model, including a comparison of the direct and indirect coupling approaches, and evaluate the ability of the approach to simulate a field experiment. Published by Elsevier B.V.
Maximum-entropy probability distributions under Lp-norm constraints
NASA Technical Reports Server (NTRS)
Dolinar, S.
1991-01-01
Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real-valued) continuous random variables and for integer-valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer-valued discrete random variables are obtained by applying the differential entropy results to continuous random variables that approximate the integer-valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding of this kind is useful in evaluating the performance of data compression schemes.
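For p = 2 the closed form is familiar: among all densities with a given L2 norm (root-mean-square value) sigma, the zero-mean Gaussian attains the maximum differential entropy, and the straight-line relationship above has slope 1 against the logarithm of the norm. A short numeric check using only the standard Gaussian and uniform entropy formulas:

```python
import math

def max_entropy_l2(sigma):
    """Maximum differential entropy at L2 norm sigma, attained by
    the zero-mean Gaussian: h = 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

# Straight line: h = ln(sigma) + 0.5*ln(2*pi*e), so the slope against
# the log of the L2 norm is exactly 1.
slope = (max_entropy_l2(math.e) - max_entropy_l2(1.0)) / 1.0

# Competitor at the same L2 norm: the uniform density on
# [-sqrt(3), sqrt(3)] also has sigma = 1 but entropy ln(2*sqrt(3)).
uniform_entropy = math.log(2 * math.sqrt(3))
print(slope, max_entropy_l2(1.0) > uniform_entropy)
```

The same slope-1 behaviour holds for other finite p with a different intercept, which is the "simple straight line relationship" the abstract refers to.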
New limits on coupled dark energy model after Planck 2015
NASA Astrophysics Data System (ADS)
Li, Hang; Yang, Weiqiang; Wu, Yabo; Jiang, Ying
2018-06-01
We used Planck 2015 cosmic microwave background anisotropy, baryon acoustic oscillation, type-Ia supernova, redshift-space distortion, and weak gravitational lensing data to test the model parameter space of coupled dark energy. We assumed a constant and a time-varying equation of state parameter for dark energy, and treated dark matter and dark energy as fluids whose energy transfer is proportional to a combined term of the energy density and equation of state, such as Q = 3Hξ(1 + wx)ρx and Q = 3Hξ[1 + w0 + w1(1 - a)]ρx; including the factor (1 + wx) in the energy exchange allows the full equation-of-state space to be explored. According to the joint observational constraint, the results showed wx = -1.006 (+0.047, -0.027) and ξ = 0.098 (+0.026, -0.098) for coupled dark energy with a constant equation of state, and w0 = -1.076 (+0.085, -0.076), w1 = -0.069 (+0.361, -0.319), and ξ = 0.210 (+0.048, -0.210) for a variable equation of state. We did not find any clear evidence for coupling in the dark fluids within the 1σ region.
NASA Astrophysics Data System (ADS)
Hu, Shun; Shi, Liangsheng; Zha, Yuanyuan; Williams, Mathew; Lin, Lin
2017-12-01
Improvements to agricultural water and crop management require detailed information on crop and soil states and their evolution. Data assimilation provides an attractive way of obtaining this information by integrating measurements with a model in a sequential manner. However, data assimilation for the soil-water-atmosphere-plant (SWAP) system has not yet been comprehensively explored, owing to the large number of variables and parameters in the system. In this study, simultaneous state-parameter estimation using the ensemble Kalman filter (EnKF) was employed to evaluate data assimilation performance and to provide advice on measurement design for the SWAP system. The results demonstrated that a proper selection of the state vector is critical to effective data assimilation. In particular, updating the development stage was able to avoid the negative effect of "phenological shift", caused by contrasting phenological stages across ensemble members. The simultaneous state-parameter estimation (SSPE) assimilation strategy outperformed the updating-state-only (USO) strategy because of its ability to alleviate the inconsistency between model variables and parameters. However, the performance of the SSPE strategy could deteriorate with an increasing number of uncertain parameters as a result of soil stratification and limited knowledge of crop parameters. In addition to the most easily available surface soil moisture (SSM) and leaf area index (LAI) measurements, deep soil moisture, grain yield, or other auxiliary data were required to provide sufficient constraints on parameter estimation and to assure data assimilation performance. This study provides insight into the response of soil moisture and grain yield to data assimilation in the SWAP system and is helpful for soil moisture movement and crop growth modeling and for measurement design in practice.
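Simultaneous state-parameter estimation augments the state vector with the uncertain parameters, so the ensemble cross-covariance carries observation information into the parameters. The EnKF analysis step can be sketched on a toy linear system (invented numbers and a scalar parameter, not the SWAP model):

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, H, y, R):
    """Stochastic EnKF analysis for an augmented [state; parameter] ensemble.
    ensemble: (n_aug, N); H: (m, n_aug); y: (m,); R: (m, m)."""
    N = ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - mean
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturbed observations keep the analysis spread consistent
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ensemble + K @ (Y - H @ ensemble)

# Toy system: the state x depends on an unknown parameter theta, x = 2*theta
N = 500
theta = rng.normal(1.0, 0.5, N)                # parameter prior (mean 1.0)
x = 2.0 * theta + rng.normal(0, 0.1, N)        # state ensemble
ens = np.vstack([x, theta])                    # augmented vector [state; parameter]
H = np.array([[1.0, 0.0]])                     # only the state is observed
analysis = enkf_update(ens, H, np.array([4.0]), np.array([[0.01]]))
print(analysis[1].mean())                      # parameter pulled toward ~2.0
```

Observing x = 4 updates theta through the x-theta cross-covariance, which is exactly how SSPE lets soil-moisture or LAI observations constrain soil and crop parameters.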
MIIC online: a web server to reconstruct causal or non-causal networks from non-perturbative data.
Sella, Nadir; Verny, Louis; Uguzzoni, Guido; Affeldt, Séverine; Isambert, Hervé
2018-07-01
We present a web server running the MIIC algorithm, a network learning method combining constraint-based and information-theoretic frameworks to reconstruct causal, non-causal or mixed networks from non-perturbative data, without the need for an a priori choice on the class of reconstructed network. Starting from a fully connected network, the algorithm first removes dispensable edges by iteratively subtracting the most significant information contributions from indirect paths between each pair of variables. The remaining edges are then filtered based on their confidence assessment or oriented based on the signature of causality in observational data. MIIC online server can be used for a broad range of biological data, including possible unobserved (latent) variables, from single-cell gene expression data to protein sequence evolution and outperforms or matches state-of-the-art methods for either causal or non-causal network reconstruction. MIIC online can be freely accessed at https://miic.curie.fr. Supplementary data are available at Bioinformatics online.
A General Reversible Hereditary Constitutive Model. Part 1; Theoretical Developments
NASA Technical Reports Server (NTRS)
Saleeb, A. F.; Arnold, S. M.
1997-01-01
Using an internal-variable formalism as a starting point, we describe the viscoelastic extension of a previously-developed viscoplasticity formulation of the complete potential structure type. It is mainly motivated by experimental evidence for the presence of rate/time effects in the so-called quasilinear, reversible, material response range. Several possible generalizations are described, in the general format of hereditary-integral representations for non-equilibrium, stress-type, state variables, both for isotropic as well as anisotropic materials. In particular, thorough discussions are given on the important issues of thermodynamic admissibility requirements for such general descriptions, resulting in a set of explicit mathematical constraints on the associated kernel (relaxation and creep compliance) functions. In addition, a number of explicit, integrated forms are derived, under stress and strain control to facilitate the parametric and qualitative response characteristic studies reported here, as well as to help identify critical factors in the actual experimental characterizations from test data that will be reported in Part II.
One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1991-01-01
The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Distributed control as well as boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state to construct an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method on distributed control, pointwise control, and boundary control problems.
NASA Technical Reports Server (NTRS)
Moncrief, V.; Teitelboim, C.
1972-01-01
It is shown that if the Hamiltonian constraint of general relativity is imposed as a restriction on the Hamilton principal functional in the classical theory, or on the state functional in the quantum theory, then the momentum constraints are automatically satisfied. This result holds both for closed and open spaces and it means that the full content of the theory is summarized by a single functional equation of the Tomonaga-Schwinger type.
Using atmospheric 14CO to constrain OH variability: concept and potential for future measurements
NASA Astrophysics Data System (ADS)
Petrenko, V. V.; Murray, L. T.; Smith, A. W.
2017-12-01
The primary source of 14C-containing carbon monoxide (14CO) in the atmosphere is via 14C production from 14N by secondary cosmic rays, and the primary sink is removal by OH. Variations in the global abundance of 14CO that are not explained by variations in 14C production are mainly driven by variations in the global abundance of OH. Monitoring OH variability via methyl chloroform is becoming increasingly difficult as methyl chloroform abundance continues to decline. Measurements of atmospheric 14CO have previously been used successfully to infer OH variability. However, these measurements are currently continuing at only one location (Baring Head, New Zealand), which is insufficient to infer global trends. We propose to restart global 14CO monitoring with the aim of providing another constraint on OH variability. A new analytical system for 14CO sampling and measurements is in development, which will make it possible to greatly reduce the required sample air volumes (previously ≥ 400 L) and simplify field logistics. A set of test measurements is planned, with sampling at the Mauna Loa Observatory. Preliminary work with a state-of-the-art chemical transport model is identifying the most promising locations for global 14CO sampling.
Matrix-type transdermal films to enhance simvastatin ex vivo skin permeability.
El-Say, Khalid M; Ahmed, Osama A A; Aljaeid, Bader M; Zidan, Ahmed S
2017-06-01
This study aimed at employing a Plackett-Burman design to screen formulation variables that affect the quality of matrix-type simvastatin (SMV) transdermal films. To achieve this goal, 12 formulations were prepared by the casting method. The investigated variables were Eudragit RL percentage, polymer mixture percentage, plasticizer type, plasticizer percentage, enhancer type, enhancer percentage, and dichloromethane fraction in the organic phase. The films were evaluated for physicochemical properties and ex vivo SMV permeation. SMV initial flux, delayed flux, diffusivity, and permeability coefficient were calculated on the delayed-flux phase, with the constraint of minimizing the initial flux while approaching a steady-state flux. The obtained results revealed flat films with homogeneous distribution of SMV within the films. Thickness values changed from 65 to 180 μm with changing factor combinations. Most of the permeation profiles showed a sustained-release character, with a fast permeation phase followed by a slow phase. Analysis of variance (ANOVA) showed significant effects (p < 0.05) of the investigated variables on the responses, with Prob > F values of 0.0147, 0.0814, 0.0063 and 0.0142 for the initial and delayed fluxes, permeability coefficients and diffusivities, respectively. The findings of the screening study identified the significant variables to be carried forward into a full optimization study for a promising alternative drug delivery system.
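A 12-run Plackett-Burman design accommodates up to 11 two-level factors (the study used 7) and is conventionally built from cyclic shifts of a standard generator row plus a final row of low levels. A sketch of that standard construction (not necessarily the study's exact run order):

```python
import numpy as np

# Standard 11-element Plackett-Burman generator for the 12-run design
GEN = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

def plackett_burman_12():
    """12 runs x 11 two-level factors: 11 cyclic shifts of the
    generator row, plus a row of all low levels (-1)."""
    rows = [np.roll(GEN, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

D = plackett_burman_12()
D7 = D[:, :7]   # for a 7-factor study, take any 7 columns
# Screening efficiency rests on orthogonality: columns are balanced
# (equal +1/-1 counts) and pairwise orthogonal.
print((D.T @ D == 12 * np.eye(11)).all())  # True
```

Each main effect is then estimated independently from only 12 runs, which is why the design suits screening before a full optimization study.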
NASA Astrophysics Data System (ADS)
Saba, Vincent S.; Hyde, Kimberly J. W.; Rebuck, Nathan D.; Friedland, Kevin D.; Hare, Jonathan A.; Kahru, Mati; Fogarty, Michael J.
2015-02-01
The continental shelf of the Northeast United States and Nova Scotia is a productive marine ecosystem that supports a robust biomass of living marine resources. Understanding marine ecosystem sensitivity to changes in the physical environment can start with the first-order response of phytoplankton (i.e., chlorophyll a), the base of the marine food web. However, the primary physical associations to the interannual variability of chlorophyll a in these waters are unclear. Here we used ocean color satellite measurements and identified the local and remote physical associations to interannual variability of spring surface chlorophyll a from 1998 to 2013. The highest interannual variability of chlorophyll a occurred in March and April on the northern flank of Georges Bank, the western Gulf of Maine, and Nantucket Shoals. Complex interactions between winter wind speed over the Shelf, local winter water levels, and the relative proportions of Atlantic versus Labrador Sea source waters entering the Gulf of Maine from the previous summer/fall were associated with the variability of March/April chlorophyll a in Georges Bank and the Gulf of Maine. Sea surface temperature and sea surface salinity were not robust correlates to spring chlorophyll a. Surface nitrate in the winter was not a robust correlate to chlorophyll a or the physical variables in every case suggesting that nitrate limitation may not be the primary constraint on the interannual variability of the spring bloom throughout all regions. Generalized linear models suggest that we can resolve 88% of March chlorophyll a interannual variability in Georges Bank using lagged physical data.
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
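Method (1), merging adjacent elements, can be made concrete with a small aggregation operator: the reduced vector keeps block means, and the within-block structure it discards is exactly what aggregation error measures. A toy sketch with invented dimensions:

```python
import numpy as np

def coarsen_operator(n, block):
    """Restriction matrix that merges `block` adjacent native-resolution
    state elements into one reduced element by averaging, i.e. a flat
    prior relationship is imposed within each block."""
    p = n // block
    G = np.zeros((p, n))
    for i in range(p):
        G[i, i * block:(i + 1) * block] = 1.0 / block
    return G

n, block = 8, 2
G = coarsen_operator(n, block)
x = np.arange(n, dtype=float)        # native-resolution state vector
x_red = G @ x                        # reduced state vector (block means)
x_back = np.repeat(x_red, block)     # prolongation back to the native grid
agg_err = x - x_back                 # structure the aggregation cannot represent
print(x_red)    # [0.5 2.5 4.5 6.5]
print(agg_err)  # the lost within-block gradient
```

Larger blocks shrink the smoothing error (fewer elements to constrain) while growing `agg_err`; the paper's contribution is quantifying both as functions of the reduced dimension so the crossover can be found.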
Breaking evolutionary constraint with a tradeoff ratchet
de Vos, Marjon G. J.; Dawid, Alexandre; Sunderlikova, Vanda; Tans, Sander J.
2015-01-01
Epistatic interactions can frustrate and shape evolutionary change. Indeed, phenotypes may fail to evolve when essential mutations are only accessible through positive selection if they are fixed simultaneously. How environmental variability affects such constraints is poorly understood. Here, we studied genetic constraints in fixed and fluctuating environments using the Escherichia coli lac operon as a model system for genotype–environment interactions. We found that, in different fixed environments, all trajectories that were reconstructed by applying point mutations within the transcription factor–operator interface became trapped at suboptima, where no additional improvements were possible. Paradoxically, repeated switching between these same environments allows unconstrained adaptation by continuous improvements. This evolutionary mode is explained by pervasive cross-environmental tradeoffs that reposition the peaks in such a way that trapped genotypes can repeatedly climb ascending slopes and hence, escape adaptive stasis. Using a Markov approach, we developed a mathematical framework to quantify the landscape-crossing rates and show that this ratchet-like adaptive mechanism is robust in a wide spectrum of fluctuating environments. Overall, this study shows that genetic constraints can be overcome by environmental change and that cross-environmental tradeoffs do not necessarily impede but also, can facilitate adaptive evolution. Because tradeoffs and environmental variability are ubiquitous in nature, we speculate this evolutionary mode to be of general relevance. PMID:26567153
Structural optimization for joined-wing synthesis
NASA Technical Reports Server (NTRS)
Gallman, John W.; Kroo, Ilan M.
1992-01-01
The differences between fully stressed and minimum-weight joined-wing structures are identified, and these differences are quantified in terms of weight, stress, and direct operating cost. A numerical optimization method and a fully stressed design method are used to design joined-wing structures. Both methods determine the sizes of 204 structural members, satisfying 1020 stress constraints and five buckling constraints. Monotonic splines are shown to be a very effective way of linking spanwise distributions of material to a few design variables. Both linear and nonlinear analyses are employed to formulate the buckling constraints. With a constraint on buckling, the fully stressed design is shown to be very similar to the minimum-weight structure. It is suggested that a fully stressed design method based on nonlinear analysis is adequate for an aircraft optimization study.
Moras, Gerard; Fernández-Valdés, Bruno; Vázquez-Guerrero, Jairo; Tous-Fajardo, Julio; Exel, Juliana; Sampaio, Jaime
2018-05-24
This study described the variability in acceleration during a resistance training task performed on horizontal inertial flywheels without (NOBALL) or with (BALL) the constraint of catching and throwing a rugby ball. Twelve elite rugby players (mean±SD: age 25.6±3.0 years, height 1.82±0.07 m, weight 94.0±9.9 kg) performed the resistance training task in both conditions. Players had five minutes of a standardized warm-up, followed by two series of six repetitions in both conditions: in the first three repetitions the intensity was progressively increased, while the last three were performed at maximal voluntary effort. Thereafter, the participants performed two series of eight repetitions of each condition on two days and in random order, with a minimum of 10 min between series. The structure of variability was analysed using non-linear measures of entropy. Mean changes (%; ±90% CL) of 4.64; ±3.1 g for mean acceleration and 39.48; ±36.63 a.u. for sample entropy indicated likely and very likely increases in the BALL condition. Multiscale entropy also showed higher unpredictability of acceleration under the BALL condition, especially at higher time scales. The application of match-specific constraints in resistance training for rugby players elicits different amounts of variability of body acceleration across multiple physiological time scales. Understanding the non-linear processes inherent to the manipulation of resistance training variables with constraints and the associated motor adaptations may help coaches and trainers enhance the effectiveness of physical training and, ultimately, better understand and maximize sports performance. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun
2015-09-01
Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, the multiple constraints present in real applications are less studied, especially when one task is involved with several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for task permutation representation in order to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent during decoding and is penalized in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing TLBO and VNS.
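The random-keys representation mentioned above has a very compact core: each task carries a continuous "key", and ranking the keys yields a discrete task order, so a continuous optimizer like TLBO can search over permutations. A minimal sketch (function and key values are illustrative, not from the paper):

```python
import numpy as np

def decode_random_keys(keys):
    """Map a real-valued key vector to a task permutation by ranking its entries."""
    return np.argsort(keys)

# One continuous TLBO individual for four tasks (values are illustrative)
keys = np.array([0.71, 0.12, 0.95, 0.33])
order = decode_random_keys(keys)
print(order)  # tasks sorted by ascending key: [1, 3, 0, 2]
```

Because any real vector decodes to a valid permutation, the continuous update rules of TLBO (or VNS moves in key space) never produce infeasible task orderings; constraint handling can then focus on directions, zoning, and synchronism as the abstract describes.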
NASA Astrophysics Data System (ADS)
Wang, Jiang; Ferguson, Andrew
Ring polymers offer a wide range of natural and engineered functions and applications, including as circular bacterial DNA, crown ethers for cation chelation, and "molecular machines" such as mechanical nanoswitches. The morphology and dynamics of ring polymers are governed by the chemistry and degree of polymerization of the ring, and intramolecular and supramolecular topological constraints such as knots or mechanically-interlocked rings. We perform molecular dynamics simulations of polyethylene ring polymers as a function of degree of polymerization and in different topological states, including a knotted state, catenane state (two interlocked rings), and borromean state (three interlocked rings). Applying nonlinear manifold learning to our all-atom simulation trajectories, we extract low-dimensional free energy surfaces governing the accessible conformational states and their relative thermodynamic stability. The free energy surfaces reveal how degree of polymerization and topological constraints affect the thermally accessible conformations, chiral symmetry breaking, and folding and collapse pathways of the rings, and present a means to rationally engineer ring size and topology to preferentially stabilize particular conformational states.
Quasi-dynamic Earthquake Cycle Simulation in a Viscoelastic Medium with Memory Variables
NASA Astrophysics Data System (ADS)
Hirahara, K.; Ohtani, M.; Shikakura, Y.
2011-12-01
Earthquake cycle simulations based on rate and state friction laws have successfully reproduced the observed complex earthquake cycles at subduction zones. Most simulations have assumed elastic media. The lower crust and the upper mantle have, however, viscoelastic properties, which cause postseismic stress relaxation. Hence the slip evolution on the plate interfaces or the faults over long earthquake cycles differs from that in elastic media. In particular, viscoelasticity plays an important role in the interactive occurrence of inland and great interplate earthquakes. In viscoelastic media, the stress is usually calculated by the temporal convolution of the slip response function matrix and the slip deficit rate vector, which requires the past history of slip rates at all cells. Even if the convolution is properly truncated, it requires huge computations. This is why few simulation studies have considered viscoelastic media so far. In this study, we examine a method using memory variables or anelastic functions, which has been developed for the time-domain finite-difference calculation of seismic waves in a dissipative medium (e.g., Emmerich and Korn, 1987; Moczo and Kristek, 2005). The procedure for stress calculation with memory variables is as follows. First, we approximate the time-domain slip response function calculated in a viscoelastic medium with a series of relaxation functions, with coefficients and relaxation times derived from a generalized Maxwell body model. Then we can define a time-domain, material-independent memory variable or anelastic function for each relaxation mechanism. Each time-domain memory variable satisfies a first-order differential equation. As a result, we can calculate the stress simply as the product of the unrelaxed modulus and the slip deficit minus the sum of memory variables, without temporal convolution. With respect to computational cost, we can summarize as follows.
Dividing the plate interface into N cells, in elastic media the stress at all cells is calculated as the product of the slip response function matrix and the slip deficit vector. The computational cost is O(N**2). With the H-matrices method, we can reduce this to O(N)-O(NlogN) (Ohtani et al., 2011). The memory size is also reduced from O(N**2) to O(N). In viscoelastic media, the product of the unrelaxed modulus matrix and the vector of the slip deficit minus the sum of memory variables costs O(N) with the H-matrices method, the same as in elastic media. If we use m relaxation functions, m x N differential equations are additionally solved at each time step. The increase in memory size is (4m+1) x N**2. To approximate the slip response function, we need to estimate the coefficients and relaxation times of the m relaxation functions non-linearly, subject to constraints. Because it is difficult to execute the non-linear least-squares estimation with constraints, we consider only m=2 while satisfying the constraints. Test calculations in a layered or 3-D heterogeneous viscoelastic structure show that this gives a satisfactory approximation. As an example, we report a 2-D earthquake cycle simulation for the 2011 giant Tohoku earthquake in a layered viscoelastic medium.
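The memory-variable trick can be sketched for a single cell: each relaxation mechanism carries one variable obeying a first-order ODE, so no slip-rate history needs to be stored or convolved. This is a minimal illustration under assumed coefficients and a simplified relaxation law, not the paper's formulation; after a step in slip deficit, the stress relaxes from the unrelaxed toward the relaxed value.

```python
import numpy as np

# One cell, m = 2 relaxation mechanisms. All numerical values are assumed.
mu_u = 30e9                      # unrelaxed modulus [Pa]
a = np.array([0.2, 0.3])         # relaxation coefficients
tau = np.array([1.0, 10.0])      # relaxation times [yr]

dt, n_steps = 0.01, 10000        # explicit Euler over t = 100 yr
d = 1.0                          # constant slip deficit after a step at t = 0
m = np.zeros(2)                  # memory variables, one per mechanism
for _ in range(n_steps):
    m += dt * (a * d - m) / tau  # first-order ODE update, no convolution

# Stress = unrelaxed modulus * (slip deficit - sum of memory variables);
# after many relaxation times it approaches mu_u * (1 - sum(a)) * d
stress = mu_u * (d - m.sum())
print(abs(stress - mu_u * (1 - a.sum()) * d) / mu_u < 1e-3)
```

The per-step cost is O(m) per cell instead of a sum over the whole slip-rate history, which is the scaling advantage the abstract describes.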
Optimal aeroassisted orbital transfer with plane change using collocation and nonlinear programming
NASA Technical Reports Server (NTRS)
Shi, Yun. Y.; Nelson, R. L.; Young, D. H.
1990-01-01
The fuel optimal control problem arising in the non-planar orbital transfer employing aeroassisted technology is addressed. The mission involves the transfer from high energy orbit (HEO) to low energy orbit (LEO) with orbital plane change. The basic strategy here is to employ a combination of propulsive maneuvers in space and aerodynamic maneuvers in the atmosphere. The basic sequence of events for the aeroassisted HEO to LEO transfer consists of three phases. In the first phase, the orbital transfer begins with a deorbit impulse at HEO which injects the vehicle into an elliptic transfer orbit with perigee inside the atmosphere. In the second phase, the vehicle is optimally controlled by lift and bank angle modulations to perform the desired orbital plane change and to satisfy heating constraints. Because of the energy loss during the turn, an impulse is required to initiate the third phase to boost the vehicle back to the desired LEO orbital altitude. The third impulse is then used to circularize the orbit at LEO. The problem is solved by a direct optimization technique which uses piecewise polynomial representation for the state and control variables and collocation to satisfy the differential equations. This technique converts the optimal control problem into a nonlinear programming problem which is solved numerically. Solutions were obtained for cases with and without heat constraints and for cases of different orbital inclination changes. The method appears to be more powerful and robust than other optimization methods. In addition, the method can handle complex dynamical constraints.
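The conversion from optimal control to nonlinear programming rests on collocation: the differential equations are replaced by algebraic "defect" constraints that the NLP drives to zero. The sketch below shows one common defect form (Hermite-Simpson) on a toy scalar ODE; the dynamics and step size are illustrative stand-ins, not the aeroassisted-transfer model.

```python
import numpy as np

def f(x, u):
    """Toy scalar dynamics x' = f(x, u); stand-in for the vehicle equations."""
    return -x + u

def hermite_simpson_defect(xk, xk1, uk, uk1, h):
    """Defect constraint for one collocation interval of width h."""
    um = 0.5 * (uk + uk1)                                  # midpoint control
    xm = 0.5 * (xk + xk1) + (h / 8.0) * (f(xk, uk) - f(xk1, uk1))
    return xk1 - xk - (h / 6.0) * (f(xk, uk) + 4.0 * f(xm, um) + f(xk1, uk1))

# For the exact trajectory x(t) = exp(-t) with u = 0, the defect is tiny
h = 0.1
defect = hermite_simpson_defect(1.0, np.exp(-h), 0.0, 0.0, h)
print(abs(defect) < 1e-7)
```

Stacking one such defect per interval, plus boundary and path constraints (e.g. heating limits), yields exactly the kind of NLP the abstract hands to a numerical solver.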
Towards equation of state of dark energy from quasar monitoring: Reverberation strategy
NASA Astrophysics Data System (ADS)
Czerny, B.; Hryniewicz, K.; Maity, I.; Schwarzenberg-Czerny, A.; Życki, P. T.; Bilicki, M.
2013-08-01
Context. High-redshift quasars can be used to constrain the equation of state of dark energy. They can serve as a complementary tool to supernovae Type Ia, especially at z > 1. Aims: The method is based on the determination of the size of the broad line region (BLR) from the emission line delay, the determination of the absolute monochromatic luminosity either from the observed statistical relation or from a model of the formation of the BLR, and the determination of the observed monochromatic flux from photometry. This allows the luminosity distance to a quasar to be obtained, independently from its redshift. The accuracy of the measurements is, however, a key issue. Methods: We modeled the expected accuracy of the measurements by creating artificial quasar monochromatic lightcurves and responses from the BLR under various assumptions about the variability of a quasar, BLR extension, distribution of the measurements in time, accuracy of the measurements, and the intrinsic line variability. Results: We show that the five-year monitoring of a single quasar based on the Mg II line should give an accuracy of 0.06-0.32 mag in the distance modulus which will allow new constraints to be put on the expansion rate of the Universe at high redshifts. Successful monitoring of higher redshift quasars based on C IV lines requires proper selection of the objects to avoid sources with much higher levels of the intrinsic variability of C IV compared to Mg II.
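The emission-line delay at the heart of this method is commonly estimated by locating the peak of the cross-correlation between the continuum and line light curves. The toy below recovers a known lag on a regular grid; this is only the principle (real campaigns use irregular sampling with interpolated cross-correlation functions), and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 500, 1.0)                       # observation epochs [days]
continuum = np.cumsum(rng.normal(size=t.size))   # random-walk-like variability
true_lag = 40                                    # assumed BLR delay [days]
line = np.roll(continuum, true_lag)              # line = delayed continuum echo

# Cross-correlate over trial lags and take the peak
lags = np.arange(0, 100)
ccf = [np.corrcoef(continuum[:-100], line[l:l + t.size - 100])[0, 1]
       for l in lags]
recovered = lags[int(np.argmax(ccf))]
print(recovered)
```

The recovered lag, combined with an absolute monochromatic luminosity and the observed flux, is what lets the method turn a monitored quasar into a luminosity-distance measurement.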
Trends and variability in the Hadley circulation over the Last Millennium from the proxy record
NASA Astrophysics Data System (ADS)
Horlick, K. A.; Noone, D.; Hakim, G. J.; Tardif, R.; Anderson, D. M.; Perkins, W. A.; Erb, M. P.; Steig, E. J.
2017-12-01
The Hadley circulation (HC) is the dominant atmospheric overturning circulation controlling variability in precipitation distribution in the tropics and subtropics, affecting agricultural production and water resource allocation, among other societal dependencies. A lack of pre-instrumental data-model synthesis has been cited as the barrier to diagnostic analyses of the variability in width, position, and intensity of the HC and its response to anthropogenic forcing. We analyze the HC, and its rising limb associated with the Intertropical Convergence Zone (ITCZ), over the past 1000 years using the Last Millennium Reanalysis (LMR) (Hakim et al. 2016). The LMR systematically blends the dynamical constraints of climate models with a proxy network of coral, tree ring, and ice core records. It allows for a spatiotemporal analysis with robust uncertainty measures. A three-dimensional analysis of LMR wind fields shows a centennial-scale circulatory trend over the last 200 years resembling what might be expected from an ENSO- and PDO-like structure. An observed aridification of both the central equatorial Pacific and the southwest United States, a strengthening of the east-west sea surface temperature and sea level pressure gradients in the equatorial Pacific, and a strengthening of the Walker overturning circulation suggest a more "La Niña-like" mean state. This is compared to our statistical description of the centennial-scale mean circulation and variability of the previous millennia. Similarly, precipitation and relative humidity trends suggest expansion and asymmetric meridional movement of the Hadley circulation as a result of asymmetric shifts in mean ITCZ position and intensity. These observations are then compared to free-running model simulations, other instrumental reanalysis products, and late-Holocene aerosol, solar, and greenhouse forcings.
This LMR reconstruction improves upon previous work by enabling a proxy-consistent, quantitative analysis of Hadley circulation intensity, structure, and variability rather than relying on simpler empirical reconstructions of variables like surface temperature alone.
Algebraic solution for the forward displacement analysis of the general 6-6 stewart mechanism
NASA Astrophysics Data System (ADS)
Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng
2016-01-01
The solution for the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA). The kinematic constraint equations are transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristics of anti-symmetric matrices, these seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. The elimination weight is increased by changing the degree of one variable, and sixteen equations with four unknown variables can be obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small-sized 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of the solution and reduce the computational burden because of the small-sized resultant matrix.
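The Sylvester resultant underlying the final elimination step has a simple core property: two univariate polynomials share a common root exactly when the determinant of their Sylvester matrix vanishes. A small sketch (generic construction on toy polynomials, not the paper's 9×9 matrix):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists,
    highest degree first."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

p = [1.0, 0.0, -1.0]                   # x^2 - 1
q = [1.0, -1.0]                        # x - 1, shares the root x = 1 with p
print(abs(np.linalg.det(sylvester(p, q))) < 1e-12)   # resultant vanishes
```

Setting the determinant of such a matrix (with entries polynomial in one remaining unknown) to zero is what produces the 40th-degree univariate equation in the abstract.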
Collective coordinates and constrained hamiltonian systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dayi, O.F.
1992-07-01
A general method of incorporating collective coordinates (transformation of fields into an overcomplete basis) with constrained Hamiltonian systems is given, where the original phase space variables and collective coordinates can be bosonic and/or fermionic. This method is illustrated by applying it to the SU(2) Yang-Mills-Higgs theory, and its BFV-BRST quantization is discussed. Moreover, this formalism is used to give a systematic way of converting second class constraints into effectively first class ones, by considering second class constraints as first class constraints and gauge fixing conditions. This approach is applied to the massive superparticle, the Proca Lagrangian, and some topological quantum field theories.
Effect of leading-edge load constraints on the design and performance of supersonic wings
NASA Technical Reports Server (NTRS)
Darden, C. M.
1985-01-01
A theoretical and experimental investigation was conducted to assess the effect of leading-edge load constraints on supersonic wing design and performance. In the effort to delay flow separation and the formation of leading-edge vortices, two constrained, linear-theory optimization approaches were used to limit the loadings on the leading edge of a variable-sweep planform design. Experimental force and moment tests were made on two constrained camber wings, a flat uncambered wing, and an optimum design with no constraints. Results indicate that vortex strength and separation regions were mildest on the severely and moderately constrained wings.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are insufficiently robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Fleet Assignment Using Collective Intelligence
NASA Technical Reports Server (NTRS)
Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.
2004-01-01
Airline fleet assignment involves the allocation of aircraft to a set of flight legs in order to meet passenger demand while satisfying a variety of constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of an agent-based integer optimization algorithm to a "cold start" fleet assignment problem. Results show that the optimizer can successfully solve such highly-constrained problems (129 variables, 184 constraints).
On implementation of the extended interior penalty function. [optimum structural design
NASA Technical Reports Server (NTRS)
Cassis, J. H.; Schmit, L. A., Jr.
1976-01-01
The extended interior penalty function formulation is implemented. A rational method for determining the transition between the interior and extended parts is set forth. The formulation includes a straightforward method for avoiding design points with some negative components, which are physically meaningless in structural analysis. The technique, when extended to problems involving parametric constraints, can facilitate closed form integration of the penalty terms over the most important parts of the parameter interval. The method lends itself well to the use of approximation concepts, such as design variable linking, constraint deletion and Taylor series expansions of response quantities in terms of design variables. Examples demonstrating the algorithm, in the context of planar orthogonal frames subjected to ground motion, are included.
ERIC Educational Resources Information Center
Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.
2006-01-01
The purposes of this study were to: (a) examine the differences within 11 specific kinematic variables and an outcome measure (ball velocity) associated with component developmental levels of humerus and forearm action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences…
ERIC Educational Resources Information Center
Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.
2006-01-01
The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine…
Necessary constraints for an equation of state to be physically acceptable
NASA Astrophysics Data System (ADS)
Sheelendra, K.; Vijay, A.
2018-04-01
We have pointed out the constraints required for an equation of state (EOS) to be physically acceptable and universally applicable over the entire range of compressions for a material at high pressures. We have discussed the boundary conditions valid at zero pressure and infinite pressure. The concept of infinite-pressure behavior has been discussed. It has been emphasized that the Stacey reciprocal K-primed EOS satisfies all the necessary criteria for the validity of an EOS. On the other hand, equations of state reported previously do not satisfy the conditions for physical acceptability of an equation of state.
ERIC Educational Resources Information Center
Zhao, Dong
2014-01-01
This study discusses the educational constraints facing Muslim Hui students and the measures that should be pondered by the Chinese government to address these constraints. Three key research questions are addressed: (1) How does the mainstream Han, Confucian, or the state ideology interact with Hui students' culture? (2) In what ways do ethnic…
Societal constraints related to environmental remediation and decommissioning programmes.
Perko, Tanja; Monken-Fernandes, Horst; Martell, Meritxell; Zeleznik, Nadja; O'Sullivan, Patrick
2017-06-20
The decisions related to decommissioning or environmental remediation projects (D/ER) cannot be isolated from the socio-political and cultural environment. Experiences of the IAEA Member States point out the importance of giving due attention to the societal aspects in project planning and implementation. The purpose of this paper is threefold: i) to systematically review societal constraints that some organisations in different IAEA Member States encounter when implementing D/ER programmes, ii) to identify different approaches to overcome these constraints and iii) to collect examples of existing practices related to the integration of societal aspects in D/ER programmes worldwide. The research was conducted in the context of the IAEA project Constraints to Decommissioning and Environmental Remediation (CIDER). The research results show that societal constraints arise mostly as a result of the different perceptions, attitudes, opinions and concerns of stakeholders towards the risks and benefits of D/ER programmes and due to the lack of stakeholder involvement in planning. There are different approaches to address these constraints, however all approaches have common points: early involvement, respect for different views, mutual understanding and learning. These results are relevant for all on-going and planned D/ER programmes. Copyright © 2017 Elsevier Ltd. All rights reserved.
XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.
2018-01-01
The Quantum Approximate Optimization Algorithm, further generalized as the Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is anticipated due to the much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph coloring as an example, we compare the performance of the XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term to the cost Hamiltonian.
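The constant-of-motion property of the XY mixer can be checked directly on two qubits: evolution under XX + YY mixes |01⟩ and |10⟩ but never leaks amplitude into |00⟩ or |11⟩, so a state of fixed Hamming weight stays in the feasible subspace. A minimal statevector sketch (not the paper's simulation code):

```python
import numpy as np
from scipy.linalg import expm

# Two-qubit XY mixer Hamiltonian: XX + YY
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
XY = np.kron(X, X) + np.kron(Y, Y)

# Start in |01> (Hamming weight 1) and evolve for an arbitrary angle
psi0 = np.array([0, 1, 0, 0], dtype=complex)
psi = expm(-1j * 0.7 * XY) @ psi0

# Probability leaked into the weight-0 (|00>) and weight-2 (|11>) states
leak = abs(psi[0]) ** 2 + abs(psi[3]) ** 2
print(leak < 1e-12)
```

Because the mixer never leaves the feasible sector, no penalty term is needed to suppress constraint-violating bitstrings, which is the motivation for the comparison in the abstract.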
Closing the wedge: Search strategies for extended Higgs sectors with heavy flavor final states
Gori, Stefania; Kim, Ian-Woo; Shah, Nausheen R.; ...
2016-04-29
We consider search strategies for an extended Higgs sector at the high-luminosity LHC14 utilizing multitop final states. In the framework of a two Higgs doublet model, the purely top final states ($t\bar{t}$, 4t) are important channels for heavy Higgs bosons with masses in the wedge above 2m_t and at low values of tanβ, while a 2b2t final state is most relevant at moderate values of tanβ. We find, in the $t\bar{t}$H channel, with H→$t\bar{t}$, that both single and three lepton final states can provide statistically significant constraints at low values of tanβ for m_A as high as ~750 GeV. When systematics on the $t\bar{t}$ background are taken into account, however, the three lepton final state is more powerful, though the precise constraint depends fairly sensitively on lepton fake rates. We also find that neither 2b2t nor $t\bar{t}$ final states provide constraints on additional heavy Higgs bosons with couplings to tops smaller than the top Yukawa, due to expected systematic uncertainties in the $t\bar{t}$ background.
NASA Astrophysics Data System (ADS)
Kokubo, Mitsuru
2015-05-01
The physical mechanisms of the quasar ultraviolet (UV)-optical variability are not well understood despite the long history of observations. Recently, Dexter & Agol presented a model of quasar UV-optical variability which assumes large local temperature fluctuations in the quasar accretion discs. This inhomogeneous accretion disc model is claimed to describe not only the single-band variability amplitude, but also microlensing size constraints and the quasar composite spectral shape. In this work, we examine the validity of the inhomogeneous accretion disc model in the light of quasar UV-optical spectral variability by using five-band multi-epoch light curves for nearly 9000 quasars in the Sloan Digital Sky Survey (SDSS) Stripe 82 region. By comparing the values of the intrinsic scatter σint of the two-band magnitude-magnitude plots for the SDSS quasar light curves and for the simulated light curves, we show that Dexter & Agol's inhomogeneous accretion disc model cannot explain the tight inter-band correlation often observed in the SDSS quasar light curves. This result leads us to conclude that local temperature fluctuations in the accretion discs are not the main driver of the several-year UV-optical variability of quasars, and consequently, that the assumption that quasar accretion discs have large localized temperature fluctuations is not preferred from the viewpoint of the UV-optical spectral variability.
Ensemble theory for slightly deformable granular matter.
Tejada, Ignacio G
2014-09-01
Given a granular system of slightly deformable particles, it is possible to obtain different static and jammed packings subjected to the same macroscopic constraints. These microstates can be compared in a mathematical space defined by the components of the force-moment tensor (i.e., the product of the equivalent stress and the volume of the Voronoi cell). In order to explain the statistical distributions observed there, an athermal ensemble theory can be used. This work proposes a formalism (based on developments of the original theory of Edwards and collaborators) that considers both the internal and the external constraints of the problem. The former give the density of states of the points of this space, and the latter give their statistical weight. The internal constraints are those caused by the intrinsic features of the system (e.g. size distribution, friction, cohesion). Together with the force-balance condition, they determine the possible local states of equilibrium of a particle. Under the principle of equal a priori probabilities, and when no other constraints are imposed, it can be assumed that particles are equally likely to be found in any one of these local states of equilibrium. Then a flat sampling over all these local states turns into a non-uniform distribution in the force-moment space that can be represented with density-of-states functions. Although these functions can be measured, some of their features are explored in this paper. The external constraints are those macroscopic quantities that define the ensemble and are fixed by the protocol. The force-moment, the volume, the elastic potential energy and the stress are some examples of quantities that can be expressed as functions of the force-moment. The associated ensembles are included in the formalism presented here.
Prosthetic Leg Control in the Nullspace of Human Interaction.
Gregg, Robert D; Martin, Anne E
2016-07-01
Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.
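The nullspace-projection idea can be illustrated in a few lines (a toy sketch with a made-up interaction matrix; the real output dynamics of a prosthesis model are far richer):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical interaction matrix: how the (unmeasured) human interaction
# forces enter the candidate output dynamics (3 outputs, 2 force components).
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Rows of P span the left nullspace of J, so the projected outputs y' = P y
# have dynamics that are invariant to the interaction forces F: P @ J = 0.
P = null_space(J.T).T
print(np.allclose(P @ J, 0.0))  # True
```

Any virtual constraint expressed in the projected coordinates then needs no load-cell measurement of F, which is the practical point of the method.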
Minimum weight design of rectangular and tapered helicopter rotor blades with frequency constraints
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Walsh, Joanne L.
1988-01-01
The minimum weight design of a helicopter rotor blade subject to constraints on coupled flap-lag natural frequencies has been studied. A constraint has also been imposed on the minimum value of the autorotational inertia of the blade in order to ensure that it has sufficient inertia to autorotate in the case of engine failure. The program CAMRAD is used for the blade modal analysis and CONMIN is used for the optimization. In addition, a linear approximation analysis involving Taylor series expansion has been used to reduce the analysis effort. The procedure contains a sensitivity analysis which consists of analytical derivatives of the objective function and the autorotational inertia constraint and central finite difference derivatives of the frequency constraints. Optimum designs have been obtained for both rectangular and tapered blades. Design variables include taper ratio, segment weights, and box beam dimensions. It is shown that even when starting with an acceptable baseline design, a significant amount of weight reduction is possible while satisfying all the constraints for both rectangular and tapered blades.
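The central finite-difference derivatives used for the frequency constraints follow the standard second-order scheme, sketched here on a toy function (the quadratic test function is an assumption for checking the gradient, not a blade model):

```python
import numpy as np

def central_difference_gradient(f, x, h=1e-6):
    """Central finite-difference gradient of a scalar constraint function f
    with respect to the design variables x; second-order accurate in h."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

# Check against a known analytical gradient.
f = lambda x: x[0] ** 2 + 3.0 * x[1]
g = central_difference_gradient(f, [2.0, 5.0])
print(g)  # ~ [4.0, 3.0]
```

In the procedure above, each gradient entry costs two constraint evaluations, which is why the analytical derivatives of the objective and autorotational-inertia constraint are preferred wherever they are available.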
Minimum weight design of helicopter rotor blades with frequency constraints
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Walsh, Joanne L.
1989-01-01
The minimum weight design of helicopter rotor blades subject to constraints on fundamental coupled flap-lag natural frequencies has been studied in this paper. A constraint has also been imposed on the minimum value of the blade autorotational inertia to ensure that the blade has sufficient inertia to autorotate in case of an engine failure. The program CAMRAD has been used for the blade modal analysis and the program CONMIN has been used for the optimization. In addition, a linear approximation analysis involving Taylor series expansion has been used to reduce the analysis effort. The procedure contains a sensitivity analysis which consists of analytical derivatives of the objective function and the autorotational inertia constraint and central finite difference derivatives of the frequency constraints. Optimum designs have been obtained for blades in vacuum with both rectangular and tapered box beam structures. Design variables include taper ratio, nonstructural segment weights and box beam dimensions. The paper shows that even when starting with an acceptable baseline design, a significant amount of weight reduction is possible while satisfying all the constraints for blades with rectangular and tapered box beams.
Future Cosmological Constraints From Fast Radio Bursts
NASA Astrophysics Data System (ADS)
Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus
2018-03-01
We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H0 data, we find the biggest improvement comes in the Ω_b h^2 constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h^2, which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate redshift universe, independent of high-redshift CMB data.
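Forecasts of this kind rest on the mean IGM dispersion measure as a function of redshift for a flat ΛCDM background. A rough sketch (the prefactor, f_IGM, and parameter values are illustrative assumptions, not the numbers used in the paper):

```python
import numpy as np
from scipy.integrate import quad

# Illustrative flat-LCDM parameters; the DM normalization scales with
# Omega_b * h^2 and the ionized baryon fraction f_IGM of the IGM.
Om, Ob_h2, f_igm = 0.3, 0.022, 0.83

def E(z):
    """Dimensionless Hubble rate for flat LCDM."""
    return np.sqrt(Om * (1 + z) ** 3 + (1 - Om))

def mean_dm(z, prefactor=935.0):
    """<DM(z)> in pc/cm^3; the prefactor is an assumed normalization
    (proportional to Ob_h2 * f_igm), not a fitted value."""
    integral, _ = quad(lambda zp: (1 + zp) / E(zp), 0.0, z)
    return prefactor * (f_igm / 0.83) * (Ob_h2 / 0.022) * integral

print(mean_dm(0.5), mean_dm(1.0))  # increases monotonically with redshift
```

The forecast problem is then to invert this relation statistically: the large sightline-to-sightline scatter about the mean, noted in the abstract, is what limits the constraining power.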
Object-Oriented Multi-Disciplinary Design, Analysis, and Optimization Tool
NASA Technical Reports Server (NTRS)
Pak, Chan-gi
2011-01-01
An Object-Oriented Optimization (O3) tool was developed that leverages existing tools and practices, and allows the easy integration and adoption of new state-of-the-art software. At the heart of the O3 tool is the Central Executive Module (CEM), which can integrate disparate software packages in a cross platform network environment so as to quickly perform optimization and design tasks in a cohesive, streamlined manner. This object-oriented framework can integrate the analysis codes for multiple disciplines instead of relying on one code to perform the analysis for all disciplines. The CEM was written in FORTRAN and the script commands for each performance index were submitted through the use of the FORTRAN Call System command. In this CEM, the user chooses an optimization methodology, defines objective and constraint functions from performance indices, and provides starting and side constraints for continuous as well as discrete design variables. The structural analysis modules such as computations of the structural weight, stress, deflection, buckling, and flutter and divergence speeds have been developed and incorporated into the O3 tool to build an object-oriented Multidisciplinary Design, Analysis, and Optimization (MDAO) tool.
Shear-induced rigidity in athermal materials
NASA Astrophysics Data System (ADS)
Chakraborty, Bulbul; Sarkar, Sumantra
2014-03-01
In this talk, we present a minimal model of rigidity and plastic failure in solids whose rigidity emerges directly as a result of applied stresses. Examples include shear-jamming (SJ) in dry grains and discontinuous shear thickening (DST) of dense non-Brownian suspensions. Both SJ and DST states are examples of non-equilibrium, self-assembled structures that have evolved to support the load that created them. These are strongly-interacting systems where the interactions arise primarily from the strict constraints of force and torque balance at the local and global scales. Our model is based on a reciprocal-space picture that strictly enforces the local and global constraints, and is, therefore, best suited to capturing the strong correlations in these non-equilibrium systems. The reciprocal space is a tiling whose edges represent contact forces, and whose faces represent grains. A separation of scale between force fluctuations and displacements of grains is used to represent the positional disorder as quenched randomness on variables in the reciprocal space. Comparing theoretical results to experiments, we will argue that the packing fraction controls the strength of the quenched disorder. Sumantra Sarkar et al, Phys. Rev. Lett. 111, 068301 (2013)
NASA Astrophysics Data System (ADS)
Barthélémy, S.; Ricci, S.; Morel, T.; Goutal, N.; Le Pape, E.; Zaoui, F.
2018-07-01
In the context of hydrodynamic modeling, 2D models are appropriate in areas where the flow is not mono-dimensional (confluence zones, flood plains). Nonetheless, the lack of field data and computational cost constraints limit the extensive use of 2D models for operational flood forecasting. Multi-dimensional coupling offers a solution, with 1D models where the flow is mono-dimensional and local 2D models where needed. This solution allows for the representation of complex processes in the 2D models, while the simulated hydraulic state is significantly better than that of a full 1D model. In this study, coupling is implemented between three 1D sub-models and a local 2D model for a confluence on the Adour river (France). A Schwarz algorithm is implemented to guarantee the continuity of the variables at the 1D/2D interfaces, while in situ observations are assimilated in the 1D sub-models to improve results and forecasts in operational mode, as carried out by the French flood forecasting services. An implementation of the coupling and data assimilation (DA) solution with domain decomposition and task/data parallelism is proposed so that it is compatible with operational constraints. The coupling with the 2D model improves the simulated hydraulic state compared to a global 1D model, and DA improves results in both the 1D and 2D areas.
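The Schwarz iteration at the heart of such 1D/2D coupling can be illustrated on the simplest possible case: two overlapping 1D Laplace subdomains exchanging Dirichlet interface values (a toy sketch, not the hydraulic solvers used in the study):

```python
# Alternating Schwarz sketch for u'' = 0 on [0, 1] with u(0)=0, u(1)=1,
# split into overlapping subdomains [0, 0.6] and [0.4, 1]. Since u'' = 0
# has a linear solution between its Dirichlet boundary values, each local
# solve is just an interpolation; the iteration exchanges interface values
# until the two subdomain solutions agree on the overlap.
a, b = 0.4, 0.6          # interface locations
u_at_b = 0.0             # initial guess for the interface value at x = b
for _ in range(50):
    # Solve on [0, b] with u(0)=0, u(b)=u_at_b; evaluate at x = a.
    u_at_a = u_at_b * a / b
    # Solve on [a, 1] with u(a)=u_at_a, u(1)=1; evaluate at x = b.
    u_at_b = u_at_a + (1.0 - u_at_a) * (b - a) / (1.0 - a)
print(u_at_a, u_at_b)    # converges to the global solution u(x) = x
```

The same exchange of interface states, with the 1D and 2D shallow-water solvers in place of the trivial local solves, is what guarantees continuity at the 1D/2D interfaces in the coupled configuration.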
Coordinated path following of multiple underactuated marine surface vehicles along one curve.
Liu, Lu; Wang, Dan; Peng, Zhouhua
2016-09-01
This paper investigates the coordinated path following problem for a fleet of underactuated marine surface vehicles (MSVs) along one curve. The dedicated control design is divided into two tasks. One is to steer each individual underactuated MSV to track the given spatial path, and the other is to force the vehicles to spread out along a parameterized path subject to the constraints of a communication network. Specifically, a robust individual path following controller is developed based on a line-of-sight (LOS) guidance law and a reduced-order extended state observer (ESO). The vehicle sideslip angle due to environmental disturbances can be exactly identified. Then, the vehicle coordination is achieved by a path variable containment approach, under which the path variables are evenly dispersed between two virtual leaders. Another reduced-order ESO is developed to identify the composite disturbance related to the speed of the virtual leaders and neighboring vehicles. The proposed coordination design is distributed, since the reference speed does not need to be known a priori by all vehicles. The input-to-state stability of the closed-loop network system is established via cascade theory. Simulation results demonstrate the effectiveness of the proposed design method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
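The LOS guidance law mentioned above is commonly written as a lookahead-based heading command; a minimal sketch (the straight-line path geometry and numbers are assumptions, and the sideslip/ESO compensation of the paper is omitted):

```python
import numpy as np

def los_heading(path_angle, cross_track_error, lookahead):
    """Lookahead-based LOS guidance: point the desired heading at a spot a
    lookahead distance ahead on the path, which drives the cross-track
    error to zero."""
    return path_angle - np.arctan2(cross_track_error, lookahead)

# Vehicle 2 m to the left of an eastward (path_angle = 0) path, 5 m lookahead:
psi_d = los_heading(0.0, 2.0, 5.0)
print(np.degrees(psi_d))  # negative: steers right, back toward the path
```

On the path (zero cross-track error), the command reduces to the path tangent angle, so the vehicle simply follows the curve.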
Boundary layers in cataclysmic variables: The HEAO-1 X-ray constraints
NASA Technical Reports Server (NTRS)
Jensen, K. A.
1983-01-01
The predictions of the boundary layer model for the X-ray emission from novae are summarized. A discrepancy between theory and the X-ray observations is found. Constraints on the nature of the boundary layers in novae, based on the lack of detections of novae in the HEAO-1 soft X-ray survey, are provided. Temperatures and column densities for optically thick boundary layers in novae are estimated.
Exploring constrained quantum control landscapes
NASA Astrophysics Data System (ADS)
Moore, Katharine W.; Rabitz, Herschel
2012-10-01
The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. 
Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of the impacts that constraints can introduce.
The role of ecosystem memory in predicting inter-annual variations of the tropical carbon balance.
NASA Astrophysics Data System (ADS)
Bloom, A. A.; Liu, J.; Bowman, K. W.; Konings, A. G.; Saatchi, S.; Worden, J. R.; Worden, H. M.; Jiang, Z.; Parazoo, N.; Williams, M. D.; Schimel, D.
2017-12-01
Understanding the trajectory of the tropical carbon balance remains challenging, in part due to large uncertainties in the integrated response of carbon cycle processes to climate variability. Satellite observations of atmospheric CO2 from GOSAT and OCO-2, together with ancillary satellite measurements, provide crucial constraints on continental-scale terrestrial carbon fluxes. However, an integrated understanding of both climate forcings and legacy effects (or "ecosystem memory") on the terrestrial carbon balance is ultimately needed to reduce uncertainty on its future trajectory. Here we use the CARbon DAta-MOdel fraMework (CARDAMOM) diagnostic model-data fusion approach - constrained by an array of C cycle satellite surface observations, including MODIS leaf area, biomass, GOSAT solar-induced fluorescence, as well as "top-down" atmospheric inversion estimates of CO2 and CO surface fluxes from the NASA Carbon Monitoring System Flux (CMS-Flux) - to constrain and predict spatially-explicit tropical carbon state variables during 2010-2015. We find that the combined assimilation of land surface and atmospheric datasets places key constraints on the temperature sensitivity and first order carbon-water feedbacks throughout the tropics and combustion factors within biomass burning regions. By varying the duration of the assimilation period, we find that the prediction skill on inter-annual net biospheric exchange is primarily limited by record length rather than model structure and process representation. We show that across all tropical biomes, quantitative knowledge of memory effects - which account for 30-50% of interannual variations across the tropics - is critical for understanding and ultimately predicting the inter-annual tropical carbon balance.
Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.
2010-01-01
Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. 
As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters, and with the predictions that depend on them, arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.
NASA Astrophysics Data System (ADS)
Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.
2015-12-01
Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %.
The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representations were achieved in particular for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints, in combination with the transfer-function-based regularization approach of mHM, can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.
Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy
NASA Astrophysics Data System (ADS)
Magee, T. M.; Clement, M. A.; Zagona, E. A.
2012-12-01
Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. 
Wind factors include wind penetration level, variability due to geographic distribution of wind resources, and forecast error. Electric power system factors include the mix of thermal generation resources, available transmission, demand patterns, and market structures. Hydropower factors include relative storage capacity, reservoir operating policies and hydrologic conditions. In addition, the wind, power system, and hydropower factors are often interrelated because stochastic weather patterns can simultaneously influence wind generation, power demand, and hydrologic inflows. One of the central findings is that the sensitivity of the model to changes cannot be performed one factor at a time because the impact of the factors is highly interdependent. For example, the net value of wind generation may be very sensitive to changes in transmission capacity under some hydrologic conditions, but not at all under others.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A
2016-06-15
Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction, we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. 
This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
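Mean-dose and maximum-dose constraints are linear, which is why they drop directly into a convex program. A minimal sketch using scipy.optimize.linprog in place of the cvxpy/SCS/ECOS stack described in the abstract (the two-beam dose-influence matrix and dose levels are toy assumptions):

```python
import numpy as np
from scipy.optimize import linprog

# Toy dose-influence matrix: dose[voxel] = A @ beam_weights.
# Rows 0-1 are target voxels, rows 2-3 are organ-at-risk voxels.
A = np.array([[1.0, 0.2],
              [0.9, 0.3],
              [0.3, 1.0],
              [0.2, 0.9]])
target = [0, 1]

# Constraints: mean target dose >= 60, every voxel dose <= 80.
A_ub = np.vstack([-A[target].mean(axis=0, keepdims=True), A])
b_ub = np.concatenate([[-60.0], np.full(A.shape[0], 80.0)])

# Minimize total beam-on weight subject to the dose constraints.
res = linprog(c=np.ones(A.shape[1]), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
dose = A @ res.x
print(res.status, dose[target].mean(), dose.max())
```

Dose-volume (percentile) constraints are the nonconvex piece; the CVaR-style restriction and two-pass refinement of the abstract are what make them tractable within the same convex framework.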
Cosmic time and reduced phase space of general relativity
NASA Astrophysics Data System (ADS)
Ita, Eyo Eyo; Soo, Chopin; Yu, Hoi-Lai
2018-05-01
In an ever-expanding spatially closed universe, the fractional change of the volume is the preeminent intrinsic time interval to describe evolution in general relativity. The expansion of the universe serves as a subsidiary condition which transforms Einstein's theory from a first class to a second class constrained system when the physical degrees of freedom (d.o.f.) are identified with transverse traceless excitations. The super-Hamiltonian constraint is solved by eliminating the trace of the momentum in terms of the other variables, and spatial diffeomorphism symmetry is tackled explicitly by imposing transversality. The theorems of Maskawa-Nishijima appositely relate the reduced phase space to the physical variables in canonical functional integral and Dirac's criterion for second class constraints to nonvanishing Faddeev-Popov determinants in the phase space measures. A reduced physical Hamiltonian for intrinsic time evolution of the two physical d.o.f. emerges. Freed from the first class Dirac algebra, deformation of the Hamiltonian constraint is permitted, and natural extension of the Hamiltonian while maintaining spatial diffeomorphism invariance leads to a theory with Cotton-York term as the ultraviolet completion of Einstein's theory.
Nestler, Steffen
2014-05-01
Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
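The basic single-equation 2SLS/IV estimator that this approach builds on can be sketched on simulated data (toy numbers; this is the classic estimator, not the constrained system-of-equations extension the paper proposes):

```python
import numpy as np

# Simulated endogenous-regressor setup: the error e enters both x and y,
# so OLS of y on x is biased, while the instrument z is independent of e.
rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=n)                        # instrument
e = rng.normal(size=n)                        # confounding error
x = 0.8 * z + 0.5 * e + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + e                               # structural slope is 2.0

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
# Stage 1: project X onto the instruments. Stage 2: regress y on the fit.
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
print(beta[1])  # close to the true slope 2.0
```

Imposing equality constraints across such equations (e.g., equal loadings for measurement invariance) is exactly what the single-equation form cannot do, motivating the system-of-equations extension in the paper.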
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Lyapunov equations. The designer can choose the structure of the control law and the design variables; hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
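The building block behind such gradient and cost evaluations is the steady-state covariance from a discrete Lyapunov equation. A minimal sketch, assuming a toy two-state stable closed-loop system (not the paper's drone model):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Toy stable closed-loop sampled-data system (illustrative only):
# x[k+1] = A x[k] + w[k], with w ~ N(0, W)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
W = 0.01 * np.eye(2)          # process-noise covariance
Q = np.diag([1.0, 0.5])       # state weighting in the quadratic cost

# Steady-state state covariance solves the discrete Lyapunov equation
#   P = A P A^T + W
P = solve_discrete_lyapunov(A, W)

J = np.trace(Q @ P)           # LQG-type steady-state cost
rms = np.sqrt(np.diag(P))     # RMS responses, the kind of quantity constrained
print(J, rms)
```

In a synthesis loop, the design variables enter through the closed-loop matrix A, and constraints like "RMS of state i below a limit" become conditions on the diagonal of P.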
Simultaneous multiple non-crossing quantile regression estimation using kernel constraints
Liu, Yufeng; Wu, Yichao
2011-01-01
Quantile regression (QR) is a very useful statistical tool for learning the relationship between the response variable and covariates. For many applications, one often needs to estimate multiple conditional quantile functions of the response variable given covariates. Although one can estimate multiple quantiles separately, it is of great interest to estimate them simultaneously. One advantage of simultaneous estimation is that multiple quantiles can share strength among them to gain better estimation accuracy than individually estimated quantile functions. Another important advantage of joint estimation is the feasibility of incorporating simultaneous non-crossing constraints of QR functions. In this paper, we propose a new kernel-based multiple QR estimation technique, namely simultaneous non-crossing quantile regression (SNQR). We use kernel representations for QR functions and apply constraints on the kernel coefficients to avoid crossing. Both unregularised and regularised SNQR techniques are considered. Asymptotic properties such as asymptotic normality of linear SNQR and oracle properties of the sparse linear SNQR are developed. Our numerical results demonstrate the competitive performance of our SNQR over the original individual QR estimation. PMID:22190842
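As an illustrative sketch of joint, non-crossing estimation in the linear case (a toy linear-programming formulation with invented data, not the authors' kernel-based SNQR), two pinball-loss fits can share a non-crossing constraint at the observed covariates:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n = 60
xv = np.sort(rng.uniform(0, 4, n))
y = 1.0 + 0.5 * xv + rng.normal(0, 0.3 + 0.2 * xv)   # heteroscedastic toy data
X = np.column_stack([np.ones(n), xv])                # design: intercept + slope
p = X.shape[1]
tau1, tau2 = 0.1, 0.9

# Variables: [beta1 (p), beta2 (p), u1+, u1-, u2+, u2-], slacks n-dim each
nv = 2 * p + 4 * n
c = np.zeros(nv)
c[2*p:2*p+n], c[2*p+n:2*p+2*n] = tau1, 1 - tau1      # pinball loss, level tau1
c[2*p+2*n:2*p+3*n], c[2*p+3*n:] = tau2, 1 - tau2     # pinball loss, level tau2

Zn, Zp, I = np.zeros((n, n)), np.zeros((n, p)), np.eye(n)
# Residual split: X beta_k + u_k+ - u_k- = y for each quantile level
A_eq = np.block([[X, Zp, I, -I, Zn, Zn],
                 [Zp, X, Zn, Zn, I, -I]])
b_eq = np.concatenate([y, y])
# Non-crossing: X beta1 <= X beta2 at every observed covariate
A_ub = np.block([[X, -X, Zn, Zn, Zn, Zn]])
b_ub = np.zeros(n)

bounds = [(None, None)] * (2 * p) + [(0, None)] * (4 * n)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")
beta1, beta2 = res.x[:p], res.x[p:2*p]
print(beta1, beta2)   # lower and upper quantile coefficients
```

The same idea carries over when the linear predictor is replaced by a kernel expansion: the non-crossing inequalities become linear constraints on the kernel coefficients.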
The end-state comfort effect in bimanual grip selection.
Fischman, Mark G; Stodden, David F; Lehman, Davana M
2003-03-01
During a unimanual grip selection task in which people pick up a lightweight dowel and place one end against targets at variable heights, the choice of hand grip (overhand vs. underhand) typically depends on the perception of how comfortable the arm will be at the end of the movement: an end-state comfort effect. The two experiments reported here extend this work to bimanual tasks. In each experiment, 26 right-handed participants used their left and right hands to simultaneously pick up two wooden dowels and place either the right or left end against a series of 14 targets ranging from 14 to 210 cm above the floor. These tasks were performed in systematic ascending and descending orders in Experiment 1 and in random order in Experiment 2. Results were generally consistent with predictions of end-state comfort in that, for the extreme highest and lowest targets, participants tended to select opposite grips with each hand. Taken together, our findings are consistent with the concept of constraint hierarchies within a posture-based motion-planning model.
Active tension network model suggests an exotic mechanical state realized in epithelial tissues
NASA Astrophysics Data System (ADS)
Noll, Nicholas; Mani, Madhav; Heemskerk, Idse; Streichan, Sebastian J.; Shraiman, Boris I.
2017-12-01
Mechanical interactions play a crucial role in epithelial morphogenesis, yet understanding the complex mechanisms through which stress and deformation affect cell behaviour remains an open problem. Here we formulate and analyse the active tension network (ATN) model, which assumes that the mechanical balance of cells within a tissue is dominated by cortical tension and introduces tension-dependent active remodelling of the cortex. We find that ATNs exhibit unusual mechanical properties. Specifically, an ATN behaves as a fluid at short times, but at long times supports external tension like a solid. Furthermore, an ATN has an extensively degenerate equilibrium mechanical state associated with a discrete conformal ('isogonal') deformation of cells. The ATN model predicts a constraint on equilibrium cell geometries, which we demonstrate to approximately hold in certain epithelial tissues. We further show that isogonal modes are observed in the fruit fly embryo, accounting for the striking variability of apical areas of ventral cells and helping understand the early phase of gastrulation. Living matter realizes new and exotic mechanical states, the study of which helps to understand biological phenomena.
A new look at the simultaneous analysis and design of structures
NASA Technical Reports Server (NTRS)
Striz, Alfred G.
1994-01-01
The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL was used, which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparse matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses, although convergence became slower for the 72-bar truss.
When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
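The Kreisselmeier-Steinhauser aggregation used above is a smooth envelope of many constraint values. A minimal sketch with toy values (including the usual overflow-safe shift by the maximum):

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of constraint values g_i <= 0:
    a smooth, conservative approximation of max(g). Shifting by g_max
    keeps the exponentials from overflowing."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

g = np.array([-0.5, -0.1, 0.02, -0.3])   # toy constraint values
print(ks_aggregate(g))                   # slightly above max(g) = 0.02
```

Because the KS value always bounds the true maximum from above, enforcing KS(g) <= 0 conservatively enforces all aggregated constraints; larger rho tightens the envelope at the cost of steeper gradients.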
Czarnecki, John B.
2008-01-01
An existing conjunctive use optimization model of the Mississippi River Valley alluvial aquifer was used to evaluate the effect of selected constraints and model variables on ground-water sustainable yield. Modifications to the optimization model were made to evaluate the effects of varying (1) the upper limit of ground-water withdrawal rates, (2) the streamflow constraint associated with the White River, and (3) the specified stage of the White River. Upper limits of ground-water withdrawal rates were reduced to 75, 50, and 25 percent of the 1997 ground-water withdrawal rates. As the upper limit is reduced, the spatial distribution of sustainable pumping increases, although the total sustainable pumping from the entire model area decreases. In addition, the number of binding constraint points decreases. In a separate analysis, the streamflow constraint associated with the White River was optimized, resulting in an estimate of the maximum sustainable streamflow at DeValls Bluff, Arkansas, the site of potential surface-water withdrawals from the White River for the Grand Prairie Area Demonstration Project. The maximum sustainable streamflow, however, is less than the amount of streamflow allocated in the spring during the paddlefish spawning period. Finally, decreasing the specified stage of the White River was done to evaluate a hypothetical river stage that might result if the White River were to breach the Melinda Head Cut Structure, one of several manmade diversions that prevents the White River from permanently joining the Arkansas River. A reduction in the stage of the White River causes reductions in the sustainable yield of ground water.
Dynamical aspects of behavior generation under constraints
Harter, Derek; Achunala, Srinivas
2007-01-01
Dynamic adaptation is a key feature of brains helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally, we speculate on mechanisms by which chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514
Zheng, Wenjun; Brooks, Bernard R
2006-06-15
Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints while simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved a near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of approximately 1-2 angstroms from the native end state structures.
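One ingredient of such a protocol can be sketched as follows: each step fits mode amplitudes to linearized distance residuals under a stiffness (low-elastic-energy) penalty. Everything below is a random stand-in (the modes, pairs, and targets are invented, not an actual elastic network model):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m = 20, 10                                   # pseudo-atoms, low-frequency modes
coords = rng.normal(size=(N, 3)) * 5.0

# Stand-ins for elastic-network output: orthonormal modes + stiffnesses
V = np.linalg.qr(rng.normal(size=(3 * N, m)))[0]    # (3N, m) mode matrix
lam = np.linspace(0.1, 1.0, m)                      # mode stiffnesses

pairs = [(0, 5), (3, 12), (7, 19)]                  # constrained atom pairs
target = np.array([np.linalg.norm(coords[i] - coords[j]) + 1.5
                   for i, j in pairs])              # desired end-state distances

def step(pos, V, lam, pairs, target, ridge=1.0):
    """One linearized step: fit mode amplitudes to the distance residuals,
    ridge-penalized by mode stiffness (the low-elastic-energy restraint)."""
    J = np.zeros((len(pairs), V.shape[1]))
    r = np.zeros(len(pairs))
    for row, (i, j) in enumerate(pairs):
        d = pos[i] - pos[j]
        dij = np.linalg.norm(d)
        u = d / dij                                  # unit vector along the pair
        J[row] = u @ (V[3*i:3*i+3] - V[3*j:3*j+3])   # d(dij)/d(amplitude)
        r[row] = target[row] - dij
    a = np.linalg.solve(J.T @ J + ridge * np.diag(lam), J.T @ r)
    return pos + (V @ a).reshape(-1, 3)

pos = coords.copy()
for _ in range(20):
    pos = step(pos, V, lam, pairs, target)
err = max(abs(np.linalg.norm(pos[i] - pos[j]) - t)
          for (i, j), t in zip(pairs, target))
print(err)   # residual distance error after the iterative fit
```

The ridge term plays the role of the elastic-energy restraint: stiff modes are penalized more, so the fit prefers soft, low-frequency motions.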
Leonardi, Nora; Shirer, William R; Greicius, Michael D; Van De Ville, Dimitri
2014-12-01
Resting-state functional connectivity (FC) is highly variable across the duration of a scan. Groups of coevolving connections, or reproducible patterns of dynamic FC (dFC), have been revealed in fluctuating FC by applying unsupervised learning techniques. Based on results from k-means clustering and sliding-window correlations, it has recently been hypothesized that dFC may cycle through several discrete FC states. Alternatively, it has been proposed to represent dFC as a linear combination of multiple FC patterns using principal component analysis. As it is unclear whether sparse or nonsparse combinations of FC patterns are most appropriate, and as this affects their interpretation and use as markers of cognitive processing, the goal of our study was to evaluate the impact of sparsity by performing an empirical evaluation of simulated, task-based, and resting-state dFC. To this aim, we applied matrix factorizations subject to variable constraints in the temporal domain and studied both the reproducibility of ensuing representations of dFC and the expression of FC patterns over time. During subject-driven tasks, dFC was well described by alternating FC states in accordance with the nature of the data. The estimated FC patterns showed a rich structure with combinations of known functional networks enabling accurate identification of three different tasks. During rest, dFC was better described by multiple FC patterns that overlap. The executive control networks, which are critical for working memory, appeared grouped alternately with externally or internally oriented networks. These results suggest that combinations of FC patterns can provide a meaningful way to disentangle resting-state dFC. © 2014 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.
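A toy illustration of the kind of pipeline described above (sliding-window FC followed by a non-sparse PCA decomposition; the synthetic "states" and all parameters are invented here):

```python
import numpy as np

rng = np.random.default_rng(3)
T, R = 600, 8                      # time points, brain regions (toy)
ts = rng.normal(size=(T, R))
ts[:300, :4] += 1.5 * rng.normal(size=(300, 1))   # "state" 1: regions 0-3 co-fluctuate
ts[300:, 4:] += 1.5 * rng.normal(size=(300, 1))   # "state" 2: regions 4-7 co-fluctuate

win, step = 60, 10
iu = np.triu_indices(R, k=1)
dfc = np.array([np.corrcoef(ts[s:s+win].T)[iu]
                for s in range(0, T - win + 1, step)])   # (windows, connections)

# Non-sparse description: PCA of dFC, so each window is a linear
# combination of "eigenconnectivity" patterns (rows of Vt)
U, S, Vt = np.linalg.svd(dfc - dfc.mean(0), full_matrices=False)
weights = U[:, :2] * S[:2]        # time courses of the two leading FC patterns
print(dfc.shape, weights.shape)
```

Replacing the SVD with k-means on the window vectors would give the discrete-state (maximally sparse) description; the paper's question is which end of that sparsity spectrum best fits the data.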
Multiwavelength observations of a VHE gamma-ray flare from PKS 1510-089 in 2015
NASA Astrophysics Data System (ADS)
Ahnen, M. L.; Ansoldi, S.; Antonelli, L. A.; Arcaro, C.; Babić, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J. A.; Bednarek, W.; Bernardini, E.; Berti, A.; Biasuzzi, B.; Biland, A.; Blanch, O.; Bonnefoy, S.; Bonnoli, G.; Borracci, F.; Bretz, T.; Carosi, R.; Carosi, A.; Chatterjee, A.; Colin, P.; Colombo, E.; Contreras, J. L.; Cortina, J.; Covino, S.; Cumani, P.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; de Oña Wilhelmi, E.; Di Pierro, F.; Doert, M.; Domínguez, A.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Eisenacher Glawion, D.; Elsaesser, D.; Engelkemeier, M.; Fallah Ramazani, V.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M. V.; Font, L.; Fruck, C.; Galindo, D.; García López, R. J.; Garczarczyk, M.; Gaug, M.; Giammaria, P.; Godinović, N.; Gora, D.; Guberman, D.; Hadasch, D.; Hahn, A.; Hassan, T.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Hughes, G.; Ishio, K.; Konno, Y.; Kubo, H.; Kushida, J.; Kuveždić, D.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; Majumdar, P.; Makariev, M.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Mariotti, M.; Martínez, M.; Mazin, D.; Menzel, U.; Mirzoyan, R.; Moralejo, A.; Moretti, E.; Nakajima, D.; Neustroev, V.; Niedzwiecki, A.; Nievas Rosillo, M.; Nilsson, K.; Nishijima, K.; Noda, K.; Nogués, L.; Paiano, S.; Palacio, J.; Palatiello, M.; Paneque, D.; Paoletti, R.; Paredes, J. M.; Paredes-Fortuny, X.; Pedaletti, G.; Peresano, M.; Perri, L.; Persic, M.; Poutanen, J.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Garcia, J. R.; Reichardt, I.; Rhode, W.; Ribó, M.; Rico, J.; Saito, T.; Satalecka, K.; Schroeder, S.; Schweizer, T.; Shore, S. N.; Sillanpää, A.; Sitarek, J.; Šnidarić, I.; Sobczynska, D.; Stamerra, A.; Strzys, M.; Surić, T.; Takalo, L.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Tescaro, D.; Teshima, M.; Torres, D. F.; Torres-Albà, N.; Toyama, T.; Treves, A.; Vanzo, G.; Vazquez Acosta, M.; Vovk, I.; Ward, J. E.; Will, M.; Wu, M. H.; Zarić, D.; Desiante, R.; Becerra González, J.; D'Ammando, F.; Larsson, S.; Raiteri, C. M.; Reinthal, R.; Lähteenmäki, A.; Järvelä, E.; Tornikoski, M.; Ramakrishnan, V.; Jorstad, S. G.; Marscher, A. P.; Bala, V.; MacDonald, N. R.; Kaur, N.; Sameer; Baliyan, K.; Acosta-Pulido, J. A.; Lazaro, C.; Martínez-Lombilla, C.; Grinon-Marin, A. B.; Pastor Yabar, A.; Protasio, C.; Carnerero, M. I.; Jermak, H.; Steele, I. A.; Larionov, V. M.; Borman, G. A.; Grishina, T. S.
2017-07-01
Context. PKS 1510-089 is one of only a few flat spectrum radio quasars detected in the very-high-energy (VHE, > 100 GeV) gamma-ray band. Aims: We study the broadband spectral and temporal properties of the PKS 1510-089 emission during a high gamma-ray state. Methods: We performed VHE gamma-ray observations of PKS 1510-089 with the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescopes during a long, high gamma-ray state in May 2015. In order to perform broadband modeling of the source, we also gathered contemporaneous multiwavelength data in the radio, IR, optical photometry and polarization, UV, X-ray, and GeV gamma-ray ranges. We construct a broadband spectral energy distribution (SED) in two periods, selected according to the VHE gamma-ray state. Results: PKS 1510-089 was detected by MAGIC during a few-day-long observation performed in the middle of a long, high optical and gamma-ray state, showing significant VHE gamma-ray variability for the first time. As in the optical and gamma-ray high state of the source detected in 2012, the flare was accompanied by a rotation of the optical polarization angle and by the emission of a new jet component observed in radio. However, owing to the large uncertainty in the knot separation time, the association with the VHE gamma-ray emission cannot be firmly established. The spectral shape in the VHE band during the flare is similar to those obtained during previous measurements of the source. The observed flux variability sets, for the first time, constraints on the size of the region from which the VHE gamma rays are emitted. We model the broadband SED in the framework of the external Compton scenario and discuss the possible emission site in view of the multiwavelength data and alternative emission models.
Automatic Methods and Tools for the Verification of Real Time Systems
1997-07-31
real-time systems. This was accomplished by extending techniques, based on automata theory and temporal logic, that have been successful for the verification of time-independent reactive systems. As the system specification language for embedded real-time systems, we introduced hybrid automata, which equip traditional discrete automata with real-numbered clock variables and continuous environment variables. As requirements specification languages, we introduced temporal logics with clock variables for expressing timing constraints.
He, Kaifei; Xu, Tianhe; Förste, Christoph; Petrovic, Svetozar; Barthelmes, Franz; Jiang, Nan; Flechtner, Frank
2016-01-01
When applying the Global Navigation Satellite System (GNSS) for precise kinematic positioning in airborne and shipborne gravimetry, multiple GNSS receiving equipment is often fixed mounted on the kinematic platform carrying the gravimetry instrumentation. Thus, the distances among these GNSS antennas are known and invariant. This information can be used to improve the accuracy and reliability of the state estimates. For this purpose, the known distances between the antennas are applied as a priori constraints within the state parameters adjustment. These constraints are introduced in such a way that their accuracy is taken into account. To test this approach, GNSS data of a Baltic Sea shipborne gravimetric campaign have been used. The results of our study show that an application of distance constraints improves the accuracy of the GNSS kinematic positioning, for example, by about 4 mm for the radial component. PMID:27043580
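The idea of a known inter-antenna distance entering the adjustment as a weighted pseudo-observation can be sketched in a toy 2D trilateration (beacon ranges stand in for GNSS observations; all geometry, noise levels, and weights are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
p1_true, p2_true = np.array([40.0, 55.0]), np.array([43.0, 51.0])
L = np.linalg.norm(p1_true - p2_true)      # known, invariant antenna separation

sigma_r, sigma_L = 0.5, 0.01               # range noise vs. constraint accuracy
r1 = np.linalg.norm(beacons - p1_true, axis=1) + rng.normal(0, sigma_r, 4)
r2 = np.linalg.norm(beacons - p2_true, axis=1) + rng.normal(0, sigma_r, 4)

x = np.array([38.0, 57.0, 45.0, 49.0])     # initial guess [p1, p2]
for _ in range(10):                        # Gauss-Newton with the constraint
    p1, p2 = x[:2], x[2:]
    rows, res, w = [], [], []
    for b, r in zip(beacons, r1):          # range observations, antenna 1
        d = np.linalg.norm(p1 - b)
        rows.append(np.r_[(p1 - b) / d, 0.0, 0.0]); res.append(r - d); w.append(1 / sigma_r)
    for b, r in zip(beacons, r2):          # range observations, antenna 2
        d = np.linalg.norm(p2 - b)
        rows.append(np.r_[0.0, 0.0, (p2 - b) / d]); res.append(r - d); w.append(1 / sigma_r)
    # the known distance enters as a heavily weighted pseudo-observation
    d12 = np.linalg.norm(p1 - p2); u = (p1 - p2) / d12
    rows.append(np.r_[u, -u]); res.append(L - d12); w.append(1 / sigma_L)
    A = np.array(rows) * np.array(w)[:, None]
    x = x + np.linalg.lstsq(A, np.array(res) * np.array(w), rcond=None)[0]

est_L = np.linalg.norm(x[:2] - x[2:])
print(x, est_L)
```

Weighting the pseudo-observation by its own accuracy (here sigma_L) is what the abstract means by introducing the constraints "in such a way that their accuracy is taken into account": the constraint is soft, not enforced exactly.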
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Matijevic, J. R.
1987-01-01
Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of the Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (such as positivity of the solutions) are allowed.
Gyan P. Nyaupane; Duarte B. Morais; Alan Graefe
2003-01-01
The purpose of this study was to compare leisure constraints across three outdoor recreation activities, whitewater rafting, canoeing, and overnight horseback riding, in the context of the three-dimensional leisure constraints model proposed by Crawford and Godbey (1987). The sample consisted of 650 outdoor enthusiasts from 14 U.S. states who showed an interest in...
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.
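As a toy illustration of checking one temporal-interval constraint against a timed state trace (a drastic simplification of TIL and HMS machines; the states, trace, and bounds are invented):

```python
# Constraint: every entry into `trigger` is followed by an entry into
# `response` within the interval [lo, hi] time units.

def check_interval_constraint(trace, trigger, response, lo, hi):
    """trace: list of (time, state) pairs, sorted by time."""
    for t, s in trace:
        if s != trigger:
            continue
        if not any(r == response and lo <= tr - t <= hi for tr, r in trace):
            return False
    return True

trace = [(0.0, "idle"), (1.0, "request"), (3.5, "grant"),
         (6.0, "request"), (13.0, "grant")]
ok = check_interval_constraint(trace, "request", "grant", 0.0, 5.0)
print(ok)  # False: the second request is granted only after 7 time units
```

Real TIL constraints quantify over intervals and compose hierarchically across machine states; this checker only verifies a single bounded-response property over a finite trace.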
Supporting Multiple Cognitive Processing Styles Using Tailored Support Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; Karen M. Feigh; Amy R. Pritchett
According to theories of cognitive processing style or cognitive control mode, human performance is more effective when an individual's cognitive state (e.g., intuition/scramble vs. deliberate/strategic) matches his/her ecological constraints or context (e.g., utilizing intuition to strive for a "good-enough" response instead of deliberating for the "best" response under high time pressure). Ill-mapping between cognitive state and ecological constraints is believed to lead to degraded task performance. Consequently, incorporating support systems that are designed to specifically address multiple cognitive and functional states (e.g., high workload, stress, boredom) and initiate appropriate mitigation strategies (e.g., reduce information load) is essential to reduce plant risk. Utilizing the concept of Cognitive Control Models, this paper will discuss the importance of tailoring support systems to match an operator's cognitive state, and will further discuss the importance of these ecological constraints in selecting and implementing mitigation strategies for safe and effective system performance. An example from the nuclear power plant industry illustrating how a support system might be tailored to support different cognitive states is included.
Solving quantum optimal control problems using Clebsch variables and Lin constraints
NASA Astrophysics Data System (ADS)
Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.
2018-01-01
Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of Lagrange's multipliers theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems that can often be obtained explicitly. This procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model, and a controlled one-dimensional quantum harmonic oscillator.
X-ray spectral analysis of the steady states of GRS 1915+105
NASA Astrophysics Data System (ADS)
Peris, Charith; Remillard, Ronald A.; Steiner, James F.; Vrtilek, Saeqa Dil; Varniere, Peggy; Rodriguez, Jerome; Pooley, Guy G.
2016-04-01
Of the black hole binaries (BHBs) discovered thus far, GRS 1915+105 stands out as an exceptional source, primarily due to its wild X-ray variability, the diversity of which has not been replicated in any other stellar-mass black hole. Although extreme variability is commonplace in its light curve, about half of the observations of GRS 1915+105 show fairly steady X-ray intensity. We report on the X-ray spectral behavior within these steady observations. Our work is based on a vast RXTE/PCA data set obtained on GRS 1915+105 during the course of its entire mission and 10 years of radio data from the Ryle Telescope, which overlap the X-ray data. We find that the steady observations within the X-ray data set naturally separate into two regions in a color-color diagram, which we refer to as steady-soft and steady-hard. GRS 1915+105 displays significant curvature in the Comptonization component within the PCA band pass, suggesting significant heating from a hot disk present in all states. A new Comptonization model, 'simplcut', was developed in order to model this curvature to best effect. A majority of the steady-soft observations display a roughly constant inner disk radius, remarkably reminiscent of canonical soft-state black hole binaries. In contrast, the steady-hard observations display a growing disk truncation that is correlated with the mass accretion rate through the disk, which suggests a magnetically truncated disk. A comparison of X-ray model parameters to the canonical state definitions shows that almost all steady-soft observations match the criteria of either the thermal or the steep power law state, while the thermal state observations dominate the constant radius branch. A large portion (80%) of the steady-hard observations matches the hard state criteria when the disk fraction constraint is neglected. These results combine to suggest that within the complexity of this source is a simpler underlying basis of states, which map to those observed in canonical BHBs.
Analytic Analysis of Convergent Shocks to Multi-Gigabar Conditions
NASA Astrophysics Data System (ADS)
Ruby, J. J.; Rygg, J. R.; Collins, G. W.; Bachmann, B.; Doeppner, T.; Ping, Y.; Gaffney, J.; Lazicki, A.; Kritcher, A. L.; Swift, D.; Nilsen, J.; Landen, O. L.; Hatarik, R.; Masters, N.; Nagel, S.; Sterne, P.; Pardini, T.; Khan, S.; Celliers, P. M.; Patel, P.; Gericke, D.; Falcone, R.
2017-10-01
The gigabar experimental platform at the National Ignition Facility is designed to increase understanding of the physical states and processes that dominate in hydrogen at pressures from several hundred Mbar to tens of Gbar. Recent experiments using a solid CD2 ball reached temperatures and densities of order 10^7 K and several tens of g/cm^3, respectively. These conditions lead to the production of D-D fusion neutrons and x-ray bremsstrahlung photons, which allow us to place constraints on the thermodynamic state at peak compression. We use an analytic model to connect the neutron and x-ray emission with the state variables at peak compression. This analytic model is based on the self-similar Guderley solution of an imploding shock wave and the self-similar solution of the point explosion with heat conduction from Reinicke. Work is also being done to create a fully self-similar solution of an imploding shock wave coupled with heat conduction and radiation transport using a general equation of state. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.
Systems with outer constraints. Gupta-Bleuler electromagnetism as an algebraic field theory
NASA Astrophysics Data System (ADS)
Grundling, Hendrik
1988-03-01
Since there are some important systems which have constraints not contained in their field algebras, we develop here, in a C*-context, the algebraic structures of these. The constraints are defined as a group G acting as outer automorphisms on the field algebra ℱ, α: G → Aut ℱ, α(G) ⊄ Inn ℱ, and we find that the selection of G-invariant states on ℱ is the same as the selection of states ω on M(G ×_α ℱ) by ω(U_g) = 1 for all g ∈ G, where the U_g ∈ M(G ×_α ℱ) are the canonical elements implementing α_g. These states are taken as the physical states, and this specifies the resulting algebraic structure of the physics in M(G ×_α ℱ), and in particular the maximal constraint-free physical algebra ℛ. A nontriviality condition is given for ℛ to exist, and we extend the notion of a crossed product to deal with a situation where G is not locally compact. This is necessary to deal with the field theoretical aspect of the constraints. Next, the C*-algebra of the CCR is employed to define the abstract algebraic structure of Gupta-Bleuler electromagnetism in the present framework. The indefinite inner product representation structure is obtained, and this puts Gupta-Bleuler electromagnetism on a rigorous footing. Finally, as a bonus, we find that the algebraic structures just set up provide a blueprint for constructive quadratic algebraic field theory.
Lustgarten, Jonathan Lyle; Balasubramanian, Jeya Balaji; Visweswaran, Shyam; Gopalakrishnan, Vanathi
2017-03-01
The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks using a decision-tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, is combinatorial in the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design BRL-LSS with the same worst-case time complexity as BRL-GSS while exploring a richer and more complex model space. We measure predictive performance using area under the ROC curve (AUC) and accuracy, and model parsimony by the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS, and the state-of-the-art C4.5 decision tree algorithm under 10-fold cross-validation on ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in predictive performance while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance. We also conduct a feasibility study to demonstrate the general applicability of our BRL methods on newer RNA-sequencing gene-expression data.
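The contrast drawn above between global structure (rules for every joint assignment of the predictor variables) and local structure (one rule per decision-tree path) can be illustrated with a toy rule counter. This is a minimal sketch under assumed representations, not the BRL-GSS/BRL-LSS implementation; both function names are hypothetical:

```python
def global_rule_count(variable_values):
    # Global structure: one IF-THEN rule per joint assignment of ALL
    # predictor variables, so the count is the product of value-set sizes.
    count = 1
    for values in variable_values:
        count *= len(values)
    return count

def local_rule_count(tree_leaves):
    # Local structure: one rule per leaf of the decision tree, independent
    # of how many variables each path actually tests.
    return len(tree_leaves)

# Three binary predictor variables.
vars_ = [("low", "high")] * 3
print(global_rule_count(vars_))     # 2^3 = 8 rules under global structure

# A local tree that splits on only two of the three variables:
leaves = ["g1&g2", "g1&!g2", "!g1"]
print(local_rule_count(leaves))     # 3 rules
```

The leaf count can stay small even as variables are added, which is the parsimony argument the abstract makes for the local structure.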
Motion of packings of frictional grains.
Halsey, Thomas C
2009-07-01
Friction plays a key role in controlling the rheology of dense granular flows. Counting the number of constraints vs the number of variables indicates that critical coordination numbers Zc=3 (in D=2) and Zc=4 (in D=3) are special, in that states in which all contacts roll without frictional sliding are naively possible at and below these average coordination numbers. We construct an explicit example of such a state in D=2 based on a honeycomb lattice. This state has surprisingly large values for the typical angular velocities of the particles. Solving for the forces in such a state, we conclude that organized shear can exist in this state only on scales l
C-fuzzy variable-branch decision tree with storage and classification error rate constraints
NASA Astrophysics Data System (ADS)
Yang, Shiueng-Bien
2009-10-01
The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the classification time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases-growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.
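Both the CFDT and the proposed CFVBDT are built on the fuzzy C-means algorithm. The sketch below shows its alternating membership/center updates in one dimension; it is an illustrative toy under assumed data and fuzzifier m = 2, not the CFVBDT itself:

```python
def fcm_memberships(data, centers, m=2.0):
    # Fuzzy C-means membership: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
    # A point sitting exactly on a center gets crisp membership there.
    u = []
    for x in data:
        d = [abs(x - c) for c in centers]
        if 0.0 in d:
            u.append([1.0 if di == 0.0 else 0.0 for di in d])
            continue
        expo = 2.0 / (m - 1.0)
        u.append([1.0 / sum((d[i] / d[j]) ** expo for j in range(len(centers)))
                  for i in range(len(centers))])
    return u

def fcm_centers(data, u, m=2.0):
    # Each center is the mean of the data weighted by fuzzified memberships.
    return [sum((u[k][i] ** m) * x for k, x in enumerate(data)) /
            sum(u[k][i] ** m for k in range(len(data)))
            for i in range(len(u[0]))]

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers = [0.0, 5.0]
for _ in range(10):                 # alternate the two update steps
    u = fcm_memberships(data, centers)
    centers = fcm_centers(data, u)
print([round(c, 2) for c in centers])
```

With well-separated clusters the centers settle near the two cluster means; a C-fuzzy tree node uses such memberships to route inputs softly to its children.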
Relativistic Hamiltonian dynamics for N point particles
NASA Astrophysics Data System (ADS)
King, M. J.
1980-08-01
The theory is quantized canonically to give a relativistic quantum mechanics for N particles. The existence of such a theory has been in doubt since the proof of the no-interaction theorem; however, such a theory does exist and is generalized here. This dynamics is expressed in terms of N + 1 pairs of canonical four-vectors (center-of-momentum variables, or CMV). A gauge-independent reduction due to N + 3 first-class kinematic constraints leads to a 6N + 2 dimensional minimum kinematic phase space, K. The kinematics and dynamics of particles with intrinsic spin are also considered; to this end, known constraint techniques are generalized to make use of graded Lie algebras. The Poincaré-invariant Hamiltonian is specified in terms of the gauge-invariant variables of K. The covariant worldline variables of each particle are found to be gauge dependent. As such they will usually not satisfy a canonical algebra; an exception exists for free particles. The no-interaction theorem therefore is not violated.
Displacement Based Multilevel Structural Optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Striz, A. G.
1996-01-01
In the complex environment of true multidisciplinary design optimization (MDO), efficiency is one of the most desirable attributes of any approach. In the present research, a new and highly efficient methodology for the MDO subset of structural optimization is proposed and detailed, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures is performed. In the system level optimization, the design variables are the coefficients of assumed polynomially based global displacement functions, and the load unbalance resulting from the solution of the global stiffness equations is minimized. In the subsystems level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross sectional dimensions as design variables. The approach is expected to prove very efficient since the design task is broken down into a large number of small and efficient subtasks, each with a small number of variables, which are amenable to parallel computing.
Multi-band implications of external-IC flares
NASA Astrophysics Data System (ADS)
Richter, Stephan; Spanier, Felix
2015-02-01
Very fast variability on scales of minutes is regularly observed in blazars. The assumption that these flares emerge from the dominant emission zone of the very-high-energy (VHE) radiation within the jet challenges current acceleration and radiation models. In this work we use a spatially resolved, time-dependent synchrotron self-Compton (SSC) model that includes the full time dependence of Fermi-I acceleration. We use the (apparent) orphan γ-ray flare of Mrk 501 during MJD 54952 and test various flare scenarios against the observed data. We find that a rapidly variable external radiation field reproduces the high-energy lightcurve best. However, the effect of the strong inverse Compton (IC) cooling on other bands, together with the X-ray observations, constrains the parameters to rather extreme ranges. Other scenarios, in turn, would require even more extreme parameters, or stronger physical constraints on the rise and decay of the source of the variability, which might contradict constraints derived from the size of the black hole's ergosphere.
Model-based metabolism design: constraints for kinetic and stoichiometric models
Stalidzans, Egils; Seiman, Andrus; Peebo, Karl; Komasilovs, Vitalijs; Pentjuss, Agris
2018-01-01
The implementation of model-based designs in metabolic engineering and synthetic biology may fail. One of the reasons for this failure is that only a part of the real-world complexity is included in models. Still, some knowledge can be simplified and taken into account in the form of optimization constraints to improve the feasibility of model-based designs of metabolic pathways in organisms. Some constraints (mass balance, energy balance, and the steady-state assumption) serve as a basis for many modelling approaches. Others (the total enzyme activity constraint and the homeostatic constraint) were proposed decades ago but are frequently ignored in design development. Several new approaches of cellular analysis have made possible the application of constraints like cell size, surface, and resource balance. Constraints for kinetic and stoichiometric models are grouped according to their applicability preconditions into (1) general constraints, (2) organism-level constraints, and (3) experiment-level constraints. General constraints are universal and applicable to any system. Organism-level constraints are applicable to biological systems and usually are organism-specific, but they can be applied without information about experimental conditions. To apply experiment-level constraints, peculiarities of the organism and the experimental set-up have to be taken into account to calculate the values of the constraints. The limitations of applicability of particular constraints for kinetic and stoichiometric models are addressed. PMID:29472367
Closed-form solutions for a class of optimal quadratic regulator problems with terminal constraints
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Turner, J. D.; Chun, H. M.
1984-01-01
Closed-form solutions are derived for coupled Riccati-like matrix differential equations describing the solution of a class of optimal finite time quadratic regulator problems with terminal constraints. Analytical solutions are obtained for the feedback gains and the closed-loop response trajectory. A computational procedure is presented which introduces new variables for efficient computation of the terminal control law. Two examples are given to illustrate the validity and usefulness of the theory.
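The abstract above concerns continuous-time Riccati-like equations solved in closed form. As a hedged discrete-time analogue (a sketch, not the authors' closed-form solution), the backward Riccati sweep below approximates a terminal constraint x_N = 0 by a large terminal weight, for assumed scalar dynamics:

```python
def finite_horizon_lqr(a, b, q, r, s_T, N):
    # Backward Riccati recursion for x_{t+1} = a x_t + b u_t with stage
    # cost q x^2 + r u^2 and terminal cost s_T x_N^2; a very large s_T
    # acts as a soft terminal constraint x_N ~ 0.
    s = s_T
    gains = []
    for _ in range(N):
        k = (b * s * a) / (r + b * s * b)   # stage feedback gain
        s = q + a * s * a - a * s * b * k   # Riccati update
        gains.append(k)
    gains.reverse()                          # gains[t] applies at time t
    return gains

gains = finite_horizon_lqr(a=1.0, b=1.0, q=1.0, r=1.0, s_T=1e6, N=20)

# Simulate the closed loop from x0 = 1: the heavily weighted terminal
# cost drives the final state to (near) zero.
x = 1.0
for k in gains:
    u = -k * x                  # u_t = -K_t x_t
    x = 1.0 * x + 1.0 * u       # same a, b as above
print(abs(x) < 1e-3)
```

The gain sequence is time-varying near the horizon end, which is the discrete counterpart of the terminal-constraint control law computed by the paper's procedure.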
Re/Os constraint on the time variability of the fine-structure constant.
Fujii, Yasunori; Iwamoto, Akira
2003-12-31
We argue that the accuracy by which the isochron parameters of the decay ¹⁸⁷Re → ¹⁸⁷Os are determined by dating iron meteorites may constrain the possible time dependence of the decay rate and hence of the fine-structure constant α, not directly but only in a model-dependent manner. From this point of view, some of the attempts to analyze the Oklo constraint and the results of the quasistellar-object absorption lines are reexamined.
Strict Constraint Feasibility in Analysis and Design of Uncertain Systems
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2006-01-01
This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enables us to determine whether the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.
Nonlinear evolution of coarse-grained quantum systems with generalized purity constraints
NASA Astrophysics Data System (ADS)
Burić, Nikola
2010-12-01
Constrained quantum dynamics is used to propose a nonlinear dynamical equation for pure states of a generalized coarse-grained system. The relevant constraint is given either by the generalized purity or by the generalized invariant fluctuation, and the coarse-grained pure states correspond to the generalized coherent, i.e. generalized nonentangled states. Open system model of the coarse-graining is discussed. It is shown that in this model and in the weak coupling limit the constrained dynamical equations coincide with an equation for pointer states, based on Hilbert-Schmidt distance, that was previously suggested in the context of the decoherence theory.
DOT National Transportation Integrated Search
2013-10-01
This research project assessed the multimodal transportation needs, constraints, and opportunities facing : the state of Georgia and the Georgia Department of Transportation (GDOT). The project report : includes: 1) a literature review focusing on th...
NASA Astrophysics Data System (ADS)
Ariki, Taketo
2018-02-01
A hyperfluid model is constructed on the basis of its action, entirely free from external constraints, regarding the hyperfluid as a self-consistent classical field. Intrinsic hypermomentum is no longer a supplemental variable given by external constraints but arises purely from the diffeomorphism covariance of the dynamical field. The field-theoretic approach allows natural classification of a hyperfluid on the basis of its symmetry group and corresponding homogeneous space; scalar, spinor, vector, and tensor fluids are introduced as simple examples. Apart from phenomenological constraints, the theory predicts the hypermomentum exchange of fluid via field-theoretic interactions of various classes; fluid-fluid interactions, minimal and non-minimal SU(n)-gauge couplings, and coupling with metric-affine gravity are all successfully formulated within the classical regime.
NASA Technical Reports Server (NTRS)
Dolvin, Douglas J.
1992-01-01
The superior survivability of a multirole fighter is dependent upon balanced integration of technologies for reduced vulnerability and susceptibility. The objective is to develop a methodology for structural design optimization with survivability-dependent constraints. The design criterion for optimization is survivability in a tactical laser environment. The following analyses are studied to establish a dependent design relationship between structural weight and survivability: (1) develop a physically linked global design model of survivability variables; and (2) apply conventional constraints to quantify survivability-dependent design. It was not possible to develop an exact approach which would include all aspects of survivability-dependent design; therefore guidelines are offered for solving similar problems.
Simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1984-01-01
Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element-by-element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a single large optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs the design and analysis simultaneously. Two examples are used for demonstration. A seventy-two-bar truss is optimized subject to linear stress constraints, and a wing-box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
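The unified ("simultaneous") formulation described above can be sketched on a single axial bar: both the displacement u and the cross-sectional area A are treated as design variables, equilibrium E A u / L = F is enforced through a quadratic penalty on the load unbalance rather than solved exactly, and weight (proportional to A) is minimized under a stress constraint. All numbers and the brute-force grid search are illustrative assumptions, not the paper's truss or wing-box cases:

```python
# Toy simultaneous analysis-and-design problem for one axial bar.
E, L, F, sigma_max, mu = 1.0, 1.0, 1.0, 2.0, 1e4

def objective(A, u):
    residual = E * A * u / L - F        # load unbalance (equilibrium residual)
    return A + mu * residual ** 2       # weight ~ A, plus penalty

best = None
for i in range(10, 101):                # A in [0.10, 1.00]
    A = i / 100.0
    for j in range(0, 301):             # u in [0.00, 3.00]
        u = j / 100.0
        if E * u / L > sigma_max:       # stress constraint sigma <= sigma_max
            continue
        val = objective(A, u)
        if best is None or val < best[0]:
            best = (val, A, u)

_, A_star, u_star = best
print(A_star, u_star)   # analytic optimum: A = F/sigma_max = 0.5, u = 2.0
```

The optimizer recovers the analytic solution A* = F/σ_max with the stress constraint active, while equilibrium is satisfied only through the minimized residual, exactly the structure of the unified approach (though real implementations use gradient-based optimizers, not grids).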
The Ω_DE-Ω_M Plane in Dark Energy Cosmology
NASA Astrophysics Data System (ADS)
Qiang, Yuan; Zhang, Tong-Jie
The dark energy cosmology with equation of state w = const. is considered in this paper. The Ω_DE-Ω_M plane is used to study the present state and expansion history of the universe. Through mathematical analysis, we give theoretical constraints on the cosmological parameters. Together with observations such as the transition redshift from deceleration to acceleration, a more precise constraint on the cosmological parameters can be acquired.
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have been proved to be a robust method for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem where a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
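The computational idea of a steady-state filter, converging the gain once offline and reusing it every frame instead of inverting matrices per frame, can be sketched in the scalar case. Note this toy uses a Kalman-style Riccati fixed point as a stand-in; the paper develops a steady-state H∞ filter, and all numbers below are assumptions:

```python
def steady_state_gain(a, c, q, r, tol=1e-12):
    # Iterate the scalar Riccati recursion to its fixed point; the
    # resulting gain is then reused at every frame, avoiding the
    # per-frame inversion that makes the filter expensive.
    p = q
    while True:
        p_pred = a * p * a + q                     # predict covariance
        k = p_pred * c / (c * p_pred * c + r)      # gain
        p_new = (1.0 - k * c) * p_pred             # update covariance
        if abs(p_new - p) < tol:
            return k
        p = p_new

k = steady_state_gain(a=0.95, c=1.0, q=0.01, r=0.1)

# Apply the fixed gain to a short measurement sequence.
x_hat, track = 0.0, []
for z in [1.0, 1.1, 0.9, 1.05, 1.0]:
    x_hat = 0.95 * x_hat + k * (z - 0.95 * x_hat)
    track.append(round(x_hat, 3))
print(0.0 < k < 1.0)
```

The per-frame cost collapses to one multiply-accumulate per state, which is the "minimal loss of accuracy for large efficiency gain" trade the abstract describes.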
Influence of flow constraints on the properties of the critical endpoint of symmetric nuclear matter
NASA Astrophysics Data System (ADS)
Ivanytskyi, A. I.; Bugaev, K. A.; Sagun, V. V.; Bravina, L. V.; Zabrodin, E. E.
2018-06-01
We propose a novel family of equations of state for symmetric nuclear matter based on the induced-surface-tension concept for the hard-core repulsion. It is shown that, with only four adjustable parameters, the suggested equations of state can simultaneously reproduce not only the main properties of the nuclear matter ground state but also the proton flow constraint up to its maximal particle number densities. Varying the model parameters, we carefully examine the range of values of the incompressibility constant of normal nuclear matter and of its critical temperature that are consistent with the proton flow constraint. This analysis allows us to show that the physically most justified value of the nuclear matter critical temperature is 15.5-18 MeV, the incompressibility constant is 270-315 MeV, and the hard-core radius of nucleons is less than 0.4 fm.
Adaptive Neural Control of Uncertain MIMO Nonlinear Systems With State and Input Constraints.
Chen, Ziting; Li, Zhijun; Chen, C L Philip
2017-06-01
An adaptive neural control strategy for multiple-input multiple-output nonlinear systems with various constraints is presented in this paper. To deal with the nonsymmetric input nonlinearity and the constrained states, the proposed adaptive neural control is combined with the backstepping method, radial basis function neural networks, a barrier Lyapunov function (BLF), and a disturbance observer. By ensuring the boundedness of the BLF of the closed-loop system, it is demonstrated that output tracking is achieved with all states remaining in the constraint sets, and the usual assumption on nonsingularity of the unknown control coefficient matrices is eliminated. It is rigorously proved that the constructed adaptive neural control guarantees the semiglobally uniformly ultimate boundedness of all signals in the closed-loop system. Finally, simulation studies on a 2-DOF robotic manipulator system indicate that the designed adaptive control is effective.
Discrete-time BAM neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi
2007-07-01
This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional and linear matrix inequality (LMI) techniques, we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion places no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time-delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
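As a numerical companion to such delay-independent behavior (not the LMI criterion itself, which requires a semidefinite-programming solver), the simulation below uses the crude sufficient condition |a| + |b| < 1 for a scalar delayed recursion with an arbitrarily varying bounded delay; all numbers are illustrative assumptions:

```python
def simulate_delayed(a, b, delays, steps, x0=1.0):
    # x[k+1] = a*x[k] + b*x[k - d(k)] with a bounded time-varying delay
    # d(k). With |a| + |b| < 1 the zero state is exponentially stable
    # no matter how the delay varies (a far weaker sufficient condition
    # than a delay-dependent LMI criterion).
    hist = [x0] * (max(delays) + 1)      # constant initial history
    for k in range(steps):
        d = delays[k % len(delays)]      # delay varies over time
        hist.append(a * hist[-1] + b * hist[-1 - d])
    return hist

traj = simulate_delayed(a=0.5, b=0.3, delays=[1, 3, 2, 5], steps=200)
print(abs(traj[-1]) < 1e-3)
```

Here |a| + |b| = 0.8, so the state decays geometrically despite the delay jumping between 1 and 5 steps; delay-dependent criteria like the Letter's LMI are sharper precisely when such norm conditions fail.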
Optimal Low Energy Earth-Moon Transfers
NASA Technical Reports Server (NTRS)
Griesemer, Paul Ricord; Ocampo, Cesar; Cooley, D. S.
2010-01-01
The optimality of a low-energy Earth-Moon transfer is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the ballistic lunar capture trajectory is examined to determine whether one or more additional impulses may improve on the cost of the transfer.
A Rocket Engine Design Expert System
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state of the art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the H2-O2 coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One dimensional equilibrium chemistry was used in the energy release analysis of the combustion chamber. A 3-D conduction and/or 1-D advection analysis is used to predict heat transfer and coolant channel wall temperature distributions, in addition to coolant temperature and pressure drop. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
A rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
The second laws of quantum thermodynamics.
Brandão, Fernando; Horodecki, Michał; Ng, Nelly; Oppenheim, Jonathan; Wehner, Stephanie
2015-03-17
The second law of thermodynamics places constraints on state transformations. It applies to systems composed of many particles, however, we are seeing that one can formulate laws of thermodynamics when only a small number of particles are interacting with a heat bath. Is there a second law of thermodynamics in this regime? Here, we find that for processes which are approximately cyclic, the second law for microscopic systems takes on a different form compared to the macroscopic scale, imposing not just one constraint on state transformations, but an entire family of constraints. We find a family of free energies which generalize the traditional one, and show that they can never increase. The ordinary second law relates to one of these, with the remainder imposing additional constraints on thermodynamic transitions. We find three regimes which determine which family of second laws govern state transitions, depending on how cyclic the process is. In one regime one can cause an apparent violation of the usual second law, through a process of embezzling work from a large system which remains arbitrarily close to its original state. These second laws are relevant for small systems, and also apply to individual macroscopic systems interacting via long-range interactions. By making precise the definition of thermal operations, the laws of thermodynamics are unified in this framework, with the first law defining the class of operations, the zeroth law emerging as an equivalence relation between thermal states, and the remaining laws being monotonicity of our generalized free energies.
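Schematically, and hedging on the exact conventions of the published version, the family of generalized free energies referred to above can be written, for states diagonal in the energy eigenbasis with eigenvalues p_i and thermal state τ = e^{-βH}/Z with eigenvalues q_i, in terms of Rényi divergences:

```latex
F_\alpha(\rho) \;=\; k_B T\, D_\alpha(\rho \,\|\, \tau) \;-\; k_B T \ln Z,
\qquad
D_\alpha(\rho \,\|\, \tau) \;=\; \frac{\operatorname{sgn}(\alpha)}{\alpha - 1}
\ln \sum_i p_i^{\alpha}\, q_i^{\,1-\alpha}.
```

A transition ρ → ρ′ is then possible under thermal operations only if F_α(ρ) ≥ F_α(ρ′) for all α ≥ 0; the ordinary second law corresponds to the α → 1 member of this family, where D_α reduces to the relative entropy.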
Geometric constraints during epithelial jamming
NASA Astrophysics Data System (ADS)
Atia, Lior; Bi, Dapeng; Sharma, Yasha; Mitchel, Jennifer A.; Gweon, Bomi; Koehler, Stephan A.; DeCamp, Stephen J.; Lan, Bo; Kim, Jae Hun; Hirsch, Rebecca; Pegoraro, Adrian F.; Lee, Kyu Ha; Starr, Jacqueline R.; Weitz, David A.; Martin, Adam C.; Park, Jin-Ah; Butler, James P.; Fredberg, Jeffrey J.
2018-06-01
As an injury heals, an embryo develops or a carcinoma spreads, epithelial cells systematically change their shape. In each of these processes cell shape is studied extensively whereas variability of shape from cell to cell is regarded most often as biological noise. But where do cell shape and its variability come from? Here we report that cell shape and shape variability are mutually constrained through a relationship that is purely geometrical. That relationship is shown to govern processes as diverse as maturation of the pseudostratified bronchial epithelial layer cultured from non-asthmatic or asthmatic donors, and formation of the ventral furrow in the Drosophila embryo. Across these and other epithelial systems, shape variability collapses to a family of distributions that is common to all. That distribution, in turn, is accounted for by a mechanistic theory of cell-cell interaction, showing that cell shape becomes progressively less elongated and less variable as the layer becomes progressively more jammed. These findings suggest a connection between jamming and geometry that spans living organisms and inert jammed systems, and thus transcends system details. Although molecular events are needed for any complete theory of cell shape and cell packing, observations point to the hypothesis that jamming behaviour at larger scales of organization sets overriding geometric constraints.
Revelations of X-ray spectral analysis of the enigmatic black hole binary GRS 1915+105
NASA Astrophysics Data System (ADS)
Peris, Charith; Remillard, Ronald A.; Steiner, James; Vrtilek, Saeqa Dil; Varniere, Peggy; Rodriguez, Jerome; Pooley, Guy
2016-01-01
Of the black hole binaries discovered thus far, GRS 1915+105 stands out as an exceptional source primarily due to its wild X-ray variability, the diversity of which has not been replicated in any other stellar-mass black hole. Although extreme variability is commonplace in its light-curve, about half of the observations of GRS1915+105 show fairly steady X-ray intensity. We report on the X-ray spectral behavior within these steady observations. Our work is based on a vast RXTE/PCA data set obtained on GRS 1915+105 during the course of its entire mission and 10 years of radio data from the Ryle Telescope, which overlap the X-ray data. We find that the steady observations within the X-ray data set naturally separate into two regions in a color-color diagram, which we refer to as steady-soft and steady-hard. GRS 1915+105 displays significant curvature in the Comptonization component within the PCA band pass suggesting significantly heating from a hot disk present in all states. A new Comptonization model 'simplcut' was developed in order to model this curvature to best effect. A majority of the steady-soft observations display a roughly constant inner radius; remarkably reminiscent of canonical soft state black hole binaries. In contrast, the steady-hard observations display a growing disk truncation that is correlated to the mass accretion rate through the disk, which suggests a magnetically truncated disk. A comparison of X-ray model parameters to the canonical state definitions show that almost all steady-soft observations match the criteria of either thermal or steep power law state, while the thermal state observations dominate the constant radius branch. A large portion (80%) of the steady-hard observations matches the hard state criteria when the disk fraction constraint is neglected. These results suggest that within the complexity of this source is a simpler underlying basis of states, which map to those observed in canonical black hole binaries. 
When represented in a color-color diagram, the state assignments appear to map to the "A, B, and C" regions (Belloni et al. 2000) that govern fast variability cycles in GRS 1915+105, demonstrating a compelling link between short and long time scales in its phenomenology.
Constraints on Stress Components at the Internal Singular Point of an Elastic Compound Structure
NASA Astrophysics Data System (ADS)
Pestrenin, V. M.; Pestrenina, I. V.
2017-03-01
The classical analytical and numerical methods for investigating the stress-strain state (SSS) in the vicinity of a singular point consider the point as a mathematical one (having no linear dimensions). The reliability of the solution obtained by such methods is valid only outside a small vicinity of the singular point, because the macroscopic equations become incorrect and microscopic ones have to be used to describe the SSS in this vicinity. Also, it is impossible to set constraints or to formulate solutions in stress-strain terms for a mathematical point. These problems do not arise if the singular point is identified with a representative volume of the material of the structure studied. In the authors' opinion, this approach is consistent with the postulates of continuum mechanics. In this case, the formulation of constraints at a singular point and their investigation becomes an independent problem of mechanics for bodies with singularities. This method was used to explore constraints at an internal singular point (representative volume) of a compound wedge and a compound rib. It is shown that, in addition to the constraints given in the classical approach, there are also constraints depending on the macroscopic parameters of the constituent materials. These constraints turn the problems of deformable bodies with an internal singular point into nonclassical ones. Combinations of material parameters determine the number of additional constraints and the critical stress state at the singular point. Results of this research can be used in the mechanics of composite materials and fracture mechanics and in studying stress concentrations in composite structural elements.
Application of constraint-based satellite mission planning model in forest fire monitoring
NASA Astrophysics Data System (ADS)
Guo, Bingjun; Wang, Hongfei; Wu, Peng
2017-10-01
In this paper, a constraint-based satellite mission planning model is established based on the idea of constraint satisfaction. It links targets, requests, observations, satellites, payloads, and other elements through constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model, producing the observation plan and the data-transmission plan. Two simulation experiments are designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model has good emergency-response capability: efficient and reasonable plans can be worked out to meet users' needs in complex cases involving multiple payloads, multiple targets, and variable priorities.
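The priority-driven greedy step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's planner: the `Request` fields, single-payload assumption, and function names are all hypothetical, and real mission planning would add slewing, energy, and downlink constraints.

```python
# Hypothetical sketch of a greedy observation-planning step.
# Names (Request, plan_observations) are illustrative, not from the paper.
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    priority: int    # higher = more urgent (e.g. an active fire)
    window: tuple    # (start, end) visibility window, in minutes
    duration: int    # required observation time, in minutes

def plan_observations(requests):
    """Greedy: serve the highest-priority requests first; a request is
    accepted only if its visibility window still contains a free,
    non-overlapping slot on the (single) payload."""
    busy = []   # accepted (start, end) intervals on the payload
    plan = []
    for r in sorted(requests, key=lambda r: -r.priority):
        start = r.window[0]
        while start + r.duration <= r.window[1]:
            slot = (start, start + r.duration)
            conflicts = [b for b in busy
                         if not (slot[1] <= b[0] or slot[0] >= b[1])]
            if not conflicts:
                busy.append(slot)
                plan.append((r.name, slot))
                break
            # jump past the earliest conflicting interval and retry
            start = min(b[1] for b in conflicts)
    return plan
```

A high-priority emergency request thus preempts routine monitoring when their windows overlap, which matches the variable-priority behavior the experiments test.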
Employing Sensitivity Derivatives for Robust Optimization under Uncertainty in CFD
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Putko, Michele M.; Taylor, Arthur C., III
2004-01-01
A robust optimization is demonstrated on a two-dimensional inviscid airfoil problem in subsonic flow. Given uncertainties in statistically independent, random, normally distributed flow parameters (input variables), an approximate first-order statistical moment method is employed to represent the Computational Fluid Dynamics (CFD) code outputs as expected values with variances. These output quantities are used to form the objective function and constraints. The constraints are cast in probabilistic terms; that is, the probability that a constraint is satisfied is greater than or equal to some desired target probability. Gradient-based robust optimization of this stochastic problem is accomplished through the use of both first- and second-order sensitivity derivatives. For each robust optimization, the effects of increasing both the input standard deviations and the target probability of constraint satisfaction are demonstrated. This method provides a means for incorporating uncertainty when considering small deviations from input mean values.
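A first-order moment method of the kind described above can be sketched in a few lines. This is a generic illustration under the abstract's assumptions (independent normal inputs, small deviations from the mean); the function names and the quantile-based constraint surrogate are mine, not the paper's.

```python
# Minimal sketch of a first-order (mean-value) statistical moment method.
# Assumes independent, normally distributed inputs with small deviations.
import math

def moments_first_order(f, grad_f, mu, sigma):
    """Approximate mean and standard deviation of f(X):
    E[f] ~= f(mu),  Var[f] ~= sum_i (df/dx_i)^2 * sigma_i^2."""
    mean = f(mu)
    var = sum(g * g * s * s for g, s in zip(grad_f(mu), sigma))
    return mean, math.sqrt(var)

def prob_constraint(g_mean, g_std, k):
    """Deterministic surrogate for P(g <= 0) >= target:
    require g_mean + k * g_std <= 0, where k is the standard-normal
    quantile of the target probability (e.g. k = 3 for ~99.87%)."""
    return g_mean + k * g_std <= 0.0
```

Raising the input standard deviations inflates `g_std`, and raising the target probability inflates `k`; either change tightens the surrogate constraint, which is the trade-off the robust optimizations in the paper explore.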
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gandhi, P.; Dhillon, V. S.; Durant, M.
2010-07-15
In a fast multi-wavelength timing study of black hole X-ray binaries (BHBs), we have discovered correlated optical and X-ray variability in the low/hard state of two sources: GX 339-4 and SWIFT J1753.5-0127. After XTE J1118+480, these are the only BHBs currently known to show rapid (sub-second) aperiodic optical flickering. Our simultaneous VLT/ULTRACAM and RXTE data reveal intriguing patterns with characteristic peaks, dips, and lags down to very short timescales. Simple linear reprocessing models can be ruled out as the origin of the rapid, aperiodic optical power in both sources. A magnetic energy release model with fast interactions between the disk, jet, and corona can explain the complex correlation patterns. We also show that in both the optical and X-ray light curves, the absolute source variability r.m.s. amplitude increases linearly with flux, and that the flares have a log-normal distribution. The implication is that variability at both wavelengths is not due to local fluctuations alone, but rather arises as a result of the coupling of perturbations over a wide range of radii and timescales. These 'optical and X-ray rms-flux relations' thus provide new constraints connecting the outer and inner parts of the accretion flow, and the jet.
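The rms-flux measurement the abstract refers to is conceptually simple: segment the light curve, and for each segment compare the mean flux with the r.m.s. of the fluctuations. The sketch below is a generic illustration (function and variable names are mine); a linear rms-flux relation with roughly constant fractional r.m.s. is the signature of multiplicative, log-normal variability discussed above.

```python
# Minimal sketch of an rms-flux measurement on a 1-D light curve.
# Illustrative only; real analyses work in flux bins with many segments
# and subtract the measurement-noise contribution from the rms.
def rms_flux(lightcurve, seglen):
    """Split the light curve into consecutive segments of length seglen
    and return (mean flux, rms) pairs, one per segment."""
    pairs = []
    for i in range(0, len(lightcurve) - seglen + 1, seglen):
        seg = lightcurve[i:i + seglen]
        m = sum(seg) / seglen
        rms = (sum((x - m) ** 2 for x in seg) / seglen) ** 0.5
        pairs.append((m, rms))
    return pairs
```

If the rms in each segment scales with the segment's mean flux (constant `rms / mean`), the relation is linear, as reported for both the optical and X-ray light curves.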
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, L.; Rao, N.D.
1983-04-01
This paper presents a new method for optimal dispatch of real and reactive power generation, based on a Cartesian-coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by a reduced-gradient technique and a penalty-factor approach, respectively. The advantage of this classification is the reduction in the size of the equality-constraint model, leading to lower storage requirements. The rectangular-coordinate formulation results in an exact equality-constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and needs to be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new, efficient procedure. A natural outcome of these features is a solution of the economic dispatch problem that is faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster, and requires 20-30 percent less storage than the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model, and lower computational requirements, the method developed in this paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.
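The penalty-factor treatment of a functional inequality constraint can be sketched generically: a violated constraint g(x) <= 0 is folded into the cost as a quadratic penalty, and an ordinary gradient step then pushes the iterate back toward the feasible region. This is a schematic illustration under my own assumptions, not the paper's dispatch algorithm (which couples this with a reduced gradient over the exact equality-constraint model).

```python
# Schematic sketch of a penalty-factor approach for a functional
# inequality constraint g(x) <= 0 inside a gradient-based step.
# Function names and the toy problem below are illustrative.
def penalized_cost(cost, grad_cost, g, grad_g, w):
    """Augment cost with w * max(0, g(x))**2 so that violated
    constraints contribute a restoring gradient."""
    def f(x):
        v = g(x)
        return cost(x) + (w * v * v if v > 0 else 0.0)
    def df(x):
        v = g(x)
        base = grad_cost(x)
        if v > 0:
            return [b + 2.0 * w * v * gg for b, gg in zip(base, grad_g(x))]
        return base
    return f, df

def descend(f, df, x, step=0.004, iters=1000):
    """Plain fixed-step gradient descent on the penalized cost."""
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, df(x))]
    return x
```

With a finite penalty weight the minimizer sits slightly outside the feasible set (for cost (x-2)^2 with g = x-1 and w = 100, at x = 204/202), which is why penalty methods approximately, rather than exactly, enforce functional constraints.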
ERIC Educational Resources Information Center
Trenkic, Danijela
2007-01-01
This article addresses the debate on the causes of variability in production of second language functional morphology. It reports a study on article production by first language (L1) Serbian/second language (L2) English learners and compares their behaviour to that of a Turkish learner of English, reported in Goad and White (2004). In particular,…
The algebra of supertraces for 2+1 super de Sitter gravity
NASA Technical Reports Server (NTRS)
Urrutia, L. F.; Waelbroeck, H.; Zertuche, F.
1993-01-01
The algebra of the observables for 2+1 super de Sitter gravity is calculated for a genus-one spatial surface. The algebra turns out to be an infinite Lie algebra subject to nonlinear constraints. The constraints are solved explicitly in terms of five independent complex supertraces. These variables are the true degrees of freedom of the system, and their quantized algebra generates a new structure, referred to as a 'central extension' of the quantum algebra SU(2)_q.
Boundary layers in cataclysmic variables - The HEAO 1 X-ray constraints
NASA Technical Reports Server (NTRS)
Jensen, K. A.
1984-01-01
The predictions of the boundary layer model for the X-ray emission from novae are summarized, and a discrepancy between the X-ray observations and theory is found. Constraints on the nature of the boundary layers in novae, based on the lack of nova detections in the HEAO-1 soft X-ray survey, are provided. Temperatures and column densities for optically thick boundary layers in novae are estimated. Previously announced in STAR as N84-13046.
Design of helicopter rotors to noise constraints
NASA Technical Reports Server (NTRS)
Schaeffer, E. G.; Sternfeld, H., Jr.
1978-01-01
Results of the initial phase of a research project to study the design constraints of helicopter noise are presented. These include the calculation of nonimpulsive rotor harmonic and broadband hover noise spectra over a wide range of rotor design variables, and the sensitivity of perceived noise level (PNL) to changes in rotor design parameters. The prediction methodology correlated well with measured whirl-tower data. Application of the predictions to variations in rotor design showed tip speed and thrust as having the greatest effect on PNL.