Sample records for constraint-based modeling approach

  1. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    PubMed

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In a second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategy, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparison of the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques over classical approaches fur survival data. We compare svm-based survival models based on ranking constraints, based on regression constraints and models based on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have a significant different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints above models only based on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. 
Our experiments show a comparable performance for methods including only regression or both regression and ranking constraints on clinical data. On high dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  3. Model-based metabolism design: constraints for kinetic and stoichiometric models

    PubMed Central

    Stalidzans, Egils; Seiman, Andrus; Peebo, Karl; Komasilovs, Vitalijs; Pentjuss, Agris

    2018-01-01

    The implementation of model-based designs in metabolic engineering and synthetic biology may fail. One of the reasons for this failure is that only a part of the real-world complexity is included in models. Still, some knowledge can be simplified and taken into account in the form of optimization constraints to improve the feasibility of model-based designs of metabolic pathways in organisms. Some constraints (mass balance, energy balance, and steady-state assumption) serve as a basis for many modelling approaches. There are others (total enzyme activity constraint and homeostatic constraint) proposed decades ago, but which are frequently ignored in design development. Several new approaches of cellular analysis have made possible the application of constraints like cell size, surface, and resource balance. Constraints for kinetic and stoichiometric models are grouped according to their applicability preconditions in (1) general constraints, (2) organism-level constraints, and (3) experiment-level constraints. General constraints are universal and are applicable for any system. Organism-level constraints are applicable for biological systems and usually are organism-specific, but these constraints can be applied without information about experimental conditions. To apply experimental-level constraints, peculiarities of the organism and the experimental set-up have to be taken into account to calculate the values of constraints. The limitations of applicability of particular constraints for kinetic and stoichiometric models are addressed. PMID:29472367

  4. Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Li, Jun

    In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model for investors’ taking into three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lots constraints are considered in the proposed model. To overcome the difficulty of evaluation a large set of efficient solutions and selection of the best one on non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromised solution for the proposed constrained multiobjective portfolio selection model.

  5. Constraints based analysis of extended cybernetic models.

    PubMed

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss on its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  6. A proof for loop-law constraints in stoichiometric metabolic networks

    PubMed Central

    2012-01-01

    Background Constraint-based modeling is increasingly employed for metabolic network analysis. Its underlying assumption is that natural metabolic phenotypes can be predicted by adding physicochemical constraints to remove unrealistic metabolic flux solutions. The loopless-COBRA approach provides an additional constraint that eliminates thermodynamically infeasible internal cycles (or loops) from the space of solutions. This allows the prediction of flux solutions that are more consistent with experimental data. However, it is not clear if this approach over-constrains the models by removing non-loop solutions as well. Results Here we apply Gordan’s theorem from linear algebra to prove for the first time that the constraints added in loopless-COBRA do not over-constrain the problem beyond the elimination of the loops themselves. Conclusions The loopless-COBRA constraints can be reliably applied. Furthermore, this proof may be adapted to evaluate the theoretical soundness for other methods in constraint-based modeling. PMID:23146116

  7. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).

  8. Enforcement of entailment constraints in distributed service-based business processes.

    PubMed

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation-level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several ten thousand logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement. 
The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.

  9. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  10. Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model.

    PubMed

    Fang, Yilin; Scheibe, Timothy D; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E; Lovley, Derek R

    2011-03-25

    The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with IBM ILOG CPLEX linear optimizer base system (C library). The models were integrated with BABEL, a language interoperability tool. 
The modeling system is designed in such a way that constraint-based models targeting different microorganisms or competing organism communities can be easily plugged into the system. Constraint-based modeling is very costly given the size of a genome-scale reaction network. To save computation time, a binary tree is traversed to examine the concentration and solution pool generated during the simulation in order to decide whether the constraint-based model should be called. We also show preliminary results from the integrated model including a comparison of the direct and indirect coupling approaches and evaluated the ability of the approach to simulate field experiment. Published by Elsevier B.V.

  11. Constraint-Based Modeling: From Cognitive Theory to Computer Tutoring--and Back Again

    ERIC Educational Resources Information Center

    Ohlsson, Stellan

    2016-01-01

    The ideas behind the constraint-based modeling (CBM) approach to the design of intelligent tutoring systems (ITSs) grew out of attempts in the 1980's to clarify how declarative and procedural knowledge interact during skill acquisition. The learning theory that underpins CBM was based on two conceptual innovations. The first innovation was to…

  12. Robust Synchronization Models for Presentation System Using SMIL-Driven Approach

    ERIC Educational Resources Information Center

    Asnawi, Rustam; Ahmad, Wan Fatimah Wan; Rambli, Dayang Rohaya Awang

    2013-01-01

    Current common Presentation System (PS) models are slide based oriented and lack synchronization analysis either with temporal or spatial constraints. Such models, in fact, tend to lead to synchronization problems, particularly on parallel synchronization with spatial constraints between multimedia element presentations. However, parallel…

  13. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data is then used in the approach proposed. Due to typical measurement data contaminants that are always included in any test, the measured data is further processed to remove contaminants and is then used in the proposed approach. The final case using improved data with the reduced order, test verified components is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique with its strengths and weaknesses are discussed.

  14. Complementary Constrains on Component based Multiphase Flow Problems, Should It Be Implemented Locally or Globally?

    NASA Astrophysics Data System (ADS)

    Shao, H.; Huang, Y.; Kolditz, O.

    2015-12-01

    Multiphase flow problems are numerically difficult to solve, as it often contains nonlinear Phase transition phenomena A conventional technique is to introduce the complementarity constraints where fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities. They can be then numerically handled by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints based on persistent primary variables formulation[4] are implemented and investigated. The first approach proposed by Marchand et.al[1] is using "local complementary constraints", i.e. coupling the constraints with the local constitutive equations. The second approach[2],[3] , namely the "global complementary constrains", applies the constraints globally with the mass conservation equation. We will discuss how these two approaches are applied to solve non-isothermal componential multiphase flow problem with the phase change phenomenon. Several benchmarks will be presented for investigating the overall numerical performance of different approaches. The advantages and disadvantages of different models will also be concluded. References[1] E.Marchand, T.Mueller and P.Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model, Computational Geosciences 17(2): 431-442, (2013). [2] A. Lauser, C. Hager, R. Helmig, B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Water Resour., 34,(2011), 957-966. [3] J. Jaffré, and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media. 82, (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. 
Two-phase partially miscible flow and transport modeling in porous media : application to gas migration in a nuclear waste repository, Comp.Geosciences. (2009), Volume 13, Number 1, 29-42.

  15. Constraint reasoning in deep biomedical models.

    PubMed

    Cruz, Jorge; Barahona, Pedro

    2005-05-01

    Deep biomedical models are often expressed by means of differential equations. Despite their expressive power, they are difficult to reason about and make decisions, given their non-linearity and the important effects that the uncertainty on data may cause. The objective of this work is to propose a constraint reasoning framework to support safe decisions based on deep biomedical models. The methods used in our approach include the generic constraint propagation techniques for reducing the bounds of uncertainty of the numerical variables complemented with new constraint reasoning techniques that we developed to handle differential equations. The results of our approach are illustrated in biomedical models for the diagnosis of diabetes, tuning of drug design and epidemiology where it was a valuable decision-supporting tool notwithstanding the uncertainty on data. The main conclusion that follows from the results is that, in biomedical decision support, constraint reasoning may be a worthwhile alternative to traditional simulation methods, especially when safe decisions are required.

  16. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.

  17. Acceleration constraints in modeling and control of nonholonomic systems

    NASA Astrophysics Data System (ADS)

    Bajodah, Abdulrahman H.

    2003-10-01

    Acceleration constraints are used to enhance modeling techniques for dynamical systems. In particular, Kane's equations of motion subjected to bilateral constraints, unilateral constraints, and servo-constraints are modified by utilizing acceleration constraints for the purpose of simplifying the equations and increasing their applicability. The tangential properties of Kane's method provide relationships between the holonomic and the nonholonomic partial velocities, and hence allow one to describe nonholonomic generalized active and inertia forces in terms of their holonomic counterparts, i.e., those which correspond to the system without constraints. Therefore, based on the modeling process objectives, the holonomic and the nonholonomic vector entities in Kane's approach are used interchangeably to model holonomic and nonholonomic systems. When the holonomic partial velocities are used to model nonholonomic systems, the resulting models are full-order (also called nonminimal or unreduced) and separated in accelerations. As a consequence, they are readily integrable and can be used for generic system analysis. Other related topics are constraint forces, numerical stability of the nonminimal equations of motion, and numerical constraint stabilization. Two types of unilateral constraints considered are impulsive and friction constraints. Impulsive constraints are modeled by means of a continuous-in-velocities and impulse-momentum approaches. In controlled motion, the acceleration form of constraints is utilized with the Moore-Penrose generalized inverse of the corresponding constraint matrix to solve for the inverse dynamics of servo-constraints, and for the redundancy resolution of overactuated manipulators. If control variables are involved in the algebraic constraint equations, then these tools are used to modify the controlled equations of motion in order to facilitate control system design. An illustrative example of spacecraft stabilization is presented.

  18. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    PubMed

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  19. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    PubMed Central

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352

  20. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials.

    PubMed

    Probst, Yasmine; Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-07-28

    Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although this was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. 
Automated modeling tools can streamline the modeling process for dietary intervention trials, ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve the desired results. The DMT was found to be a valid automated tool, producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used.
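
    The core of such a dietary modeling tool can be sketched as a small constraint optimization problem: choose daily servings of each food group so that macronutrient targets are met. The food composition values, food groups, and targets below are illustrative placeholders, not data from the DMT study, and a linear solver stands in for the tool's nonlinear formulation.

```python
# Sketch: pick servings per food group to satisfy macronutrient minimums,
# minimizing total servings. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

foods = ["grains", "lean_meat", "oils"]
# grams of (carbohydrate, protein, fat) per serving of each food group
composition = np.array([
    [15.0, 3.0, 1.0],    # grains
    [0.0, 25.0, 3.0],    # lean meat
    [0.0, 0.0, 14.0],    # oils
])
targets = np.array([200.0, 80.0, 60.0])  # minimum daily grams of each macronutrient

res = linprog(
    c=np.ones(len(foods)),     # minimize total servings
    A_ub=-composition.T,       # -(composition.T @ servings) <= -targets
    b_ub=-targets,
    bounds=[(0, 40)] * len(foods),
)
print(dict(zip(foods, res.x.round(2))))
```

A real tool would add palatability bounds per food group and the nonlinear energy-requirement terms the paper describes; the constraint structure stays the same.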

  1. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials

    PubMed Central

    Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-01-01

    Background Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. Objective This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Methods Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. Results The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although this was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, was being addressed. 
Conclusions Automated modeling tools can streamline the modeling process for dietary intervention trials, ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve the desired results. The DMT was found to be a valid automated tool, producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used. PMID:27471104

  2. Concepts, challenges, and successes in modeling thermodynamics of metabolism.

    PubMed

    Cannon, William R

    2014-01-01

    The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.
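
    The state-based view described above rests on a standard relation: the driving force of a reaction follows from chemical potentials as ΔG = ΔG° + RT ln Q, and the forward/reverse mass-action likelihood ratio is exp(−ΔG/RT). The numbers below are illustrative, not taken from the article.

```python
# Illustrative calculation: reaction driving force from an equilibrium
# constant Keq and a mass-action ratio Q, and the resulting
# forward/reverse likelihood ratio.
import math

R = 8.314462618e-3   # gas constant, kJ/(mol*K)
T = 298.15           # temperature, K

def delta_g(keq: float, q: float) -> float:
    """Reaction free energy (kJ/mol) at mass-action ratio q, given Keq."""
    dg0 = -R * T * math.log(keq)      # standard free energy from Keq
    return dg0 + R * T * math.log(q)

def likelihood_ratio(keq: float, q: float) -> float:
    """Forward/reverse likelihood ratio; > 1 means net forward flux."""
    return math.exp(-delta_g(keq, q) / (R * T))

print(delta_g(100.0, 100.0))         # at q == Keq the driving force vanishes
print(likelihood_ratio(100.0, 1.0))  # far from equilibrium: strongly forward-driven
```

The appeal noted in the abstract is visible here: only equilibrium data (Keq, i.e. chemical potentials) and concentrations enter, with no rate constants required.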

  3. A Model-Based Expert System for Space Power Distribution Diagnostics

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Schlegelmilch, Richard F.

    1994-01-01

    When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.
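
    The constraint-suspension idea can be sketched in a few lines: a component is a fault candidate if suspending its constraint makes the remaining model consistent with the observations. The series circuit below is a generic illustration, not the Space Station test-bed model or the Marple package.

```python
# Diagnosis by constraint suspension on a toy series circuit:
# source V, resistors R1 = 2 ohms and R2 = 3 ohms, loop current I.
TOL = 1e-9

# observed quantities (a shorted R2 would explain these readings)
obs = {"V": 10.0, "v1": 10.0, "v2": 0.0, "I": 5.0}

# each component contributes one constraint over the observed variables
constraints = {
    "R1": lambda m: abs(m["v1"] - 2.0 * m["I"]) < TOL,       # Ohm's law for R1
    "R2": lambda m: abs(m["v2"] - 3.0 * m["I"]) < TOL,       # Ohm's law for R2
    "KVL": lambda m: abs(m["v1"] + m["v2"] - m["V"]) < TOL,  # loop voltage law
}

def fault_candidates(observations, model):
    """Components whose suspension restores consistency."""
    out = []
    for suspect in model:
        rest = [chk for name, chk in model.items() if name != suspect]
        if all(chk(observations) for chk in rest):
            out.append(suspect)
    return out

print(fault_candidates(obs, constraints))  # only R2's constraint conflicts
```

Suspending R2 leaves a consistent model, so R2 is isolated as the fault; suspending R1 or the loop law does not remove the conflict.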

  4. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former, it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the latter the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. 
The bi-directional connections and equivalence conditions between the traction- and stress-based approaches are classified. Closed-form results under plane stress condition are also given. A generic failure criterion of either elliptic, parabolic or hyperbolic type is analyzed in a unified manner, with the classical von Mises (J2), Drucker-Prager, Mohr-Coulomb and many other frequently employed criteria recovered as its particular cases.

  5. Courses of action for effects based operations using evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Haider, Sajjad; Levis, Alexander H.

    2006-05-01

    This paper presents an Evolutionary Algorithms (EAs) based approach to identify effective courses of action (COAs) in Effects Based Operations. The approach uses Timed Influence Nets (TINs) as the underlying mathematical model to capture a dynamic uncertain situation. TINs provide a concise graph-theoretic probabilistic approach to specify the cause-and-effect relationships that exist among the variables of interest (actions, desired effects, and other uncertain events) in a problem domain. The purpose of building these TIN models is to identify and analyze several alternative courses of action. The current practice is to use trial-and-error techniques, which are not only labor intensive but also produce suboptimal results and cannot model constraints among actionable events. The EA-based approach presented in this paper aims to overcome these limitations. The approach generates multiple COAs that are close in terms of achieving the desired effect. The purpose of generating multiple COAs is to give several alternatives to a decision maker. Moreover, the alternative COAs can be generalized based on the relationships that exist among the actions and their execution timings. The approach also allows a system analyst to capture certain types of constraints among actionable events.
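
    The evolutionary search over courses of action can be sketched as follows. Each COA is a bit vector of actionable events, a precedence constraint among actions is enforced by penalty, and an elitist genetic algorithm maximizes a hypothetical effect score; the TIN probability model itself is not reproduced here, and all weights are invented for illustration.

```python
# Toy elitist GA over candidate courses of action (bit vectors).
import random

random.seed(7)
WEIGHTS = [3.0, 1.0, 4.0, 2.0, 2.5]   # hypothetical contribution of each action

def fitness(coa):
    score = sum(w for w, on in zip(WEIGHTS, coa) if on)
    if coa[2] and not coa[0]:          # constraint: action 2 requires action 0
        score -= 10.0                  # penalty for a constraint violation
    return score

def evolve(pop_size=20, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in WEIGHTS] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                                    # elitism
        while len(nxt) < pop_size:
            # tournament selection, one-point crossover, bit-flip mutation
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(WEIGHTS))
            child = [bit ^ (random.random() < p_mut) for bit in a[:cut] + b[cut:]]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

best_coa = evolve()
print(best_coa, fitness(best_coa))
```

Keeping several high-fitness individuals from the final population, rather than only the best, would yield the multiple alternative COAs the paper aims for.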

  6. Constraint Based Modeling Going Multicellular.

    PubMed

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches.

  7. A systematic approach for finding the objective function and active constraints for dynamic flux balance analysis.

    PubMed

    Nikdel, Ali; Braatz, Richard D; Budman, Hector M

    2018-05-01

    Dynamic flux balance analysis (DFBA) has become an instrumental modeling tool for describing the dynamic behavior of bioprocesses. DFBA involves the maximization of a biologically meaningful objective subject to kinetic constraints on the rate of consumption/production of metabolites. In this paper, we propose a systematic data-based approach for finding both the biological objective function and a minimum set of active constraints necessary for matching the model predictions to the experimental data. The proposed algorithm accounts for the errors in the experiments and eliminates the need for ad hoc choices of objective function and constraints as done in previous studies. The method is illustrated for two cases: (1) for in silico (simulated) data generated by a mathematical model of Escherichia coli and (2) for actual experimental data collected from the batch fermentation of Bordetella pertussis (whooping cough).
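
    The static building block that DFBA solves at each time step is an ordinary flux balance problem: maximize an objective flux subject to steady-state stoichiometry S·v = 0 and flux bounds. The three-reaction chain below is an illustrative toy network, not the E. coli or B. pertussis model from the paper.

```python
# Minimal flux balance analysis: maximize the "biomass" flux of a
# toy linear pathway, uptake -> A -> B -> biomass.
import numpy as np
from scipy.optimize import linprog

# columns: v0 (uptake -> A), v1 (A -> B), v2 (B -> biomass)
S = np.array([
    [1.0, -1.0, 0.0],   # balance of internal metabolite A
    [0.0, 1.0, -1.0],   # balance of internal metabolite B
])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]  # uptake capped at 10

res = linprog(c=[0.0, 0.0, -1.0],        # maximize v2 == minimize -v2
              A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("growth flux:", res.x[2])          # limited entirely by the uptake bound
```

In the paper's setting, the question is which such bounds are actually active and what the objective vector c should be, both inferred from data rather than assumed.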

  8. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
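
    The two stages of the 2SLS/IV estimator can be sketched on simulated data: an instrument correlated with the endogenous regressor but not with the disturbance purges the endogeneity, so OLS on the first-stage fitted values recovers the structural coefficient that plain OLS misses. This shows only the basic estimator, not the paper's system-of-equations extension with equality constraints; all data are simulated.

```python
# Two-stage least squares on simulated endogenous data.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)                  # instrument: drives x, unrelated to u
u = rng.normal(size=n)                  # confounder entering both x and y
x = 0.9 * z + u
y = 2.0 * x + u + rng.normal(size=n)    # true structural coefficient is 2

def ols(design, target):
    return np.linalg.lstsq(design, target, rcond=None)[0]

Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)                   # stage 1: project x onto the instrument
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)[1]   # stage 2
beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]        # biased by u
print(beta_2sls, beta_ols)
```

The OLS slope is pulled away from 2 by the confounder, while the 2SLS slope stays close to it; the paper's contribution is imposing cross-equation equality constraints within this framework.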

  9. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated on the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal control with control and state constraints.
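
    The transcription step mentioned above, turning an optimal control problem into a nonlinear program, can be shown on a toy integrator. Here the NLP is handed to a generic SLSQP solver rather than an adaptive critic network, and the dynamics, horizon, and bounds are invented for illustration.

```python
# Direct transcription: minimize control energy for x[k+1] = x[k] + u[k],
# x[0] = 0, subject to control bounds and a terminal state constraint.
import numpy as np
from scipy.optimize import minimize

N = 5          # number of control stages
TARGET = 1.0   # required terminal state x[N]

def cost(u):
    return float(np.sum(u ** 2))        # control-energy objective

def terminal_constraint(u):
    return float(np.sum(u)) - TARGET    # x[N] - TARGET == 0 for the integrator

res = minimize(cost, x0=np.zeros(N), method="SLSQP",
               bounds=[(-0.5, 0.5)] * N,            # control constraints
               constraints=[{"type": "eq", "fun": terminal_constraint}])
print(res.x)   # the energy-optimal plan spreads the effort evenly, u[k] = 1/N
```

For this convex problem the NLP solution matches the analytic one (equal controls of 1/N); the adaptive critic architecture targets the harder nonlinear cases such as the nitrogen cycle model.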

  10. Computational Modeling of Human Metabolism and Its Application to Systems Biomedicine.

    PubMed

    Aurich, Maike K; Thiele, Ines

    2016-01-01

    Modern high-throughput techniques offer immense opportunities to investigate whole-systems behavior, such as those underlying human diseases. However, the complexity of the data presents challenges in interpretation, and new avenues are needed to address the complexity of both diseases and data. Constraint-based modeling is one formalism applied in systems biology. It relies on a genome-scale reconstruction that captures extensive biochemical knowledge regarding an organism. The human genome-scale metabolic reconstruction is increasingly used to understand normal cellular and disease states because metabolism is an important factor in many human diseases. The application of the human genome-scale reconstruction ranges from mere querying of the model as a knowledge base to studies that take advantage of the model's topology and, most notably, to functional predictions based on cell- and condition-specific metabolic models built from omics data. An increasing number and diversity of biomedical questions are being addressed using constraint-based modeling and metabolic models. One of the most successful biomedical applications to date is cancer metabolism, but constraint-based modeling also holds great potential for inborn errors of metabolism or obesity. In addition, it offers great prospects for individualized approaches to diagnostics and the design of disease prevention and intervention strategies. Metabolic models support this endeavor by providing easy access to complex high-throughput datasets. Personalized metabolic models have been introduced. Finally, constraint-based modeling can be used to model whole-body metabolism, which will enable the elucidation of metabolic interactions between organs and disturbances of these interactions as either causes or consequences of metabolic diseases. This chapter introduces constraint-based modeling and describes some of its contributions to systems biomedicine.

  11. Feasibility of employing model-based optimization of pulse amplitude and electrode distance for effective tumor electropermeabilization.

    PubMed

    Sel, Davorka; Lebar, Alenka Macek; Miklavcic, Damijan

    2007-05-01

    In electrochemotherapy (ECT), electropermeabilization parameters (pulse amplitude, electrode setup) need to be customized in order to expose the whole tumor to electric field intensities above the permeabilizing threshold to achieve effective ECT. In this paper, we present a model-based optimization approach toward determination of optimal electropermeabilization parameters for effective ECT. The optimization is carried out by minimizing the difference between the permeabilization threshold and electric field intensities computed by a finite element model in selected points of the tumor. We examined the feasibility of model-based optimization of electropermeabilization parameters on a model geometry generated from computer tomography images, representing brain tissue with tumor. The continuous parameter subject to optimization was pulse amplitude. The distance between electrode pairs was optimized as a discrete parameter. The optimization also considered the pulse generator constraints on voltage and current. During optimization the two constraints were reached, preventing the exposure of the entire volume of the tumor to electric field intensities above the permeabilizing threshold. However, despite the fact that with the particular needle array holder and pulse generator the entire volume of the tumor was not permeabilized, the maximal extent of permeabilization for the particular case (electrodes, tissue) was determined with the proposed approach. The model-based optimization approach could also be used for electro-gene transfer, where electric field intensities should be distributed between the permeabilizing threshold and the irreversible threshold, the latter causing tissue necrosis. This can be obtained by adding constraints on maximum electric field intensity in the optimization procedure.

  12. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  13. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.

  14. Petri Net controller synthesis based on decomposed manufacturing models.

    PubMed

    Dideban, Abbas; Zeraatkar, Hashem

    2018-06-01

    Applying supervisory control theory to real systems in modeling tools such as Petri nets (PN) has become challenging in recent years due to the large number of states in the automata models and the presence of uncontrollable events. Uncontrollable events give rise to forbidden states, which can be removed by enforcing linear constraints. Although many methods have been proposed to reduce these constraints, enforcing them on a large-scale system is difficult and complicated. This paper proposes a new method for controller synthesis based on PN modeling. In this approach, the original PN model is broken down into smaller models, which reduces the computational cost significantly. Using this method, it is easy to reduce the constraints and enforce them on a Petri net model. Results obtained with the proposed method on PN models demonstrate effective controller synthesis for large-scale systems. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
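
    The objects this paper works with can be sketched in a few lines: a Petri net's reachable markings, and the forbidden states singled out by a linear constraint of the form l·m ≤ b. The three-place cycle below is a generic illustration, not the paper's decomposition method.

```python
# Enumerate reachable markings of a tiny Petri net, then flag the
# forbidden states that violate a linear marking constraint.
from collections import deque

# transitions as (pre, post) token vectors over places (p0, p1, p2)
TRANSITIONS = [
    ((1, 0, 0), (0, 1, 0)),   # t0: move a token p0 -> p1
    ((0, 1, 0), (0, 0, 1)),   # t1: move a token p1 -> p2
    ((0, 0, 1), (1, 0, 0)),   # t2: recycle a token p2 -> p0
]

def fire(marking, pre, post):
    if all(m >= p for m, p in zip(marking, pre)):
        return tuple(m - p + q for m, p, q in zip(marking, pre, post))
    return None   # transition not enabled at this marking

def reachable(m0):
    seen, frontier = {m0}, deque([m0])
    while frontier:
        m = frontier.popleft()
        for pre, post in TRANSITIONS:
            nxt = fire(m, pre, post)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

markings = reachable((2, 0, 0))
# linear constraint l = (0, 1, 0), b = 1: at most one token in p1
forbidden = {m for m in markings if m[1] > 1}
print(sorted(markings), sorted(forbidden))
```

A supervisor synthesized from such a constraint would add a control place preventing exactly these markings; the paper's contribution is doing this on decomposed submodels so the enumeration above never has to touch the full state space.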

  15. Kinetic Modeling of Microbiological Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chongxuan; Fang, Yilin

    Kinetic description of microbiological processes is vital for the design and control of microbe-based biotechnologies such as wastewater treatment, petroleum oil recovery, and contaminant attenuation and remediation. Various models have been proposed to describe microbiological processes. This editorial article discusses the advantages and limitations of these modeling approaches, including traditional Monod-type models and their derivatives, and recently developed constraint-based approaches. The article also offers future directions for modeling research that best suits petroleum and environmental biotechnologies.

  16. Relative Packing Groups in Template-Based Structure Prediction: Cooperative Effects of True Positive Constraints

    PubMed Central

    Day, Ryan; Qu, Xiaotao; Swanson, Rosemarie; Bohannan, Zach; Bliss, Robert

    2011-01-01

    Most current template-based structure prediction methods concentrate on finding the correct backbone conformation and then packing sidechains within that backbone. Our packing-based method derives distance constraints from conserved relative packing groups (RPGs). In our refinement approach, the RPGs provide a level of resolution that restrains global topology while allowing conformational sampling. In this study, we test our template-based structure prediction method using 51 prediction units from CASP7 experiments. RPG-based constraints are able to substantially improve approximately two-thirds of starting templates. Upon deeper investigation, we find that true positive spatial constraints, especially those non-local in sequence, derived from the RPGs were important to building nearer native models. Surprisingly, the fraction of incorrect or false positive constraints does not strongly influence the quality of the final candidate. This result indicates that our RPG-based true positive constraints sample the self-consistent, cooperative interactions of the native structure. The lack of such reinforcing cooperativity explains the weaker effect of false positive constraints. Generally, these findings are encouraging indications that RPGs will improve template-based structure prediction. PMID:21210729

  17. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. 
Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
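
    At the core of the mutation-selection model both frameworks implement is the Halpern-Bruno fixation factor: a mutation with scaled selection coefficient S fixes at rate f(S) = S / (1 − e^(−S)) relative to a neutral one, which is how site-specific selective constraint enters the substitution rates. The implementation below is illustrative and belongs to neither inference framework.

```python
# Relative fixation rate as a function of the scaled selection coefficient.
import math

def fixation_factor(S: float) -> float:
    """Halpern-Bruno factor f(S) = S / (1 - exp(-S))."""
    if abs(S) < 1e-12:
        return 1.0                      # neutral limit: f(S) -> 1 as S -> 0
    return S / (1.0 - math.exp(-S))

for S in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(f"S = {S:+.1f}  f(S) = {fixation_factor(S):.4f}")
```

Beneficial mutations (S > 0) fix faster than neutral ones and deleterious mutations slower, and the factor satisfies the detailed-balance identity f(−S) = f(S)·e^(−S), which underlies the stationary amino acid propensities both papers estimate.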

  18. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    PubMed Central

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; Ebrahim, Ali; Saunders, Michael A.; Palsson, Bernhard O.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. This flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models. PMID:27857205
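
    The effect of a coarse-grained proteome allocation constraint can be sketched on a toy flux model: each unit of flux consumes some enzyme mass, and a sector budget caps the total. The network, enzyme costs, and budget below are hypothetical, not parameters of the E. coli ME model.

```python
# Toy flux model with and without a proteome allocation constraint.
import numpy as np
from scipy.optimize import linprog

# chain network: v0 (uptake -> A), v1 (A -> B), v2 (B -> biomass)
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]
cost = np.array([0.3, 0.4, 0.3])   # enzyme mass per unit flux (hypothetical)
budget = 8.0                       # total proteome-sector budget

def max_growth(with_allocation):
    A_ub = cost.reshape(1, -1) if with_allocation else None
    b_ub = [budget] if with_allocation else None
    res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b_ub,
                  A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return res.x[2]

print(max_growth(False))  # limited by the uptake bound
print(max_growth(True))   # the allocation constraint binds first
```

Without the allocation constraint the uptake bound limits growth; with it, the proteome budget becomes the binding constraint, which is the qualitative behavior sector constraints impose on the ME model.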

  19. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DOE PAGES

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.; ...

    2016-11-18

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the “generalist” (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions, prediction errors for growth rate and metabolic fluxes were 69% and 14% lower, respectively. The sector-constrained ME model thus represents a generalist ME model reflecting both growth rate maximization and “hedging” against uncertain environments and stresses, as indicated by significant enrichment of these sectors for the general stress response sigma factor σS. Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally-related protein groups) as demonstrated here. Furthermore, this flexible formalism provides an accessible approach for narrowing the gap between the complexity captured by omics data and governing principles of proteome allocation described by systems-level models.

  20. Algebraic reasoning for the enhancement of data-driven building reconstructions

    NASA Astrophysics Data System (ADS)

    Meidow, Jochen; Hammer, Horst

    2016-04-01

    Data-driven approaches for the reconstruction of buildings feature the flexibility needed to capture objects of arbitrary shape. To recognize man-made structures, geometric relations such as orthogonality or parallelism have to be detected. These constraints are typically formulated as sets of multivariate polynomials. For the enforcement of the constraints within an adjustment process, a set of independent and consistent geometric constraints has to be determined. Gröbner bases are an ideal tool to identify such sets exactly. A complete workflow for geometric reasoning is presented to obtain boundary representations of solids based on given point clouds. The constraints are formulated in homogeneous coordinates, which results in simple polynomials suitable for the successful derivation of Gröbner bases for algebraic reasoning. Strategies for the reduction of the algebraical complexity are presented. To enforce the constraints, an adjustment model is introduced, which is able to cope with homogeneous coordinates along with their singular covariance matrices. The feasibility and the potential of the approach are demonstrated by the analysis of a real data set.

  1. Multiparameter elastic full waveform inversion with facies-based constraints

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, as in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface-collected data. Adding rock-physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches include such constraints either as a priori knowledge valid mostly around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (which enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of the inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration to improve the quality of the estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.
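
    The per-cell Bayesian facies step admits a small sketch. The code below is our simplified stand-in, not the authors' implementation: the facies names, velocities, and the Gaussian likelihood form are assumed; the radiation-pattern-derived uncertainty enters only through the likelihood standard deviation.

```python
import math

# One-cell Bayesian facies update: posterior over facies given an
# inverted velocity, a Gaussian likelihood, and a prior (toy numbers).

def facies_posterior(v_inverted, facies_means, sigma, prior):
    """Return {facies: P(facies | inverted velocity)} for one cell."""
    like = {f: math.exp(-0.5 * ((v_inverted - m) / sigma) ** 2)
            for f, m in facies_means.items()}
    z = sum(like[f] * prior[f] for f in like)
    return {f: like[f] * prior[f] / z for f in like}

# Hypothetical facies (shale vs sand) distinguished by P-wave velocity;
# a larger sigma would model a poorly resolved (e.g. anisotropic) parameter.
post = facies_posterior(
    v_inverted=3.1,
    facies_means={"shale": 2.8, "sand": 3.3},
    sigma=0.3,
    prior={"shale": 0.5, "sand": 0.5},
)
print(max(post, key=post.get))   # sand
```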

  2. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.

  3. Applications of a formal approach to decipher discrete genetic networks.

    PubMed

    Corblin, Fabien; Fanchon, Eric; Trilling, Laurent

    2010-07-20

    A growing demand for tools to assist the building and analysis of biological networks exists in systems biology. We argue that a formal approach is relevant and applicable to address questions raised by biologists about such networks. Because the behaviour of these systems is complex, it is essential to exploit every bit of experimental information efficiently. In our approach, both the evolution rules and the partial knowledge about the structure and the behaviour of the network are formalized using a common constraint-based language. In this article our formal and declarative approach is applied to three biological applications. The software environment that we developed makes it possible to address each application specifically through a new class of biologically relevant queries. We show that we can describe easily and in a formal manner the partial knowledge about a genetic network. Moreover, we show that this environment, based on a constraint algorithmic approach, offers a wide variety of functionalities going beyond simple simulation, such as proof of consistency, model revision, prediction of properties, and search for minimal models relative to specified criteria. The formal approach proposed here deeply changes the way to proceed in the exploration of genetic and biochemical networks, first by avoiding the usual trial-and-error procedure, and second by placing the emphasis on sets of solutions rather than a single solution arbitrarily chosen among many others. Finally, the constraint approach promotes an integration of model and experimental data in a single framework.
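
    The "sets of solutions" idea can be illustrated in miniature. The brute-force enumeration below is our own reduction (the paper uses a dedicated constraint language, not enumeration): all Boolean update rules for a hypothetical 2-gene network are generated, and only those consistent with partial observations are kept; each new observation prunes the solution set further.

```python
from itertools import product

# Enumerate candidate Boolean models of a 2-gene network and keep those
# consistent with partial observations (toy stand-in for constraint solving).

genes = ("a", "b")
states = list(product((0, 1), repeat=2))   # all 4 network states

def consistent_models(observations):
    """observations: list of (state, next_state) pairs, possibly partial."""
    models = []
    for table_a in product((0, 1), repeat=4):      # truth table for gene a
        for table_b in product((0, 1), repeat=4):  # truth table for gene b
            step = lambda s: (table_a[states.index(s)],
                              table_b[states.index(s)])
            if all(step(s) == t for s, t in observations):
                models.append((table_a, table_b))
    return models

# Two observed transitions fix 2 of 4 table rows per gene, leaving 4*4 models.
obs = [((0, 0), (0, 1)), ((1, 1), (1, 0))]
models = consistent_models(obs)
print(len(models))   # 16
```

    Queries such as consistency proof (is the set non-empty?) or property prediction (does a property hold in every remaining model?) then operate on this set rather than on one arbitrarily chosen model.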

  4. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA Missions - implications to reliability and radiation constraints. Approach to insertion of new technologies. Technology knowledge-base development. Technology model/tool development and validation. Summary comments.

  5. Constraint-based stoichiometric modelling from single organisms to microbial communities

    PubMed Central

    Olivier, Brett G.; Bruggeman, Frank J.; Teusink, Bas

    2016-01-01

    Microbial communities are ubiquitously found in Nature and have direct implications for the environment, human health and biotechnology. The species composition and overall function of microbial communities are largely shaped by metabolic interactions such as competition for resources and cross-feeding. Although considerable scientific progress has been made towards mapping and modelling species-level metabolism, elucidating the metabolic exchanges between microorganisms and steering the community dynamics remain an enormous scientific challenge. In view of the complexity, computational models of microbial communities are essential to obtain systems-level understanding of ecosystem functioning. This review discusses the applications and limitations of constraint-based stoichiometric modelling tools, and in particular flux balance analysis (FBA). We explain this approach from first principles and identify the challenges one faces when extending it to communities, and discuss the approaches used in the field in view of these challenges. We distinguish between steady-state and dynamic FBA approaches extended to communities. We conclude that much progress has been made, but many of the challenges are still open. PMID:28334697
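
    A flavour of the FBA idea can be given without a solver. Real FBA maximizes a biomass objective by linear programming over a genome-scale stoichiometric matrix; the two-pathway toy below is ours, and its parallel structure is simple enough that a greedy allocation by yield is provably optimal.

```python
# Toy constraint-based allocation in the spirit of FBA: split a bounded
# substrate uptake across pathways to maximize biomass yield.

def max_biomass(uptake_limit, pathways):
    """pathways: list of (biomass_yield, flux_capacity).
    Fractional-knapsack greedy: fill highest-yield pathways first."""
    total, remaining = 0.0, uptake_limit
    for y, cap in sorted(pathways, reverse=True):
        v = min(cap, remaining)
        total += y * v
        remaining -= v
    return total

# Hypothetical network: respiration (high yield, capacity-limited)
# vs fermentation (low yield, unconstrained), fed 10 units of substrate.
print(max_biomass(10.0, [(1.0, 6.0), (0.5, 100.0)]))   # 8.0
```

    Community extensions of FBA add exchange fluxes between such species-level models, which is where the steady-state and dynamic variants discussed in the review diverge.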

  6. Integrating tracer-based metabolomics data and metabolic fluxes in a linear fashion via Elementary Carbon Modes.

    PubMed

    Pey, Jon; Rubio, Angel; Theodoropoulos, Constantinos; Cascante, Marta; Planes, Francisco J

    2012-07-01

    Constraint-based modeling is an emerging area of Systems Biology that encompasses a growing set of methods for the analysis of metabolic networks. To refine its predictions, the development of novel methods integrating high-throughput experimental data is currently a key challenge in the field. In this paper, we present a novel set of constraints that integrate tracer-based metabolomics data from Isotope Labeling Experiments and metabolic fluxes in a linear fashion. These constraints are based on Elementary Carbon Modes (ECMs), a recently developed concept that generalizes Elementary Flux Modes at the carbon level. To illustrate the effect of our ECMs-based constraints, a Flux Variability Analysis approach was applied to a previously published metabolic network involving the main pathways in the metabolism of glucose. The addition of our ECMs-based constraints substantially reduced the under-determination resulting from a standard application of Flux Variability Analysis, which represents a clear advance over the state of the art. In addition, our approach is adjusted to deal with the combinatorial explosion of ECMs in genome-scale metabolic networks. This extension was applied to infer the maximum biosynthetic capacity of non-essential amino acids in human metabolism. Finally, as linearity is the hallmark of our approach, its importance is discussed at a methodological, computational and theoretical level and illustrated with a practical application in the field of Isotope Labeling Experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
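
    How a linear measurement-derived constraint shrinks flux variability can be shown with interval arithmetic. This is our own toy, not the ECM machinery: two fluxes with wide bounds plus one hypothetical tracer-derived linear constraint (v1 + v2 equal to a measured total) yield tighter per-flux ranges.

```python
# Interval tightening under a linear constraint sum(fluxes) == total,
# mimicking how linear labeling constraints reduce FVA under-determination.

def tighten(bounds, total):
    """bounds: {flux: (lo, hi)}; returns bounds tightened by the constraint."""
    out = {}
    for f, (lo, hi) in bounds.items():
        others_lo = sum(l for g, (l, h) in bounds.items() if g != f)
        others_hi = sum(h for g, (l, h) in bounds.items() if g != f)
        out[f] = (max(lo, total - others_hi), min(hi, total - others_lo))
    return out

wide = {"v1": (0.0, 10.0), "v2": (0.0, 10.0)}
print(tighten(wide, total=8.0))   # both ranges shrink to (0.0, 8.0)
```

    A full FVA would minimize and maximize each flux by linear programming subject to all such constraints at once; the interval picture above only conveys why adding linear constraints can never widen, and typically narrows, the feasible ranges.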

  7. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.
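
    The flavour of such an analysis can be conveyed with a brute-force stand-in (the paper's tool is CCALC over an action-language specification; the workflow, users, and policy below are invented): enumerate user-to-task assignments for a small banking-style process and keep only plans satisfying a separation-of-duty constraint.

```python
from itertools import product

# Enumerate assignments for a 3-task workflow and filter by the policy
# that the requester must differ from the approver (separation of duty).

tasks = ("request", "approve", "execute")
users = ("alice", "bob")

def valid_plans():
    plans = []
    for assignment in product(users, repeat=len(tasks)):
        plan = dict(zip(tasks, assignment))
        if plan["request"] != plan["approve"]:   # authorization constraint
            plans.append(plan)
    return plans

print(len(valid_plans()))   # 4 of the 8 assignments satisfy the policy
```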

  8. Constraint Programming to Solve Maximal Density Still Life

    NASA Astrophysics Data System (ADS)

    Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil

    The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for any but the smallest sizes of board. Especially difficult is to prove that the maximum number of live cells has been found. Various approaches have been employed. The most successful are approaches based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview on how the problem can be modelled and solved with CP, and report on best-known results for the problem.
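
    For the smallest board the problem can even be solved exhaustively, which makes the constraint structure concrete (CP solvers are needed precisely because this blows up for larger boards). The check below is ours: a pattern on a 3x3 board is a still life iff every cell in a padded region keeps its state under the Game of Life rule, with all cells outside the board held dead.

```python
from itertools import product

# Brute-force Maximum Density Still Life on a 3x3 board (2**9 candidates).

def live_neighbours(cells, r, c):
    return sum((r + dr, c + dc) in cells
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def is_still_life(cells):
    for r in range(-1, 4):          # padded region around the 3x3 board
        for c in range(-1, 4):
            n = live_neighbours(cells, r, c)
            alive = (r, c) in cells
            if alive and n not in (2, 3):   # live cell must survive
                return False
            if not alive and n == 3:        # dead cell must not be born
                return False
    return True

best = max((set((r, c) for i, (r, c) in
                enumerate(product(range(3), repeat=2)) if bits >> i & 1)
            for bits in range(512)),
           key=lambda s: len(s) if is_still_life(s) else -1)
print(len(best))   # 6
```

    Proving that 6 is optimal is exactly the hard part the abstract mentions: the enumeration certifies it for 3x3, while for larger boards CP models with clever bounds are needed to close the gap.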

  9. Introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation"

    ERIC Educational Resources Information Center

    Schaal, David W.

    2012-01-01

    This article presents an introduction to "The Behavior-Analytic Origins of Constraint-Induced Movement Therapy: An Example of Behavioral Neurorehabilitation," by Edward Taub and his colleagues (Taub, 2012). Based on extensive experimentation with animal models of peripheral nerve injury, Taub and colleagues have created an approach to overcoming…

  10. Weighting climate model projections using observational constraints.

    PubMed

    Gillett, Nathan P

    2015-11-13

    Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
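
    The weighting scheme can be sketched in a few lines. All numbers below are invented for illustration, and the Gaussian weighting is our simplified reading of "weighting by an observational constraint" (the paper's constraint is a detection-and-attribution TCR estimate):

```python
import math

# Weight each model by the likelihood of its TCR under an observational
# constraint, then form a weighted multi-model projection.

def weights(model_tcrs, obs_mean, obs_sd):
    w = [math.exp(-0.5 * ((t - obs_mean) / obs_sd) ** 2) for t in model_tcrs]
    z = sum(w)
    return [x / z for x in w]

tcrs    = [1.2, 1.8, 2.4]   # hypothetical per-model TCR (K)
warming = [1.0, 2.0, 3.0]   # hypothetical 2081-2100 warming (K)
w = weights(tcrs, obs_mean=1.6, obs_sd=0.3)
weighted = sum(wi * dw for wi, dw in zip(w, warming))
unweighted = sum(warming) / len(warming)
print(weighted < unweighted)   # True: the constraint pulls warming down
```

    Note that, unlike simple scaling of simulated warming, nothing here assumes a linear relation between past and future change; the constraint acts only through the weights.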

  11. Robust model predictive control for satellite formation keeping with eccentricity/inclination vector separation

    NASA Astrophysics Data System (ADS)

    Lim, Yeerang; Jung, Youeyun; Bang, Hyochoong

    2018-05-01

    This study presents model predictive formation control based on an eccentricity/inclination vector separation strategy. Collision avoidance can alternatively be accomplished by using eccentricity/inclination vectors and adding a simple goal function term to the optimization process. Real-time control is also achievable with a model predictive controller based on a convex formulation. A constraint-tightening approach is addressed as well to improve the robustness of the controller, and simulation results are presented to verify the performance enhancement of the proposed approach.

  12. Modeling mammary organogenesis from biological first principles: Cells and their physical constraints.

    PubMed

    Montévil, Maël; Speroni, Lucia; Sonnenschein, Carlos; Soto, Ana M

    2016-10-01

    In multicellular organisms, relations among parts and between parts and the whole are contextual and interdependent. These organisms and their cells are ontogenetically linked: an organism starts as a cell that divides producing non-identical cells, which organize in tri-dimensional patterns. These association patterns and cell types change as tissues and organs are formed. This contextuality and circularity make it difficult to establish detailed cause-and-effect relationships. Here we propose an approach to overcome these intrinsic difficulties by combining the use of two models: 1) an experimental one that employs 3D culture technology to obtain the structures of the mammary gland, namely, ducts and acini, and 2) a mathematical model based on biological principles. The typical approach for mathematical modeling in biology is to apply mathematical tools and concepts developed originally in physics or computer sciences. Instead, we propose to construct a mathematical model based on proper biological principles. Specifically, we use principles identified as fundamental for the elaboration of a theory of organisms, namely i) the default state of cell proliferation with variation and motility and ii) the principle of organization by closure of constraints. This model has a biological component, the cells, and a physical component, a matrix which contains collagen fibers. Cells display agency and move and proliferate unless constrained; they exert mechanical forces that act i) on collagen fibers and ii) on other cells. As fibers organize, they constrain the cells' ability to move and to proliferate. The model exhibits a circularity that can be interpreted in terms of closure of constraints. 
Implementing the mathematical model shows that constraints to the default state are sufficient to explain ductal and acinar formation, and points to a target of future research, namely, inhibitors of cell proliferation and motility generated by the epithelial cells. The success of this model suggests a step-wise approach whereby additional constraints imposed by the tissue and the organism could be examined in silico and rigorously tested by in vitro and in vivo experiments, in accordance with the organicist perspective we embrace. Copyright © 2016. Published by Elsevier Ltd.

  13. Modeling mammary organogenesis from biological first principles: Cells and their physical constraints

    PubMed Central

    Montévil, Maël; Speroni, Lucia; Sonnenschein, Carlos; Soto, Ana M.

    2017-01-01

    In multicellular organisms, relations among parts and between parts and the whole are contextual and interdependent. These organisms and their cells are ontogenetically linked: an organism starts as a cell that divides producing non-identical cells, which organize in tri-dimensional patterns. These association patterns and cell types change as tissues and organs are formed. This contextuality and circularity make it difficult to establish detailed cause-and-effect relationships. Here we propose an approach to overcome these intrinsic difficulties by combining the use of two models: 1) an experimental one that employs 3D culture technology to obtain the structures of the mammary gland, namely, ducts and acini, and 2) a mathematical model based on biological principles. The typical approach for mathematical modeling in biology is to apply mathematical tools and concepts developed originally in physics or computer sciences. Instead, we propose to construct a mathematical model based on proper biological principles. Specifically, we use principles identified as fundamental for the elaboration of a theory of organisms, namely i) the default state of cell proliferation with variation and motility and ii) the principle of organization by closure of constraints. This model has a biological component, the cells, and a physical component, a matrix which contains collagen fibers. Cells display agency and move and proliferate unless constrained; they exert mechanical forces that act i) on collagen fibers and ii) on other cells. As fibers organize, they constrain the cells' ability to move and to proliferate. The model exhibits a circularity that can be interpreted in terms of closure of constraints. 
Implementing the mathematical model shows that constraints to the default state are sufficient to explain ductal and acinar formation, and points to a target of future research, namely, inhibitors of cell proliferation and motility generated by the epithelial cells. The success of this model suggests a step-wise approach whereby additional constraints imposed by the tissue and the organism could be examined in silico and rigorously tested by in vitro and in vivo experiments, in accordance with the organicist perspective we embrace. PMID:27544910

  14. A Problem-Based Approach to Elastic Wave Propagation: The Role of Constraints

    ERIC Educational Resources Information Center

    Fazio, Claudio; Guastella, Ivan; Tarantino, Giovanni

    2009-01-01

    A problem-based approach to the teaching of mechanical wave propagation, focused on observation and measurement of wave properties in solids and on modelling of these properties, is presented. In particular, some experimental results, originally aimed at measuring the propagation speed of sound waves in metallic rods, are used in order to deepen…

  15. Reduction method with system analysis for multiobjective optimization-based design

    NASA Technical Reports Server (NTRS)

    Azarm, S.; Sobieszczanski-Sobieski, J.

    1993-01-01

    An approach for reducing the number of variables and constraints, which is combined with System Analysis Equations (SAE), for multiobjective optimization-based design is presented. In order to develop a simplified analysis model, the SAE is computed outside an optimization loop and then approximated for use by an operator. Two examples are presented to demonstrate the approach.

  16. Multiple constraint analysis of regional land-surface carbon flux

    Treesearch

    D.P. Turner; M. Göckede; B.E. Law; W.D. Ritts; W.B. Cohen; Z. Yang; T. Hudiburg; R. Kennedy; M. Duane

    2011-01-01

    We applied and compared bottom-up (process model-based) and top-down (atmospheric inversion-based) scaling approaches to evaluate the spatial and temporal patterns of net ecosystem production (NEP) over a 2.5 × 10^5 km^2 area (the state of Oregon) in the western United States. Both approaches indicated a carbon sink over this...

  17. The soft constraints hypothesis: a rational analysis approach to resource allocation for interactive behavior.

    PubMed

    Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J

    2006-07-01

    The soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved, in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under-1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior. ((c) 2006 APA, all rights reserved).

  18. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  19. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. 
Furthermore, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradient, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  20. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    NASA Astrophysics Data System (ADS)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with an improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performance of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the results obtained in terms of pressure reduction are comparable with those of competing metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
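
    The HS loop itself is short enough to sketch. The version below is a generic bare-bones harmony search under our own assumptions (the paper couples HS to EPANET and adds hydraulic penalty terms, which are omitted here); it minimises a toy leakage surrogate whose optimum is a valve setting of 3.0 on every pipe.

```python
import random

# Bare-bones harmony search: harmony memory, memory consideration (hmcr),
# pitch adjustment (par, bw), random improvisation, worst-replacement.

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3,
                   bw=0.2, iters=2000, seed=1):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:              # pick from harmony memory
                x = rng.choice(memory)[d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                # random improvisation
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(memory, key=f)               # replace worst if improved
        if f(new) < f(worst):
            memory[memory.index(worst)] = new
    return min(memory, key=f)

obj = lambda s: sum((x - 3.0) ** 2 for x in s)   # hypothetical objective
best = harmony_search(obj, dim=3, lo=0.0, hi=10.0)
print(obj(best) < 1.0)
```

    In the paper's setting, f would instead run a hydraulic simulation and add penalty terms for pressure-constraint violations.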

  1. Penalty-Based Finite Element Interface Technology for Analysis of Homogeneous and Composite Structures

    NASA Technical Reports Server (NTRS)

    Averill, Ronald C.

    2002-01-01

    An effective and robust interface element technology able to connect independently modeled finite element subdomains has been developed. This method is based on the use of penalty constraints and allows coupling of finite element models whose nodes do not coincide along their common interface. Additionally, the present formulation leads to a computational approach that is very efficient and completely compatible with existing commercial software. A significant effort has been directed toward identifying those model characteristics (element geometric properties, material properties, and loads) that most strongly affect the required penalty parameter, and subsequently to developing simple 'formulae' for automatically calculating the proper penalty parameter for each interface constraint. This task is especially critical in composite materials and structures, where adjacent sub-regions may be composed of significantly different materials or laminates. This approach has been validated by investigating a variety of two-dimensional problems, including composite laminates.
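
    The role of the penalty parameter can be shown on a minimal system. This is our toy, not the interface-element formulation itself: two axial springs meet at an interface, the compatibility constraint u1 = u2 is enforced by a stiff penalty term, and the resulting 2x2 system is solved in closed form.

```python
# Penalty enforcement of the interface constraint u1 = u2 by minimising
#   E = 0.5*k1*u1**2 + 0.5*k2*u2**2 - f*u2 + 0.5*p*(u1 - u2)**2.

def interface_displacements(k1, k2, f, p):
    # Stationarity conditions: (k1+p)*u1 - p*u2 = 0 ; -p*u1 + (k2+p)*u2 = f
    a, b, c, d = k1 + p, -p, -p, k2 + p
    det = a * d - b * c
    u1 = (-b * f) / det      # Cramer's rule, right-hand side (0, f)
    u2 = (a * f) / det
    return u1, u2

# Hypothetical stiffnesses and load; a large p enforces the constraint.
u1, u2 = interface_displacements(k1=100.0, k2=100.0, f=10.0, p=1e8)
print(abs(u1 - u2) < 1e-6)   # True: the interface stays compatible
```

    The sensitivity to k1, k2, and f in this closed form is exactly why, at the scale of real models, the paper derives formulae that pick the penalty parameter automatically per constraint: too small a p lets the interface gap open, too large a p ill-conditions the system.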

  2. Predictive Rotation Profile Control for the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Wehner, W. P.; Schuster, E.; Boyer, M. D.; Walker, M. L.; Humphreys, D. A.

    2017-10-01

    Control-oriented modeling and model-based control of the rotation profile are employed to build a suitable control capability for aiding rotation-related physics studies at DIII-D. To obtain a control-oriented model, a simplified version of the momentum balance equation is combined with empirical representations of the momentum sources. The control approach is rooted in a Model Predictive Control (MPC) framework to regulate the rotation profile while satisfying constraints associated with the desired plasma stored energy and/or βN limit. Simple modifications allow for alternative control objectives, such as maximizing the plasma rotation while maintaining a specified input torque. Because the MPC approach can explicitly incorporate various types of constraints, this approach is well suited to a variety of control objectives, and therefore serves as a valuable tool for experimental physics studies. Closed-loop TRANSP simulations are presented to demonstrate the effectiveness of the control approach. Supported by the US DOE under DE-SC0010661 and DE-FC02-04ER54698.
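
    The receding-horizon idea behind MPC fits in a few lines. The scalar model and numbers below are invented for illustration (the actual controller regulates a rotation profile with stored-energy constraints): at each step, short candidate torque sequences from a constrained input set are enumerated, the cheapest is found, and only its first input is applied.

```python
from itertools import product

# Toy MPC for a scalar linear rotation model w' = A*w + B*torque,
# with the actuator constraint encoded by a finite admissible input set.

A, B, TARGET = 0.9, 0.5, 2.0
TORQUES = (-1.0, 0.0, 1.0)   # admissible inputs (the input constraint)
HORIZON = 3

def mpc_step(w):
    def cost(seq):
        x, c = w, 0.0
        for u in seq:
            x = A * x + B * u
            c += (x - TARGET) ** 2
        return c
    best = min(product(TORQUES, repeat=HORIZON), key=cost)
    return best[0]            # receding horizon: apply only the first input

w = 0.0
for _ in range(20):
    w = A * w + B * mpc_step(w)
print(abs(w - TARGET) < 0.35)   # True: the rotation settles near the target
```

    Real MPC solves a convex program rather than enumerating inputs, which is what makes state constraints (e.g. a βN limit) and longer horizons tractable.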

  3. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability: can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) efficiency: if the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  4. A population-based model for priority setting across the care continuum and across modalities

    PubMed Central

    Segal, Leonie; Mortimer, Duncan

    2006-01-01

    Background The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – which are typically defined by modality or disease-stage – and towards well-defined target populations with a particular disease/health problem. Methods The key features of the HsW model are i) a disease/health problem framework, ii) a sequential approach to covering the entire health sector, iii) comprehensiveness of scope in identifying intervention options and iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision-rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results The HsW model has been applied in several contexts, most recently to osteoarthritis, demonstrating both its practical applicability and its capacity to deliver clear, evidence-based policy recommendations. 
Conclusion Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost-effectiveness comparisons, as typified by Australia's Pharmaceutical Benefits Advisory Committee process for the listing of pharmaceuticals for government funding, demonstrate the value added by the HsW model notably in its greater likelihood of contributing to allocative efficiency. PMID:16566841

  5. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2016-01-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089
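
    A minimal sketch of the path-finding step (the metabolite names and carbon-transfer edges below are invented, and MapMaker's actual CTM inference from molecular formulae and stoichiometry is not reproduced): once a carbon transfer map is available, finding a carbon-traced route between two metabolites reduces to graph search over its edges.

```python
from collections import deque

# Toy carbon-transfer map: an edge exists only when a reaction passes carbon
# from reactant to product (e.g. glc -> g6p, but never glc -> h2o).
ctm = {
    "glc": ["g6p"],
    "g6p": ["f6p", "6pg"],
    "f6p": ["fbp"],
    "fbp": ["dhap", "g3p"],
    "g3p": ["pep"],
    "pep": ["pyr"],
    "6pg": ["ru5p"],
}

def carbon_path(src, dst):
    """Breadth-first search restricted to carbon-transfer edges."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in ctm.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # no carbon-traced route exists

print(carbon_path("glc", "pyr"))
# -> ['glc', 'g6p', 'f6p', 'fbp', 'g3p', 'pep', 'pyr']
```

    Restricting the search to carbon-transfer edges is what rules out spurious "paths" through currency metabolites such as water or ATP, which ordinary network path-finding would happily traverse.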

  6. Efficient estimation of the attributable fraction when there are monotonicity constraints and interactions.

    PubMed

    Traskin, Mikhail; Wang, Wei; Ten Have, Thomas R; Small, Dylan S

    2013-01-01

    The population attributable fraction (PAF) for an exposure is the fraction of disease cases in a population that can be attributed to that exposure. One method of estimating the PAF involves estimating the probability of having the disease given the exposure and confounding variables. In many settings, the exposure will interact with the confounders and the confounders will interact with each other. Also, in many settings, the probability of having the disease is thought, based on subject matter knowledge, to be a monotone increasing function of the exposure and possibly of some of the confounders. We develop an efficient approach for estimating logistic regression models with interactions and monotonicity constraints, and apply this approach to estimating the PAF. Our approach produces substantially more accurate estimates of the PAF in some settings than the usual approach, which uses logistic regression without monotonicity constraints.
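
    A rough sketch of the idea, assuming a simulated cohort and a simple nonnegativity bound as the monotonicity constraint (the authors' estimator is more general): fit a logistic model with the exposure coefficient constrained to be nonnegative, then estimate the PAF by g-computation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.binomial(1, 0.3, n)            # exposure
z = rng.normal(size=n)                 # confounder
logit = -1.0 + 1.0 * x + 0.5 * z
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([np.ones(n), x, z])

def nll(beta):
    """Logistic negative log-likelihood (numerically stable form)."""
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

# Monotonicity constraint: the exposure effect beta[1] must be >= 0.
res = minimize(nll, np.zeros(3), bounds=[(None, None), (0, None), (None, None)])
beta = res.x

# PAF by g-computation: fitted risk vs. fitted risk with exposure set to 0.
p_obs = 1 / (1 + np.exp(-(X @ beta)))
X0 = X.copy()
X0[:, 1] = 0
p_unexp = 1 / (1 + np.exp(-(X0 @ beta)))
paf = (p_obs.mean() - p_unexp.mean()) / p_obs.mean()
print(round(paf, 3))
```

    The bound on beta[1] is the simplest possible monotonicity constraint; richer versions (monotone in several covariates, with interactions) need constrained optimization over more coefficients but follow the same pattern.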

  7. The importance of topography-controlled sub-grid process heterogeneity and semi-quantitative prior constraints in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Samaniego, Luis; Mai, Juliane; Kumar, Rohini; Thober, Stephan; Zink, Matthias; Schäfer, David; Savenije, Hubert H. G.; Hrachowitz, Markus

    2016-03-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affects the partitioning of water and energy. However, it remains unclear to what extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated into the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge-based model constraints reduces model uncertainty, and whether (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge-based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidean distance to the optimal model, used as an overall measure of model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. 
The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 %, respectively, compared to the base case of the unconstrained mHM. The most significant improvements in signature representation were achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. In addition, it was shown that suitable semi-quantitative prior constraints in combination with the transfer-function-based regularization approach of mHM can be beneficial for spatial model transferability, as the Euclidean distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  8. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  9. Using cognitive work analysis to explore activity allocation within military domains.

    PubMed

    Jenkins, D P; Stanton, N A; Salmon, P M; Walker, G H; Young, M S

    2008-06-01

    Cognitive work analysis (CWA) is frequently advocated as an approach for the analysis of complex socio-technical systems. Much of the current CWA literature within the military domain pays particular attention to its initial phases: work domain analysis and contextual task analysis. By comparison, the analysis of social and organisational constraints receives much less attention. Through the study of a helicopter mission planning system software tool, this paper describes an approach for investigating the constraints affecting the distribution of work. The paper uses this model to evaluate the potential benefits of the social and organisational analysis phase within a military context. The analysis shows that, through its focus on constraints, the approach provides a unique description of the factors influencing the social organisation within a complex domain. This approach appears to be compatible with existing approaches and serves as a validation of more established social analysis techniques. As part of the ergonomic design of mission planning systems, the social organisation and cooperation analysis phase of CWA provides a constraint-based description informing allocation of function between key actor groups. This approach is useful because it poses questions related to the transfer of information and optimum working practices.

  10. Model-independent and model-based local lensing properties of CL0024+1654 from multiply imaged galaxies

    NASA Astrophysics Data System (ADS)

    Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas

    2018-04-01

    Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. 
We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.

  11. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimental evaluation of the approach is presented through an application developed in the ArgoUML IDE.

  12. $L^1$ penalization of volumetric dose objectives in optimal control of PDEs

    DOE PAGES

    Barnard, Richard C.; Clason, Christian

    2017-02-11

    This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on $L^1$ penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parameter tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.

  13. Exclusive data-based modeling of neutron-nuclear reactions below 20 MeV

    NASA Astrophysics Data System (ADS)

    Savin, Dmitry; Kosov, Mikhail

    2017-09-01

    We are developing the CHIPS-TPT physics library for exclusive simulation of neutron-nuclear reactions below 20 MeV. Exclusive modeling reproduces each separate scattering and thus requires conservation of energy, momentum and quantum numbers in each reaction. Inclusive modeling reproduces only selected values while averaging over the others and imposes no such constraints. Exclusive modeling therefore makes it possible to simulate additional quantities, such as secondary-particle correlations and gamma-line broadening, and to avoid artificial fluctuations. CHIPS-TPT is based on the CHIPS library formerly included in Geant4, which follows the exclusive approach, and extends it to incident neutrons with energies below 20 MeV. The NeutronHP model for neutrons below 20 MeV included in Geant4 follows the inclusive approach, like the well-known MCNP code. Unfortunately, the data available in this energy region are mostly presented in ENDF-6 format and are semi-inclusive. Imposing additional constraints on secondary particles complicates modeling, but it also makes it possible to detect inconsistencies in the input data and to avoid errors that may remain unnoticed in inclusive modeling.
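
    The conservation bookkeeping that exclusive modeling enforces can be checked explicitly. This non-relativistic elastic-scattering sketch (the masses, incident energy, and sampled angle are illustrative, and it is not CHIPS-TPT code) verifies that both secondaries leave the reaction carrying exactly the incoming energy and momentum.

```python
import numpy as np

# Non-relativistic elastic n + A scattering, worked in the CM frame and then
# boosted back to the lab: exclusive modeling must conserve E and p exactly.
m_n, m_A = 1.0, 12.0          # neutron and C-12 masses (illustrative units)
E_lab = 2.0                   # incident neutron kinetic energy

p_lab = np.array([np.sqrt(2 * m_n * E_lab), 0.0])
v_cm = p_lab / (m_n + m_A)               # velocity of the CM frame
p_cm = p_lab - m_n * v_cm                # neutron momentum in the CM frame

theta = np.deg2rad(40.0)                 # sampled CM scattering angle
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
p_n_cm = rot @ p_cm                      # elastic: |p| unchanged in the CM

# Boost both secondaries back to the lab frame.
p_n = p_n_cm + m_n * v_cm                # scattered neutron
p_A = -p_n_cm + m_A * v_cm               # recoil nucleus

E_out = p_n @ p_n / (2 * m_n) + p_A @ p_A / (2 * m_A)
print(np.allclose(p_n + p_A, p_lab), np.isclose(E_out, E_lab))  # -> True True
```

    An inclusive model that samples the neutron and the recoil independently from tabulated distributions has no such guarantee, which is exactly the source of the artificial fluctuations mentioned above.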

  14. A motion-constraint logic for moving-base simulators based on variable filter parameters

    NASA Technical Reports Server (NTRS)

    Miller, G. K., Jr.

    1974-01-01

    A motion-constraint logic for moving-base simulators has been developed that is a modification to the linear second-order filters generally employed in conventional constraints. In the modified constraint logic, the filter parameters are not constant but vary with the instantaneous motion-base position to increase the constraint as the system approaches the positional limits. With the modified constraint logic, accelerations larger than originally expected are limited while conventional linear filters would result in automatic shutdown of the motion base. In addition, the modified washout logic has frequency-response characteristics that are an improvement over conventional linear filters with braking for low-frequency pilot inputs. During simulated landing approaches of an externally blown flap short take-off and landing (STOL) transport using decoupled longitudinal controls, the pilots were unable to detect much difference between the modified constraint logic and the logic based on linear filters with braking.
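
    The variable-parameter idea can be sketched as follows, with all numbers invented: a second-order filter whose stiffness grows as the platform approaches its positional limit stays inside the motion envelope, while the equivalent fixed-parameter filter drives the base past it (which on real hardware would trigger an automatic shutdown).

```python
import numpy as np

def simulate(variable, a_cmd=1.0, L=1.0, w0=0.8, zeta=0.7, dt=0.01, T=20.0):
    """Second-order constraint filter; optionally stiffen it near the limit L."""
    x = v = 0.0
    xs = []
    for _ in range(int(T / dt)):
        k = w0 ** 2
        if variable:
            # Variable-parameter constraint logic: effective stiffness grows
            # without bound as |x| approaches the positional limit.
            k /= max(1.0 - (x / L) ** 2, 1e-6)
        acc = a_cmd - 2.0 * zeta * np.sqrt(k) * v - k * x
        v += acc * dt          # explicit Euler integration
        x += v * dt
        xs.append(x)
    return np.array(xs)

x_var = simulate(True)
x_fix = simulate(False)
print(np.abs(x_var).max() < 1.0, np.abs(x_fix).max() > 1.0)  # -> True True
```

    Here the fixed filter settles at a_cmd / w0^2 ≈ 1.56, well past the 1.0 limit, while the variable-parameter version equilibrates where the stiffened restoring force balances the command, safely inside the envelope.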

  15. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to model formally a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  16. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.

  17. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    PubMed Central

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physicochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
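
    A much-simplified sketch of how a thermodynamic feasibility constraint reshapes a constraint-based flux space (the toy network, capacities, and free-energy values are invented, and real TMFA couples reaction directions to metabolite concentrations rather than to fixed ΔG signs): blocking the thermodynamically unfavorable route forces flux through a lower-capacity path and reduces the optimum.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: r1 uptake -> A; r2 A -> B; r3 A -> C (capacity 5);
#              r4 C -> B; r5 B -> biomass/export.
#             r1  r2  r3  r4  r5
S = np.array([[ 1, -1, -1,  0,  0],   # A
              [ 0,  1,  0,  1, -1],   # B
              [ 0,  0,  1, -1,  0]])  # C
dG = [None, +5.0, -8.0, -3.0, None]   # illustrative free-energy changes

def max_biomass(thermo):
    bounds = [(0, 10), (0, 10), (0, 5), (0, 10), (0, 10)]
    if thermo:
        # Thermodynamic feasibility: block any reaction with positive dG.
        bounds = [(lo, 0 if g is not None and g > 0 else hi)
                  for (lo, hi), g in zip(bounds, dG)]
    c = np.zeros(5)
    c[4] = -1.0                        # maximize v5 == minimize -v5
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    return -res.fun

print(max_biomass(False), max_biomass(True))
```

    Without the constraint all 10 units of uptake reach biomass through r2; with it, only the 5-unit r3/r4 detour remains feasible, so the predicted maximum halves.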

  18. Constraint Logic Programming approach to protein structure prediction.

    PubMed

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

    The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. In a test implementation, the proteins' known secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique such as molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.

  19. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
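
    The LP-based feasibility test at the heart of this approach can be illustrated on a toy network (reactions and stoichiometry invented). For brevity the sketch below enumerates candidate supports by brute force rather than by the paper's depth-first search, but the criterion is the same: a feasible support with no feasible proper subset is an elementary flux mode.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

# Toy irreversible network: r1 -> A; r2 A -> B; r3 A -> C; r4 C -> B; r5 B ->.
#             r1  r2  r3  r4  r5
S = np.array([[ 1, -1, -1,  0,  0],   # A
              [ 0,  1,  0,  1, -1],   # B
              [ 0,  0,  1, -1,  0]])  # C
R = range(5)

def feasible(support):
    """LP test: does a steady-state flux exist using exactly these reactions?"""
    bounds = [(1, None) if i in support else (0, 0) for i in R]
    res = linprog(np.zeros(5), A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    return res.status == 0

# Enumerate supports smallest-first; keep those with no feasible proper subset.
efms = []
for r in R:
    for sup in combinations(R, r + 1):
        if feasible(sup) and not any(set(e) < set(sup) for e in efms):
            efms.append(sup)
print(efms)   # the two routes A->B and A->C->B, each with uptake and export
```

    In the real algorithm the same LP acts as the pruning test inside a depth-first search over reaction subsets, which is what allows the enumeration to be split into independent, parallelizable sub-jobs.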

  20. Advances in the integration of transcriptional regulatory information into genome-scale metabolic models.

    PubMed

    Vivek-Ananth, R P; Samal, Areejit

    2016-09-01

    A major goal of systems biology is to build predictive computational models of cellular metabolism. Availability of complete genome sequences and wealth of legacy biochemical information has led to the reconstruction of genome-scale metabolic networks in the last 15 years for several organisms across the three domains of life. Due to paucity of information on kinetic parameters associated with metabolic reactions, the constraint-based modelling approach, flux balance analysis (FBA), has proved to be a vital alternative to investigate the capabilities of reconstructed metabolic networks. In parallel, advent of high-throughput technologies has led to the generation of massive amounts of omics data on transcriptional regulation comprising mRNA transcript levels and genome-wide binding profile of transcriptional regulators. A frontier area in metabolic systems biology has been the development of methods to integrate the available transcriptional regulatory information into constraint-based models of reconstructed metabolic networks in order to increase the predictive capabilities of computational models and understand the regulation of cellular metabolism. Here, we review the existing methods to integrate transcriptional regulatory information into constraint-based models of metabolic networks.

  1. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that a hierarchical approach based on regional datasets, observations, and expert knowledge can reduce equifinality, supporting more careful application of distributed models and more reliable simulation of spatiotemporal processes at the catchment scale.
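
    The hierarchical filtering of behavioral parameter sets can be mimicked with a toy rainfall-runoff model (the model, synthetic data, and thresholds below are all invented): each successive constraint class can only shrink the behavioral set, which is the mechanism by which equifinality is reduced.

```python
import numpy as np

rng = np.random.default_rng(1)
rain = rng.exponential(2.0, 100)                 # synthetic forcing

def reservoir(k):
    """Toy linear-reservoir rainfall-runoff model: outflow = k * storage."""
    s, q = 0.0, []
    for p in rain:
        s += p
        q.append(k * s)
        s -= k * s
    return np.array(q)

q_obs = reservoir(0.3) + rng.normal(0.0, 0.1, 100)   # synthetic "observations"

def nse(q):
    """Nash-Sutcliffe efficiency of a simulated hydrograph."""
    return 1.0 - np.sum((q - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

params = rng.uniform(0.05, 0.95, 500)            # candidate parameter sets
# Constraint class 1: regional signature (mean runoff within 10% of observed).
sig_ok = [k for k in params
          if abs(reservoir(k).mean() - q_obs.mean()) < 0.1 * q_obs.mean()]
# Constraint class 2: local observations (hydrograph fit, NSE > 0.9).
nse_ok = [k for k in sig_ok if nse(reservoir(k)) > 0.9]
print(len(params), len(sig_ok), len(nse_ok))
```

    A third, expert-knowledge step (e.g. rejecting sets whose internal states violate an expected spatial pattern) would filter the surviving sets the same way; the counts printed above can only decrease down the hierarchy.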

  2. Recovering task fMRI signals from highly under-sampled data with low-rank and temporal subspace constraints.

    PubMed

    Chiew, Mark; Graedel, Nadine N; Miller, Karla L

    2018-07-01

    Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  3. 2D Time-lapse Seismic Tomography Using An Active Time Constraint (ATC) Approach

    EPA Science Inventory

    We propose a 2D seismic time-lapse inversion approach to image the evolution of seismic velocities over time and space. The forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wave-paths are represented by Fresnel volumes rathe...

  4. Universal Quantification in a Constraint-Based Planner

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Frank, Jeremy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Constraints and universal quantification are both useful in planning, but handling universally quantified constraints presents some novel challenges. We present a general approach to proving the validity of universally quantified constraints. The approach essentially consists of checking that the constraint is not violated for all members of the universe. We show that this approach can sometimes be applied even when variable domains are infinite, and we present some useful special cases where this can be done efficiently.

  5. An intelligent case-adjustment algorithm for the automated design of population-based quality auditing protocols.

    PubMed

    Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A

    2004-01-01

    We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guide-line elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.

  6. The importance of topography controlled sub-grid process heterogeneity in distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, R. C.; Samaniego, L.; Mai, J.; Kumar, R.; Thober, S.; Zink, M.; Schäfer, D.; Savenije, H. H. G.; Hrachowitz, M.

    2015-12-01

    Heterogeneity of landscape features like terrain, soil, and vegetation properties affect the partitioning of water and energy. However, it remains unclear to which extent an explicit representation of this heterogeneity at the sub-grid scale of distributed hydrological models can improve the hydrological consistency and the robustness of such models. In this study, hydrological process complexity arising from sub-grid topography heterogeneity was incorporated in the distributed mesoscale Hydrologic Model (mHM). Seven study catchments across Europe were used to test whether (1) the incorporation of additional sub-grid variability on the basis of landscape-derived response units improves model internal dynamics, (2) the application of semi-quantitative, expert-knowledge based model constraints reduces model uncertainty; and (3) the combined use of sub-grid response units and model constraints improves the spatial transferability of the model. Unconstrained and constrained versions of both, the original mHM and mHMtopo, which allows for topography-based sub-grid heterogeneity, were calibrated for each catchment individually following a multi-objective calibration strategy. In addition, four of the study catchments were simultaneously calibrated and their feasible parameter sets were transferred to the remaining three receiver catchments. In a post-calibration evaluation procedure the probabilities of model and transferability improvement, when accounting for sub-grid variability and/or applying expert-knowledge based model constraints, were assessed on the basis of a set of hydrological signatures. In terms of the Euclidian distance to the optimal model, used as overall measure for model performance with respect to the individual signatures, the model improvement achieved by introducing sub-grid heterogeneity to mHM in mHMtopo was on average 13 %. 
The addition of semi-quantitative constraints to mHM and mHMtopo resulted in improvements of 13 and 19 % respectively, compared to the base case of the unconstrained mHM. Most significant improvements in signature representations were, in particular, achieved for low flow statistics. The application of prior semi-quantitative constraints further improved the partitioning between runoff and evaporative fluxes. Besides, it was shown that suitable semi-quantitative prior constraints in combination with the transfer function based regularization approach of mHM, can be beneficial for spatial model transferability as the Euclidian distances for the signatures improved on average by 2 %. The effect of semi-quantitative prior constraints combined with topography-guided sub-grid heterogeneity on transferability showed a more variable picture of improvements and deteriorations, but most improvements were observed for low flow statistics.

  7. Bfv Quantization of Relativistic Spinning Particles with a Single Bosonic Constraint

    NASA Astrophysics Data System (ADS)

    Rabello, Silvio J.; Vaidya, Arvind N.

    Using the BFV approach we quantize a pseudoclassical model of the spin-1/2 relativistic particle that contains a single bosonic constraint, contrary to the usual locally supersymmetric models that display first and second class constraints.

  8. Data-Flow Based Model Analysis

    NASA Technical Reports Server (NTRS)

    Saad, Christian; Bauer, Bernhard

    2010-01-01

    The concept of (meta) modeling combines an intuitive way of formalizing the structure of an application domain with a high expressiveness that makes it suitable for a wide variety of use cases and has therefore become an integral part of many areas in computer science. While the definition of modeling languages through the use of meta models, e.g. in Unified Modeling Language (UML), is a well-understood process, their validation and the extraction of behavioral information is still a challenge. In this paper we present a novel approach for dynamic model analysis along with several fields of application. Examining the propagation of information along the edges and nodes of the model graph allows to extend and simplify the definition of semantic constraints in comparison to the capabilities offered by e.g. the Object Constraint Language. Performing a flow-based analysis also enables the simulation of dynamic behavior, thus providing an "abstract interpretation"-like analysis method for the modeling domain.

  9. A time-parallel approach to strong-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Rao, Vishwas; Sandu, Adrian

    2016-05-01

    A parallel-in-time algorithm based on an augmented Lagrangian approach is proposed to solve four-dimensional variational (4D-Var) data assimilation problems. The assimilation window is divided into multiple sub-intervals that allows parallelization of cost function and gradient computations. The solutions to the continuity equations across interval boundaries are added as constraints. The augmented Lagrangian approach leads to a different formulation of the variational data assimilation problem than the weakly constrained 4D-Var. A combination of serial and parallel 4D-Vars to increase performance is also explored. The methodology is illustrated on data assimilation problems involving the Lorenz-96 and the shallow water models.

  10. Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material

    NASA Astrophysics Data System (ADS)

    Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena

    2011-05-01

    This paper shows a complete approach to solve a given problem, from the experimentation to the optimization of different cutting parameters. In response to an industrial problem of slotting FSX 414, a Cobalt-based refractory material, we have implemented a design of experiment to determine the most influent parameters on the tool life, the surface roughness and the cutting forces. After theses trials, an optimization approach has been implemented to find the lowest manufacturing cost while respecting the roughness constraints and cutting force limitation constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic programming algorithm (SQP) for a constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which allows improving the RSM accuracy in the vicinity of the global optimum, is presented. With these models and these trials, we could apply and compare our optimization methods in order to get the lowest cost for the best quality, i.e. a satisfying surface roughness and limited cutting forces.

  11. Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article

    NASA Technical Reports Server (NTRS)

    Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.

    2017-01-01

    Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of E-STA to the MVF allowed high-quality fixed base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed base methodology using measured constraint shapes as virtual references and measured frequency response functions, and post-survey comparison between measured and analysis fixed base modal parameters.

  12. Caliber Corrected Markov Modeling (C2M2): Correcting Equilibrium Markov Models.

    PubMed

    Dixit, Purushottam D; Dill, Ken A

    2018-02-13

    Rate processes are often modeled using Markov State Models (MSMs). Suppose you know a prior MSM and then learn that your prediction of some particular observable rate is wrong. What is the best way to correct the whole MSM? For example, molecular dynamics simulations of protein folding may sample many microstates, possibly giving correct pathways through them while also giving the wrong overall folding rate when compared to experiment. Here, we describe Caliber Corrected Markov Modeling (C 2 M 2 ), an approach based on the principle of maximum entropy for updating a Markov model by imposing state- and trajectory-based constraints. We show that such corrections are equivalent to asserting position-dependent diffusion coefficients in continuous-time continuous-space Markov processes modeled by a Smoluchowski equation. We derive the functional form of the diffusion coefficient explicitly in terms of the trajectory-based constraints. We illustrate with examples of 2D particle diffusion and an overdamped harmonic oscillator.

  13. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    PubMed

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints.

    PubMed

    Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan

    2017-04-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.

  15. From elementary flux modes to elementary flux vectors: Metabolic pathway analysis with arbitrary linear flux constraints

    PubMed Central

    Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan

    2017-01-01

    Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903

  16. Work domain constraints for modelling surgical performance.

    PubMed

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of adjustment of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed a significant suitability with the model and regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.

  17. Application of maximum entropy to statistical inference for inversion of data from a single track segment.

    PubMed

    Stotts, Steven A; Koch, Robert A

    2017-08-01

    In this paper an approach is presented to estimate the constraint required to apply maximum entropy (ME) for statistical inference with underwater acoustic data from a single track segment. Previous algorithms for estimating the ME constraint require multiple source track segments to determine the constraint. The approach is relevant for addressing model mismatch effects, i.e., inaccuracies in parameter values determined from inversions because the propagation model does not account for all acoustic processes that contribute to the measured data. One effect of model mismatch is that the lowest cost inversion solution may be well outside a relatively well-known parameter value's uncertainty interval (prior), e.g., source speed from track reconstruction or towed source levels. The approach requires, for some particular parameter value, the ME constraint to produce an inferred uncertainty interval that encompasses the prior. Motivating this approach is the hypothesis that the proposed constraint determination procedure would produce a posterior probability density that accounts for the effect of model mismatch on inferred values of other inversion parameters for which the priors might be quite broad. Applications to both measured and simulated data are presented for model mismatch that produces minimum cost solutions either inside or outside some priors.

  18. 6 DOF synchronized control for spacecraft formation flying with input constraint and parameter uncertainties.

    PubMed

    Lv, Yueyong; Hu, Qinglei; Ma, Guangfu; Zhou, Jiakang

    2011-10-01

    This paper treats the problem of synchronized control of spacecraft formation flying (SFF) in the presence of input constraint and parameter uncertainties. More specifically, backstepping based robust control is first developed for the total 6 DOF dynamic model of SFF with parameter uncertainties, in which the model consists of relative translation and attitude rotation. Then this controller is redesigned to deal with the input constraint problem by incorporating a command filter such that the generated control could be implementable even under physical or operating constraints on the control input. The convergence of the proposed control algorithms is proved by the Lyapunov stability theorem. Compared with conventional methods, illustrative simulations of spacecraft formation flying are conducted to verify the effectiveness of the proposed approach to achieve the spacecraft track the desired attitude and position trajectories in a synchronized fashion even in the presence of uncertainties, external disturbances and control saturation constraint. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.

  19. The Integrated Medical Model - A Risk Assessment and Decision Support Tool for Human Space Flight Missions

    NASA Technical Reports Server (NTRS)

    Kerstman, Eric; Minard, Charles G.; Saile, Lynn; FreiredeCarvalho, Mary; Myers, Jerry; Walton, Marlei; Butler, Douglas; Lopez, Vilma

    2010-01-01

    The Integrated Medical Model (IMM) is a decision support tool that is useful to space flight mission planners and medical system designers in assessing risks and optimizing medical systems. The IMM employs an evidence-based, probabilistic risk assessment (PRA) approach within the operational constraints of space flight.

  20. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068

  1. Reduced Dynamics of the Non-holonomic Whipple Bicycle

    NASA Astrophysics Data System (ADS)

    Boyer, Frédéric; Porez, Mathieu; Mauny, Johan

    2018-06-01

    Though the bicycle is a familiar object of everyday life, modeling its full nonlinear three-dimensional dynamics in a closed symbolic form is a difficult issue for classical mechanics. In this article, we address this issue without resorting to the usual simplifications on the bicycle kinematics nor its dynamics. To derive this model, we use a general reduction-based approach in the principal fiber bundle of configurations of the three-dimensional bicycle. This includes a geometrically exact model of the contacts between the wheels and the ground, the explicit calculation of the kernel of constraints, along with the dynamics of the system free of any external forces, and its projection onto the kernel of admissible velocities. The approach takes benefits of the intrinsic formulation of geometric mechanics. Along the path toward the final equations, we show that the exact model of the bicycle dynamics requires to cope with a set of non-symmetric constraints with respect to the structural group of its configuration fiber bundle. The final reduced dynamics are simulated on several examples representative of the bicycle. As expected the constraints imposed by the ground contacts, as well as the energy conservation, are satisfied, while the dynamics can be numerically integrated in real time.

  2. Bayesian Model Selection under Time Constraints

    NASA Astrophysics Data System (ADS)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes even less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. The classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between bias of a model and its complexity. However, in practice, the runtime of models is another weight relevant factor for model selection. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We argue with the fact that more expensive models can be sampled much less under time constraints than faster models (in straight proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this misbalance into the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.

  3. Non-Deterministic Modelling of Food-Web Dynamics

    PubMed Central

    Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam

    2014-01-01

    A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as ‘null models of food-webs’ as originally advocated. PMID:25299245

  4. MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system

    NASA Astrophysics Data System (ADS)

    Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg

    2005-01-01

    We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and tools for accessing it. Based on an analysis of requirements, we selected MPEG-7, out of a set of candidates, as the basis of our metadata model. The openness and generality of MPEG-7 allow it to be used in a broad range of applications, but also increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints, however, are currently described only in textual form; conformance in terms of semantics can thus not be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach that formalizes the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.

  5. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modelling is a well-established methodology used to analyse and study biological networks on both a medium and genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA), which requires a modelling description to include the definition of a stoichiometric matrix, an objective function, and bounds on the values that fluxes can attain at steady state. The Flux Balance Constraints (FBC) package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models that can be built upon by the community to meet future needs (e.g. by extending it to cover dynamic FBC models).
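    The three ingredients FBA needs (stoichiometric matrix, objective function, flux bounds) map directly onto a linear program, which is what an FBC-annotated SBML file encodes. A minimal sketch with a hypothetical three-reaction pathway (uptake of A, conversion A to B, drain of B into biomass), solved here with a generic LP solver rather than an SBML/FBC toolchain:

```python
import numpy as np
from scipy.optimize import linprog

# Columns: R1 (uptake of A), R2 (A -> B), R3 (B -> biomass drain).
# Rows: internal metabolites A and B; steady state requires S @ v = 0.
S = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # flux bounds per reaction
c = np.array([0.0, 0.0, -1.0])            # maximize v3, so minimize -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
# The steady-state constraint forces v1 = v2 = v3, so the optimal biomass
# flux is pinned at the uptake bound: v = (10, 10, 10).
```

The flux bound on R1 plays the role of the FBC `fluxBound` construct; real genome-scale models have thousands of columns but the same LP structure.
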

  7. The theory of reasoned action as parallel constraint satisfaction: towards a dynamic computational model of health behavior.

    PubMed

    Orr, Mark G; Thrush, Roxanne; Plaut, David C

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual's pre-existing belief structure and the beliefs of others in the individual's social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics.
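    The settling dynamics the model relies on can be illustrated with a generic Hopfield-style constraint satisfaction sketch. This is not the authors' trained network: the weights (belief consistency), external inputs (social context), and read-out below are hypothetical.

```python
import math

def settle(weights, external, steps=50):
    """Parallel constraint satisfaction: each unit (belief) is repeatedly
    pushed toward states consistent with its weighted connections (learned
    belief structure) and its external input (immediate social context)."""
    n = len(external)
    act = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            net = external[i] + sum(weights[i][j] * act[j]
                                    for j in range(n) if j != i)
            act[i] = math.tanh(net)  # squash activation into (-1, 1)
    return act

# Hypothetical 3-belief network: beliefs 0 and 1 reinforce each other,
# belief 2 conflicts with both; the social context activates belief 0.
W = [[ 0.0,  0.8, -0.6],
     [ 0.8,  0.0, -0.6],
     [-0.6, -0.6,  0.0]]
acts = settle(W, external=[0.5, 0.0, 0.0])
intention = sum(acts) / len(acts)  # simple read-out of behavioral intention
```

The network settles with beliefs 0 and 1 active and belief 2 suppressed: intention emerges from both the pre-existing weight structure and the momentary input, which is the paper's central point.
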

  8. The Theory of Reasoned Action as Parallel Constraint Satisfaction: Towards a Dynamic Computational Model of Health Behavior

    PubMed Central

    Orr, Mark G.; Thrush, Roxanne; Plaut, David C.

    2013-01-01

    The reasoned action approach, although ubiquitous in health behavior theory (e.g., Theory of Reasoned Action/Planned Behavior), does not adequately address two key dynamical aspects of health behavior: learning and the effect of immediate social context (i.e., social influence). To remedy this, we put forth a computational implementation of the Theory of Reasoned Action (TRA) using artificial-neural networks. Our model re-conceptualized behavioral intention as arising from a dynamic constraint satisfaction mechanism among a set of beliefs. In two simulations, we show that constraint satisfaction can simultaneously incorporate the effects of past experience (via learning) with the effects of immediate social context to yield behavioral intention, i.e., intention is dynamically constructed from both an individual’s pre-existing belief structure and the beliefs of others in the individual’s social context. In a third simulation, we illustrate the predictive ability of the model with respect to empirically derived behavioral intention. As the first known computational model of health behavior, it represents a significant advance in theory towards understanding the dynamics of health behavior. Furthermore, our approach may inform the development of population-level agent-based models of health behavior that aim to incorporate psychological theory into models of population dynamics. PMID:23671603

  9. Path following control of planar snake robots using virtual holonomic constraints: theory and experiments.

    PubMed

    Rezapour, Ehsan; Pettersen, Kristin Y; Liljebäck, Pål; Gravdahl, Jan T; Kelasidi, Eleni

    This paper considers path following control of planar snake robots using virtual holonomic constraints. In order to present a model-based path following control design for the snake robot, we first derive the Euler-Lagrange equations of motion of the system. Subsequently, we define geometric relations among the generalized coordinates of the system, using the method of virtual holonomic constraints. These appropriately defined constraints shape the geometry of a constraint manifold for the system, which is a submanifold of the configuration space of the robot. Furthermore, we show that the constraint manifold can be made invariant by a suitable choice of feedback. In particular, we analytically design a smooth feedback control law to exponentially stabilize the constraint manifold. We show that enforcing the appropriately defined virtual holonomic constraints for the configuration variables implies that the robot converges to and follows a desired geometric path. Numerical simulations and experimental results are presented to validate the theoretical approach.

  10. Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Perez, Hector Eduardo

    This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. 
Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. 
A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.

  11. Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.

    PubMed

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon

    2017-10-01

    The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring the effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference-list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as the types and nature of constraints and the sources of data for the constraints, was identified. Thirdly, it was found that including the effects of constraints changed the CEA results in these articles. The review found that DES proved to be an effective technique for RM, but only a small number of studies have applied it in HTA. These studies nevertheless showed the important consequences of modelling physical constraints, and point to the need for a framework to guide future applications of this approach.
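    The effect of a constrained physical resource is easy to see in a minimal discrete-event sketch. The arrival pattern, service time, and bed counts below are illustrative; a real HTA model would add stochastic times, costs and health outcomes on top of this queueing core.

```python
def simulate(arrivals, service, n_beds):
    """Minimal discrete-event sketch of a constrained physical resource:
    each arriving patient takes the earliest-free bed, queueing while all
    n_beds are busy. Arrivals are assumed sorted by time."""
    free_at = [0.0] * n_beds          # time at which each bed next frees up
    waits = []
    for t in arrivals:
        bed = min(range(n_beds), key=lambda k: free_at[k])
        start = max(t, free_at[bed])  # wait if the chosen bed is still busy
        waits.append(start - t)
        free_at[bed] = start + service
    return waits

# With one bed a queue builds; a second bed removes all waiting. Shifts of
# this kind are exactly what changes downstream cost-effectiveness results.
waits_one_bed = simulate(arrivals=[0, 1, 2, 3], service=2.0, n_beds=1)
waits_two_beds = simulate(arrivals=[0, 1, 2, 3], service=2.0, n_beds=2)
```
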

  12. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    NASA Astrophysics Data System (ADS)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach with local shape constraints that prevent the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. These local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: (1) user-guided positioning, and (2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases that show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation on 30 clinical CT data sets shows an average Dice coefficient of 93% compared to the ground truth; the results are therefore in most cases comparable to manual delineation. Computation times for the automatic adaptation step are below 6 seconds, which makes the proposed system suitable for application in clinical practice.

  13. Extension of Murray's law using a non-Newtonian model of blood flow.

    PubMed

    Revellin, Rémi; Rousset, François; Baud, David; Bonjour, Jocelyn

    2009-05-15

    So far, none of the existing treatments of Murray's law deal with the non-Newtonian behavior of blood flow, although a non-Newtonian approach to blood flow modelling is more accurate. In the present paper, Murray's law, which applies to an arterial bifurcation, is generalized to a non-Newtonian blood flow model (power-law model). When the vessel size reaches the capillary limit, blood can be modeled using a non-Newtonian constitutive equation. Two different constraints are assumed in addition to the pumping power: the volume constraint or the surface constraint (related to the internal surface of the vessel). For the sake of generality, the relationships are given for an arbitrary number of daughter vessels. It is shown that for a cost function including the volume constraint, the classical Murray's law remains valid (i.e. ΣR^c = const with c = 3, independent of n, the dimensionless index in the viscosity equation; R being the radius of the vessel). On the contrary, for a cost function including the surface constraint, different values of c may be obtained depending on the value of n. We find that c varies for blood from 2.42 to 3 depending on the constraint and the fluid properties. For the Newtonian model, the surface constraint leads to c = 2.5. The cost function (based on the surface constraint) can be related to entropy generation by dividing it by the temperature. It is demonstrated that the entropy generated in all the daughter vessels is greater than the entropy generated in the parent vessel. Furthermore, the difference in entropy generation between the parent and daughter vessels is shown to be smaller for a non-Newtonian fluid than for a Newtonian fluid.
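    The conservation rule ΣR^c = const across a symmetric bifurcation can be checked numerically for the range of exponents quoted above (c = 3 for the volume constraint, down to c = 2.42 for the surface constraint). The symmetric two-daughter split is an illustrative simplification of the paper's arbitrary-N result.

```python
# Symmetric bifurcation: each of n_daughters carries an equal share of R^c,
# so the daughter radius follows directly from Sigma R^c = const.
def daughter_radius(parent_radius, n_daughters, c):
    return parent_radius * n_daughters ** (-1.0 / c)

parent = 1.0
for c in (3.0, 2.5, 2.42):   # exponents reported for the two constraints
    daughters = [daughter_radius(parent, 2, c)] * 2
    conserved = sum(r ** c for r in daughters)
    assert abs(conserved - parent ** c) < 1e-9
# For c = 3, each daughter radius is 2**(-1/3), about 0.794 of the parent's.
```
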

  14. Identification of watershed priority management areas under water quality constraints: A simulation-optimization approach with ideal load reduction

    NASA Astrophysics Data System (ADS)

    Dong, Feifei; Liu, Yong; Wu, Zhen; Chen, Yihui; Guo, Huaicheng

    2018-07-01

    Targeting nonpoint source (NPS) pollution hot spots is of vital importance for the placement of best management practices (BMPs). Although physically based watershed models have been widely used to estimate nutrient emissions, the connection between nutrient abatement and compliance with water quality standards has rarely been considered in NPS hotspot ranking, which may lead to ineffective decision-making. It is therefore critical to develop a strategy for identifying priority management areas (PMAs) based on the water quality response to nutrient load mitigation. A water-quality-constrained PMA identification framework is proposed in this study, based on a simulation-optimization approach with ideal load reduction (ILR-SO). It integrates the physically based Soil and Water Assessment Tool (SWAT) model and an optimization model under constraints of site-specific water quality standards. To our knowledge, this is the first effort to identify PMAs with simulation-based optimization. The SWAT model was established to simulate temporal and spatial nutrient loading and to evaluate the effectiveness of pollution mitigation. A metamodel was trained to establish a quantitative relationship between sources and water quality. Ranking of priority areas is based on the nutrient load reduction required in each sub-watershed to satisfy water quality standards in receiving waterbodies, calculated with a genetic algorithm (GA). The proposed approach was used to identify PMAs on the basis of diffuse total phosphorus (TP) in the Lake Dianchi watershed, one of the three most eutrophic large lakes in China. The modeling results demonstrated that 85% of diffuse TP came from 30% of the watershed area. Compared with the two conventional targeting strategies based on overland nutrient loss and in-stream nutrient loading, the ILR-SO model identified distinct PMAs and narrowed the coverage of management areas. This study addressed the urgent need to incorporate water quality response into PMA identification and showed that the ILR-SO approach is effective in guiding watershed management for aquatic ecosystem restoration.
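    The GA step (find per-sub-watershed load reductions that just satisfy the water quality target, then rank by required reduction) can be sketched as below. Everything here is hypothetical: the four TP loads, the linear "remaining load under target" constraint standing in for the SWAT-trained metamodel, and the simplified GA operators.

```python
import random

LOADS = [40.0, 30.0, 20.0, 10.0]  # hypothetical TP loads per sub-watershed
TARGET = 60.0                     # allowable total load for the standard

def fitness(ind):
    """Total load reduction, plus a heavy penalty when the remaining load
    still violates the (linearised) water quality target."""
    reduction = sum(l * r for l, r in zip(LOADS, ind))
    remaining = sum(LOADS) - reduction
    return reduction + 100.0 * max(0.0, remaining - TARGET)

def ga_minimise(pop=40, gens=60, seed=1):
    """Toy GA over per-sub-watershed reduction fractions r_i in [0, 1]."""
    rng = random.Random(seed)
    n = len(LOADS)
    popn = [[rng.random() for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[: pop // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)          # clipped Gaussian mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        popn = elite + children
    return min(popn, key=fitness)

best = ga_minimise()
# Sub-watersheds with the largest required reductions rank first as PMAs.
ranking = sorted(range(len(LOADS)), key=lambda i: LOADS[i] * best[i], reverse=True)
```
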

  15. Possibility-based robust design optimization for the structural-acoustic system with fuzzy parameters

    NASA Astrophysics Data System (ADS)

    Yin, Hui; Yu, Dejie; Yin, Shengwen; Xia, Baizhan

    2018-03-01

    Conventional engineering optimization problems that consider uncertainties are based on the probabilistic model. However, the probabilistic model may be unavailable because there is insufficient objective information to construct the precise probability distribution of the uncertainties. This paper proposes a possibility-based robust design optimization (PBRDO) framework for the uncertain structural-acoustic system based on the fuzzy set model, which can be constructed from expert opinions. The objective of robust design is to optimize the expectation and the variability of system performance with respect to uncertainties simultaneously. In the proposed PBRDO, the entropy of the fuzzy system response is used as the variability index; the weighted sum of the entropy and the expectation of the fuzzy response is used as the objective function, and the constraints are established in the possibility context. The computations for the constraints and the objective function of PBRDO are a triple-loop and a double-loop nested problem, respectively, whose computational costs are considerable. To improve the computational efficiency, the target performance approach is introduced to transform the calculation of the constraints into a double-loop nested problem. To further improve the computational efficiency, a Chebyshev fuzzy method (CFM) based on Chebyshev polynomials is proposed to estimate the objective function, and the Chebyshev interval method (CIM) is introduced to estimate the constraints, whereby the optimization problem is transformed into a single-loop one. Numerical results on a shell structural-acoustic system verify the effectiveness and feasibility of the proposed methods.

  16. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a modified harmony search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.

  17. Use of an uncertainty analysis for genome-scale models as a prediction tool for microbial growth processes in subsurface environments.

    PubMed

    Klier, Christine

    2012-03-06

    The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.
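    The parametrization in question is ordinary Michaelis-Menten uptake kinetics, v = Vmax · S / (Km + S), and its sensitivity to the affinity constant Km is easy to demonstrate. The parameter values below are illustrative, not fitted to P. putida.

```python
def uptake_rate(substrate, v_max, k_m):
    """Michaelis-Menten specific uptake flux for an external metabolite."""
    return v_max * substrate / (k_m + substrate)

S = 0.1        # external substrate concentration (arbitrary units)
V_MAX = 10.0   # maximum specific uptake rate
high_affinity = uptake_rate(S, V_MAX, k_m=0.01)
low_affinity = uptake_rate(S, V_MAX, k_m=0.5)
# At low substrate levels, a 50-fold spread in Km changes the uptake flux
# (and hence the growth rate that this flux constrains) more than five-fold,
# which is the sensitivity the study reports for simulated growth.
```
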

  18. An RBF-based reparameterization method for constrained texture mapping.

    PubMed

    Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J

    2012-07-01

    Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the feature points of a 3D model with the corresponding pixels in a texture image, surface parameterization must satisfy specific positional constraints. However, despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, this approach is mesh-free, so smooth texture mapping results can be generated without extra smoothing optimization.
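    The interpolation machinery behind such a reparameterization can be sketched as a smooth 2D warp built from Gaussian RBFs that moves user-specified constraint points exactly onto their targets. This toy version shows only the RBF solve and evaluation; the paper's method additionally guarantees the warp is foldover-free, which this sketch does not. The constraint points and kernel width are hypothetical.

```python
import math

def solve(A, b):
    """Small dense linear solver (Gaussian elimination, partial pivoting)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_warp(sources, targets, sigma=0.5):
    """Build f(p) = p + sum_i w_i * phi(|p - s_i|) sending each source
    constraint point to its target; Gaussian kernel phi."""
    phi = lambda r: math.exp(-(r / sigma) ** 2)
    A = [[phi(math.dist(si, sj)) for sj in sources] for si in sources]
    wx = solve(A, [t[0] - s[0] for s, t in zip(sources, targets)])
    wy = solve(A, [t[1] - s[1] for s, t in zip(sources, targets)])
    def warp(p):
        ks = [phi(math.dist(p, s)) for s in sources]
        return (p[0] + sum(w * k for w, k in zip(wx, ks)),
                p[1] + sum(w * k for w, k in zip(wy, ks)))
    return warp

# Two hypothetical constraint points inside a unit parameter domain.
warp = rbf_warp(sources=[(0.3, 0.3), (0.7, 0.6)],
                targets=[(0.35, 0.25), (0.65, 0.65)])
```

Because the weight system is solved exactly, each source constraint lands on its target, while points elsewhere move smoothly; being a pure function of position, the warp is mesh-free.
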

  19. Optimizing Global Coronal Magnetic Field Models Using Image-Based Constraints

    NASA Technical Reports Server (NTRS)

    Jones-Mecholsky, Shaela I.; Davila, Joseph M.; Uritskiy, Vadim

    2016-01-01

    The coronal magnetic field directly or indirectly affects a majority of the phenomena studied in the heliosphere. It provides energy for coronal heating, controls the release of coronal mass ejections, and drives heliospheric and magnetospheric activity, yet the coronal magnetic field itself has proven difficult to measure. This difficulty has prompted a decades-long effort to develop accurate, timely, models of the field, an effort that continues today. We have developed a method for improving global coronal magnetic field models by incorporating the type of morphological constraints that could be derived from coronal images. Here we report promising initial tests of this approach on two theoretical problems, and discuss opportunities for application.

  20. OPTIMIZING GLOBAL CORONAL MAGNETIC FIELD MODELS USING IMAGE-BASED CONSTRAINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Shaela I.; Davila, Joseph M.; Uritsky, Vadim, E-mail: shaela.i.jonesmecholsky@nasa.gov

    The coronal magnetic field directly or indirectly affects a majority of the phenomena studied in the heliosphere. It provides energy for coronal heating, controls the release of coronal mass ejections, and drives heliospheric and magnetospheric activity, yet the coronal magnetic field itself has proven difficult to measure. This difficulty has prompted a decades-long effort to develop accurate, timely, models of the field, an effort that continues today. We have developed a method for improving global coronal magnetic field models by incorporating the type of morphological constraints that could be derived from coronal images. Here we report promising initial tests of this approach on two theoretical problems, and discuss opportunities for application.

  1. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach tailored for nonlinear model predictive control of wave energy converters (WECs). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model, which reduces the dimension of the nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The resulting alleviation of the computational burden helps to promote an economical implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  2. Potential formulation of sleep dynamics

    NASA Astrophysics Data System (ADS)

    Phillips, A. J. K.; Robinson, P. A.

    2009-02-01

    A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
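    The reduced model's quartic two-well potential can be written down directly; the coefficients below are illustrative defaults, not the values fitted to the experimental criteria.

```python
def potential(x, a=1.0, b=1.0):
    """V(x) = b*x**4/4 - a*x**2/2: two wells at x = +/- sqrt(a/b)
    (the stable wake and sleep states) separated by a barrier at x = 0."""
    return b * x ** 4 / 4.0 - a * x ** 2 / 2.0

wells = (-1.0, 1.0)                              # minima for a = b = 1
barrier = potential(0.0) - potential(wells[0])   # barrier height = a**2/(4*b)
# In the mechanical analogy, the circadian and homeostatic drives tilt this
# potential to move the state between wells, while friction corresponds to
# the timecourse of neurotransmitter action.
```
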

  3. Structure Constraints in a Constraint-Based Planner

    NASA Technical Reports Server (NTRS)

    Pang, Wan-Lin; Golden, Keith

    2004-01-01

    In this paper we report our work on a new constraint domain, where variables can take structured values. Earth-science data processing (ESDP) is a planning domain that requires the ability to represent and reason about complex constraints over structured data, such as satellite images. This paper reports on a constraint-based planner for ESDP and similar domains. We discuss our approach for translating a planning problem into a constraint satisfaction problem (CSP) and for representing and reasoning about structured objects and constraints over structures.

  4. Formulation analysis and computation of an optimization-based local-to-nonlocal coupling method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, Marta; Bochev, Pavel Blagoveston

    2017-01-01

    In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach couches the coupling of the models into a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal di usion coupling. Numerical examples illustrate the theoretical properties of the approach.

  5. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-off in new product development. It starts by laying out different possible methods and the reasons using constrained optimization via simulation models. It is then followed by the review of different simulation optimization approach to address constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  6. Constraint-Based Abstract Semantics for Temporal Logic: A Direct Approach to Design and Implementation

    NASA Astrophysics Data System (ADS)

    Banda, Gourinath; Gallagher, John P.

    interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.

  7. A new approach to mixed H2/H infinity controller synthesis using gradient-based parameter optimization methods

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi; Schoemig, Ewald

    1993-01-01

    In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions and hence multiple performance criteria and H-infinity constraints and incorporates additional constraints such as fixed-order and/or fixed structure controllers. The defined penalty function is applicable to any constraint that is expressible in form of a real symmetric matrix-inequity.

  8. Dynamic Distribution and Layouting of Model-Based User Interfaces in Smart Environments

    NASA Astrophysics Data System (ADS)

    Roscher, Dirk; Lehmann, Grzegorz; Schwartze, Veit; Blumendorf, Marco; Albayrak, Sahin

    The developments in computer technology in the last decade change the ways of computer utilization. The emerging smart environments make it possible to build ubiquitous applications that assist users during their everyday life, at any time, in any context. But the variety of contexts-of-use (user, platform and environment) makes the development of such ubiquitous applications for smart environments and especially its user interfaces a challenging and time-consuming task. We propose a model-based approach, which allows adapting the user interface at runtime to numerous (also unknown) contexts-of-use. Based on a user interface modelling language, defining the fundamentals and constraints of the user interface, a runtime architecture exploits the description to adapt the user interface to the current context-of-use. The architecture provides automatic distribution and layout algorithms for adapting the applications also to contexts unforeseen at design time. Designers do not specify predefined adaptations for each specific situation, but adaptation constraints and guidelines. Furthermore, users are provided with a meta user interface to influence the adaptations according to their needs. A smart home energy management system serves as running example to illustrate the approach.

  9. A scale-based approach to interdisciplinary research and expertise in sports.

    PubMed

    Ibáñez-Gijón, Jorge; Buekers, Martinus; Morice, Antoine; Rao, Guillaume; Mascret, Nicolas; Laurin, Jérome; Montagne, Gilles

    2017-02-01

    After more than 20 years since the introduction of ecological and dynamical approaches in sports research, their promising opportunity for interdisciplinary research has not been fulfilled yet. The complexity of the research process and the theoretical and empirical difficulties associated with an integrated ecological-dynamical approach have been the major factors hindering the generalisation of interdisciplinary projects in sports sciences. To facilitate this generalisation, we integrate the major concepts from the ecological and dynamical approaches to study behaviour as a multi-scale process. Our integration gravitates around the distinction between functional (ecological) and execution (organic) scales, and their reciprocal intra- and inter-scale constraints. We propose an (epistemological) scale-based definition of constraints that accounts for the concept of synergies as emergent coordinative structures. To illustrate how we can operationalise the notion of multi-scale synergies we use an interdisciplinary model of locomotor pointing. To conclude, we show the value of this approach for interdisciplinary research in sport sciences, as we discuss two examples of task-specific dimensionality reduction techniques in the context of an ongoing project that aims to unveil the determinants of expertise in basketball free throw shooting. These techniques provide relevant empirical evidence to help bootstrap the challenging modelling efforts required in sport sciences.

  10. Hydrological and biogeochemical constraints on terrestrial carbon cycle feedbacks

    NASA Astrophysics Data System (ADS)

    Mystakidis, Stefanos; Seneviratne, Sonia I.; Gruber, Nicolas; Davin, Edouard L.

    2017-01-01

    The feedbacks between climate, atmospheric CO2 concentration and the terrestrial carbon cycle are a major source of uncertainty in future climate projections with Earth systems models. Here, we use observation-based estimates of the interannual variations in evapotranspiration (ET), net biome productivity (NBP), as well as the present-day sensitivity of NBP to climate variations, to constrain globally the terrestrial carbon cycle feedbacks as simulated by models that participated in the fifth phase of the coupled model intercomparison project (CMIP5). The constraints result in a ca. 40% lower response of NBP to climate change and a ca. 30% reduction in the strength of the CO2 fertilization effect relative to the unconstrained multi-model mean. While the unconstrained CMIP5 models suggest an increase in the cumulative terrestrial carbon storage (477 PgC) in response to an idealized scenario of 1%/year atmospheric CO2 increase, the constraints imply a ca. 19% smaller change. Overall, the applied emerging constraint approach offers a possibility to reduce uncertainties in the projections of the terrestrial carbon cycle, which is a key determinant of the future trajectory of atmospheric CO2 concentration and resulting climate change.

  11. Comparing supply and demand models for future photovoltaic power generation in the USA

    DOE PAGES

    Basore, Paul A.; Cole, Wesley J.

    2018-02-22

    We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in whichmore » low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.« less

  12. Comparing supply and demand models for future photovoltaic power generation in the USA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basore, Paul A.; Cole, Wesley J.

    We explore the plausible range of future deployment of photovoltaic generation capacity in the USA using a supply-focused model based on supply-chain growth constraints and a demand-focused model based on minimizing the overall cost of the electricity system. Both approaches require assumptions based on previous experience and anticipated trends. For each of the models, we assign plausible ranges for the key assumptions and then compare the resulting PV deployment over time. Each model was applied to 2 different future scenarios: one in which PV market penetration is ultimately constrained by the uncontrolled variability of solar power and one in whichmore » low-cost energy storage or some equivalent measure largely alleviates this constraint. The supply-focused and demand-focused models are in substantial agreement, not just in the long term, where deployment is largely determined by the assumed market penetration constraints, but also in the interim years. For the future scenario without low-cost energy storage or equivalent measures, the 2 models give an average plausible range of PV generation capacity in the USA of 150 to 530 GWdc in 2030 and 260 to 810 GWdc in 2040. With low-cost energy storage or equivalent measures, the corresponding ranges are 160 to 630 GWdc in 2030 and 280 to 1200 GWdc in 2040. The latter range is enough to supply 10% to 40% of US electricity demand in 2040, based on current demand growth.« less

  13. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models.

    PubMed

    Cotten, Cameron; Reed, Jennifer L

    2013-01-30

    Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets.

  14. Mechanistic analysis of multi-omics datasets to generate kinetic parameters for constraint-based metabolic models

    PubMed Central

    2013-01-01

    Background Constraint-based modeling uses mass balances, flux capacity, and reaction directionality constraints to predict fluxes through metabolism. Although transcriptional regulation and thermodynamic constraints have been integrated into constraint-based modeling, kinetic rate laws have not been extensively used. Results In this study, an in vivo kinetic parameter estimation problem was formulated and solved using multi-omic data sets for Escherichia coli. To narrow the confidence intervals for kinetic parameters, a series of kinetic model simplifications were made, resulting in fewer kinetic parameters than the full kinetic model. These new parameter values are able to account for flux and concentration data from 20 different experimental conditions used in our training dataset. Concentration estimates from the simplified kinetic model were within one standard deviation for 92.7% of the 790 experimental measurements in the training set. Gibbs free energy changes of reaction were calculated to identify reactions that were often operating close to or far from equilibrium. In addition, enzymes whose activities were positively or negatively influenced by metabolite concentrations were also identified. The kinetic model was then used to calculate the maximum and minimum possible flux values for individual reactions from independent metabolite and enzyme concentration data that were not used to estimate parameter values. Incorporating these kinetically-derived flux limits into the constraint-based metabolic model improved predictions for uptake and secretion rates and intracellular fluxes in constraint-based models of central metabolism. Conclusions This study has produced a method for in vivo kinetic parameter estimation and identified strategies and outcomes of kinetic model simplification. 
We also have illustrated how kinetic constraints can be used to improve constraint-based model predictions for intracellular fluxes and biomass yield and identify potential metabolic limitations through the integrated analysis of multi-omics datasets. PMID:23360254

  15. Aeroelastic Wing Shaping Control Subject to Actuation Constraints.

    NASA Technical Reports Server (NTRS)

    Swei, Sean Shan-Min; Nguyen, Nhan

    2014-01-01

    This paper considers the control of coupled aeroelastic aircraft model which is configured with Variable Camber Continuous Trailing Edge Flap (VCCTEF) system. The relative deflection between two adjacent flaps is constrained and this actuation constraint is accounted for when designing an effective control law for suppressing the wing vibration. A simple tuned-mass damper mechanism with two attached masses is used as an example to demonstrate the effectiveness of vibration suppression with confined motion of tuned masses. In this paper, a dynamic inversion based pseudo-control hedging (PCH) and bounded control approach is investigated, and for illustration, it is applied to the NASA Generic Transport Model (GTM) configured with VCCTEF system.

  16. A spatially localized architecture for fast and modular DNA computing

    NASA Astrophysics Data System (ADS)

    Chatterjee, Gourab; Dalchau, Neil; Muscat, Richard A.; Phillips, Andrew; Seelig, Georg

    2017-09-01

    Cells use spatial constraints to control and accelerate the flow of information in enzyme cascades and signalling networks. Synthetic silicon-based circuitry similarly relies on spatial constraints to process information. Here, we show that spatial organization can be a similarly powerful design principle for overcoming limitations of speed and modularity in engineered molecular circuits. We create logic gates and signal transmission lines by spatially arranging reactive DNA hairpins on a DNA origami. Signal propagation is demonstrated across transmission lines of different lengths and orientations and logic gates are modularly combined into circuits that establish the universality of our approach. Because reactions preferentially occur between neighbours, identical DNA hairpins can be reused across circuits. Co-localization of circuit elements decreases computation time from hours to minutes compared to circuits with diffusible components. Detailed computational models enable predictive circuit design. We anticipate our approach will motivate using spatial constraints for future molecular control circuit designs.

  17. A Sampling-Based Bayesian Approach for Cooperative Multiagent Online Search With Resource Constraints.

    PubMed

    Xiao, Hu; Cui, Rongxin; Xu, Demin

    2018-06-01

    This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with less resource consumptions, and to keep the resource consumption of each agent balanced simultaneously. To this end, a utility function-based optimization problem is put forward, and it is solved by a gradient-based approach. Several contrastive simulations demonstrate that compared with other existing approaches, the proposed one uses less overall resources and shows a better performance of balancing the resource consumption.

  18. Constrained Kalman Filtering Via Density Function Truncation for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2006-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter truncates the PDF (probability density function) of the Kalman filter estimate at the known constraints and then computes the constrained filter estimate as the mean of the truncated PDF. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is demonstrated via simulation results obtained from a turbofan engine model. The turbofan engine model contains 3 state variables, 11 measurements, and 10 component health parameters. It is also shown that the truncated Kalman filter may be a more accurate way of incorporating inequality constraints than other constrained filters (e.g., the projection approach to constrained filtering).

  19. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; Woods, Ross A.; Uijlenhoet, Remko; Bennett, Katrina E.; Pauwels, Valentijn R. N.; Cai, Xitian; Wood, Andrew W.; Peters-Lidard, Christa D.

    2017-07-01

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  20. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    NASA Astrophysics Data System (ADS)

    Clark, M. P.; Nijssen, B.; Wood, A.; Mizukami, N.; Newman, A. J.

    2017-12-01

    The diversity in hydrologic models has historically led to great controversy on the "correct" approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. In this paper, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  1. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows to perform the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation of the integrated approach takes considerably less computational effort than the previous approach.

  2. Strict Constraint Feasibility in Analysis and Design of Uncertain Systems

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a methodology for the analysis and design optimization of models subject to parametric uncertainty, where hard inequality constraints are present. Hard constraints are those that must be satisfied for all parameter realizations prescribed by the uncertainty model. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value, i.e., hyper-spheres, and by sets of independently bounded uncertain variables, i.e., hyper-rectangles. These models make it possible to consider sets of parameters having comparable as well as dissimilar levels of uncertainty. Two alternative formulations for hyper-rectangular sets are proposed, one based on a transformation of variables and another based on an infinity norm approach. The suite of tools developed enable us to determine if the satisfaction of hard constraints is feasible by identifying critical combinations of uncertain parameters. Since this practice is performed without sampling or partitioning the parameter space, the resulting assessments of robustness are analytically verifiable. Strategies that enable the comparison of the robustness of competing design alternatives, the approximation of the robust design space, and the systematic search for designs with improved robustness characteristics are also proposed. Since the problem formulation is generic and the solution methods only require standard optimization algorithms for their implementation, the tools developed are applicable to a broad range of problems in several disciplines.

  3. Marker optimization for facial motion acquisition and deformation.

    PubMed

    Le, Binh H; Zhu, Mingyang; Deng, Zhigang

    2013-11-01

    A long-standing problem in marker-based facial motion capture is what are the optimal facial mocap marker layouts. Despite its wide range of potential applications, this problem has not yet been systematically explored to date. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness, and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate its two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.

  4. Robust fuzzy control subject to state variance and passivity constraints for perturbed nonlinear systems with multiplicative noises.

    PubMed

    Chang, Wen-Jer; Huang, Bo-Jyun

    2014-11-01

    The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered in this paper is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered in this paper include stability, passivity and individual state variance constraints. The Lyapunov stability theory is employed to derive sufficient conditions to achieve the above performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach to satisfy multiple performance constraints for perturbed nonlinear systems with multiplicative noises. At last, a numerical example for the control of perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  5. A new look at the simultaneous analysis and design of structures

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1994-01-01

    The minimum weight optimization of structural systems, subject to strength and displacement constraints as well as size side constraints, was investigated by the Simultaneous ANalysis and Design (SAND) approach. As an optimizer, the code NPSOL was used which is based on a sequential quadratic programming (SQP) algorithm. The structures were modeled by the finite element method. The finite element related input to NPSOL was automatically generated from the input decks of such standard FEM/optimization codes as NASTRAN or ASTROS, with the stiffness matrices, at present, extracted from the FEM code ANALYZE. In order to avoid ill-conditioned matrices that can be encountered when the global stiffness equations are used as additional nonlinear equality constraints in the SAND approach (with the displacements as additional variables), the matrix displacement method was applied. In this approach, the element stiffness equations are used as constraints instead of the global stiffness equations, in conjunction with the nodal force equilibrium equations. This approach adds the element forces as variables to the system. Since, for complex structures and the associated large and very sparce matrices, the execution times of the optimization code became excessive due to the large number of required constraint gradient evaluations, the Kreisselmeier-Steinhauser function approach was used to decrease the computational effort by reducing the nonlinear equality constraint system to essentially a single combined constraint equation. As the linear equality and inequality constraints require much less computational effort to evaluate, they were kept in their previous form to limit the complexity of the KS function evaluation. To date, the standard three-bar, ten-bar, and 72-bar trusses have been tested. For the standard SAND approach, correct results were obtained for all three trusses although convergence became slower for the 72-bar truss. 
When the matrix displacement method was used, correct results were still obtained, but the execution times became excessive due to the large number of constraint gradient evaluations required. Using the KS function, the computational effort dropped, but the optimization seemed to become less robust. The investigation of this phenomenon is continuing. As an alternate approach, the code MINOS for the optimization of sparse matrices can be applied to the problem in lieu of the Kreisselmeier-Steinhauser function. This investigation is underway.
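The Kreisselmeier-Steinhauser aggregation used above can be sketched in a few lines. The formula below is the standard KS envelope function; the sample constraint values and the choice rho = 50 are illustrative only.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i <= 0.

    KS(g) = g_max + (1/rho) * ln(sum_i exp(rho * (g_i - g_max)))

    The shift by g_max avoids overflow; KS(g) >= max(g), and it approaches
    max(g) from above as the draw-down factor rho grows.
    """
    g = np.asarray(g, dtype=float)
    g_max = g.max()
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

# One smooth scalar stands in for the whole constraint system:
g = [-0.3, -0.1, 0.05]
print(ks_aggregate(g))  # slightly above the worst constraint, max(g) = 0.05
```

Because the aggregate is a smooth, conservative upper bound on the worst constraint, an optimizer needs only one gradient evaluation per iteration for the combined constraint instead of one per original constraint, which is the computational saving the abstract describes.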

  6. A model composition for Mars derived from the oxygen isotopic ratios of martian/SNC meteorites. [Abstract only]

    NASA Technical Reports Server (NTRS)

    Delaney, J. S.

    1994-01-01

    Oxygen is the most abundant element in most meteorites, yet the ratios of its isotopes are seldom used to constrain the compositional history of achondrites. The two major achondrite groups have O isotope signatures that differ from any plausible chondritic precursors and lie between the ordinary and carbonaceous chondrite domains. If the assumption is made that the present global sampling of chondritic meteorites reflects the variability of O reservoirs at the time of planetesimal/planet aggregation in the early nebula, then the O in these groups must reflect mixing between known chondritic reservoirs. This approach, in combination with constraints based on Fe-Mn-Mg systematics, has been used previously to model the composition of the basaltic achondrite parent body (BAP) and provides a model precursor composition that is generally consistent with previous eucrite parent body (EPB) estimates. The same approach is applied to Mars exploiting the assumption that the SNC and related meteorites sample the martian lithosphere. Model planet and planetesimal compositions can be derived by mixing of known chondritic components using O isotope ratios as the fundamental compositional constraint. The major- and minor-element composition for Mars derived here and that derived previously for the basaltic achondrite parent body are, in many respects, compatible with model compositions generated using completely independent constraints. The role of volatile elements and alkalis in particular remains a major difficulty in applying such models.

  7. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that it is desirable for subgrid-scale models to be consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new eddy-viscosity-type model, based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
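As a rough illustration of how a velocity-gradient-based quantity such as the vortex stretching magnitude can enter an eddy-viscosity model, consider the sketch below. The functional form nu_e = (C*delta)^2 |S w| / |w| and the constant C are assumptions chosen for illustration, not the model published in the paper.

```python
import numpy as np

def vortex_stretching_eddy_viscosity(G, delta, C=0.5):
    """Illustrative eddy viscosity built from the vortex-stretching magnitude.

    G is the 3x3 velocity-gradient tensor du_i/dx_j. The strain-rate tensor
    is S = (G + G^T)/2, and the vorticity vector w comes from the
    antisymmetric part of G. The form nu_e = (C*delta)^2 * |S w| / |w| is a
    sketch only, not the published subgrid-scale model.
    """
    G = np.asarray(G, dtype=float)
    S = 0.5 * (G + G.T)
    # vorticity vector from the antisymmetric part Omega = (G - G^T)/2
    w = np.array([G[2, 1] - G[1, 2], G[0, 2] - G[2, 0], G[1, 0] - G[0, 1]])
    wn = np.linalg.norm(w)
    if wn == 0.0:
        return 0.0  # irrotational: no vorticity, hence no vortex stretching
    stretching = np.linalg.norm(S @ w)
    return (C * delta) ** 2 * stretching / wn
```

A useful property of any vortex-stretching-based measure is visible even in this toy form: for a pure shear gradient the stretching vector S w vanishes, so the model switches itself off in laminar shear flow, which is one motivation for building subgrid-scale models on this quantity.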

  8. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
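The constraint-aggregation step described above, partitioning rows of the constraint matrix and taking a convex combination over each group, can be sketched as follows. The uniform weights and the toy data are illustrative choices.

```python
import numpy as np

def aggregate_constraints(A, b, groups):
    """Coarsen the system A x <= b by replacing each group of rows with
    their average.

    An average of inequalities is a convex combination, so every x that is
    feasible for the original rows is also feasible for the aggregated rows:
    the coarse model is a relaxation of the original. Uniform weights are
    just one valid choice of convex combination.
    """
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    A_c = np.array([A[idx].mean(axis=0) for idx in groups])
    b_c = np.array([b[idx].mean() for idx in groups])
    return A_c, b_c
```

Because the coarse model only relaxes the feasible set, any original constraint violated by a coarse solution can be added back individually, which is the refinement loop the abstract describes for converging to the semi-coarse bound.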

  9. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Fu; Leyffer, Sven; Munson, Todd

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.

  10. Causal discovery and inference: concepts and recent methodological advances.

    PubMed

    Spirtes, Peter; Zhang, Kun

    This paper aims to give a broad coverage of central concepts and principles involved in automated causal inference and emerging approaches to causal discovery from i.i.d. data and from time series. After reviewing concepts including manipulations, causal models, sample predictive modeling, causal predictive modeling, and structural equation models, we present the constraint-based approach to causal discovery, which relies on the conditional independence relationships in the data, and discuss the assumptions underlying its validity. We then focus on causal discovery based on structural equation models, in which a key issue is the identifiability of the causal structure implied by appropriately defined structural equation models: in the two-variable case, under what conditions (and why) is the causal direction between the two variables identifiable? We show that the independence between the error term and causes, together with appropriate structural constraints on the structural equation, makes it possible. Next, we report some recent advances in causal discovery from time series. Assuming that the causal relations are linear with non-Gaussian noise, we mention two problems which are traditionally difficult to solve, namely causal discovery from subsampled data and causal discovery in the presence of confounding time series. Finally, we list a number of open questions in the field of causal discovery and inference.
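The constraint-based idea, removing an edge whenever a conditional independence is found, can be sketched with a PC-style skeleton search over Gaussian data. The partial-correlation measure and the fixed threshold below are simplifying assumptions for illustration; real implementations use a proper significance test such as Fisher's z.

```python
import numpy as np
from itertools import combinations

def partial_corr(data, i, j, cond):
    """Partial correlation of columns i and j given the columns in cond,
    read off the precision matrix of the relevant correlation submatrix."""
    idx = [i, j] + list(cond)
    P = np.linalg.pinv(np.corrcoef(data[:, idx], rowvar=False))
    return -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

def pc_skeleton(data, threshold=0.05):
    """Toy constraint-based skeleton search: start from the complete graph
    and drop edge i-j whenever some conditioning set renders i and j
    (approximately) independent, i.e. |partial correlation| < threshold."""
    n = data.shape[1]
    adj = {frozenset((i, j)) for i, j in combinations(range(n), 2)}
    for i, j in combinations(range(n), 2):
        others = [k for k in range(n) if k not in (i, j)]
        removed = False
        for size in range(len(others) + 1):
            for cond in combinations(others, size):
                if abs(partial_corr(data, i, j, cond)) < threshold:
                    adj.discard(frozenset((i, j)))
                    removed = True
                    break
            if removed:
                break
    return adj
```

On data generated from a linear chain X -> Y -> Z, the search keeps the edges X-Y and Y-Z but removes X-Z, because X and Z become independent once Y is conditioned on; this is exactly the kind of conditional independence relationship the constraint-based approach exploits.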

  11. Modeling of control forces for kinematical constraints in the dynamics of multibody systems: A new approach

    NASA Technical Reports Server (NTRS)

    Ider, Sitki Kemal

    1989-01-01

    Conventionally, kinematical constraints in multibody systems are treated similarly to geometrical constraints and are modeled by constraint reaction forces which are perpendicular to the constraint surfaces. In reality, however, one may want to achieve the desired kinematical conditions by control forces having different directions in relation to the constraint surfaces. The conventional equations of motion for multibody systems subject to kinematical constraints are generalized by introducing control forces of general direction. Conditions for the selection of the control force directions are also discussed. A redundant robotic system subject to prescribed end-effector motion is analyzed to illustrate the proposed methods.

  12. Task Delegation Based Access Control Models for Workflow Systems

    NASA Astrophysics Data System (ADS)

    Gaaloul, Khaled; Charoy, François

    e-Government organisations are facilitated and conducted using workflow management systems. Role-based access control (RBAC) is recognised as an efficient access control model for large organisations. The application of RBAC in workflow systems cannot, however, grant permissions to users dynamically while business processes are being executed. We currently observe a move away from predefined, strict workflow modelling towards approaches supporting flexibility at the organisational level. One specific approach is that of task delegation. Task delegation is a mechanism that supports organisational flexibility and ensures delegation of authority in access control systems. In this paper, we propose a Task-oriented Access Control (TAC) model based on RBAC to address these requirements. We aim to reason about tasks from both organisational and resource perspectives to analyse and specify authorisation constraints. Moreover, we present a fine-grained access control protocol to support delegation based on the TAC model.

  13. Neurocomputing strategies in decomposition based structural design

    NASA Technical Reports Server (NTRS)

    Szewczyk, Z.; Hajela, P.

    1993-01-01

    The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.

  14. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'Constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient way to check this consistency for any model, including modified gravity models. Our analysis points out an important feature that is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, a model in which the Lapse function and Shift vector remain constraints may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  15. Using constraints and their value for optimization of large ODE systems

    PubMed Central

    Domijan, Mirela; Rand, David A.

    2015-01-01

    We provide analytical tools to facilitate a rigorous assessment of the quality and value of the fit of a complex model to data. We use this to provide approaches to model fitting, parameter estimation, the design of optimization functions and experimental optimization. This is in the context where multiple constraints are used to select or optimize a large model defined by differential equations. We illustrate the approach using models of circadian clocks and the NF-κB signalling system. PMID:25673300

  16. Complexity Science Applications to Dynamic Trajectory Management: Research Strategies

    NASA Technical Reports Server (NTRS)

    Sawhill, Bruce; Herriot, James; Holmes, Bruce J.; Alexandrov, Natalia

    2009-01-01

    The promise of the Next Generation Air Transportation System (NextGen) is strongly tied to the concept of trajectory-based operations in the national airspace system. Existing efforts to develop trajectory management concepts are largely focused on individual trajectories, optimized independently, then de-conflicted among each other, and individually re-optimized, as possible. The benefits in capacity, fuel, and time are valuable, though they could perhaps be greater under alternative strategies. The concept of agent-based trajectories offers a strategy for automating the simultaneous management of multiple trajectories. The anticipated result of the strategy would be dynamic management of multiple trajectories with interacting and interdependent outcomes that satisfy multiple, conflicting constraints. These constraints would include the business case for operators, the capacity case for the Air Navigation Service Provider (ANSP), and the environmental case for noise and emissions. The benefits in capacity, fuel, and time might be improved over those possible under individual trajectory management approaches. The proposed approach relies on computational agent-based modeling (ABM), combinatorial mathematics, the application of "traffic physics" concepts to the challenge, and modeling and simulation capabilities. The proposed strategy could support transforming air traffic control from managing individual aircraft behaviors to managing the systemic behavior of air traffic in the NAS. A system built on the approach could provide the ability to know when regions of airspace approach being "full", that is, when they no longer contain viable local solution space for optimizing trajectories in advance.

  17. Constraint-Driven Software Design: An Escape from the Waterfall Model.

    ERIC Educational Resources Information Center

    de Hoog, Robert; And Others

    1994-01-01

    Presents the principles of a development methodology for software design based on a nonlinear, product-driven approach that integrates quality aspects. Two examples are given to show that the flexibility needed for building high quality systems leads to integrated development environments in which methodology, product, and tools are closely…

  18. The application of mean field theory to image motion estimation.

    PubMed

    Zhang, J; Hanauer, G G

    1995-01-01

    Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems; it provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
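The mean-field idea, replacing ICM's hard label decisions with soft probabilities updated against the average field of the neighbours, can be sketched for a binary MRF on a pixel grid. The Potts-style energies and the parameter beta below are illustrative; a real motion estimator would use motion-compensated prediction errors as the unary terms.

```python
import numpy as np

def mean_field_binary_mrf(unary, beta=1.0, iters=50):
    """Mean-field update for a binary MRF on a 2D grid.

    Instead of ICM's hard decisions, each site keeps a probability q of
    carrying label 1, updated from its unary cost and the mean field of its
    four neighbours. unary[k] holds the cost of label k at each pixel. This
    is a toy illustration of the mean-field idea, not a motion estimator.
    """
    h, w = unary[0].shape
    q = np.full((h, w), 0.5)
    for _ in range(iters):
        # summed neighbour belief in label 1 (4-connected, replicated borders)
        pad = np.pad(q, 1, mode='edge')
        nb = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        # expected disagreement with the field: label 1 clashes with the
        # (1 - q) neighbour mass, label 0 clashes with the q mass
        e1 = unary[1] + beta * (4.0 - nb)
        e0 = unary[0] + beta * nb
        q = 1.0 / (1.0 + np.exp(e1 - e0))  # softmax over the two labels
    return q
```

Each sweep is as cheap as an ICM sweep, but because q stays continuous the update cannot lock itself into a poor hard labeling after one pass, which is the qualitative advantage the abstract attributes to the mean-field approach.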

  19. Assessing the Impact of U.S. Food Assistance Delivery Policies on Child Mortality in Northern Kenya.

    PubMed

    Nikulkov, Alex; Barrett, Christopher B; Mude, Andrew G; Wein, Lawrence M

    2016-01-01

    The U.S. is the main country in the world that delivers its food assistance primarily via transoceanic shipments of commodity-based in-kind food. This approach is costlier and less timely than cash-based assistance, which includes cash transfers, food vouchers, and local and regional procurement, where food is bought in or near the recipient country. The U.S.'s approach is exacerbated by a requirement that half of its transoceanic food shipments be sent on U.S.-flag vessels. We estimate the effect of these U.S. food assistance distribution policies on child mortality in northern Kenya by formulating and optimizing a supply chain model. In our model, monthly orders of transoceanic shipments and cash-based interventions are chosen to minimize child mortality subject to an annual budget constraint and to policy constraints on the allowable proportions of cash-based interventions and non-U.S.-flag shipments. By varying the restrictiveness of these policy constraints, we assess the impact of possible changes in U.S. food aid policies on child mortality. The model includes an existing regression model that uses household survey data and geospatial data to forecast the mean mid-upper-arm circumference Z scores among children in a community, and allows food assistance to increase Z scores, and Z scores to influence mortality rates. We find that cash-based interventions are a much more powerful policy lever than the U.S.-flag vessel requirement: switching to cash-based interventions reduces child mortality from 4.4% to 3.7% (a 16.2% relative reduction) in our model, whereas eliminating the U.S.-flag vessel restriction without increasing the use of cash-based interventions generates a relative reduction in child mortality of only 1.1%.
The great majority of the gains achieved by cash-based interventions are due to their reduced cost, not their reduced delivery lead times; i.e., the reduction of shipping expenses allows for more food to be delivered, which reduces child mortality.
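A toy version of the cost channel described above (cash-based aid buys more food per dollar, and more delivered food lowers mortality) can be sketched as a one-dimensional search over the policy-capped cash fraction. All numbers and the mortality response are invented for illustration and are not the paper's supply-chain model.

```python
def tons_delivered(budget, cash_frac, cost_cash=400.0, cost_inkind=700.0):
    """Metric tons bought when a fraction of the budget goes to cash-based
    aid. Per-ton costs are illustrative; cash-based delivery is assumed
    cheaper, matching the cost channel described in the abstract."""
    return budget * (cash_frac / cost_cash + (1.0 - cash_frac) / cost_inkind)

def mortality(tons, base=0.044, scale=5000.0):
    """Toy decreasing mortality response to the quantity of food delivered."""
    return base * scale / (scale + tons)

def best_cash_fraction(budget, cap, steps=101):
    """Grid search over the allowed cash fraction in [0, cap]; the cap plays
    the role of the policy constraint on cash-based interventions."""
    fracs = [cap * k / (steps - 1) for k in range(steps)]
    return min(fracs, key=lambda f: mortality(tons_delivered(budget, f)))
```

Since cash delivery is assumed cheaper per ton and mortality falls with tonnage, the optimum always sits at the policy cap, which mirrors the paper's finding that relaxing the cash-based constraint is the powerful lever.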

  20. Assessing the Impact of U.S. Food Assistance Delivery Policies on Child Mortality in Northern Kenya

    PubMed Central

    Nikulkov, Alex; Barrett, Christopher B.; Mude, Andrew G.; Wein, Lawrence M.

    2016-01-01

    The U.S. is the main country in the world that delivers its food assistance primarily via transoceanic shipments of commodity-based in-kind food. This approach is costlier and less timely than cash-based assistance, which includes cash transfers, food vouchers, and local and regional procurement, where food is bought in or near the recipient country. The U.S.'s approach is exacerbated by a requirement that half of its transoceanic food shipments be sent on U.S.-flag vessels. We estimate the effect of these U.S. food assistance distribution policies on child mortality in northern Kenya by formulating and optimizing a supply chain model. In our model, monthly orders of transoceanic shipments and cash-based interventions are chosen to minimize child mortality subject to an annual budget constraint and to policy constraints on the allowable proportions of cash-based interventions and non-U.S.-flag shipments. By varying the restrictiveness of these policy constraints, we assess the impact of possible changes in U.S. food aid policies on child mortality. The model includes an existing regression model that uses household survey data and geospatial data to forecast the mean mid-upper-arm circumference Z scores among children in a community, and allows food assistance to increase Z scores, and Z scores to influence mortality rates. We find that cash-based interventions are a much more powerful policy lever than the U.S.-flag vessel requirement: switching to cash-based interventions reduces child mortality from 4.4% to 3.7% (a 16.2% relative reduction) in our model, whereas eliminating the U.S.-flag vessel restriction without increasing the use of cash-based interventions generates a relative reduction in child mortality of only 1.1%.
The great majority of the gains achieved by cash-based interventions are due to their reduced cost, not their reduced delivery lead times; i.e., the reduction of shipping expenses allows for more food to be delivered, which reduces child mortality. PMID:27997571

  1. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. 
Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.

  2. Spatiotemporal access model based on reputation for the sensing layer of the IoT.

    PubMed

    Guo, Yunchuan; Yin, Lihua; Li, Chao; Qian, Junyan

    2014-01-01

    Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints make the traditional access control techniques barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model.

  3. Chance-Constrained System of Systems Based Operation of Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargarian, Amin; Fu, Yong; Wu, Hongyu

    In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints are formulated to model the interactions between the ISO and DISCOs. The proposed model is solved by using the analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between the collaborative ISO and DISCOs. In this paper, a 6-bus system and a modified IEEE 118-bus power system are studied to show the effectiveness of the proposed algorithm.
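The coordination idea behind ATC, self-governing subproblems that exchange only a small amount of information about shared quantities, can be illustrated on a toy pair of quadratic agents coupled through one shared variable. The quadratic costs, penalty weight, and update rule below are illustrative stand-ins; the paper's actual model couples an ISO and DISCOs through chance constraints.

```python
def atc_two_level(target1=1.0, target2=3.0, w=1.0, iters=100):
    """Toy two-agent coordination in the spirit of analytical target
    cascading.

    Two self-governing agents share one coupling variable. Agent 1 minimizes
    (x1 - target1)^2 + v*(x1 - x2) + w^2*(x1 - x2)^2 in x1, agent 2 performs
    the analogous update in x2, and the multiplier v absorbs the remaining
    mismatch. Only x1, x2, and v are exchanged between the agents.
    """
    x1 = x2 = v = 0.0
    for _ in range(iters):
        # closed-form minimizers of each agent's local augmented cost
        x1 = (2.0 * target1 - v + 2.0 * w**2 * x2) / (2.0 + 2.0 * w**2)
        x2 = (2.0 * target2 + v + 2.0 * w**2 * x1) / (2.0 + 2.0 * w**2)
        v += 2.0 * w**2 * (x1 - x2)  # multiplier update on the mismatch
    return x1, x2

print(atc_two_level())  # both copies agree near the consensus optimum x = 2
```

Neither agent ever sees the other's cost function, only the other's current copy of the shared variable and the multiplier, which is the limited information exchange the abstract emphasizes.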

  4. Game-Based Approaches, Pedagogical Principles and Tactical Constraints: Examining Games Modification

    ERIC Educational Resources Information Center

    Serra-Olivares, Jaime; García-López, Luis M.; Calderón, Antonio

    2016-01-01

    The purpose of this study was to analyze the effect of modification strategies based on the pedagogical principles of the Teaching Games for Understanding approach on tactical constraints of four 3v3 soccer small-sided games. The game performance of 21 U-10 players was analyzed in a game similar to the adult game; one based on keeping-the-ball;…

  5. Model-based control strategies for systems with constraints of the program type

    NASA Astrophysics Data System (ADS)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material, the latter referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and they can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.

  6. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  7. The evolution of process-based hydrologic models: historical challenges and the collective quest for physical realism

    DOE PAGES

    Clark, Martyn P.; Bierkens, Marc F. P.; Samaniego, Luis; ...

    2017-07-11

    The diversity in hydrologic models has historically led to great controversy on the correct approach to process-based hydrologic modeling, with debates centered on the adequacy of process parameterizations, data limitations and uncertainty, and computational constraints on model analysis. Here, we revisit key modeling challenges on requirements to (1) define suitable model equations, (2) define adequate model parameters, and (3) cope with limitations in computing power. We outline the historical modeling challenges, provide examples of modeling advances that address these challenges, and define outstanding research needs. We also illustrate how modeling advances have been made by groups using models of different type and complexity, and we argue for the need to more effectively use our diversity of modeling approaches in order to advance our collective quest for physically realistic hydrologic models.

  8. Apprenticeship Learning: Learning to Schedule from Human Experts

    DTIC Science & Technology

    2016-06-09

    approaches to learning such models are based on Markov models, such as reinforcement learning or inverse reinforcement learning (Busoniu, Babuska, and De...via inverse reinforcement learning. In ICML. Barto, A. G., and Mahadevan, S. 2003. Recent advances in hierarchical reinforcement learning. Discrete...of tasks with temporal constraints. In Proc. AAAI, 2110–2116. Odom, P., and Natarajan, S. 2015. Active advice seeking for inverse reinforcement

  9. SATware: A Semantic Approach for Building Sentient Spaces

    NASA Astrophysics Data System (ADS)

    Massaguer, Daniel; Mehrotra, Sharad; Vaisenberg, Ronen; Venkatasubramanian, Nalini

    This chapter describes the architecture of a semantic-based middleware environment for building sensor-driven sentient spaces. The proposed middleware explicitly models sentient space semantics (i.e., entities, spaces, activities) and supports mechanisms to map sensor observations to the state of the sentient space. We argue that such a semantic approach provides a powerful programming environment for building sensor spaces. In addition, the approach provides natural ways to exploit semantics for a variety of purposes, including scheduling under resource constraints and sensor recalibration.

  10. Spacetime emergence of the Robertson-Walker universe from a matrix model.

    PubMed

    Erdmenger, Johanna; Meyer, René; Park, Jeong-Hyuck

    2007-06-29

    Using a novel, string theory-inspired formalism based on a Hamiltonian constraint, we obtain a conformal mechanical system for the spatially flat four-dimensional Robertson-Walker Universe. Depending on parameter choices, this system describes either a relativistic particle in the Robertson-Walker background or metric fluctuations of the Robertson-Walker geometry. Moreover, we derive a tree-level M theory matrix model in this time-dependent background. Imposing the Hamiltonian constraint forces the spacetime geometry to be fuzzy near the big bang, while the classical Robertson-Walker geometry emerges as the Universe expands. From our approach, we also derive the temperature of the Universe interpolating between the radiation and matter dominated eras.

  11. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method of solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints (termed complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful in cases where computation time is a critical factor in obtaining an optimized solution in due time.
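The two-step idea in this record generalizes readily. Below is a minimal sketch with SciPy, using a toy quadratic objective and a single "complicating" nonlinear constraint (all functions and numbers are illustrative, not the paper's water-resources model): step one solves the simplified model with the constraint dropped, and step two warm-starts the complete model from that solution.

```python
import numpy as np
from scipy.optimize import minimize

# Toy objective: a smooth bowl whose unconstrained minimizer is (2, 1).
def objective(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

# "Complicating" nonlinear constraint: x must stay inside the unit disk.
complicating = {"type": "ineq", "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}

# Step 1: solve the simplified model (complicating constraint dropped).
step1 = minimize(objective, x0=[0.0, 0.0])

# Step 2: solve the complete model, warm-started at the step-1 solution.
step2 = minimize(objective, x0=step1.x, constraints=[complicating])

print(step2.x)  # constrained minimizer, on the unit circle
```

Because the unconstrained minimizer (2, 1) lies outside the unit disk, the constraint is active at the step-2 optimum.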

  12. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  13. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, and garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model that is initially in a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
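The position-correction idea described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: only the free end of an overstretched spring is moved back onto the rest-length sphere around the fixed end, with no iteration.

```python
import numpy as np

def enforce_distance(p_fixed, p_free, rest_length):
    """Non-iterative correction: move only the free end of the spring
    back onto the rest-length sphere around the fixed end."""
    d = p_free - p_fixed
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return p_free.copy()        # degenerate case; leave untouched
    return p_fixed + d * (rest_length / dist)

# An overstretched spring: rest length 1, current length 2.
fixed = np.array([0.0, 0.0, 0.0])
free = np.array([0.0, 0.0, -2.0])   # pulled down by gravity
corrected = enforce_distance(fixed, free, rest_length=1.0)
print(corrected)  # [0. 0. -1.]
```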

  14. Aircraft wing structural design optimization based on automated finite element modelling and ground structure approach

    NASA Astrophysics Data System (ADS)

    Yang, Weizhu; Yue, Zhufeng; Li, Lei; Wang, Peiyan

    2016-01-01

    An optimization procedure combining an automated finite element modelling (AFEM) technique with a ground structure approach (GSA) is proposed for structural layout and sizing design of aircraft wings. The AFEM technique, based on CATIA VBA scripting and PCL programming, is used to generate models automatically considering the arrangement of inner systems. GSA is used for local structural topology optimization. The design procedure is applied to a high-aspect-ratio wing. The arrangement of the integral fuel tank, landing gear and control surfaces is considered. For the landing gear region, a non-conventional initial structural layout is adopted. The positions of components, the number of ribs and local topology in the wing box and landing gear region are optimized to obtain a minimum structural weight. Constraints include tank volume, strength, buckling and aeroelastic parameters. The results show that the combined approach leads to a greater weight saving, i.e. 26.5%, compared with three additional optimizations based on individual design approaches.

  15. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  16. Reproducibility of tract segmentation between sessions using an unsupervised modelling-based approach.

    PubMed

    Clayden, Jonathan D; Storkey, Amos J; Muñoz Maniega, Susana; Bastin, Mark E

    2009-04-01

    This work describes a reproducibility analysis of scalar water diffusion parameters, measured within white matter tracts segmented using a probabilistic shape modelling method. In common with previously reported neighbourhood tractography (NT) work, the technique optimises seed point placement for fibre tracking by matching the tracts generated using a number of candidate points against a reference tract, which is derived from a white matter atlas in the present study. No direct constraints are applied to the fibre tracking results. An Expectation-Maximisation algorithm is used to fully automate the procedure, and make dramatically more efficient use of data than earlier NT methods. Within-subject and between-subject variances for fractional anisotropy and mean diffusivity within the tracts are then separated using a random effects model. We find test-retest coefficients of variation (CVs) similar to those reported in another study using landmark-guided single seed points, and subject-to-subject CVs similar to those of a constraint-based multiple-ROI method. We conclude that our approach is at least as effective as other methods for tract segmentation using tractography, whilst also having some additional benefits, such as its provision of a goodness-of-match measure for each segmentation.

  17. A coupling strategy for nonlocal and local diffusion models with mixed volume constraints and boundary conditions

    DOE PAGES

    D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...

    2015-12-21

    We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. When some assumptions on the kernel functions hold, we prove that the resulting optimization problem is well-posed and discuss its implementation using Sandia's agile software components toolkit. The latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.

  18. Spacecraft command verification: The AI solution

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.

    1990-01-01

    Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
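Rule-based constraint checking of the kind CCC performs can be illustrated with a small sketch. The commands, timing rule, and ordering rule below are hypothetical, not TRW's actual spacecraft constraints: each rule inspects the whole command sequence and reports a violation message or nothing.

```python
# Hypothetical command sequence: (time_seconds, command_name) pairs.
sequence = [(0, "POWER_ON"), (2, "WARM_UP"), (3, "FIRE_THRUSTER")]

def min_gap_rule(seq, first, second, gap):
    """Timing constraint: `second` must occur at least `gap` s after `first`."""
    times = dict((cmd, t) for t, cmd in seq)
    if first in times and second in times and times[second] - times[first] < gap:
        return f"{second} must follow {first} by at least {gap}s"
    return None

def ordering_rule(seq, before, after):
    """Sequencing constraint: `before` must precede `after`."""
    names = [cmd for _, cmd in seq]
    if before in names and after in names and names.index(before) > names.index(after):
        return f"{before} must precede {after}"
    return None

rules = [
    lambda s: min_gap_rule(s, "WARM_UP", "FIRE_THRUSTER", gap=5),
    lambda s: ordering_rule(s, "POWER_ON", "FIRE_THRUSTER"),
]

violations = [msg for rule in rules if (msg := rule(sequence))]
print(violations)  # the 5 s warm-up gap is violated
```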

  19. Estimating net joint torques from kinesiological data using optimal linear system theory.

    PubMed

    Runge, C F; Zajac, F E; Allum, J H; Risher, D W; Bryson, A E; Honegger, F

    1995-12-01

    Net joint torques (NJT) are frequently computed to provide insights into the motor control of dynamic biomechanical systems. An inverse dynamics approach is almost always used, whereby the NJT are computed from 1) kinematic measurements (e.g., position of the segments), 2) kinetic measurements (e.g., ground reaction forces) that are, in effect, constraints defining unmeasured kinematic quantities based on a dynamic segmental model, and 3) numerical differentiation of the measured kinematics to estimate velocities and accelerations that are, in effect, additional constraints. Due to errors in the measurements, the segmental model, and the differentiation process, estimated NJT rarely produce the observed movement in a forward simulation when the dynamics of the segmental system are inherently unstable (e.g., human walking). Forward dynamic simulations are, however, essential to studies of muscle coordination. We have developed an alternative approach, using the linear quadratic follower (LQF) algorithm, which computes the NJT such that a stable simulation of the observed movement is produced and the measurements are replicated as well as possible. The LQF algorithm does not employ constraints depending on explicit differentiation of the kinematic data, but rather employs those depending on specification of a cost function, based on quantitative assumptions about data confidence. We illustrate the usefulness of the LQF approach by using it to estimate NJT exerted by standing humans perturbed by support-surface movements. We show that unless the number of kinematic and force variables recorded is sufficiently high, the confidence that can be placed in the estimates of the NJT, obtained by any method (e.g., LQF, or the inverse dynamics approach), may be unsatisfactorily low.
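The inverse-dynamics pipeline discussed above (differentiate measured kinematics, then apply a segmental model) can be sketched for a single-joint toy system. The parameters and the "measured" angle below are synthetic, and this illustrates the conventional approach rather than the authors' LQF algorithm.

```python
import numpy as np

# Synthetic "measured" joint angle for a single rigid segment:
# theta(t) = 0.1 sin(2 pi t), sampled at 1 kHz.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
theta = 0.1 * np.sin(2 * np.pi * t)

# Assumed segment parameters: inertia about the joint, mass, gravity,
# distance from joint to center of mass.
I, m, g, lc = 0.2, 1.0, 9.81, 0.3

# Inverse dynamics: differentiate the kinematics twice numerically,
# then apply the segmental equation  tau = I*theta_dd + m*g*lc*sin(theta).
theta_dd = np.gradient(np.gradient(theta, dt), dt)
tau = I * theta_dd + m * g * lc * np.sin(theta)

# Analytic acceleration for comparison: -0.1*(2 pi)^2 sin(2 pi t).
tau_exact = I * (-0.1 * (2 * np.pi) ** 2 * np.sin(2 * np.pi * t)) \
    + m * g * lc * np.sin(theta)
print(np.max(np.abs(tau - tau_exact)[5:-5]))  # small away from the edges
```

The larger errors at the window edges, where one-sided differences are used, hint at the differentiation-error problem the abstract describes.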

  20. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than from two adjacent frames only. Since straightforwardly obtaining data associations from multiple frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem by either developing complicated approximate algorithms, or simplifying MDA as a 2D assignment problem based upon the information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
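The equivalence-partitioning step is, computationally, a connected-components computation: observations that could belong to the same trajectory are linked, and each component becomes an independent association subproblem. A minimal sketch with hypothetical observation links (not the paper's CCM implementation):

```python
from collections import defaultdict

# Hypothetical links between observations that share spatial-temporal
# constraints; "c3" links to nothing and forms its own subproblem.
edges = [("a1", "b1"), ("b1", "c1"), ("a2", "b2")]
nodes = {"a1", "b1", "c1", "a2", "b2", "c3"}

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def connected_components(nodes, adj):
    seen, components = set(), []
    for start in sorted(nodes):
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # iterative depth-first search
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        components.append(comp)
    return components

parts = connected_components(nodes, adj)
print(parts)  # three independent association subproblems
```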

  1. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed-integer linear programming (MILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. 
Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. 
We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
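The LP scheduling formulation can be illustrated at toy scale with `scipy.optimize.linprog`. The configurations, execution times, and power draws below are invented; the average-power cap is made linear by rewriting "energy / time <= p_cap" as the equivalent constraint sum_i w_i * t_i * (p_i - p_cap) <= 0.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical per-configuration costs for one unit of work:
# execution time (s) and average power draw (W) in each DVFS state.
time_per_cfg = np.array([1.0, 1.3, 1.6])      # fastest ... slowest
power_per_cfg = np.array([100.0, 75.0, 60.0])
p_cap = 70.0                                   # enforced power bound (W)

# Decision variables w_i: fraction of the work run in configuration i.
# Minimize total time sum_i w_i*t_i subject to the linearized power cap,
# sum_i w_i = 1, and 0 <= w_i <= 1.
res = linprog(
    c=time_per_cfg,
    A_ub=[time_per_cfg * (power_per_cfg - p_cap)],
    b_ub=[0.0],
    A_eq=[np.ones(3)],
    b_eq=[1.0],
    bounds=[(0.0, 1.0)] * 3,
)
print(res.x, res.fun)  # optimal work split and total runtime
```

The optimum mixes the two slower states so that the time-weighted power exactly meets the cap, which is the general shape of power-constrained LP schedules.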

  2. Agent Based Modeling of Collaboration and Work Practices Onboard the International Space Station

    NASA Technical Reports Server (NTRS)

    Acquisti, Alessandro; Sierhuis, Maarten; Clancey, William J.; Bradshaw, Jeffrey M.; Shaffo, Mike (Technical Monitor)

    2002-01-01

    The International Space Station is one of the most complex projects ever undertaken, with numerous interdependent constraints affecting productivity and crew safety. This requires planning years before crew expeditions, and the use of sophisticated scheduling tools. Human work practices, however, are difficult to study and represent within traditional planning tools. We present an agent-based model and simulation of the activities and work practices of astronauts onboard the ISS based on an agent-oriented approach. The model represents 'a day in the life' of the ISS crew and is developed in Brahms, an agent-oriented, activity-based language used to model knowledge in situated action and learning in human activities.

  3. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of increasing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. The moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
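The fixed-problem-size idea behind moving-horizon estimation can be sketched without a full convex solver: here each horizon solves an ordinary least-squares trend fit over only the last H samples, so the solve stays the same size as measurements accumulate. The data are synthetic; the thesis's actual models and constraints are richer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy measurements of a linearly drifting trend (toy stand-in).
n = 200
true_trend = 0.05 * np.arange(n)
y = true_trend + rng.normal(scale=0.5, size=n)

# Moving-horizon sketch: re-estimate the trend from the last H samples.
H = 30
estimates = np.full(n, np.nan)
A = np.column_stack([np.ones(H), np.arange(H)])       # local model: offset + slope
for k in range(H, n + 1):
    window = y[k - H:k]
    coef, *_ = np.linalg.lstsq(A, window, rcond=None)
    estimates[k - 1] = coef[0] + coef[1] * (H - 1)    # trend at the newest sample

err = np.nanmean(np.abs(estimates[H - 1:] - true_trend[H - 1:]))
print(err)  # average tracking error of the fixed-size estimator
```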

  4. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
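The Bayesian-updating idea can be sketched as a sequential posterior update on a fault hypothesis as model-vs-sensor residuals arrive. The prior and likelihoods below are invented for illustration, not taken from the paper.

```python
# Track the probability that an air-handling-unit fault is present as
# residuals between model predictions and sensor data arrive.
p_fault = 0.05                  # assumed prior probability of a fault
p_large_given_fault = 0.8       # assumed P(large residual | fault)
p_large_given_ok = 0.1          # assumed P(large residual | no fault)

for residual_is_large in [True, True, False, True]:
    like_f = p_large_given_fault if residual_is_large else 1 - p_large_given_fault
    like_o = p_large_given_ok if residual_is_large else 1 - p_large_given_ok
    # Bayes' rule: posterior odds = likelihood ratio * prior odds.
    p_fault = like_f * p_fault / (like_f * p_fault + like_o * (1 - p_fault))

print(round(p_fault, 3))  # posterior fault probability after four updates
```

Noisy or uncertain data enter naturally through the likelihoods, which is why the approach tolerates imperfect models better than hard thresholding.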

  5. Introducing health gains in location-allocation models: A stochastic model for planning the delivery of long-term care

    NASA Astrophysics Data System (ADS)

    Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.

    2015-05-01

    Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
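The ε-constraint method referenced above reduces, in its basic form, to optimizing one objective while sweeping a bound on the other. A toy LP sketch with hypothetical per-site health gains and costs, and 0-1 site variables relaxed to [0, 1] so the problem stays linear:

```python
import numpy as np
from scipy.optimize import linprog

# Toy service-location choice: x_i in [0, 1] "opens" candidate site i.
gains = np.array([8.0, 5.0, 3.0])   # hypothetical health gains per site
costs = np.array([4.0, 3.0, 1.0])   # hypothetical operating costs per site

# epsilon-constraint method: maximize gains (minimize -gains @ x) while
# bounding costs @ x <= eps, then sweep eps to trace the trade-off frontier.
frontier = []
for eps in [2.0, 4.0, 8.0]:
    res = linprog(c=-gains, A_ub=[costs], b_ub=[eps],
                  bounds=[(0.0, 1.0)] * 3)
    frontier.append((eps, -res.fun))

print(frontier)  # expected gains achievable at each cost budget
```

Each (ε, gain) pair is one point on the cost-effectiveness frontier; the augmented variant used in the paper additionally avoids weakly dominated points.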

  6. A Combined Approach to Cartographic Displacement for Buildings Based on Skeleton and Improved Elastic Beam Algorithm

    PubMed Central

    Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya

    2014-01-01

    Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on constraint Delaunay triangulation (CDT) skeleton and improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. Then the displacement operation is conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In the iteration, the skeleton of the gap spaces is extracted using CDT. It then serves as an enhanced data model to detect conflicts and construct the proximity graph. Then, the proximity graph is adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement regarding the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727

  7. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Penalty alternating least squares (P-ALS) extension to multi-way problems.

    PubMed

    Richards, Selena; Miller, Robert; Gemperline, Paul

    2008-02-01

    An extension to the penalty alternating least squares (P-ALS) method, called multi-way penalty alternating least squares (NWAY P-ALS), is presented. Optionally, hard constraints (no deviation from predefined constraints) or soft constraints (small deviations from predefined constraints) can be applied through a row-wise penalty least squares function. NWAY P-ALS was applied to the multi-batch near-infrared (NIR) data acquired from the base-catalyzed esterification reaction of acetic anhydride in order to resolve the concentration and spectral profiles of 1-butanol with the reaction constituents. Application of the NWAY P-ALS approach resulted in the reduction of the number of active constraints at the solution point, while the batch column-wise augmentation allowed hard constraints in the spectral profiles and resolved rank deficiency problems of the measurement matrix. The results were compared with the multi-way multivariate curve resolution (MCR)-ALS results using hard and soft constraints to determine whether any advantages had been gained through using the weighted least squares function of NWAY P-ALS over the MCR-ALS resolution.
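Soft constraints via a quadratic penalty can be sketched in an ALS loop: each half-step pulls a factor toward its nonnegative (clipped) version instead of enforcing nonnegativity exactly, so small deviations remain possible. This is a simplified two-way illustration of the penalty idea on simulated data, not the NWAY P-ALS algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated bilinear spectroscopic data D = C_true @ S_true (+ noise):
# 2 components over 50 reaction times and 40 wavelength channels.
C_true = np.abs(rng.normal(size=(50, 2)))
S_true = np.abs(rng.normal(size=(2, 40)))
D = C_true @ S_true + 0.01 * rng.normal(size=(50, 40))

lam = 10.0                                 # penalty weight: larger = "harder" constraint
C = np.abs(rng.normal(size=(50, 2)))       # random nonnegative start
S = np.linalg.lstsq(C, D, rcond=None)[0]
for _ in range(100):
    # Each half-step solves, in closed form, the penalized problem
    #   min ||D - C S||^2 + lam * ||S - clip(S_prev, 0, inf)||^2
    # and the symmetric problem for C.
    S = np.linalg.solve(C.T @ C + lam * np.eye(2),
                        C.T @ D + lam * np.clip(S, 0.0, None))
    C = np.linalg.solve(S @ S.T + lam * np.eye(2),
                        S @ D.T + lam * np.clip(C, 0.0, None).T).T

residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(residual)  # relative fit error; negatives in C and S stay small
```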

  8. BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.

    The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin and the Batalin-Fradkin-Vilkovisky formalisms. First, the BFT Hamiltonian method is applied in order to systematically convert the second-class constraint system of the model into an effectively first-class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach which is simpler than the usual one. We also show that in our model the Dirac brackets of the phase space variables in the original second-class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space, owing to the linear character of the constraints, when compared with the Dirac or Faddeev-Jackiw formalisms. Then, following the BFV formalism, we show that the resulting Lagrangian, which preserves BRST symmetry under the standard local gauge-fixing procedure, naturally includes the Stückelberg scalar related to the explicit gauge symmetry breaking effect due to the presence of the mass term. We also analyze the nonstandard nonlocal gauge-fixing procedure.

  9. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. 
The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
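The cost of enforcing deliverable MU values only after optimization can be illustrated with a toy two-spot, two-voxel example. All influence values, prescriptions, and the MU grid below are made-up illustration numbers, not IMPT beam data, and a brute-force search over deliverable weights stands in for the study's LP model:

```python
# Rounding after optimization (the TPS post-processing step) is compared
# with searching deliverable weights directly (the DSIO idea).
A = [[0.7, 0.3],   # dose to voxel 0 per MU of spots 1 and 2
     [0.2, 0.8]]   # voxel 1
target = [1.0, 0.9]

def sq_error(w):
    return sum((row[0] * w[0] + row[1] * w[1] - t) ** 2
               for row, t in zip(A, target))

# Continuous least-squares optimum (2x2 linear solve), ignoring MU limits.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
w_cont = ((target[0] * A[1][1] - A[0][1] * target[1]) / det,
          (A[0][0] * target[1] - target[0] * A[1][0]) / det)

grid = 0.25  # deliverable MU resolution (hypothetical)
w_rounded = tuple(grid * round(w / grid) for w in w_cont)  # post-processing

# "DSIO"-style: optimize over deliverable weights only.
levels = [grid * i for i in range(9)]  # 0.0 .. 2.0 MU
w_dsio = min(((a, b) for a in levels for b in levels), key=sq_error)

print(sq_error(w_rounded), sq_error(w_dsio))
```

Constraining the search to deliverable values never does worse than rounding a continuous optimum, which is the intuition behind eliminating the post-processing step.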

  10. Animal movement constraints improve resource selection inference in the presence of telemetry error

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  11. Constraints and spandrels of interareal connectomes

    PubMed Central

    Rubinov, Mikail

    2016-01-01

    Interareal connectomes are whole-brain wiring diagrams of white-matter pathways. Recent studies have identified modules, hubs, module hierarchies and rich clubs as structural hallmarks of these wiring diagrams. An influential current theory postulates that connectome modules are adequately explained by evolutionary pressures for wiring economy, but that the other hallmarks are not explained by such pressures and are therefore less trivial. Here, we use constraint network models to test these postulates in current gold-standard vertebrate and invertebrate interareal-connectome reconstructions. We show that empirical wiring-cost constraints inadequately explain connectome module organization, and that simultaneous module and hub constraints induce the structural byproducts of hierarchies and rich clubs. These byproducts, known as spandrels in evolutionary biology, include the structural substrate of the default-mode network. Our results imply that currently standard connectome characterizations are based on circular analyses or double dipping, and we emphasize an integrative approach to future connectome analyses for avoiding such pitfalls. PMID:27924867

  12. Constraints and spandrels of interareal connectomes.

    PubMed

    Rubinov, Mikail

    2016-12-07

    Interareal connectomes are whole-brain wiring diagrams of white-matter pathways. Recent studies have identified modules, hubs, module hierarchies and rich clubs as structural hallmarks of these wiring diagrams. An influential current theory postulates that connectome modules are adequately explained by evolutionary pressures for wiring economy, but that the other hallmarks are not explained by such pressures and are therefore less trivial. Here, we use constraint network models to test these postulates in current gold-standard vertebrate and invertebrate interareal-connectome reconstructions. We show that empirical wiring-cost constraints inadequately explain connectome module organization, and that simultaneous module and hub constraints induce the structural byproducts of hierarchies and rich clubs. These byproducts, known as spandrels in evolutionary biology, include the structural substrate of the default-mode network. Our results imply that currently standard connectome characterizations are based on circular analyses or double dipping, and we emphasize an integrative approach to future connectome analyses for avoiding such pitfalls.

  13. Linear programming: an alternative approach for developing formulations for emergency food products.

    PubMed

    Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra

    2018-03-01

To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing formulations for food products. The model consisted of the objective function and the decision variables, which were the formulation costs and weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and the World Health Organization specifications of the content of nutrients in the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints for energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing formulations of food products to meet a set of nutritional requirements. This method is an excellent alternative to the traditional 'trial and error' method in designing formulations. © 2017 Society of Chemical Industry.
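A minimal version of this kind of formulation LP can be sketched with two commodities and two nutrient floors. The nutrient contents, costs, and requirement levels below are hypothetical illustration numbers, not the IOM/WHO specifications used in the study; with only two decision variables the optimum sits at a vertex of the feasible region, so the sketch enumerates constraint-boundary intersections instead of calling an LP solver:

```python
from itertools import combinations

cost = [0.05, 0.20]  # cost per 100 g of "flour" and "oil" (hypothetical)

# Each row encodes a1*x1 + a2*x2 >= b, with x_i in 100 g units.
constraints = [
    (364.0, 884.0, 1000.0),  # energy: at least 1000 kcal
    (10.0, 0.0, 20.0),       # protein: at least 20 g
    (1.0, 0.0, 0.0),         # x1 >= 0
    (0.0, 1.0, 0.0),         # x2 >= 0
]

def feasible(x1, x2, tol=1e-9):
    return all(a1 * x1 + a2 * x2 >= b - tol for a1, a2, b in constraints)

best = None  # (cost, x1, x2)
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries, no vertex
    x1 = (b * c2 - a2 * d) / det   # Cramer's rule for the 2x2 system
    x2 = (a1 * d - b * c1) / det
    if feasible(x1, x2):
        value = cost[0] * x1 + cost[1] * x2
        if best is None or value < best[0]:
            best = (value, x1, x2)

print(best)
```

Sensory-property constraints and the linearized energy-ratio constraints mentioned in the abstract would enter the same way, each as another inequality row.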

  14. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
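The two-step idea can be sketched in a few lines: a revised Monod rate law (a kinetic term times a thermodynamic factor, in the style of Jin and Bethke) fixes the substrate uptake rate, and that rate then bounds a flux-balance problem. Here the FBA step is collapsed to a single yield coefficient instead of a genome-scale model, and every parameter value is illustrative rather than fitted M. barkeri data:

```python
import math

R, T = 8.314e-3, 298.15  # kJ/(mol.K), K

def monod_thermo_rate(s, vmax, ks, dG, chi=2.0):
    """Rate = vmax * [kinetic factor] * [thermodynamic factor]; the
    thermodynamic factor shuts the reaction off near equilibrium."""
    kinetic = s / (ks + s)
    thermo = max(0.0, 1.0 - math.exp(dG / (chi * R * T)))
    return vmax * kinetic * thermo

# Step 1: kinetics + thermodynamics constrain acetate uptake.
uptake = monod_thermo_rate(s=5.0, vmax=10.0, ks=1.0, dG=-30.0)

# Step 2: a one-reaction stand-in for FBA -- maximize growth subject to
# v_uptake <= uptake, which here reduces to yield * uptake.
yield_biomass = 0.02  # gDW per mmol acetate (illustrative)
growth = yield_biomass * uptake
print(round(uptake, 3), round(growth, 4))
```

In the actual workflow the uptake value would be passed as a flux bound to a genome-scale model in the COBRA Toolbox, with PHREEQC supplying the chemistry that determines dG.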

  15. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach.
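The underlying round-robin structure that such a constraint model must produce can be generated with the classical circle method. This sketch only builds the plain double round-robin backbone; the divisional play, home/away patterns, and league-specific requirements handled by the paper's constraint programming model are omitted:

```python
def round_robin(teams):
    """Circle-method single round robin: in each round every team plays
    once, and over all rounds every pair meets exactly once."""
    teams = list(teams)
    if len(teams) % 2:
        teams.append(None)  # odd team count: add a bye
    n = len(teams)
    rounds = []
    for _ in range(n - 1):
        pairs = [(teams[i], teams[n - 1 - i]) for i in range(n // 2)
                 if teams[i] is not None and teams[n - 1 - i] is not None]
        rounds.append(pairs)
        # fix the first team, rotate the rest by one position
        teams = [teams[0]] + [teams[-1]] + teams[1:-1]
    return rounds

# A double round robin is the single round robin plus its mirror with
# home and away swapped.
single = round_robin(range(6))
double = single + [[(b, a) for a, b in rnd] for rnd in single]
print(len(single), len(double))
```

A constraint solver would then layer venue, divisional, and fairness constraints on top of this combinatorial skeleton rather than enumerate it by hand.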

  16. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom that include disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular ones to complex helical trajectories. In this control technique, a linearized model is derived and the receding horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective in dealing with different types of nonlinearities and constraints, such as actuators’ saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective in tracking a given reference trajectory.
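The receding-horizon idea can be shown on a 1-D double integrator (position, velocity) standing in for a single quadcopter axis. Instead of solving a QP, each step searches a coarse discretized input set over the horizon, which keeps the sketch dependency-free; the saturation bound |u| <= u_max is the kind of actuator constraint MPC handles naturally, and the weights, horizon, and limits are illustrative choices, not the paper's tuning:

```python
from itertools import product

dt, u_max, horizon = 0.1, 2.0, 4
u_levels = [-u_max, -u_max / 2, 0.0, u_max / 2, u_max]

def step(x, v, u):
    return x + dt * v, v + dt * u

def cost_of(seq, x, v, x_ref):
    c = 0.0
    for u in seq:
        x, v = step(x, v, u)
        c += (x - x_ref) ** 2 + 0.001 * u ** 2
    return c

def mpc_control(x, v, x_ref):
    # enumerate all saturated input sequences over the horizon
    best = min(product(u_levels, repeat=horizon),
               key=lambda seq: cost_of(seq, x, v, x_ref))
    return best[0]  # apply only the first input, then re-plan

x, v = 0.0, 0.0
for _ in range(200):
    u = mpc_control(x, v, x_ref=1.0)
    x, v = step(x, v, u)
print(round(x, 2), round(v, 2))
```

Replacing the enumeration with a QP over a linearized 6-DOF model recovers the structure of the controller described in the abstract.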

  17. Cooperative Management of a Lithium-Ion Battery Energy Storage Network: A Distributed MPC Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Huazhen; Wu, Di; Yang, Tao

    2016-12-12

This paper presents a study of cooperative power supply and storage for a network of Lithium-ion energy storage systems (LiBESSs). We propose to develop a distributed model predictive control (MPC) approach for two reasons. First, able to account for the practical constraints of a LiBESS, the MPC can enable a constraint-aware operation. Second, a distributed management can cope with a complex network that integrates a large number of LiBESSs over a complex communication topology. With this motivation, we then build a fully distributed MPC algorithm from an optimization perspective, based on an extension of the alternating direction method of multipliers (ADMM). A simulation example is provided to demonstrate the effectiveness of the proposed algorithm.
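The ADMM machinery behind such a scheme can be shown with a toy consensus problem: each "LiBESS" agent i holds a private quadratic cost (x - a_i)^2 around a local setpoint a_i, and the network must agree on one value z. The agents' x-updates run in parallel and only x_i + u_i is shared; the battery constraints and communication topology of the actual algorithm are omitted, and the setpoints are hypothetical numbers:

```python
targets = [3.0, 7.0, 5.0, 9.0]  # local preferred setpoints (hypothetical)
rho = 1.0                       # ADMM penalty parameter
n = len(targets)
x, u, z = [0.0] * n, [0.0] * n, 0.0
for _ in range(100):
    # local (parallel) updates: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
    x = [(2 * a + rho * (z - ui)) / (2 + rho) for a, ui in zip(targets, u)]
    # consensus variable: average of the shared quantities x_i + u_i
    z = sum(xi + ui for xi, ui in zip(x, u)) / n
    # scaled dual update
    u = [ui + xi - z for ui, xi in zip(u, x)]
print(round(z, 4))  # converges to the average of the local setpoints
```

For quadratic costs this iteration converges to the average of the targets; the distributed MPC in the paper replaces each local quadratic with a constrained receding-horizon battery problem.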

  18. Spatiotemporal Access Model Based on Reputation for the Sensing Layer of the IoT

    PubMed Central

    Guo, Yunchuan; Yin, Lihua; Li, Chao

    2014-01-01

    Access control is a key technology in providing security in the Internet of Things (IoT). The mainstream security approach proposed for the sensing layer of the IoT concentrates only on authentication while ignoring the more general models. Unreliable communications and resource constraints make the traditional access control techniques barely meet the requirements of the sensing layer of the IoT. In this paper, we propose a model that combines space and time with reputation to control access to the information within the sensing layer of the IoT. This model is called spatiotemporal access control based on reputation (STRAC). STRAC uses a lattice-based approach to decrease the size of policy bases. To solve the problem caused by unreliable communications, we propose both nondeterministic authorizations and stochastic authorizations. To more precisely manage the reputation of nodes, we propose two new mechanisms to update the reputation of nodes. These new approaches are the authority-based update mechanism (AUM) and the election-based update mechanism (EUM). We show how the model checker UPPAAL can be used to analyze the spatiotemporal access control model of an application. Finally, we also implement a prototype system to demonstrate the efficiency of our model. PMID:25177731

  19. The KATE shell: An implementation of model-based control, monitor and diagnosis

    NASA Technical Reports Server (NTRS)

    Cornell, Matthew

    1987-01-01

The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach, although knowledge-based control and monitor systems may not be appropriate for systems that require high-speed reaction times or are not well understood.

  20. Inferring the photometric and size evolution of galaxies from image simulations. I. Method

    NASA Astrophysics Data System (ADS)

    Carassou, Sébastien; de Lapparent, Valérie; Bertin, Emmanuel; Le Borgne, Damien

    2017-09-01

    Context. Current constraints on models of galaxy evolution rely on morphometric catalogs extracted from multi-band photometric surveys. However, these catalogs are altered by selection effects that are difficult to model, that correlate in non trivial ways, and that can lead to contradictory predictions if not taken into account carefully. Aims: To address this issue, we have developed a new approach combining parametric Bayesian indirect likelihood (pBIL) techniques and empirical modeling with realistic image simulations that reproduce a large fraction of these selection effects. This allows us to perform a direct comparison between observed and simulated images and to infer robust constraints on model parameters. Methods: We use a semi-empirical forward model to generate a distribution of mock galaxies from a set of physical parameters. These galaxies are passed through an image simulator reproducing the instrumental characteristics of any survey and are then extracted in the same way as the observed data. The discrepancy between the simulated and observed data is quantified, and minimized with a custom sampling process based on adaptive Markov chain Monte Carlo methods. Results: Using synthetic data matching most of the properties of a Canada-France-Hawaii Telescope Legacy Survey Deep field, we demonstrate the robustness and internal consistency of our approach by inferring the parameters governing the size and luminosity functions and their evolutions for different realistic populations of galaxies. We also compare the results of our approach with those obtained from the classical spectral energy distribution fitting and photometric redshift approach. Conclusions: Our pipeline infers efficiently the luminosity and size distribution and evolution parameters with a very limited number of observables (three photometric bands). 
When compared to SED fitting based on the same set of observables, our method yields results that are more accurate and free from systematic biases.

  1. Task constraints and minimization of muscle effort result in a small number of muscle synergies during gait.

    PubMed

    De Groote, Friedl; Jonkers, Ilse; Duysens, Jacques

    2014-01-01

Finding muscle activity generating a given motion is a redundant problem, since there are many more muscles than degrees of freedom. The control strategies determining muscle recruitment from a redundant set are still poorly understood. One theory of motor control suggests that motion is produced through activating a small number of muscle synergies, i.e., muscle groups that are activated in a fixed ratio by a single input signal. Because of the reduced number of input signals, synergy-based control is low dimensional. A major criticism of the theory of synergy-based control of muscles, however, is that muscle synergies might reflect task constraints rather than a neural control strategy. Another theory of motor control suggests that muscles are recruited by optimizing performance. Optimization of performance has been widely used to calculate muscle recruitment underlying a given motion while assuming independent recruitment of muscles. If synergies indeed determine muscle recruitment underlying a given motion, optimization approaches that do not model synergy-based control could result in muscle activations that do not show the synergistic muscle action observed through electromyography (EMG). If, however, synergistic muscle action results from performance optimization and task constraints (joint kinematics and external forces), such optimization approaches are expected to result in low-dimensional synergistic muscle activations that are similar to EMG-based synergies. We calculated muscle recruitment underlying experimentally measured gait patterns by optimizing performance assuming independent recruitment of muscles. We found that the muscle activations calculated without any reference to synergies can be accurately explained by, on average, four synergies. These synergies are similar to EMG-based synergies. We therefore conclude that task constraints and performance optimization explain synergistic muscle recruitment from a redundant set of muscles.
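Synergy extraction of this kind is typically done with non-negative matrix factorization (NMF). The sketch below factors a synthetic "EMG" matrix V (muscles x time) into nonnegative synergies W and activations H with Lee-Seung multiplicative updates; the data are generated from two known synergies, not real gait EMG:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, k, iters=500, rng=random.Random(1)):
    """Lee-Seung multiplicative updates: H <- H*(W'V)/(W'WH),
    W <- W*(VH')/(WHH'); all factors stay nonnegative."""
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    eps = 1e-9
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

# Synthetic data from two synergies across 4 muscles and 6 time samples.
W0 = [[1.0, 0.0], [0.8, 0.1], [0.0, 1.0], [0.2, 0.9]]
H0 = [[0.5, 1.0, 0.2, 0.0, 0.7, 0.3], [0.0, 0.4, 1.0, 0.8, 0.1, 0.6]]
V = matmul(W0, H0)
W, H = nmf(V, k=2)
print(round(frob_err(V, W, H), 6))
```

Because the synthetic data are exactly rank 2 and nonnegative, two synergies reconstruct them almost perfectly, mirroring how the number of synergies is chosen from reconstruction quality in EMG studies.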

  2. Modeling languages for biochemical network simulation: reaction vs equation based approaches.

    PubMed

    Wiechert, Wolfgang; Noack, Stephan; Elsheikh, Atya

    2010-01-01

    Biochemical network modeling and simulation is an essential task in any systems biology project. The systems biology markup language (SBML) was established as a standardized model exchange language for mechanistic models. A specific strength of SBML is that numerous tools for formulating, processing, simulation and analysis of models are freely available. Interestingly, in the field of multidisciplinary simulation, the problem of model exchange between different simulation tools occurred much earlier. Several general modeling languages like Modelica have been developed in the 1990s. Modelica enables an equation based modular specification of arbitrary hierarchical differential algebraic equation models. Moreover, libraries for special application domains can be rapidly developed. This contribution compares the reaction based approach of SBML with the equation based approach of Modelica and explains the specific strengths of both tools. Several biological examples illustrating essential SBML and Modelica concepts are given. The chosen criteria for tool comparison are flexibility for constraint specification, different modeling flavors, hierarchical, modular and multidisciplinary modeling. Additionally, support for spatially distributed systems, event handling and network analysis features is discussed. As a major result it is shown that the choice of the modeling tool has a strong impact on the expressivity of the specified models but also strongly depends on the requirements of the application context.

  3. Use of randomized sampling for analysis of metabolic networks.

    PubMed

    Schellenberger, Jan; Palsson, Bernhard Ø

    2009-02-27

    Genome-scale metabolic network reconstructions in microorganisms have been formulated and studied for about 8 years. The constraint-based approach has shown great promise in analyzing the systemic properties of these network reconstructions. Notably, constraint-based models have been used successfully to predict the phenotypic effects of knock-outs and for metabolic engineering. The inherent uncertainty in both parameters and variables of large-scale models is significant and is well suited to study by Monte Carlo sampling of the solution space. These techniques have been applied extensively to the reaction rate (flux) space of networks, with more recent work focusing on dynamic/kinetic properties. Monte Carlo sampling as an analysis tool has many advantages, including the ability to work with missing data, the ability to apply post-processing techniques, and the ability to quantify uncertainty and to optimize experiments to reduce uncertainty. We present an overview of this emerging area of research in systems biology.
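The core sampling move can be illustrated with a hit-and-run sampler on a toy 2-D "flux space": v1, v2 >= 0 with a capacity-style bound v1 + v2 <= 10. Genome-scale samplers apply the same step in spaces of thousands of dimensions defined by S*v = 0 plus flux bounds; this sketch shows only the move itself, with illustrative constraints:

```python
import math
import random

A = [(-1.0, 0.0, 0.0),   # -v1 <= 0
     (0.0, -1.0, 0.0),   # -v2 <= 0
     (1.0, 1.0, 10.0)]   # v1 + v2 <= 10

def chord(p, d):
    """Feasible t-interval of the line p + t*d within {a.x <= b}."""
    tmin, tmax = -1e12, 1e12
    for a1, a2, b in A:
        ad = a1 * d[0] + a2 * d[1]
        slack = b - (a1 * p[0] + a2 * p[1])
        if abs(ad) < 1e-12:
            continue  # direction parallel to this boundary
        t = slack / ad
        if ad > 0:
            tmax = min(tmax, t)
        else:
            tmin = max(tmin, t)
    return tmin, tmax

def hit_and_run(start, n, rng):
    p = list(start)
    out = []
    for _ in range(n):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        d = (math.cos(ang), math.sin(ang))       # random direction
        tmin, tmax = chord(p, d)                  # feasible chord
        t = rng.uniform(tmin, tmax)               # uniform jump on it
        p = [p[0] + t * d[0], p[1] + t * d[1]]
        out.append(tuple(p))
    return out

samples = hit_and_run([1.0, 1.0], 5000, random.Random(0))
mean_v1 = sum(s[0] for s in samples) / len(samples)
print(round(mean_v1, 2))  # uniform on this triangle gives E[v1] = 10/3
```

Statistics computed over such samples (means, variances, correlations between fluxes) are exactly the uncertainty summaries the abstract describes.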

  4. Parsing in a Dynamical System: An Attractor-Based Account of the Interaction of Lexical and Structural Constraints in Sentence Processing.

    ERIC Educational Resources Information Center

    Tabor, Whitney; And Others

    1997-01-01

    Proposes a dynamical systems approach to parsing in which syntactic hypotheses are associated with attractors in a metric space. The experiments discussed documented various contingent frequency effects that cut across traditional linguistic grains, each of which was predicted by the dynamical systems model. (47 references) (Author/CK)

  5. Incorporation of physical constraints in optimal surface search for renal cortex segmentation

    NASA Astrophysics Data System (ADS)

    Li, Xiuli; Chen, Xinjian; Yao, Jianhua; Zhang, Xing; Tian, Jie

    2012-02-01

    In this paper, we propose a novel approach for multiple surfaces segmentation based on the incorporation of physical constraints in optimal surface searching. We apply our new approach to solve the renal cortex segmentation problem, an important but not sufficiently researched issue. In this study, in order to better restrain the intensity proximity of the renal cortex and renal column, we extend the optimal surface search approach to allow for varying sampling distance and physical separation constraints, instead of the traditional fixed sampling distance and numerical separation constraints. The sampling distance of each vertex-column is computed according to the sparsity of the local triangular mesh. Then the physical constraint learned from a priori renal cortex thickness is applied to the inter-surface arcs as the separation constraints. Appropriate varying sampling distance and separation constraints were learnt from 6 clinical CT images. After training, the proposed approach was tested on a test set of 10 images. The manual segmentation of renal cortex was used as the reference standard. Quantitative analysis of the segmented renal cortex indicates that overall segmentation accuracy was increased after introducing the varying sampling distance and physical separation constraints (the average true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 83.96% and 2.80%, respectively, by using varying sampling distance and physical separation constraints compared to 74.10% and 0.18%, respectively, by using fixed sampling distance and numerical separation constraints). The experimental results demonstrated the effectiveness of the proposed approach.

  6. Dynamic Domains in Data Production Planning

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wanlin

    2005-01-01

    This paper discusses a planner-based approach to automating data production tasks, such as producing fire forecasts from satellite imagery and weather station data. Since the set of available data products is large, dynamic and mostly unknown, planning techniques developed for closed worlds are unsuitable. We discuss a number of techniques we have developed to cope with data production domains, including a novel constraint propagation algorithm based on planning graphs and a constraint-based approach to interleaved planning, sensing and execution.

  7. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    PubMed Central

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    ABSTRACT Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  8. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    PubMed

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3-), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.

  9. Input and output constraints-based stabilisation of switched nonlinear systems with unstable subsystems and its application

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Liu, Qian; Zhao, Jun

    2018-01-01

This paper studies the problem of stabilisation of switched nonlinear systems with output and input constraints. We propose a recursive approach to solve this issue. None of the subsystems is assumed to be stabilisable, while the switched system is stabilised by the dual design of controllers for the subsystems and a switching law. When only dealing with bounded input, we provide nested switching controllers using an extended backstepping procedure. If both input and output constraints are taken into consideration, a Barrier Lyapunov Function is employed to construct multiple Lyapunov functions for the switched nonlinear system in the backstepping procedure. As a practical example, the control design of an equilibrium manifold expansion model of an aero-engine is given to demonstrate the effectiveness of the proposed design method.

  10. SU-F-T-340: Direct Editing of Dose Volume Histograms: Algorithms and a Unified Convex Formulation for Treatment Planning with Dose Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ungun, B; Stanford University School of Medicine, Stanford, CA; Fu, A

    2016-06-15

    Purpose: To develop a procedure for including dose constraints in convex programming-based approaches to treatment planning, and to support dynamic modification of such constraints during planning. Methods: We present a mathematical approach that allows mean dose, maximum dose, minimum dose and dose volume (i.e., percentile) constraints to be appended to any convex formulation of an inverse planning problem. The first three constraint types are convex and readily incorporated. Dose volume constraints are not convex, however, so we introduce a convex restriction that is related to CVaR-based approaches previously proposed in the literature. To compensate for the conservatism of this restriction,more » we propose a new two-pass algorithm that solves the restricted problem on a first pass and uses this solution to form exact constraints on a second pass. In another variant, we introduce slack variables for each dose constraint to prevent the problem from becoming infeasible when the user specifies an incompatible set of constraints. We implement the proposed methods in Python using the convex programming package cvxpy in conjunction with the open source convex solvers SCS and ECOS. Results: We show, for several cases taken from the clinic, that our proposed method meets specified constraints (often with margin) when they are feasible. Constraints are met exactly when we use the two-pass method, and infeasible constraints are replaced with the nearest feasible constraint when slacks are used. Finally, we introduce ConRad, a Python-embedded free software package for convex radiation therapy planning. ConRad implements the methods described above and offers a simple interface for specifying prescriptions and dose constraints. Conclusion: This work demonstrates the feasibility of using modifiable dose constraints in a convex formulation, making it practical to guide the treatment planning process with interactively specified dose constraints. 
This work was supported by the Stanford BioX Graduate Fellowship and NIH Grant 5R01CA176553.
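
    The mean-dose and maximum-dose constraints described in this record are linear, so the core idea can be sketched with any LP solver. Below is a minimal, hypothetical sketch using scipy.optimize.linprog on a made-up dose-influence matrix (the ConRad package itself is built on cvxpy with the SCS and ECOS solvers; all dimensions and dose levels here are illustrative only):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Toy dose-influence matrix: dose[voxel] = A @ beamlet_weights.
n_beamlets, n_target, n_oar = 8, 5, 4
A_target = rng.uniform(0.5, 1.0, (n_target, n_beamlets))
A_oar = rng.uniform(0.0, 0.3, (n_oar, n_beamlets))

# Minimize total beam-on time subject to:
#   mean target dose >= 60 (linear, one averaged row)
#   max OAR dose <= 20     (linear, one row per OAR voxel)
c = np.ones(n_beamlets)
A_ub = np.vstack([-A_target.mean(axis=0, keepdims=True), A_oar])
b_ub = np.concatenate([[-60.0], np.full(n_oar, 20.0)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
dose_target = A_target @ res.x
dose_oar = A_oar @ res.x
print(res.status, dose_target.mean(), dose_oar.max())
```

    A dose-volume (percentile) constraint would not fit this LP form directly, which is why the record introduces a convex restriction for it.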

  11. Identification of potential compensatory muscle strategies in a breast cancer survivor population: A combined computational and experimental approach.

    PubMed

    Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R

    2016-12-01

    Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer survivors through adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer survivors can be reflected through including a capability constraint for pectoralis major. Further refinement of the co-activation constraint for survivors could improve its generalizability across this population and activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information that can help in the development of optimal treatment programs for breast cancer survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A unified EM approach to bladder wall segmentation with coupled level-set constraints

    PubMed Central

    Han, Hao; Li, Lihong; Duan, Chaijie; Zhang, Hao; Zhao, Yang; Liang, Zhengrong

    2013-01-01

    Magnetic resonance (MR) imaging-based virtual cystoscopy (VCys), as a non-invasive, safe and cost-effective technique, has shown its promising virtue for early diagnosis and recurrence management of bladder carcinoma. One primary goal of VCys is to identify bladder lesions with abnormal bladder wall thickness, and consequently a precise segmentation of the inner and outer borders of the wall is required. In this paper, we propose a unified expectation-maximization (EM) approach to the maximum-a-posteriori (MAP) solution of bladder wall segmentation, by integrating a novel adaptive Markov random field (AMRF) model and the coupled level-set (CLS) information into the prior term. The proposed approach is applied to the segmentation of T1-weighted MR images, where the wall is enhanced while the urine and surrounding soft tissues are suppressed. By introducing scale-adaptive neighborhoods as well as adaptive weights into the conventional MRF model, the AMRF model takes into account the local information more accurately. In order to mitigate the influence of image artifacts adjacent to the bladder wall and to preserve the continuity of the wall surface, we apply geometrical constraints on the wall using our previously developed CLS method. This paper not only evaluates the robustness of the presented approach against the known ground truth of simulated digital phantoms, but further compares its performance with our previous CLS approach via both volunteer and patient studies. Statistical analysis on experts’ scores of the segmented borders from both approaches demonstrates that our new scheme is more effective in extracting the bladder wall. Based on the wall thickness calibrated from the segmented single-layer borders, a three-dimensional virtual bladder model can be constructed and the wall thickness can be mapped on to the model, where the bladder lesions will be eventually detected via experts’ visualization and/or computer-aided detection. PMID:24001932

  13. Swarm Intelligence for Urban Dynamics Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gerard H. E.

    2009-04-16

    In this paper, we propose swarm intelligence algorithms to deal with dynamical and spatial organization emergence. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable to analyse and simulate urban dynamics like gentrification or the dynamics of cultural equipment in urban areas.

  14. Swarm Intelligence for Urban Dynamics Modelling

    NASA Astrophysics Data System (ADS)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gérard H. E.

    2009-04-01

    In this paper, we propose swarm intelligence algorithms to deal with dynamical and spatial organization emergence. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering mixed with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable to analyse and simulate urban dynamics like gentrification or the dynamics of cultural equipment in urban areas.

  15. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  16. Modeling formalisms in Systems Biology

    PubMed Central

    2011-01-01

    Systems Biology has taken advantage of computational tools and high-throughput experimental data to model several biological processes. These include signaling, gene regulatory, and metabolic networks. However, most of these models are specific to each kind of network. Their interconnection demands a whole-cell modeling framework for a complete understanding of cellular systems. We describe the features required by an integrated framework for modeling, analyzing and simulating biological processes, and review several modeling formalisms that have been used in Systems Biology including Boolean networks, Bayesian networks, Petri nets, process algebras, constraint-based models, differential equations, rule-based models, interacting state machines, cellular automata, and agent-based models. We compare the features provided by different formalisms, and discuss recent approaches in the integration of these formalisms, as well as possible directions for the future. PMID:22141422
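
    As a flavor of the simplest formalism on this list, a Boolean network can be simulated in a few lines. The three-gene regulatory loop below is a hypothetical illustration, not an example from the review:

```python
# Minimal Boolean-network sketch: a hypothetical three-gene regulatory
# loop updated synchronously until it settles into an attractor.
rules = {
    "A": lambda s: not s["C"],        # C represses A
    "B": lambda s: s["A"],            # A activates B
    "C": lambda s: s["A"] and s["B"], # A and B jointly activate C
}

def step(state):
    # Synchronous update: every gene reads the *previous* state.
    return {gene: rule(state) for gene, rule in rules.items()}

state = {"A": True, "B": False, "C": False}
trajectory = [state]
for _ in range(6):
    trajectory.append(step(trajectory[-1]))
print(trajectory[-1])
```

    This tiny loop happens to settle into a cyclic attractor rather than a fixed point, which is exactly the kind of qualitative behavior Boolean models are used to expose.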

  17. Markov random field model-based edge-directed image interpolation.

    PubMed

    Li, Min; Nguyen, Truong Q

    2008-07-01

    This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. Rather than being estimated explicitly, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state from the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

  18. Model-based diagnostics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.

    1991-01-01

    An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online, or in a post analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but it also suggested several new applications of MARPLE, including an integration and testing aid, and a complement to state estimation.

  19. Shortest-path constraints for 3D multiobject semiautomatic segmentation via clustering and Graph Cut.

    PubMed

    Kéchichian, Razmig; Valette, Sébastien; Desvignes, Michel; Prost, Rémy

    2013-11-01

    We derive shortest-path constraints from graph models of structure adjacency relations and introduce them in a joint centroidal Voronoi image clustering and Graph Cut multiobject semiautomatic segmentation framework. The vicinity prior model thus defined is a piecewise-constant model incurring multiple levels of penalization capturing the spatial configuration of structures in multiobject segmentation. Qualitative and quantitative analyses and comparison with a Potts prior-based approach and our previous contribution on synthetic, simulated, and real medical images show that the vicinity prior allows for the correct segmentation of distinct structures having identical intensity profiles and improves the precision of segmentation boundary placement while being fairly robust to clustering resolution. The clustering approach we take to simplify images prior to segmentation strikes a good balance between boundary adaptivity and cluster compactness criteria furthermore allowing to control the trade-off. Compared with a direct application of segmentation on voxels, the clustering step improves the overall runtime and memory footprint of the segmentation process up to an order of magnitude without compromising the quality of the result.
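
    The shortest-path construction described here can be illustrated on a toy structure-adjacency graph: hop counts between structure labels become the multi-level penalties of the vicinity prior. The graph and label names below are hypothetical:

```python
from collections import deque

# Hypothetical adjacency graph of anatomical structures: edges link
# structures that may legitimately share a boundary in the image.
adjacency = {
    "liver":  ["kidney", "fat"],
    "kidney": ["liver", "fat"],
    "fat":    ["liver", "kidney", "skin"],
    "skin":   ["fat"],
}

def shortest_path_lengths(graph, source):
    """BFS hop counts from source to every reachable structure."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Vicinity prior: the penalty for giving two neighboring pixels the
# labels (a, b) grows with the graph distance between a and b, so an
# implausible adjacency (liver touching skin) costs more than a
# plausible one (liver touching kidney).
penalty = {
    (a, b): d
    for a in adjacency
    for b, d in shortest_path_lengths(adjacency, a).items()
}
print(penalty[("liver", "kidney")], penalty[("liver", "skin")])
```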

  20. A tree-like Bayesian structure learning algorithm for small-sample datasets from complex biological model systems.

    PubMed

    Yin, Weiwei; Garimalla, Swetha; Moreno, Alberto; Galinski, Mary R; Styczynski, Mark P

    2015-08-28

    There are increasing efforts to bring high-throughput systems biology techniques to bear on complex animal model systems, often with a goal of learning about underlying regulatory network structures (e.g., gene regulatory networks). However, complex animal model systems typically have significant limitations on cohort sizes, number of samples, and the ability to perform follow-up and validation experiments. These constraints are particularly problematic for many current network learning approaches, which require large numbers of samples and may predict many more regulatory relationships than actually exist. Here, we test the idea that by leveraging the accuracy and efficiency of classifiers, we can construct high-quality networks that capture important interactions between variables in datasets with few samples. We start from a previously-developed tree-like Bayesian classifier and generalize its network learning approach to allow for arbitrary depth and complexity of tree-like networks. Using four diverse sample networks, we demonstrate that this approach performs consistently better at low sample sizes than the Sparse Candidate Algorithm, a representative approach for comparison because it is known to generate Bayesian networks with high positive predictive value. We develop and demonstrate a resampling-based approach to enable the identification of a viable root for the learned tree-like network, important for cases where the root of a network is not known a priori. We also develop and demonstrate an integrated resampling-based approach to the reduction of variable space for the learning of the network. Finally, we demonstrate the utility of this approach via the analysis of a transcriptional dataset of a malaria challenge in a non-human primate model system, Macaca mulatta, suggesting the potential to capture indicators of the earliest stages of cellular differentiation during leukopoiesis. 
We demonstrate that by starting from effective and efficient approaches for creating classifiers, we can identify interesting tree-like network structures with significant ability to capture the relationships in the training data. This approach represents a promising strategy for inferring networks with high positive predictive value under the constraint of small numbers of samples, meeting a need that will only continue to grow as more high-throughput studies are applied to complex model systems.

  1. Behavior systems and reinforcement: an integrative approach.

    PubMed Central

    Timberlake, W

    1993-01-01

    Most traditional conceptions of reinforcement are based on a simple causal model in which responding is strengthened by the presentation of a reinforcer. I argue that reinforcement is better viewed as the outcome of constraint of a functioning causal system comprised of multiple interrelated causal sequences, complex linkages between causes and effects, and a set of initial conditions. Using a simplified system conception of the reinforcement situation, I review the similarities and drawbacks of traditional reinforcement models and analyze the recent contributions of cognitive, regulatory, and ecological approaches. Finally, I show how the concept of behavior systems can begin to incorporate both traditional and recent conceptions of reinforcement in an integrative approach. PMID:8354963

  2. A collaborative vendor-buyer production-inventory systems with imperfect quality items, inspection errors, and stochastic demand under budget capacity constraint: a Karush-Kuhn-Tucker conditions approach

    NASA Astrophysics Data System (ADS)

    Kurdhi, N. A.; Nurhayati, R. A.; Wiyono, S. B.; Handajani, S. S.; Martini, T. S.

    2017-01-01

    In this paper, we develop an integrated inventory model considering imperfect quality items, inspection errors, controllable lead time, and a budget capacity constraint. The imperfect items are uniformly distributed and detected during the screening process. However, two types of inspection error are possible: a type I error occurs when a non-defective item is classified as defective, and a type II error occurs when a defective item is classified as non-defective. The demand during the lead time is unknown and follows the normal distribution. The lead time can be controlled by adding a crashing cost. Furthermore, the budget capacity constraint arises from the limited purchasing cost. The purposes of this research are: to modify the integrated vendor-buyer inventory model, to establish the optimal solution using the Karush-Kuhn-Tucker conditions, and to apply the models. Based on the results of the application and the sensitivity analysis, the integrated model yields a lower total inventory cost than separate optimization.
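
    The role of the Karush-Kuhn-Tucker conditions can be illustrated with a far simpler single-item stand-in: an EOQ-style cost minimized under a purchasing-budget constraint, solved numerically with scipy. The paper's two-echelon vendor-buyer model is not reproduced; all numbers below are hypothetical:

```python
from scipy.optimize import minimize

# Hypothetical single-item EOQ-style cost with a purchasing-budget
# constraint; parameters are illustrative, not from the paper.
D, K, h = 1200.0, 100.0, 5.0    # annual demand, order cost, holding cost
c, B = 10.0, 1500.0             # unit price, budget capacity

cost = lambda q: K * D / q[0] + h * q[0] / 2.0
budget = {"type": "ineq", "fun": lambda q: B - c * q[0]}  # c*q <= B

res = minimize(cost, x0=[50.0], bounds=[(1.0, None)],
               constraints=[budget], method="SLSQP")
q_star = res.x[0]
print(round(q_star, 2))
```

    With these numbers the unconstrained EOQ (about 219 units) violates the budget, so the constraint is active and the optimum sits on the boundary q* = B/c = 150, exactly the case the KKT multiplier analysis distinguishes.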

  3. Cognitive dissonance reduction as constraint satisfaction.

    PubMed

    Shultz, T R; Lepper, M R

    1996-04-01

    A constraint satisfaction neural network model (the consonance model) simulated data from the two major cognitive dissonance paradigms of insufficient justification and free choice. In several cases, the model fit the human data better than did cognitive dissonance theory. Superior fits were due to the inclusion of constraints that were not part of dissonance theory and to the increased precision inherent to this computational approach. Predictions generated by the model for a free choice between undesirable alternatives were confirmed in a new psychological experiment. The success of the consonance model underscores important, unforeseen similarities between what had been formerly regarded as the rather exotic process of dissonance reduction and a variety of other, more mundane psychological processes. Many of these processes can be understood as the progressive application of constraints supplied by beliefs and attitudes.

  4. Using neutral models to identify constraints on low-severity fire regimes.

    Treesearch

    Donald McKenzie; Amy E. Hessl; Lara-Karena B. Kellogg

    2006-01-01

    Climate, topography, fuel loadings, and human activities all affect spatial and temporal patterns of fire occurrence. Because fire is modeled as a stochastic process, for which each fire history is only one realization, a simulation approach is necessary to understand baseline variability, thereby identifying constraints, or forcing functions, that affect fire regimes...

  5. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). We first discuss the combination of bundle adjustment and GAs that we have proposed in order to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length that helps us in the initialization step. Matching homologous points and constraints are then used to estimate the 3D model. In fact, our new method offers several advantages: it reduces the number of estimated parameters in the optimization step, decreases the number of images used, saves time and yields consistently good-quality 3D results. In the end, without any prior information about our 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand. Various data and examples are used to highlight the efficiency and competitiveness of our approach.
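
    The use of a GA to supply good initial values for a nonlinear optimizer can be sketched as follows; the one-dimensional fitness function below merely stands in for a reprojection-error criterion and is not the authors' formulation:

```python
import random

random.seed(1)

# Toy fitness standing in for (negated) reprojection error as a
# function of an initial focal-length guess f; the optimum here is
# placed at f = 800 purely for illustration.
def fitness(f):
    return -((f - 800.0) ** 2)

def evolve(pop, generations=60, sigma=40.0):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]           # selection (elitist)
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2.0                # crossover (average)
            child += random.gauss(0.0, sigma)    # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

initial_population = [random.uniform(100.0, 2000.0) for _ in range(30)]
f0 = evolve(initial_population)
print(round(f0))  # a good starting point for bundle adjustment
```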

  6. Surrogate-based optimization of hydraulic fracturing in pre-existing fracture networks

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Sun, Yunwei; Fu, Pengcheng; Carrigan, Charles R.; Lu, Zhiming; Tong, Charles H.; Buscheck, Thomas A.

    2013-08-01

    Hydraulic fracturing has been used widely to stimulate production of oil, natural gas, and geothermal energy in formations with low natural permeability. Numerical optimization of fracture stimulation often requires a large number of evaluations of objective functions and constraints from forward hydraulic fracturing models, which are computationally expensive and even prohibitive in some situations. Moreover, there are a variety of uncertainties associated with the pre-existing fracture distributions and rock mechanical properties, which affect the optimized decisions for hydraulic fracturing. In this study, a surrogate-based approach is developed for efficient optimization of hydraulic fracturing well design in the presence of natural-system uncertainties. The fractal dimension is derived from the simulated fracturing network as the objective for maximizing energy recovery sweep efficiency. The surrogate model, which is constructed using training data from high-fidelity fracturing models for mapping the relationship between uncertain input parameters and the fractal dimension, provides fast approximation of the objective functions and constraints. A suite of surrogate models constructed using different fitting methods is evaluated and validated for fast predictions. Global sensitivity analysis is conducted to gain insights into the impact of the input variables on the output of interest, and further used for parameter screening. The high efficiency of the surrogate-based approach is demonstrated for three optimization scenarios with different and uncertain ambient conditions. Our results suggest the critical importance of considering uncertain pre-existing fracture networks in optimization studies of hydraulic fracturing.
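
    The surrogate workflow (sample the expensive model, fit a cheap approximation, optimize the approximation) can be sketched with a quadratic fit standing in for the paper's suite of surrogate-fitting methods; the one-input "simulator" below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an expensive hydraulic-fracturing simulation: maps one
# uncertain input (say, injection rate) to a "fractal dimension".
def expensive_model(x):
    return 1.2 + 0.9 * x - 0.5 * x ** 2

# 1. Sample a small training set from the high-fidelity model.
x_train = np.linspace(0.0, 1.5, 8)
y_train = expensive_model(x_train) + rng.normal(0.0, 0.01, x_train.size)

# 2. Fit a cheap quadratic surrogate to the training data.
coeffs = np.polyfit(x_train, y_train, deg=2)
surrogate = np.poly1d(coeffs)

# 3. Optimize the surrogate instead of the expensive model.
candidates = np.linspace(0.0, 1.5, 1001)
x_best = candidates[np.argmax(surrogate(candidates))]
print(round(float(x_best), 2))
```

    In practice the surrogate would be validated against held-out runs of the full model before being trusted in the optimization loop, as the record describes.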

  7. Learning Robust and Discriminative Subspace With Low-Rank Constraints.

    PubMed

    Li, Sheng; Fu, Yun

    2016-11-01

    In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.
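
    The intuition behind the low-rank constraint, that shrinking small singular values strips noise from samples lying near a low-dimensional subspace, can be sketched with a plain truncated SVD. Note this is a simplification: SRRS itself solves a constrained rank-minimization problem with an inexact augmented Lagrange multiplier algorithm, and the data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples that are secretly rank-2, contaminated with noise.
U = rng.normal(size=(100, 2))
V = rng.normal(size=(2, 20))
clean = U @ V
noisy = clean + rng.normal(scale=0.05, size=clean.shape)

# Truncate the SVD to the true rank to recover the clean structure.
u, s, vt = np.linalg.svd(noisy, full_matrices=False)
recovered = u[:, :2] * s[:2] @ vt[:2]

err_noisy = np.linalg.norm(noisy - clean)
err_recovered = np.linalg.norm(recovered - clean)
print(err_recovered < err_noisy)
```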

  8. Detached eddy simulation for turbulent fluid-structure interaction of moving bodies using the constraint-based immersed boundary method

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.

    2016-11-01

    Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.

  9. Application of constraint-based satellite mission planning model in forest fire monitoring

    NASA Astrophysics Data System (ADS)

    Guo, Bingjun; Wang, Hongfei; Wu, Peng

    2017-10-01

    In this paper, a constraint-based satellite mission planning model is established based on the idea of constraint satisfaction. It includes target, request, observation, satellite, payload and other elements, linked by constraints. The optimization goal of the model is to make full use of time and resources and to improve the efficiency of target observation. A greedy algorithm is used to solve the model, producing an observation plan and a data transmission plan. Two simulation experiments are designed and carried out: routine monitoring of global forest fires and emergency monitoring of forest fires in Australia. The simulation results show that the model and algorithm perform well and that the model has good emergency response capability. Efficient and reasonable plans can be worked out to meet users' needs in complex cases involving multiple payloads, multiple targets and variable priorities.
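
    A greedy planner of the kind described can be sketched as priority-ordered selection of non-overlapping observation windows; the request set and names below are hypothetical:

```python
# Hypothetical observation requests as (name, priority, time window)
# on a single payload; the greedy pass takes requests in priority
# order and keeps any that do not overlap an already-accepted one.
requests = [
    ("fire_A", 9, (10, 20)),
    ("fire_B", 7, (15, 25)),
    ("fire_C", 8, (30, 40)),
    ("routine", 2, (18, 32)),
]

def plan(requests):
    accepted = []
    for name, prio, (start, end) in sorted(
            requests, key=lambda r: -r[1]):
        if all(end <= s or start >= e for _, (s, e) in accepted):
            accepted.append((name, (start, end)))
    return [name for name, _ in accepted]

print(plan(requests))
```

    Here the lower-priority requests that conflict with accepted windows are dropped, which is why variable priorities matter in the emergency-monitoring scenario.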

  10. Chondrules: The canonical and noncanonical views

    NASA Astrophysics Data System (ADS)

    Connolly, Harold C.; Jones, Rhian H.

    2016-10-01

    Millimeter-scale rock particles called chondrules are the principal components of the most common meteorites, chondrites. Hence, chondrules were arguably the most abundant components of the early solar system at the time of planetesimal accretion. Despite their fundamental importance, the existence of chondrules would not be predicted from current observations and models of young planetary systems. There are many different models for chondrule formation, but no single model satisfies the many constraints determined from their mineralogical and chemical properties and from chondrule analog experiments. Significant recent progress has shown that several models can satisfy first-order constraints and successfully reproduce chondrule thermal histories. However, second- and third-order constraints such as chondrule size ranges, open system behavior, oxidation states, reheating, and chemical diversity have not generally been addressed. Chondrule formation models include those based on processes that are known to occur in protoplanetary disk environments, including interactions with the early active Sun, impacts and collisions between planetary bodies, and radiative heating. Other models for chondrule heating mechanisms are based on hypothetical processes that are possible but have not been observed, like shock waves, planetesimal bow shocks, and lightning. We examine the evidence for the canonical view of chondrule formation, in which chondrules were free-floating particles in the protoplanetary disk, and the noncanonical view, in which chondrules were the by-products of planetesimal formation. The fundamental difference between these approaches has a bearing on the importance of chondrules during planet formation and the relevance of chondrules to interpreting the evolution of protoplanetary disks and planetary systems.

  11. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved under the situations that have either unique or infinitely many solutions.
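
    The conversion of a PDIEP into simultaneous linear equations can be sketched for the symmetric Toeplitz case: given one prescribed eigenpair (lam, v), the relation T v = lam v is linear in the first row of T. The test matrix below is made up; note that, as the record says, such problems may have infinitely many solutions, in which case the least-squares solver returns the minimum-norm one:

```python
import numpy as np

# A symmetric Toeplitz matrix is fully determined by its first row t.
def toeplitz_from_first_row(t):
    n = len(t)
    return np.array([[t[abs(i - j)] for j in range(n)] for i in range(n)])

# Prescribed spectral data, taken here from a known test matrix.
t_true = np.array([2.0, -1.0, 0.5, 0.1])
T_true = toeplitz_from_first_row(t_true)
lam, V = np.linalg.eigh(T_true)
lam0, v = lam[0], V[:, 0]

# Fold the structural constraint in: build M with M @ t = lam0 * v,
# since (T v)_i = sum_j t_{|i-j|} v_j is linear in t.
n = len(v)
M = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        M[i, abs(i - j)] += v[j]

# Ordinary inverse problem: solve the linear system in a least-squares
# sense (minimum-norm solution if the system is rank-deficient).
t_rec, *_ = np.linalg.lstsq(M, lam0 * v, rcond=None)
T_rec = toeplitz_from_first_row(t_rec)
print(np.allclose(T_rec @ v, lam0 * v))
```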

  12. Validating EHR clinical models using ontology patterns.

    PubMed

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Knowledge acquisition and learning process description in context of e-learning

    NASA Astrophysics Data System (ADS)

    Kiselev, B. G.; Yakutenko, V. A.; Yuriev, M. A.

    2017-01-01

    This paper investigates the problem of designing e-learning and MOOC systems. It describes instructional design-based approaches to e-learning systems design: IMS Learning Design, MISA and TELOS. To address this problem we present the Knowledge Field of Educational Environment with Competence Boundary Conditions, an instructional engineering method for the design of self-learning systems. It is based on the simplified TELOS approach and enables users to create their individual learning paths by choosing prerequisite and target competencies. The paper provides the ontology model for the described instructional engineering method, real-life use cases and the classification of the presented model. The ontology model consists of 13 classes and 15 properties. Some of them are inherited from the Knowledge Field of Educational Environment, while others are new and describe competence boundary conditions and knowledge validation objects. The ontology model uses logical constraints and is described using the OWL 2 standard. To give TELOS users a better understanding of our approach, we list the mapping between TELOS and KFEEC.

  14. Mixtures of GAMs for habitat suitability analysis with overdispersed presence / absence data

    PubMed Central

    Pleydell, David R.J.; Chrétien, Stéphane

    2009-01-01

    A new approach to species distribution modelling based on unsupervised classification via a finite mixture of GAMs incorporating habitat suitability curves is proposed. A tailored EM algorithm is outlined for computing maximum likelihood estimates. Several submodels incorporating various parameter constraints are explored. Simulation studies confirm that, under certain constraints, the habitat suitability curves are recovered with good precision. The method is also applied to a set of real data concerning the presence/absence of observable small mammal indices collected on the Tibetan plateau. The resulting classification was found to correspond to species-level differences in habitat preference described in previous ecological work. PMID:20401331
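
    The E/M alternation behind such a tailored algorithm can be illustrated on the simplest possible mixture: two unit-variance 1-D Gaussians fit by EM. This is a bare-bones stand-in; the paper fits mixtures of GAMs, not plain Gaussians.

```python
import math

def em_gmm(xs, mu, n_iter=50, sigma=1.0, w=0.5):
    """Fit the means of a two-component unit-variance Gaussian mixture by EM."""
    mu1, mu2 = mu
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in xs:
            p1 = w * math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            p2 = (1 - w) * math.exp(-0.5 * ((x - mu2) / sigma) ** 2)
            resp.append(p1 / (p1 + p2))
        # M-step: responsibility-weighted updates of means and mixing weight
        r1 = sum(resp)
        mu1 = sum(r * x for r, x in zip(resp, xs)) / r1
        mu2 = sum((1 - r) * x for r, x in zip(resp, xs)) / (len(xs) - r1)
        w = r1 / len(xs)
    return sorted([mu1, mu2])

data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.2]  # two well-separated clusters
```

    On this toy data the estimated means converge close to the cluster centres at roughly -2 and +2.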

  15. Multicriteria approaches for a private equity fund

    NASA Astrophysics Data System (ADS)

    Tammer, Christiane; Tannert, Johannes

    2012-09-01

    We develop a new model for a Private Equity Fund based on stochastic differential equations. In order to find efficient strategies for the fund manager, we formulate a multicriteria optimization problem for a Private Equity Fund and solve it using the ε-constraint method. Furthermore, a genetic algorithm is applied to obtain an approximation of the efficient frontier.
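
    The ε-constraint method mentioned above turns a bicriteria problem into a family of single-objective ones: minimize f1 subject to f2 ≤ ε, sweeping ε to trace the efficient frontier. The two objectives below are toy stand-ins, not the fund model itself.

```python
def f1(x):  # first criterion, e.g. a risk measure
    return x * x

def f2(x):  # second criterion, e.g. negative expected return
    return (x - 2) ** 2

def eps_constraint(eps, candidates):
    """Minimize f1 over candidate decisions satisfying f2(x) <= eps."""
    feasible = [x for x in candidates if f2(x) <= eps]
    return min(feasible, key=f1) if feasible else None

grid = [i / 10 for i in range(0, 31)]  # candidate decisions in [0, 3]
# Tightening eps trades f1 against f2, sweeping out the efficient frontier.
frontier = [eps_constraint(eps, grid) for eps in (4.0, 1.0, 0.25)]
```

    As ε shrinks, the minimizer of f1 is pushed toward the minimizer of f2, which is exactly the trade-off the frontier captures.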

  16. Smart Grid as Multi-layer Interacting System for Complex Decision Makings

    NASA Astrophysics Data System (ADS)

    Bompard, Ettore; Han, Bei; Masera, Marcelo; Pons, Enrico

    This chapter presents an approach to the analysis of Smart Grids based on a multi-layer representation of their technical, cyber, social and decision-making aspects, as well as the related environmental constraints. In the Smart Grid paradigm, self-interested active customers (prosumers), system operators and market players interact among themselves making use of an extensive cyber infrastructure. In addition, policy decision makers define regulations, incentives and constraints to drive the behavior of the competing operators and prosumers, with the objective of ensuring the global desired performance (e.g. system stability, fair prices). For these reasons, the policy decision making is more complicated than in traditional power systems, and needs proper modeling and simulation tools for assessing "in vitro" and ex-ante the possible impacts of the decisions assumed. In this chapter, we consider the smart grids as multi-layered interacting complex systems. The intricacy of the framework, characterized by several interacting layers, cannot be captured by closed-form mathematical models. Therefore, a new approach using Multi Agent Simulation is described. With case studies we provide some indications about how to develop agent-based simulation tools presenting some preliminary examples.

  17. Incorporating Topic Assignment Constraint and Topic Correlation Limitation into Clinical Goal Discovering for Clinical Pathway Mining.

    PubMed

    Xu, Xiao; Jin, Tao; Wei, Zhijie; Wang, Jianmin

    2017-01-01

    Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare costs. However, expert-designed clinical pathways can hardly deal with the variance among hospitals and patients, which calls for a more dynamic and adaptive process derived from actual clinical data. Topic-based clinical pathway mining is an effective approach to discovering such a concise process model. In this approach, the latent topics found by latent Dirichlet allocation (LDA) represent clinical goals, and process mining methods are used to extract the temporal relations between these topics. However, topic quality is usually not desirable due to the low performance of LDA on clinical data. In this paper, we incorporate a topic assignment constraint and a topic correlation limitation into LDA to enhance its ability to discover high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method have higher coherence, informativeness, and coverage than those of the original LDA, and these quality topics are suitable for representing clinical goals. We also illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model.

  18. Incorporating Topic Assignment Constraint and Topic Correlation Limitation into Clinical Goal Discovering for Clinical Pathway Mining

    PubMed Central

    Xu, Xiao; Wei, Zhijie

    2017-01-01

    Clinical pathways are widely used around the world for providing quality medical treatment and controlling healthcare costs. However, expert-designed clinical pathways can hardly deal with the variance among hospitals and patients, which calls for a more dynamic and adaptive process derived from actual clinical data. Topic-based clinical pathway mining is an effective approach to discovering such a concise process model. In this approach, the latent topics found by latent Dirichlet allocation (LDA) represent clinical goals, and process mining methods are used to extract the temporal relations between these topics. However, topic quality is usually not desirable due to the low performance of LDA on clinical data. In this paper, we incorporate a topic assignment constraint and a topic correlation limitation into LDA to enhance its ability to discover high-quality topics. Two real-world datasets are used to evaluate the proposed method. The results show that the topics discovered by our method have higher coherence, informativeness, and coverage than those of the original LDA, and these quality topics are suitable for representing clinical goals. We also illustrate that our method is effective in generating a comprehensive topic-based clinical pathway model. PMID:29065617

  19. On the exact solvability of the anisotropic central spin model: An operator approach

    NASA Astrophysics Data System (ADS)

    Wu, Ning

    2018-07-01

    Using an operator approach based on a commutator scheme that has previously been applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact-solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the model's integrability. We outline the basic steps of the operator approach and pedagogically summarize them in two Lemmas and two Constraints. Through a step-by-step construction of the eigenproblem, we show that the condition g'_j^2 - g_j^2 = c naturally arises for the model to be exactly solvable, where c is a constant independent of the bath-spin index j, and {g_j} and {g'_j} are the longitudinal and transverse hyperfine couplings, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with those in the previous literature.
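
    The solvability condition fixes the transverse couplings (up to sign) once the longitudinal ones and the constant c are chosen, since g'_j = sqrt(g_j^2 + c). A quick numerical illustration with arbitrary sample couplings:

```python
import math

def transverse_couplings(gs, c):
    """Transverse couplings implied by the condition g'_j**2 - g_j**2 = c."""
    return [math.sqrt(g * g + c) for g in gs]

gs = [0.5, 1.0, 1.7]                      # arbitrary longitudinal couplings
gps = transverse_couplings(gs, c=0.75)
# The difference g'_j**2 - g_j**2 is the same constant c for every bath spin j.
diffs = [gp * gp - g * g for g, gp in zip(gs, gps)]
```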

  20. Models of Sector Flows Under Local, Regional and Airport Weather Constraints

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    2017-01-01

    Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called the Collaborative Trajectory Options Program (CTOP). The FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas, or FCAs) in the system, and a CTOP allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs has been hampered by many factors, including the challenges of identifying constrained areas and setting rates for the FCAs. Decision support tools (DSTs) providing assistance would be particularly helpful for effective use of CTOPs; such tools would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions under multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is the lack of sufficient historical data capturing diverse situations involving combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors, increasing the amount of data available for model building by an order of magnitude. Second, we decompose the problem so that less data is needed: we create a baseline demand model plus a separate weather-constrained flow reduction model and then compose these into a single integrated model. The nominal demand model (gdem) is a flow model in the presence of clear local weather; it defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. The weather-constrained flow reduction model (fwx-red) models the reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that of a single monolithic model, the amount of data needed is reduced. Finally, a composite model combining the two can be represented as fwx-red(gdem(e), l), where e represents non-local constraints and l represents local weather. The approaches studied for developing these models fall into three categories: (1) point estimation models, (2) empirical models and (3) theoretical models. Errors in the predictions of these different types of models have been estimated. When abundant data is available, point estimation models tend to be very accurate, while empirical models do better than theoretical models when some data is available. The biggest benefit of theoretical models is their general applicability to a wider range of situations once their accuracy has been established.
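
    The decomposition described above is just function composition: a clear-weather demand model gdem composed with a weather-driven reduction fwx-red. The functional forms and numbers below are invented placeholders, not fitted models.

```python
def g_dem(e):
    """Baseline sector flow given non-local constraint severity e in [0, 1]."""
    return 60.0 * (1.0 - 0.5 * e)          # non-local constraints divert demand away

def f_wx_red(baseline, l):
    """Reduce the baseline flow by local weather severity l in [0, 1]."""
    return baseline * (1.0 - 0.8 * l)      # local weather throttles throughput

def sector_flow(e, l):
    """Composite model: fwx-red(gdem(e), l)."""
    return f_wx_red(g_dem(e), l)
```

    Each sub-model depends on fewer variables than a monolithic flow model would, which is exactly why the composition needs less data to fit.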

  1. Models of Sector Aircraft Counts in the Presence of Local, Regional and Airport Constraints

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    2017-01-01

    Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called the Collaborative Trajectory Options Program (CTOP). The FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas, or FCAs) in the system, and a CTOP allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs has been hampered by many factors, including the challenges of identifying constrained areas and setting rates for the FCAs. Decision support tools (DSTs) providing assistance would be particularly helpful for effective use of CTOPs; such tools would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions under multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is the lack of sufficient historical data capturing diverse situations involving combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors, increasing the amount of data available for model building by an order of magnitude. Second, we decompose the problem so that less data is needed: we create a baseline demand model plus a separate weather-constrained flow reduction model and then compose these into a single integrated model. The nominal demand model (gdem) is a flow model in the presence of clear local weather; it defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. The weather-constrained flow reduction model (fwx-red) models the reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that of a single monolithic model, the amount of data needed is reduced. Finally, a composite model combining the two can be represented as fwx-red(gdem(e), l), where e represents non-local constraints and l represents local weather. The approaches studied for developing these models fall into three categories: (1) point estimation models, (2) empirical models and (3) theoretical models. Errors in the predictions of these different types of models have been estimated. When abundant data is available, point estimation models tend to be very accurate, while empirical models do better than theoretical models when some data is available. The biggest benefit of theoretical models is their general applicability to a wider range of situations once their accuracy has been established.

  2. Experimental validation of a numerical 3-D finite model applied to wind turbines design under vibration constraints: TREVISE platform

    NASA Astrophysics Data System (ADS)

    Sellami, Takwa; Jelassi, Sana; Darcherif, Abdel Moumen; Berriri, Hanen; Mimouni, Med Faouzi

    2018-04-01

    With the advancement of wind turbines towards complex structures, the requirement for trustworthy structural models has become more apparent. Hence, the vibration characteristics of wind turbine components, like the blades and the tower, have to be extracted under vibration constraints. Although extracting the modal properties of blades is a simple task, calculating precise modal data for the whole wind turbine coupled to its tower/foundation is still a perplexing task. In this framework, this paper focuses on the structural modeling approach for modern commercial micro-turbines. Thus, the structural model of a wind turbine with a complex design, the Rutland 504, is established based on both experimental and numerical methods. A three-dimensional (3-D) numerical model of the structure was set up based on the finite volume method (FVM) using the finite element analysis software ANSYS. To validate the created model, experimental vibration tests were carried out using the vibration test system of the TREVISE platform at ECAM-EPMI. The tests were based on the experimental modal analysis (EMA) technique, one of the most efficient techniques for identifying structural parameters. Indeed, the poles and residues of the frequency response functions (FRF) between input and output spectra were calculated to extract the mode shapes and the natural frequencies of the structure. Based on the obtained modal parameters, the numerical model was updated.
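
    The basic idea of reading a natural frequency off an FRF can be shown on a single-degree-of-freedom system, whose receptance magnitude peaks near the natural frequency. Real EMA fits poles and residues rather than picking the tallest sample; the parameters below are illustrative.

```python
import math

def frf_magnitude(omega, wn=10.0, zeta=0.05):
    """|H(i*omega)| for a unit-mass SDOF system with natural frequency wn
    (rad/s) and damping ratio zeta: H = 1 / (wn^2 - omega^2 + 2i*zeta*wn*omega)."""
    re = wn * wn - omega * omega
    im = 2.0 * zeta * wn * omega
    return 1.0 / math.hypot(re, im)

freqs = [0.1 * k for k in range(1, 200)]   # sweep 0.1 ... 19.9 rad/s
peak = max(freqs, key=frf_magnitude)       # crude peak-pick of the resonance
```

    For light damping the peak sits very close to wn, here 10 rad/s.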

  3. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    PubMed Central

    Rao, Yunqing; Qi, Dezhong; Li, Jinling

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed within an integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method based on the characteristics of the problem is used. Furthermore, to speed up convergence and avoid local convergence, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparisons show that the presented approach is quite effective for the considered problem. PMID:24489491
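
    One common way to make crossover and mutation probabilities adaptive is a linear rule: individuals with above-average fitness get lower rates (exploitation), below-average individuals get the maximum rates (exploration). The specific formula below is a standard choice for illustration, not necessarily the paper's exact rule.

```python
def adaptive_rates(fitness, f_avg, f_max,
                   pc_hi=0.9, pc_lo=0.6, pm_hi=0.1, pm_lo=0.01):
    """Adaptive crossover (pc) and mutation (pm) probabilities.

    Below-average individuals get the maximum rates; above-average ones are
    scaled linearly down toward the minimum as fitness approaches f_max.
    """
    if fitness < f_avg or f_max == f_avg:
        return pc_hi, pm_hi
    scale = (f_max - fitness) / (f_max - f_avg)
    return pc_lo + (pc_hi - pc_lo) * scale, pm_lo + (pm_hi - pm_lo) * scale
```

    The effect is to protect good individuals from disruption while keeping pressure on poor ones, which speeds convergence without collapsing diversity.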

  4. An improved hierarchical genetic algorithm for sheet cutting scheduling with process constraints.

    PubMed

    Rao, Yunqing; Qi, Dezhong; Li, Jinling

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, is proposed within an integrated cutting stock model. The objective of the cutting scheduling problem is to minimize the weighted completion time. A mathematical model for this problem is presented, an improved hierarchical genetic algorithm (ant colony-hierarchical genetic algorithm) is developed for better solutions, and a hierarchical coding method based on the characteristics of the problem is used. Furthermore, to speed up convergence and avoid local convergence, adaptive crossover and mutation probabilities are used in this algorithm. The computational results and comparisons show that the presented approach is quite effective for the considered problem.

  5. Approximate Bayesian computation in large-scale structure: constraining the galaxy-halo connection

    NASA Astrophysics Data System (ADS)

    Hahn, ChangHoon; Vakili, Mohammadjavad; Walsh, Kilian; Hearin, Andrew P.; Hogg, David W.; Campbell, Duncan

    2017-08-01

    Standard approaches to Bayesian parameter inference in large-scale structure assume a Gaussian functional form (chi-squared form) for the likelihood. This assumption, in detail, cannot be correct. Likelihood-free inference methods such as approximate Bayesian computation (ABC) relax these restrictions and make inference possible without any assumptions on the likelihood. Instead, ABC relies on a forward generative model of the data and a metric for measuring the distance between the model and the data. In this work, we demonstrate that ABC is feasible for LSS parameter inference by using it to constrain parameters of the halo occupation distribution (HOD) model for populating dark matter haloes with galaxies. Using a specific implementation of ABC supplemented with population Monte Carlo importance sampling, a generative forward model using the HOD and a distance metric based on the galaxy number density, two-point correlation function and galaxy group multiplicity function, we constrain the HOD parameters of a mock observation generated from selected 'true' HOD parameters. The parameter constraints we obtain from ABC are consistent with the 'true' HOD parameters, demonstrating that ABC can be reliably used for parameter inference in LSS. Furthermore, we compare our ABC constraints to those obtained using a pseudo-likelihood function of Gaussian form with MCMC and find consistent HOD parameter constraints. Ultimately, our results suggest that ABC can and should be applied in parameter inference for LSS analyses.
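
    The core ABC loop can be sketched in a few lines with rejection sampling: draw parameters from the prior, simulate, and accept draws whose summary statistic lies within a tolerance of the observed one. The "forward model" here is a trivial Gaussian mean, standing in for the HOD mock pipeline; population Monte Carlo refinement is omitted.

```python
import random

def abc_rejection(observed_mean, n_draws=5000, tol=0.1, seed=1):
    """ABC rejection sampling for the mean of a Gaussian with unit variance."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5, 5)                       # draw from a flat prior
        sim = [rng.gauss(theta, 1.0) for _ in range(50)] # forward-simulate data
        # Accept when the simulated summary is close to the observed one.
        if abs(sum(sim) / len(sim) - observed_mean) < tol:
            accepted.append(theta)
    return accepted

posterior = abc_rejection(observed_mean=2.0)
```

    The accepted draws approximate the posterior without ever writing down a likelihood; their spread reflects both the tolerance and the simulation noise.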

  6. Lyapunov-Based Sensor Failure Detection And Recovery For The Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2001-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.
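
    The scalar-inequality idea can be sketched with a toy process model: for a healthy system, the sensed variables must satisfy a bound derived from the defining differential equations, and a persistent violation flags a sensor fault. The unit-area tank model and threshold below are invented stand-ins for the RWGS dynamics.

```python
def sensor_consistent(inflow, outflow, d_level_dt, tol=0.05):
    """Mass balance for a unit-area tank: d(level)/dt = inflow - outflow.
    Returns True when the sensed rate agrees with the balance within tol."""
    return abs(d_level_dt - (inflow - outflow)) <= tol

# Healthy readings satisfy the inequality constraint...
ok = sensor_consistent(inflow=0.30, outflow=0.10, d_level_dt=0.21)
# ...while a stuck level-rate sensor violates it, triggering the
# observer/estimator stage that isolates which sensor failed.
fault_detected = not sensor_consistent(inflow=0.30, outflow=0.10, d_level_dt=0.0)
```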

  7. LYAPUNOV-Based Sensor Failure Detection and Recovery for the Reverse Water Gas Shift Process

    NASA Technical Reports Server (NTRS)

    Haralambous, Michael G.

    2002-01-01

    Livingstone, a model-based AI software system, is planned for use in the autonomous fault diagnosis, reconfiguration, and control of the oxygen-producing reverse water gas shift (RWGS) process test-bed located in the Applied Chemistry Laboratory at KSC. In this report the RWGS process is first briefly described and an overview of Livingstone is given. Next, a Lyapunov-based approach for detecting and recovering from sensor failures, differing significantly from that used by Livingstone, is presented. In this new method, models used are in terms of the defining differential equations of system components, thus differing from the qualitative, static models used by Livingstone. An easily computed scalar inequality constraint, expressed in terms of sensed system variables, is used to determine the existence of sensor failures. In the event of sensor failure, an observer/estimator is used for determining which sensors have failed. The theory underlying the new approach is developed. Finally, a recommendation is made to use the Lyapunov-based approach to complement the capability of Livingstone and to use this combination in the RWGS process.

  8. A classification procedure for the effective management of changes during the maintenance process

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.

    1992-01-01

    During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by NASA/GSFC which shows it was effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.

  9. Efficient dynamic modeling of manipulators containing closed kinematic loops

    NASA Astrophysics Data System (ADS)

    Ferretti, Gianni; Rocco, Paolo

    An approach to efficiently solving the forward dynamics problem for manipulators containing closed chains is proposed. The two main distinctive features of this approach are: the dynamics of the equivalent open-loop tree structures (any closed loop can in general be modeled by imposing additional kinematic constraints on a suitable tree structure) are computed through an efficient Newton-Euler formulation; and the constraint equations relative to the closed chains most commonly adopted in industrial manipulators are solved explicitly, thus overcoming the redundancy of the Lagrange multipliers method while avoiding the inefficiency of a numerical solution of the implicit constraint equations. The constraint equations considered for explicit solution are those imposed by articulated gear mechanisms and planar closed chains (pantograph-type structures). Articulated gear mechanisms are used in all industrial robots to transmit motion from actuators to links, while planar closed chains are usefully employed to increase the stiffness of manipulators and their load capacity, as well as to reduce the kinematic coupling of joint axes. The accuracy and efficiency of the proposed approach are shown through a simulation test.

  10. A study of pH-dependent photodegradation of amiloride by a multivariate curve resolution approach to combined kinetic and acid-base titration UV data.

    PubMed

    De Luca, Michele; Ioele, Giuseppina; Mas, Sílvia; Tauler, Romà; Ragno, Gaetano

    2012-11-21

    Amiloride photostability at different pH values was studied in depth by applying Multivariate Curve Resolution Alternating Least Squares (MCR-ALS) to UV spectrophotometric data from drug solutions exposed to stressing irradiation. Resolution of all degradation photoproducts was possible through simultaneous spectrophotometric analysis of kinetic photodegradation and acid-base titration experiments. Amiloride photodegradation was shown to be strongly dependent on pH. Two hard-modelling constraints were used sequentially in MCR-ALS for the unambiguous resolution of all the species involved in the photodegradation process: the amiloride acid-base system was defined using the equilibrium constraint, and the photodegradation pathway was modelled taking into account the kinetic constraint. The simultaneous analysis of photodegradation and titration experiments revealed the presence of eight different species, distributed differently according to pH and time. Concentration profiles of all the species as well as their pure spectra were resolved, and kinetic rate constants were estimated. The rate constants changed with pH, and under alkaline conditions the degradation pathway and photoproducts also changed. These results were compared to those obtained by LC-MS analysis of drug photodegradation experiments. MS analysis allowed the identification of up to five species and showed the simultaneous presence of more than one acid-base equilibrium.

  11. Beyond mechanistic interaction: value-based constraints on meaning in language.

    PubMed

    Rączaszek-Leonardi, Joanna; Nomikou, Iris

    2015-01-01

    According to situated, embodied, and distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize this coordinative function treat language as a system of replicable constraints on individual and interactive dynamics. In this paper, we argue that the integration of the replicable-constraints approach to language with the ecological view on values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints will be shown as embedded in more general fields of values, which are realized on multiple timescales. Because the ontogenetic timescale offers a convenient window into the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.

  12. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the use of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed that incorporates the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
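
    The kind of constrained QP that arises at each receding-horizon step can be sketched with a much simpler solver than the brief's primal-dual neural network: projected gradient descent on a small box-constrained QP. This is a stand-in to show the problem structure, not the paper's method.

```python
def solve_qp(Q, c, lo, hi, step=0.1, iters=500):
    """Minimize 0.5*x'Qx + c'x subject to lo <= x <= hi (element-wise)
    by projected gradient descent. Q must be positive definite."""
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        # Gradient step followed by projection onto the box constraints.
        x = [min(hi[i], max(lo[i], x[i] - step * grad[i])) for i in range(n)]
    return x

# Toy problem: minimize 0.5*(x0^2 + x1^2) - x0 - 3*x1 with 0 <= xi <= 2.
Q = [[1.0, 0.0], [0.0, 1.0]]
x = solve_qp(Q, c=[-1.0, -3.0], lo=[0.0, 0.0], hi=[2.0, 2.0])
```

    The unconstrained minimizer is (1, 3); the box constraint clips the second coordinate to 2, just as actuator limits clip the optimal guidance command.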

  13. From population viability analysis to coviability of farmland biodiversity and agriculture.

    PubMed

    Mouysset, L; Doyen, L; Jiguet, F

    2014-02-01

    Substantial declines in farmland biodiversity have been reported in Europe for several decades, and agricultural changes have been identified as a main driver of these declines. Although different agri-environmental schemes have been implemented, their positive effect on biodiversity remains relatively unknown. This raises the question of how to reconcile farming production and biodiversity conservation in order to operationalize a sustainable and multifunctional agriculture. We devised a bioeconomic model and conducted an analysis based on the coviability of farmland biodiversity and agriculture. The coviability approach extends population viability analysis by including bioeconomic risk. Our model coupled stochastic dynamics of both biodiversity and farming land uses, selected at the microlevel, with public policies at the macrolevel based on financial incentives (taxes or subsidies) for land uses. The coviability approach made it possible to evaluate the bioeconomic risks of these public incentives through the probability of satisfying a mix of biodiversity and economic constraints over time. We calibrated the model and applied it to a community of 34 common birds in metropolitan France at the scale of small agricultural regions. We identified different public policies and scenarios with tolerable (0-0%) agroecological risk and modeled their outcomes up to 2050. Budgetary, economic, and ecological (based on the Farmland Bird Index) constraints were essential to understanding the set of viable public policies. Our results suggest that some combinations of taxes on cereals and subsidies on grasslands could be relevant for developing a multifunctional agriculture. Moreover, the flexibility and multicriteria viewpoint underlying the coviability approach may help in the implementation of adaptive management. © 2013 Society for Conservation Biology.
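
    The "probability of satisfying a mix of constraints over time" can be estimated by Monte Carlo: simulate stochastic biodiversity and income trajectories under a policy and count the runs in which both stay above their floors for the whole horizon. All dynamics, parameters and thresholds below are invented placeholders, not the calibrated model.

```python
import random

def coviability_prob(subsidy, n_runs=2000, horizon=30, seed=7):
    """Fraction of simulated runs where both a bird index and farm income
    stay above 0.8 of their initial value through the horizon."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_runs):
        bird_index, income = 1.0, 1.0
        viable = True
        for _ in range(horizon):
            # Subsidy helps biodiversity but costs some income growth.
            bird_index *= 1.0 - 0.01 + 0.02 * subsidy + rng.gauss(0, 0.02)
            income *= 1.0 + 0.01 - 0.005 * subsidy + rng.gauss(0, 0.02)
            if bird_index < 0.8 or income < 0.8:
                viable = False
                break
        ok += viable
    return ok / n_runs

p = coviability_prob(subsidy=1.0)
```

    Sweeping the subsidy level traces how bioeconomic risk shifts between the ecological and the economic constraint.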

  14. Estimation of in-situ bioremediation system cost using a hybrid Extreme Learning Machine (ELM)-particle swarm optimization approach

    NASA Astrophysics Data System (ADS)

    Yadav, Basant; Ch, Sudheer; Mathur, Shashi; Adamowski, Jan

    2016-12-01

    In-situ bioremediation is the most common groundwater remediation procedure used for treating organically contaminated sites. A simulation-optimization approach, which incorporates a simulation model for groundwaterflow and transport processes within an optimization program, could help engineers in designing a remediation system that best satisfies management objectives as well as regulatory constraints. In-situ bioremediation is a highly complex, non-linear process and the modelling of such a complex system requires significant computational exertion. Soft computing techniques have a flexible mathematical structure which can generalize complex nonlinear processes. In in-situ bioremediation management, a physically-based model is used for the simulation and the simulated data is utilized by the optimization model to optimize the remediation cost. The recalling of simulator to satisfy the constraints is an extremely tedious and time consuming process and thus there is need for a simulator which can reduce the computational burden. This study presents a simulation-optimization approach to achieve an accurate and cost effective in-situ bioremediation system design for groundwater contaminated with BTEX (Benzene, Toluene, Ethylbenzene, and Xylenes) compounds. In this study, the Extreme Learning Machine (ELM) is used as a proxy simulator to replace BIOPLUME III for the simulation. The selection of ELM is done by a comparative analysis with Artificial Neural Network (ANN) and Support Vector Machine (SVM) as they were successfully used in previous studies of in-situ bioremediation system design. Further, a single-objective optimization problem is solved by a coupled Extreme Learning Machine (ELM)-Particle Swarm Optimization (PSO) technique to achieve the minimum cost for the in-situ bioremediation system design. The results indicate that ELM is a faster and more accurate proxy simulator than ANN and SVM. 
The total cost obtained by the ELM-PSO approach is held to a minimum while successfully satisfying all the regulatory constraints of the contaminated site.
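    The proxy-simulator idea above is easy to sketch: an ELM trains only its output layer, which is why it is faster to build than a back-propagated ANN. Below is a minimal, hedged illustration in which a smooth synthetic function stands in for BIOPLUME III output; all sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data standing in for BIOPLUME III runs: inputs could be
# pumping rates, the output a contaminant-concentration response. The
# function below is illustrative, not the paper's simulator.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = np.sin(X @ np.array([2.0, -1.0, 0.5])) + 0.1 * X[:, 0]

def elm_fit(X, y, n_hidden=50):
    """Extreme Learning Machine: random fixed hidden layer, output
    weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only the output layer is fitted
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

model = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(model, X) - y) ** 2))
print("training RMSE:", rmse)
```

    Because training is a single least-squares solve, the ELM can be rebuilt or queried thousands of times inside a PSO loop at negligible cost, which is the property the study exploits.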

  15. The Influence of Individual Driver Characteristics on Congestion Formation

    NASA Astrophysics Data System (ADS)

    Wang, Lanjun; Zhang, Hao; Meng, Huadong; Wang, Xiqin

Previous works have pointed out that one of the reasons for the formation of traffic congestion is instability in traffic flow. In this study, we investigate theoretically how the characteristics of individual drivers influence the instability of traffic flow. The discussion is based on the optimal velocity model, which has three parameters related to individual driver characteristics. We specify the mappings between the model parameters and driver characteristics in this study. With linear stability analysis, we obtain a condition for when instability occurs and a constraint on how the model parameters influence the unstable traffic flow. Meanwhile, we also determine how the region of unstable flow densities depends on these parameters. Additionally, the Langevin approach theoretically validates that, under this constraint, the macroscopic character of the unstable traffic flow becomes a mixture of free flow and congestion. All of these results imply that both overly aggressive and overly conservative drivers are capable of triggering traffic congestion.
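    The abstract does not reproduce the model equations; assuming the standard (Bando-type) optimal velocity form, the model and the linear stability threshold it analyzes can be written as:

```latex
% Optimal velocity model: vehicle n relaxes toward an optimal
% velocity V set by its headway \Delta x_n, with sensitivity a
\frac{dv_n}{dt} = a\left[V(\Delta x_n) - v_n\right],
\qquad \Delta x_n = x_{n+1} - x_n
% Linear stability of uniform flow with headway h: small
% perturbations grow, and congestion can form, when
V'(h) > \frac{a}{2}
```

    On one plausible reading, a low sensitivity a (sluggish, overly conservative response) or a steep V'(h) (overly aggressive response to headway changes) both push the flow across the instability threshold, consistent with the abstract's conclusion.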

  16. Modeling mechanical interactions in growing populations of rod-shaped bacteria

    NASA Astrophysics Data System (ADS)

    Winkle, James J.; Igoshin, Oleg A.; Bennett, Matthew R.; Josić, Krešimir; Ott, William

    2017-10-01

    Advances in synthetic biology allow us to engineer bacterial collectives with pre-specified characteristics. However, the behavior of these collectives is difficult to understand, as cellular growth and division as well as extra-cellular fluid flow lead to complex, changing arrangements of cells within the population. To rationally engineer and control the behavior of cell collectives we need theoretical and computational tools to understand their emergent spatiotemporal dynamics. Here, we present an agent-based model that allows growing cells to detect and respond to mechanical interactions. Crucially, our model couples the dynamics of cell growth to the cell’s environment: Mechanical constraints can affect cellular growth rate and a cell may alter its behavior in response to these constraints. This coupling links the mechanical forces that influence cell growth and emergent behaviors in cell assemblies. We illustrate our approach by showing how mechanical interactions can impact the dynamics of bacterial collectives growing in microfluidic traps.
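    The growth-mechanics coupling described above can be caricatured in a few lines. This is not the authors' agent-based model (which resolves rod geometry and contact forces); it is a one-dimensional sketch, with illustrative parameters, in which crowding against a trap of fixed capacity slows cellular growth.

```python
# Minimal 1D caricature: each cell elongates at a rate reduced by mechanical
# crowding (total cell length vs. trap capacity) and divides when it doubles.
TRAP = 50.0               # trap capacity (illustrative units of length)
cells = [1.0] * 4         # initial cell lengths

for step in range(200):
    crowding = min(1.0, sum(cells) / TRAP)
    rate = 0.1 * (1.0 - crowding)            # mechanical constraint slows growth
    cells = [c * (1 + rate) for c in cells]  # elongation
    # division conserves total length: a cell of length >= 2 splits in two
    cells = [h for c in cells for h in ((c / 2, c / 2) if c >= 2.0 else (c,))]

print(round(sum(cells), 1))  # total population length saturates near TRAP
```

    Even this toy version shows the qualitative coupling the abstract emphasizes: the population's growth dynamics cannot be predicted from single-cell rates alone once mechanical constraints feed back on growth.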

  17. Constrained off-line synthesis approach of model predictive control for networked control systems with network-induced delays.

    PubMed

    Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng

    2015-03-01

This paper investigates the off-line synthesis approach of model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily accommodate a time-varying control law, is proposed to describe the NCS, where bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach of model predictive control is addressed using the stability results of the system, which explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  18. Application of a prospective model for calculating worker exposure due to the air pathway for operations in a laboratory.

    PubMed

    Grimbergen, T W M; Wiegman, M M

    2007-01-01

    In order to arrive at recommendations for guidelines on maximum allowable quantities of radioactive material in laboratories, a proposed mathematical model was used for the calculation of transfer fractions for the air pathway. A set of incident scenarios was defined, including spilling, leakage and failure of the fume hood. For these 'common incidents', dose constraints of 1 mSv and 0.1 mSv are proposed in case the operations are being performed in a controlled area and supervised area, respectively. In addition, a dose constraint of 1 microSv is proposed for each operation under regular working conditions. Combining these dose constraints and the transfer fractions calculated with the proposed model, maximum allowable quantities were calculated for different laboratory operations and situations. Provided that the calculated transfer fractions can be experimentally validated and the dose constraints are acceptable, it can be concluded from the results that the dose constraint for incidents is the most restrictive one. For non-volatile materials this approach leads to quantities much larger than commonly accepted. In those cases, the results of the calculations in this study suggest that limitation of the quantity of radioactive material, which can be handled safely, should be based on other considerations than the inhalation risks. Examples of such considerations might be the level of external exposure, uncontrolled spread of radioactive material by surface contamination, emissions in the environment and severe accidents like fire.

  19. Optimization of focality and direction in dense electrode array transcranial direct current stimulation (tDCS)

    NASA Astrophysics Data System (ADS)

    Guler, Seyhmus; Dannhauer, Moritz; Erem, Burak; Macleod, Rob; Tucker, Don; Turovets, Sergei; Luu, Phan; Erdogmus, Deniz; Brooks, Dana H.

    2016-06-01

Objective. Transcranial direct current stimulation (tDCS) aims to alter brain function non-invasively via electrodes placed on the scalp. Conventional tDCS uses two relatively large patch electrodes to deliver electrical current to the brain region of interest (ROI). Recent studies have shown that using dense arrays containing up to 512 smaller electrodes may increase the precision of targeting ROIs. However, this creates a need for methods to determine effective and safe stimulus patterns as the number of degrees of freedom is much higher with such arrays. Several approaches to this problem have appeared in the literature. In this paper, we describe a new method for calculating optimal electrode stimulus patterns for targeted and directional modulation in dense array tDCS which differs in some important aspects from methods reported to date. Approach. We optimize the stimulus pattern of dense arrays with fixed electrode placement to maximize the current density in a particular direction in the ROI. We impose a flexible set of constraints on the current power in the brain, individual electrode currents, and total injected current to protect subject safety. The proposed optimization problem is convex and thus efficiently solved using existing optimization software to find unique and globally optimal electrode stimulus patterns. Main results. Solutions for four anatomical ROIs based on a realistic head model are shown as exemplary results. To illustrate the differences between our approach and previously introduced methods, we compare our method with two of the other leading methods in the literature. We also report on extensive simulations that show the effect of the values chosen for each proposed safety constraint bound on the optimized stimulus patterns. Significance. The proposed optimization approach employs volume based ROIs, easily adapts to different sets of safety constraints, and takes negligible time to compute.
An in-depth comparison study gives insight into the relationship between different objective criteria and optimized stimulus patterns. In addition, the analysis of the interaction between optimized stimulus patterns and safety constraint bounds suggests that more precise current localization in the ROI, with improved safety criterion, may be achieved by careful selection of the constraint bounds.
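    The optimization described above is convex; a heavily reduced sketch of the same pattern (maximize directional ROI current density subject to current conservation and per-electrode bounds) can even be posed as a linear program. The random "lead field" below is a stand-in for the finite-element head model, and this toy constraint set omits the paper's current-power bound on the brain.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)

# Toy stand-in for the lead-field vector: entry i gives the current density
# produced in the ROI, along the target direction, by 1 mA at electrode i.
# Real lead fields come from a realistic head model; these are random.
n_elec = 16
lead = rng.normal(size=n_elec)

i_max = 1.0  # per-electrode current bound (mA), illustrative

# Maximize directional ROI current density subject to Kirchhoff's law
# (injected currents sum to zero) and per-electrode safety bounds.
res = linprog(
    c=-lead,                                # linprog minimizes, so negate
    A_eq=np.ones((1, n_elec)), b_eq=[0.0],  # total injected current balances
    bounds=[(-i_max, i_max)] * n_elec,
)
pattern = res.x
print("optimal ROI current density:", lead @ pattern)
```

    With the quadratic brain-power constraint added back, the problem becomes a second-order cone program rather than an LP, but it remains convex, which is what guarantees the unique, globally optimal patterns the abstract describes.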

  20. Interrelations between different canonical descriptions of dissipative systems

    NASA Astrophysics Data System (ADS)

    Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.

    2015-04-01

    There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.

  1. The mechanism and design of sequencing batch reactor systems for nutrient removal--the state of the art.

    PubMed

    Artan, N; Wilderer, P; Orhon, D; Morgenroth, E; Ozgür, N

    2001-01-01

The Sequencing Batch Reactor (SBR) process for carbon and nutrient removal has been the subject of extensive research and is finding wider application in full-scale installations. Despite this growing popularity, however, a widely accepted approach to process analysis and modeling, a unified design basis, and even a common terminology are still lacking; this situation is now regarded as the major obstacle hindering broader practical application of the SBR. In this paper a rational dimensioning approach is proposed for nutrient-removal SBRs, based on scientific information on process stoichiometry and modelling, also emphasizing practical constraints in design and operation.

  2. Energetic constraints, size gradients, and size limits in benthic marine invertebrates.

    PubMed

    Sebens, Kenneth P

    2002-08-01

    Populations of marine benthic organisms occupy habitats with a range of physical and biological characteristics. In the intertidal zone, energetic costs increase with temperature and aerial exposure, and prey intake increases with immersion time, generating size gradients with small individuals often found at upper limits of distribution. Wave action can have similar effects, limiting feeding time or success, although certain species benefit from wave dislodgment of their prey; this also results in gradients of size and morphology. The difference between energy intake and metabolic (and/or behavioral) costs can be used to determine an energetic optimal size for individuals in such populations. Comparisons of the energetic optimal size to the maximum predicted size based on mechanical constraints, and the ensuing mortality schedule, provides a mechanism to study and explain organism size gradients in intertidal and subtidal habitats. For species where the energetic optimal size is well below the maximum size that could persist under a certain set of wave/flow conditions, it is probable that energetic constraints dominate. When the opposite is true, populations of small individuals can dominate habitats with strong dislodgment or damage probability. When the maximum size of individuals is far below either energetic optima or mechanical limits, other sources of mortality (e.g., predation) may favor energy allocation to early reproduction rather than to continued growth. Predictions based on optimal size models have been tested for a variety of intertidal and subtidal invertebrates including sea anemones, corals, and octocorals. This paper provides a review of the optimal size concept, and employs a combination of the optimal energetic size model and life history modeling approach to explore energy allocation to growth or reproduction as the optimal size is approached.
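    The energetic optimal-size argument above reduces to maximizing net energy intake over body size. A toy version with assumed allometric forms (the exponents and coefficients are illustrative, not values from the paper):

```python
import numpy as np

# Assumed allometry: energy intake scales sublinearly with body size S
# (surface-limited feeding), metabolic cost roughly linearly.
a, b = 0.67, 1.0    # allometric exponents: intake ~ S**a, cost ~ S**b
c1, c2 = 10.0, 2.0  # illustrative scaling coefficients

def net_energy(S):
    return c1 * S**a - c2 * S**b  # surplus available for growth/reproduction

# The energetic optimum solves d/dS (intake - cost) = 0:
S_opt = (a * c1 / (b * c2)) ** (1.0 / (b - a))

# Cross-check the closed form against a grid search
S = np.linspace(0.1, 100.0, 100_000)
S_grid = S[np.argmax(net_energy(S))]
print(S_opt, S_grid)
```

    Comparing this energetic optimum against a mechanically imposed maximum size (from wave dislodgment) is the paper's device for predicting which constraint dominates in a given habitat.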

  3. Teaching Database Design with Constraint-Based Tutors

    ERIC Educational Resources Information Center

    Mitrovic, Antonija; Suraweera, Pramuditha

    2016-01-01

    Design tasks are difficult to teach, due to large, unstructured solution spaces, underspecified problems, non-existent problem solving algorithms and stopping criteria. In this paper, we comment on our approach to develop KERMIT, a constraint-based tutor that taught database design. In later work, we re-implemented KERMIT as EER-Tutor, and…

  4. A Monte Carlo Approach for Adaptive Testing with Content Constraints

    ERIC Educational Resources Information Center

    Belov, Dmitry I.; Armstrong, Ronald D.; Weissman, Alexander

    2008-01-01

    This article presents a new algorithm for computerized adaptive testing (CAT) when content constraints are present. The algorithm is based on shadow CAT methodology to meet content constraints but applies Monte Carlo methods and provides the following advantages over shadow CAT: (a) lower maximum item exposure rates, (b) higher utilization of the…

  5. A Constraint-Based Approach to Acquisition of Word-Final Consonant Clusters in Turkish Children

    ERIC Educational Resources Information Center

    Gokgoz-Kurt, Burcu

    2017-01-01

    The current study provides a constraint-based analysis of L1 word-final consonant cluster acquisition in Turkish child language, based on the data originally presented by Topbas and Kopkalli-Yavuz (2008). The present analysis was done using [?]+obstruent consonant cluster acquisition. A comparison of Gradual Learning Algorithm (GLA) under…

  6. Dynamic Constraint Satisfaction with Reasonable Global Constraints

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy

    2003-01-01

    Previously studied theoretical frameworks for dynamic constraint satisfaction problems (DCSPs) employ a small set of primitive operators to modify a problem instance. They do not address the desire to model problems using sophisticated global constraints, and do not address efficiency questions related to incremental constraint enforcement. In this paper, we extend a DCSP framework to incorporate global constraints with flexible scope. A simple approach to incremental propagation after scope modification can be inefficient under some circumstances. We characterize the cases when this inefficiency can occur, and discuss two ways to alleviate this problem: adding rejection variables to the scope of flexible constraints, and adding new features to constraints that permit increased control over incremental propagation.

  7. Generating subtour elimination constraints for the TSP from pure integer solutions.

    PubMed

    Pferschy, Ulrich; Staněk, Rostislav

    2017-01-01

The traveling salesman problem (TSP) is one of the most prominent combinatorial optimization problems. Given a complete graph [Formula: see text] and non-negative distances d for every edge, the TSP asks for a shortest tour through all vertices with respect to the distances d. The method of choice for solving the TSP to optimality is a branch-and-cut approach. Usually the integrality constraints are relaxed first and all separation processes to identify violated inequalities are done on fractional solutions. In our approach we try to exploit the impressive performance of current ILP solvers and work only with integer solutions, without ever interfering with fractional solutions. We stick to a very simple ILP model and relax only the subtour elimination constraints. The resulting problem is solved to integer optimality, violated constraints (which are trivial to find) are added, and the process is repeated until a feasible solution is found. In order to speed up the algorithm we pursue several attempts to find as many relevant subtours as possible. These attempts are based on the clustering of vertices, with additional insights gained from empirical observations and random graph theory. Computational experiments are performed on test instances taken from the TSPLIB95 and on random Euclidean graphs.
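    The separation step the abstract calls "trivial to find" amounts to extracting the connected components of the integer solution's edge set; each component smaller than the full vertex set yields a violated subtour elimination constraint. A minimal sketch on a toy edge set (no ILP solver involved):

```python
def find_subtours(n, edges):
    """Return the vertex sets of the cycles formed by a degree-2 edge set."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, tours = set(), []
    for start in range(n):
        if start in seen:
            continue
        comp, stack = set(), [start]    # depth-first traversal of one component
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        tours.append(sorted(comp))
    return tours

# An integer-feasible solution that is two disjoint cycles, not one tour:
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
print(find_subtours(6, edges))  # two components -> two violated constraints
```

    In the approach described above, a cut forbidding each such component is added to the ILP and the model is re-solved, repeating until a single Hamiltonian tour remains.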

  8. Synchronic interval Gaussian mixed-integer programming for air quality management.

    PubMed

    Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong

    2015-12-15

    To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. 
This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Shingle 2.0: generalising self-consistent and automated domain discretisation for multi-scale geophysical models

    NASA Astrophysics Data System (ADS)

    Candy, Adam S.; Pietrzak, Julie D.

    2018-01-01

The approaches taken to describe and develop spatial discretisations of the domains required for geophysical simulation models are commonly ad hoc, model- or application-specific, and under-documented. This is particularly acute for simulation models that are flexible in their use of multi-scale, anisotropic, fully unstructured meshes where a relatively large number of heterogeneous parameters are required to constrain their full description. As a consequence, it can be difficult to reproduce simulations, to ensure a provenance in model data handling and initialisation, and a challenge to conduct model intercomparisons rigorously. This paper takes a novel approach to spatial discretisation, treating it much like a numerical simulation model in its own right. It introduces a generalised, extensible, self-documenting approach to describe, carefully and necessarily fully, the constraints over the heterogeneous parameter space that determine how a domain is spatially discretised. This additionally provides a method to accurately record these constraints, using high-level natural language based abstractions that enable full accounts of provenance, sharing, and distribution. Together with this description, a generalised consistent approach to unstructured mesh generation for geophysical models is developed that is automated, robust and repeatable, quick-to-draft, rigorously verified, and consistent with the source data throughout. This interprets the description above to execute a self-consistent spatial discretisation process, which is automatically validated to expected discrete characteristics and metrics. Library code, verification tests, and examples are available in the repository at https://github.com/shingleproject/Shingle. Further details of the project are presented at http://shingleproject.org.

  10. Improving 3D Genome Reconstructions Using Orthologous and Functional Constraints

    PubMed Central

    Diament, Alon; Tuller, Tamir

    2015-01-01

    The study of the 3D architecture of chromosomes has been advancing rapidly in recent years. While a number of methods for 3D reconstruction of genomic models based on Hi-C data were proposed, most of the analyses in the field have been performed on different 3D representation forms (such as graphs). Here, we reproduce most of the previous results on the 3D genomic organization of the eukaryote Saccharomyces cerevisiae using analysis of 3D reconstructions. We show that many of these results can be reproduced in sparse reconstructions, generated from a small fraction of the experimental data (5% of the data), and study the properties of such models. Finally, we propose for the first time a novel approach for improving the accuracy of 3D reconstructions by introducing additional predicted physical interactions to the model, based on orthologous interactions in an evolutionary-related organism and based on predicted functional interactions between genes. We demonstrate that this approach indeed leads to the reconstruction of improved models. PMID:26000633

  11. Gas solubility in dilute solutions: A novel molecular thermodynamic perspective

    NASA Astrophysics Data System (ADS)

    Chialvo, Ariel A.

    2018-05-01

We present an explicit molecular-based interpretation of the thermodynamic phase equilibrium underlying gas solubility in liquids, through rigorous links between the microstructure of the dilute systems and the relevant macroscopic quantities that characterize their solution thermodynamics. We apply the formal analysis to unravel and highlight the molecular-level nature of the approximations behind the widely used Krichevsky-Kasarnovsky [J. Am. Chem. Soc. 57, 2168 (1935)] and Krichevsky-Ilinskaya [Acta Physicochim. 20, 327 (1945)] equations for the modeling of gas solubility. Then, we implement a general molecular-based approach to gas solubility and illustrate it by studying Lennard-Jones binary systems whose microstructure and thermodynamic properties were consistently generated via integral equation calculations. Furthermore, guided by the molecular-based analysis, we propose a novel macroscopic modeling approach to gas solubility, emphasize some usually overlooked modeling subtleties, and identify novel interdependences among relevant solubility quantities that can be used as either handy modeling constraints or tools for consistency tests.
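    For reference, the two classical equations named above are usually written as follows (standard textbook forms, not reproduced in the abstract): f_2 and x_2 are the solute fugacity and mole fraction, H the Henry's constant at the solvent saturation pressure P_1^s, v̄_2^∞ the solute partial molar volume at infinite dilution, and A a Margules parameter.

```latex
% Krichevsky--Kasarnovsky:
\ln\frac{f_2}{x_2} = \ln H_{2,1}^{(P_1^{\mathrm{s}})}
  + \frac{\bar{v}_2^{\infty}\,(P - P_1^{\mathrm{s}})}{RT}
% Krichevsky--Ilinskaya adds a Margules correction for
% solute--solvent nonideality:
\ln\frac{f_2}{x_2} = \ln H_{2,1}^{(P_1^{\mathrm{s}})}
  + \frac{A}{RT}\left(x_1^2 - 1\right)
  + \frac{\bar{v}_2^{\infty}\,(P - P_1^{\mathrm{s}})}{RT}
```

    The approximations the paper dissects at the molecular level are precisely the assumptions of pressure-independent v̄_2^∞ and composition behavior built into these forms.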

  12. Gas solubility in dilute solutions: A novel molecular thermodynamic perspective.

    PubMed

    Chialvo, Ariel A

    2018-05-07

We present an explicit molecular-based interpretation of the thermodynamic phase equilibrium underlying gas solubility in liquids, through rigorous links between the microstructure of the dilute systems and the relevant macroscopic quantities that characterize their solution thermodynamics. We apply the formal analysis to unravel and highlight the molecular-level nature of the approximations behind the widely used Krichevsky-Kasarnovsky [J. Am. Chem. Soc. 57, 2168 (1935)] and Krichevsky-Ilinskaya [Acta Physicochim. 20, 327 (1945)] equations for the modeling of gas solubility. Then, we implement a general molecular-based approach to gas solubility and illustrate it by studying Lennard-Jones binary systems whose microstructure and thermodynamic properties were consistently generated via integral equation calculations. Furthermore, guided by the molecular-based analysis, we propose a novel macroscopic modeling approach to gas solubility, emphasize some usually overlooked modeling subtleties, and identify novel interdependences among relevant solubility quantities that can be used as either handy modeling constraints or tools for consistency tests.

  13. Plant architecture, growth and radiative transfer for terrestrial and space environments

    NASA Technical Reports Server (NTRS)

    Norman, John M.; Goel, Narendra S.

    1993-01-01

    The overall objective of this research was to develop a hardware implemented model that would incorporate realistic and dynamic descriptions of canopy architecture in physiologically based models of plant growth and functioning, with an emphasis on radiative transfer while accommodating other environmental constraints. The general approach has five parts: a realistic mathematical treatment of canopy architecture, a methodology for combining this general canopy architectural description with a general radiative transfer model, the inclusion of physiological and environmental aspects of plant growth, inclusion of plant phenology, and integration.

  14. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
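    The surrogate-plus-optimizer pattern above can be sketched end to end. Everything below is illustrative: a quadratic fit stands in for the ANN emulator of CE-QUAL-W2, a single scalar release rate for the decision vector, and a penalty term for the dissolved oxygen (DO) constraint.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the emulation step: fit a cheap surrogate (a polynomial,
# not an ANN) mapping turbine release x to power output and downstream DO.
x_train = np.linspace(0.0, 1.0, 50)
power_train = 10 * x_train - 4 * x_train**2  # "high-fidelity" power
do_train = 8.0 - 5.0 * x_train               # "high-fidelity" DO (mg/L)
p_coef = np.polyfit(x_train, power_train, 2)
d_coef = np.polyfit(x_train, do_train, 1)

DO_LIMIT = 5.0  # water quality constraint: DO must stay above this

def fitness(x):
    power = np.polyval(p_coef, x)
    do = np.polyval(d_coef, x)
    penalty = 1e3 * np.maximum(0.0, DO_LIMIT - do)  # penalize DO violations
    return power - penalty

# A bare-bones genetic algorithm over the scalar decision variable
pop = rng.uniform(0.0, 1.0, 60)
for _ in range(100):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-30:]]                     # selection (elitist)
    children = (parents + parents[rng.permutation(30)]) / 2  # crossover
    children += rng.normal(0.0, 0.02, 30)                    # mutation
    pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)

best = pop[np.argmax(fitness(pop))]
print("best release:", best)
```

    The GA pushes the release toward the point where the DO constraint binds, mirroring the study's finding that tighter DO limits trade directly against generation. The surrogate is what makes the thousands of fitness evaluations affordable.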

  15. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  16. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  17. Pareto-Optimal Estimates of California Precipitation Change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Coupled Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
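The Pareto-optimality criterion behind this kind of multiobjective subensemble selection can be sketched with toy error values (not CMIP5 data): a subensemble is retained only if no other subensemble is at least as good on every measure and strictly better on at least one.

```python
def pareto_front(points):
    """Return indices of non-dominated points (minimization in every objective)."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Invented errors of four subensembles against three observational measures
# (e.g. SST, zonal winds, precipitation); smaller is better in each column.
errs = [(0.2, 0.5, 0.3), (0.1, 0.6, 0.4), (0.3, 0.4, 0.2), (0.4, 0.7, 0.5)]
front = pareto_front(errs)  # → [0, 1, 2]; subensemble 3 is dominated by 0
```

An evolutionary algorithm, as in the paper, is only needed when the space of subensembles is too large to enumerate; the dominance test itself is unchanged.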

  18. Space-time modeling using environmental constraints in a mobile robot system

    NASA Technical Reports Server (NTRS)

    Slack, Marc G.

    1990-01-01

    Grid-based models of a robot's local environment have been used by many researchers building mobile robot control systems. The attraction of grid-based models is their clear parallel between the internal model and the external world. However, the discrete nature of such representations does not match well with the continuous nature of actions and usually serves to limit the abilities of the robot. This work describes a spatial modeling system that extracts information from a grid-based representation to form a symbolic representation of the robot's local environment. The approach makes a separation between the representation provided by the sensing system and the representation used by the action system. Separation allows asynchronous operation between sensing and action in a mobile robot, as well as the generation of a more continuous representation upon which to base actions.

  19. An Algorithm for Interactive Modeling of Space-Transportation Engine Simulations: A Constraint Satisfaction Approach

    NASA Technical Reports Server (NTRS)

    Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara

    2001-01-01

    In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also offers the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be plugged easily into this algorithm for further gains in efficiency. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch-simulation of engineering systems (IMBSES). However, it could be adapted for many other CSPs as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
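The relational-algebraic view of constraint processing can be sketched generically (this is an illustration of the framework, not the authors' enhanced algorithm): each constraint is stored as a relation over its variables, and solutions emerge by combining relations with natural joins.

```python
def natural_join(r1, vars1, r2, vars2):
    """Join two relations (lists of tuples) on their shared variables."""
    shared = [v for v in vars1 if v in vars2]
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    out = []
    for t1 in r1:
        for t2 in r2:
            if all(t1[vars1.index(v)] == t2[vars2.index(v)] for v in shared):
                merged = list(t1) + [t2[vars2.index(v)]
                                     for v in vars2 if v not in vars1]
                out.append(tuple(merged))
    return out, out_vars

# Toy CSP: constraints X < Y and Y < Z, each encoded as a relation over {1,2,3}.
dom = [1, 2, 3]
lt = [(a, b) for a in dom for b in dom if a < b]
sols, vars_ = natural_join(lt, ['X', 'Y'], lt, ['Y', 'Z'])
# sols → [(1, 2, 3)]: the only assignment satisfying both constraints
```

Heuristics such as forward checking correspond, in this view, to pruning tuples from the relations before the joins are taken.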

  20. A Determination of the Intergalactic Redshift Dependent UV-Optical-NIR Photon Density Using Deep Galaxy Survey Data and the Gamma-Ray Opacity of the Universe

    NASA Technical Reports Server (NTRS)

    Stecker, Floyd W.

    2012-01-01

    We calculate the intensity and photon spectrum of the intergalactic background light (IBL) as a function of redshift using an approach based on observational data obtained in different wavelength bands from local to deep galaxy surveys. Our empirically based approach allows us, for the first time, to obtain a completely model-independent determination of the IBL and to quantify its uncertainties. Using our results on the IBL, we then place upper and lower limits on the opacity of the universe to gamma-rays, independent of previous constraints.

  1. Trajectory-Oriented Approach to Managing Traffic Complexity: Operational Concept and Preliminary Metrics Definition

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert; Garcia-Chico, Jose L.

    2008-01-01

    This document describes preliminary research on a distributed, trajectory-oriented approach for traffic complexity management. The approach is to manage traffic complexity in a distributed control environment, based on preserving trajectory flexibility and minimizing constraints. In particular, the document presents an analytical framework to study trajectory flexibility and the impact of trajectory constraints on it. The document proposes preliminary flexibility metrics that can be interpreted and measured within the framework.

  2. Nonlinear dynamic analysis of flexible multibody systems

    NASA Technical Reports Server (NTRS)

    Bauchau, Olivier A.; Kang, Nam Kook

    1991-01-01

    Two approaches are developed to analyze the dynamic behavior of flexible multibody systems. In the first approach each body is modeled with a modal methodology in a local non-inertial frame of reference, whereas in the second approach, each body is modeled with a finite element methodology in the inertial frame. In both cases, the interaction among the various elastic bodies is represented by constraint equations. The two approaches were compared for accuracy and efficiency: the first approach is preferable when the nonlinearities are not too strong but it becomes cumbersome and expensive to use when many modes must be used. The second approach is more general and easier to implement but could result in high computation costs for a large system. The constraints should be enforced in a time derivative fashion for better accuracy and stability.

  3. A Fatty Acid Based Bayesian Approach for Inferring Diet in Aquatic Consumers

    PubMed Central

    Holtgrieve, Gordon W.; Ward, Eric J.; Ballantyne, Ashley P.; Burns, Carolyn W.; Kainz, Martin J.; Müller-Navarra, Doerthe C.; Persson, Jonas; Ravet, Joseph L.; Strandberg, Ursula; Taipale, Sami J.; Alhgren, Gunnel

    2015-01-01

    We modified the stable isotope mixing model MixSIR to infer primary producer contributions to consumer diets based on their fatty acid composition. To parameterize the algorithm, we generated a ‘consumer-resource library’ of FA signatures of Daphnia fed different algal diets, using 34 feeding trials representing diverse phytoplankton lineages. This library corresponds to the resource or producer file in classic Bayesian mixing models such as MixSIR or SIAR. Because this library is based on the FA profiles of zooplankton consuming known diets, and not the FA profiles of algae directly, trophic modification of consumer lipids is directly accounted for. To test the model, we simulated hypothetical Daphnia comprised of 80% diatoms, 10% green algae, and 10% cryptophytes and compared the FA signatures of these known pseudo-mixtures to outputs generated by the mixing model. The algorithm inferred these simulated consumers were comprised of 82% (63-92%) [median (2.5th to 97.5th percentile credible interval)] diatoms, 11% (4-22%) green algae, and 6% (0-25%) cryptophytes. We used the same model with published phytoplankton stable isotope (SI) data for δ13C and δ15N to examine how a SI based approach resolved a similar scenario. With SI, the algorithm inferred that the simulated consumer assimilated 52% (4-91%) diatoms, 23% (1-78%) green algae, and 18% (1-73%) cyanobacteria. The accuracy and precision of SI based estimates was extremely sensitive to both resource and consumer uncertainty, as well as the trophic fractionation assumption. These results indicate that when using only two tracers with substantial uncertainty for the putative resources, as is often the case in this class of analyses, the underdetermined constraint in consumer-resource SI analyses may be intractable. 
The FA based approach alleviated the underdetermined constraint because many more FA biomarkers were utilized (n > 20), different primary producers (e.g., diatoms, green algae, and cryptophytes) have very characteristic FA compositions, and the FA profiles of many aquatic primary consumers are strongly influenced by their diets. PMID:26114945
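The mixing-model inference can be illustrated with a deliberately tiny stand-in: two hypothetical FA biomarkers, three producers, a Gaussian likelihood, and a grid search over the mixture simplex in place of the MixSIR sampler. All signatures and the observed consumer are invented numbers, not values from the consumer-resource library.

```python
# Hypothetical source FA signatures: mean proportions of two biomarkers per
# producer (stand-ins for the Daphnia-derived consumer-resource library).
sources = {'diatom': (0.40, 0.10), 'green': (0.10, 0.35), 'crypto': (0.20, 0.20)}
sigma = 0.02                       # assumed measurement/library spread
observed = (0.34, 0.15)            # an invented consumer mixture signature

def log_lik(props):
    """Gaussian log-likelihood of the observed signature given diet proportions."""
    pred = [sum(p * sources[s][k] for p, s in zip(props, sources))
            for k in range(2)]
    return sum(-0.5 * ((o - m) / sigma) ** 2 for o, m in zip(observed, pred))

# Coarse grid over the 3-simplex in place of posterior sampling.
grid = [(i / 20, j / 20, 1 - i / 20 - j / 20)
        for i in range(21) for j in range(21 - i)]
best = max(grid, key=log_lik)      # maximum-likelihood diet composition
```

With more biomarkers than producers, as the abstract notes, the corresponding system is overdetermined and the likelihood surface has a well-defined peak; here the recovered mixture is 80% diatom, 20% green algae.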

  4. Improving stability of regional numerical ocean models

    NASA Astrophysics Data System (ADS)

    Herzfeld, Mike

    2009-02-01

    An operational limited-area ocean modelling system was developed to supply forecasts of ocean state out to 3 days. This system is designed to allow non-specialist users to locate the model domain anywhere within the Australasian region with minimum user input. The model is required to produce a stable simulation every time it is invoked. This paper outlines the methodology used to ensure the model remains stable over the wide range of circumstances it might encounter. Central to the model configuration is an alternative approach to implementing open boundary conditions in a one-way nesting environment. Approximately 170 simulations were performed on limited areas in the Australasian region to assess the model stability; of these, 130 ran successfully with a static model parameterisation allowing a statistical estimate of the model’s approach toward instability to be determined. Based on this, when the model was deemed to be approaching instability a strategy of adaptive intervention in the form of constraint on velocity and elevation was invoked to maintain stability.

  5. Solution of underdetermined systems of equations with gridded a priori constraints.

    PubMed

    Stiros, Stathis C; Saltogianni, Vasso

    2014-01-01

    The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search) initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints to the unknown n variables, leading to an R^n grid containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in R^n containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
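The grid-search idea can be sketched in a toy form (an illustration of the principle, not the published TOPINV implementation): scan an R^n grid, keep the points consistent with every observation within a tolerance, and report the centre of gravity of the accepted region.

```python
def topological_grid_search(equations, observations, tols, grid):
    """Keep grid points consistent with every observation; return their centroid."""
    accepted = [x for x in grid
                if all(abs(f(x) - y) <= t
                       for f, y, t in zip(equations, observations, tols))]
    n = len(accepted)
    return tuple(sum(x[k] for x in accepted) / n
                 for k in range(len(accepted[0])))

# Underdetermined toy system: one equation, two unknowns. The solution set is a
# ring x^2 + y^2 ≈ 1, and its centroid is the minimum-norm point (0, 0).
grid = [(i / 50, j / 50) for i in range(-75, 76) for j in range(-75, 76)]
equations = [lambda x: x[0] ** 2 + x[1] ** 2]
centre = topological_grid_search(equations, [1.0], [0.05], grid)
```

The example shows why the method returns a closed region rather than a point: the accepted set is the whole ring, and its centroid coincides with the minimum-norm solution, as the abstract describes.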

  6. Modal kinematics for multisection continuum arms.

    PubMed

    Godage, Isuru S; Medrano-Cerda, Gustavo A; Branson, David T; Guglielmino, Emanuele; Caldwell, Darwin G

    2015-05-13

    This paper presents a novel spatial kinematic model for multisection continuum arms based on mode shape functions (MSF). Modal methods have been used in many disciplines from finite element methods to structural analysis to approximate complex and nonlinear parametric variations with simple mathematical functions. Given certain constraints and required accuracy, this helps to simplify complex phenomena with numerically efficient implementations leading to fast computations. A successful application of the modal approximation techniques to develop a new modal kinematic model for general variable length multisection continuum arms is discussed. The proposed method solves the limitations associated with previous models and introduces a new approach for readily deriving exact, singularity-free and unique MSFs, simplifying the derivation and avoiding mode switching. The model is able to simulate spatial bending as well as straight arm motions (i.e., pure elongation/contraction), and introduces inverse position and orientation kinematics for multisection continuum arms. A kinematic decoupling feature, splitting position and orientation inverse kinematics, is introduced. This type of decoupling has not been presented for these types of robotic arms before. The model also carefully accounts for physical constraints in the joint space to provide enhanced insight into practical mechanics and impose actuator mechanical limitations onto the kinematics, thus generating fully realizable results. The proposed method is easily applicable to a broad spectrum of continuum arm designs.

  7. Dense motion estimation using regularization constraints on local parametric models.

    PubMed

    Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein

    2004-11-01

    This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization means. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude and with motion discontinuities, and produces accurate piecewise-smooth motion fields.

  8. Processing scalar implicature: a Constraint-Based approach

    PubMed Central

    Degen, Judith; Tanenhaus, Michael K.

    2014-01-01

    Three experiments investigated the processing of the implicature associated with some using a “gumball paradigm”. On each trial participants saw an image of a gumball machine with an upper chamber with 13 gumballs and an empty lower chamber. Gumballs then dropped to the lower chamber and participants evaluated statements, such as “You got some of the gumballs”. Experiment 1 established that some is less natural for reference to small sets (1, 2 and 3 of the 13 gumballs) and unpartitioned sets (all 13 gumballs) compared to intermediate sets (6–8). Partitive some of was less natural than simple some when used with the unpartitioned set. In Experiment 2, including exact number descriptions lowered naturalness ratings for some with small sets but not for intermediate size sets and the unpartitioned set. In Experiment 3 the naturalness ratings from Experiment 2 predicted response times. The results are interpreted as evidence for a Constraint-Based account of scalar implicature processing and against both two-stage, Literal-First models and pragmatic Default models. PMID:25265993

  9. Revisit of cosmic ray antiprotons from dark matter annihilation with updated constraints on the background model from AMS-02 and collider data

    NASA Astrophysics Data System (ADS)

    Cui, Ming-Yang; Pan, Xu; Yuan, Qiang; Fan, Yi-Zhong; Zong, Hong-Shi

    2018-06-01

    We study the cosmic ray antiprotons with updated constraints on the propagation, proton injection, and solar modulation parameters based on the newest AMS-02 data near the Earth and Voyager data in the local interstellar space, and on the cross section of antiproton production due to proton-proton collisions based on new collider data. We use a Bayesian approach to properly consider the uncertainties of the model predictions of both the background and the dark matter (DM) annihilation components of antiprotons. We find that including an extra component of antiprotons from the annihilation of DM particles into a pair of quarks can improve the fit to the AMS-02 antiproton data considerably. The favored mass of DM particles is about 60-100 GeV, and the annihilation cross section is just at the level of the thermal production of DM (⟨σv⟩ ~ O(10^-26) cm^3 s^-1).

  10. A Constraint Generation Approach to Learning Stable Linear Dynamical Systems

    DTIC Science & Technology

    2008-01-01

    task of learning dynamic textures from image sequences as well as to modeling biosurveillance drug-sales data. The constraint generation approach … previous methods in our experiments. One application of LDSs in computer vision is learning dynamic textures from video data [8]. An advantage of … over-the-counter (OTC) drug sales for biosurveillance, and sunspot numbers from the UCR archive [9]. Comparison to the best alternative methods [7, 10

  11. Incorporating Resource Protection Constraints in an Analysis of Landscape Fuel-Treatment Effectiveness in the Northern Sierra Nevada, CA, USA.

    PubMed

    Dow, Christopher B; Collins, Brandon M; Stephens, Scott L

    2016-03-01

    Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two different treatment scenarios generated from an optimization algorithm that reduces modeled fire spread across the landscape, one with resource-protection constraints and one without them. We also included a treatment scenario that was the actual fuel-treatment network implemented, as well as a no-treatment scenario. For all four scenarios, we modeled hazardous fire potential based on conditional burn probabilities, and projected fire emissions. Results demonstrate that in all three active treatment scenarios, hazardous fire potential, fire area, and emissions were reduced by approximately 50 % relative to the untreated condition. Results also show that incorporating constraints is more effective at reducing modeled fire outputs, possibly due to the greater aggregation of treatments, creating greater continuity of fuel-treatment blocks across the landscape. The implementation of fuel-treatment networks using different planning techniques that incorporate real-world constraints can reduce the risk of large problematic fires, allow for landscape-level heterogeneity that can provide necessary ecosystem services, create mixed forest stand structures on a landscape, and promote resilience in the uncertain future of climate change.

  12. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
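The repair loop can be sketched on a toy one-resource scheduling problem. The conflict test and the repair move below (push the later-starting task to just after the other) are my own minimal stand-ins, not the Space Shuttle system's heuristics:

```python
def conflicts(ts):
    """Pairs of tasks that overlap in time on the single shared resource."""
    names = sorted(ts)
    return [(x, y) for i, x in enumerate(names) for y in names[i + 1:]
            if ts[x][0] < ts[y][0] + ts[y][1] and ts[y][0] < ts[x][0] + ts[x][1]]

def iterative_repair(ts, max_iters=100):
    """Start from a flawed schedule; repair one violated constraint at a time."""
    for _ in range(max_iters):
        bad = conflicts(ts)
        if not bad:
            break                  # conflict-free schedule reached
        x, y = bad[0]
        # Repair move: push the later-starting task to just after the other one.
        mover, other = (x, y) if ts[x][0] > ts[y][0] else (y, x)
        ts[mover][0] = ts[other][0] + ts[other][1]
    return ts

tasks = {'A': [0, 3], 'B': [1, 2], 'C': [2, 4]}   # name: [start, duration]
fixed = iterative_repair(tasks)
# → {'A': [0, 3], 'B': [3, 2], 'C': [5, 4]}, conflict-free after three repairs
```

Because each repair is local, most of the original schedule survives, which is the perturbation-minimizing property the abstract emphasizes; a production system would also weigh optimization concerns when choosing which conflict to repair.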

  13. An innovative outcome-based care and procurement model of hemophilia management.

    PubMed

    Gringeri, Alessandro; Doralt, Jennifer; Valentino, Leonard A; Crea, Roberto; Reininger, Armin J

    2016-06-01

    Hemophilia is a rare bleeding disorder associated with spontaneous and post-traumatic bleeding. Each hemophilia patient requires a personalized approach to episodic or prophylactic treatment, but self-management can be challenging for patients, and avoidable bleeding may occur. Patient-tailored care may provide more effective prevention of bleeding, which in turn, may decrease the likelihood of arthropathy and associated chronic pain, missed time from school or work, and progressive loss of mobility. A strategy is presented here aiming to reduce or eliminate bleeding altogether through a holistic approach based on individual patient characteristics. In an environment of budget constraints, this approach would link procurement to patient outcome, adding incentives for all stakeholders to strive for optimal care and, ultimately, a bleed-free world.

  14. Disassemblability modeling technology of configurable product based on disassembly constraint relation weighted design structure matrix (DSM)

    NASA Astrophysics Data System (ADS)

    Qiu, Lemiao; Liu, Xiaojian; Zhang, Shuyou; Sun, Liangfeng

    2014-05-01

    The current research of configurable product disassemblability focuses on disassemblability evaluation and disassembly sequence planning. Little work has been done on quantitative analysis of configurable product disassemblability. A disassemblability modeling technology for configurable products based on a disassembly-constraint-relation-weighted design structure matrix (DSM) is proposed. Major factors affecting the disassemblability of configurable products are analyzed, and the disassembling degrees between components in a configurable product are obtained by calculating disassembly entropies such as joint type, joint quantity, disassembly path, disassembly accessibility and material compatibility. The disassembly constraint relation weighted DSM of the configurable product is constructed and configuration modules are formed by matrix decomposition and tearing operations. The disassembly constraint relation within configuration modules is strong coupling, the disassembly constraint relation between modules is weak coupling, and the disassemblability configuration model is constructed based on configuration modules. Finally, taking a hydraulic forging press as an example, the decomposed weak coupling components are used as configuration modules alone, components with strong coupling are aggregated into configuration modules, and the disassembly sequence of components inside configuration modules is optimized by the tearing operation. A disassemblability configuration model of the hydraulic forging press is constructed. Through this disassemblability modeling technology for product configuration design based on the disassembly constraint relation weighted DSM, the disassembly properties of configurable products in maintenance, recycling, and reuse are optimized.

  15. Accounting for costs, QALYs, and capacity constraints: using discrete-event simulation to evaluate alternative service delivery and organizational scenarios for hospital-based glaucoma services.

    PubMed

    Crane, Glenis J; Kymes, Steven M; Hiller, Janet E; Casson, Robert; Martin, Adam; Karnon, Jonathan D

    2013-11-01

    Decision-analytic models are routinely used as a framework for cost-effectiveness analyses of health care services and technologies; however, these models mostly ignore resource constraints. In this study, we use a discrete-event simulation model to inform a cost-effectiveness analysis of alternative options for the organization and delivery of clinical services in the ophthalmology department of a public hospital. The model is novel, given that it represents both disease outcomes and resource constraints in a routine clinical setting. A 5-year discrete-event simulation model representing glaucoma patient services at the Royal Adelaide Hospital (RAH) was implemented and calibrated to patient-level data. The data were sourced from routinely collected waiting and appointment lists, patient record data, and the published literature. Patient-level costs and quality-adjusted life years were estimated for a range of alternative scenarios, including combinations of alternate follow-up times, booking cycles, and treatment pathways. The model shows that a) extending booking cycle length from 4 to 6 months, b) extending follow-up visit times by 2 to 3 months, and c) using laser in preference to medication are more cost-effective than current practice at the RAH eye clinic. The current simulation model provides a useful tool for informing improvements in the organization and delivery of glaucoma services at a local level (e.g., within a hospital), on the basis of expected effects on costs and health outcomes while accounting for current capacity constraints. Our model may be adapted to represent glaucoma services at other hospitals, whereas the general modeling approach could be applied to many other clinical service areas.
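The clinic model itself is far richer (booking cycles, treatment pathways, QALYs), but its core discrete-event mechanics reduce to patients queueing for the earliest free clinician. The arrival times and consult length below are invented for illustration:

```python
import heapq

def simulate(arrivals, service_time, servers):
    """Minimal discrete-event queue: each patient waits for the earliest free
    clinician; capacity constraints emerge from the fixed number of servers."""
    free_at = [0.0] * servers          # min-heap of each clinician's next-free time
    waits = []
    for t in arrivals:                 # arrival times, in increasing order
        start = max(t, free_at[0])     # served on arrival or when someone frees up
        waits.append(start - t)
        heapq.heapreplace(free_at, start + service_time)
    return waits

waits = simulate(arrivals=[0, 1, 2, 3, 8], service_time=3, servers=2)
# → [0, 0, 1, 1, 0]: the third and fourth patients queue, capacity binds
```

Scenario comparison in the study amounts to re-running such a simulation with altered follow-up intervals, booking cycles, or pathways and attaching costs and QALYs to the resulting patient trajectories.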

  16. Finite element method for optimal guidance of an advanced launch vehicle

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.; Calise, Anthony J.; Leung, Martin

    1992-01-01

    A temporal finite element based on a mixed form of Hamilton's weak principle is summarized for optimal control problems. The resulting weak Hamiltonian finite element method is extended to allow for discontinuities in the states and/or discontinuities in the system equations. An extension of the formulation to allow for control inequality constraints is also presented. The formulation does not require element quadrature, and it produces a sparse system of nonlinear algebraic equations. To evaluate its feasibility for real-time guidance applications, this approach is applied to the trajectory optimization of a four-state, two-stage model with inequality constraints for an advanced launch vehicle. Numerical results for this model are presented and compared to results from a multiple-shooting code. The results show the accuracy and computational efficiency of the finite element method.

  17. Planning and Scheduling for Fleets of Earth Observing Satellites

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Jonsson, Ari; Morris, Robert; Smith, David E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    We address the problem of scheduling observations for a collection of earth observing satellites. This scheduling task is a difficult optimization problem, potentially involving many satellites, hundreds of requests, constraints on when and how to service each request, and resources such as instruments, recording devices, transmitters, and ground stations. High-fidelity models are required to ensure the validity of schedules; at the same time, the size and complexity of the problem makes it unlikely that systematic optimization search methods will be able to solve them in a reasonable time. This paper presents a constraint-based approach to solving the Earth Observing Satellites (EOS) scheduling problem, and proposes a stochastic heuristic search method for solving it.

  18. Glass Property Models, Constraints, and Formulation Approaches for Vitrification of High-Level Nuclear Wastes at the US Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Dong-Sang

    2015-03-02

    The legacy nuclear wastes stored in underground tanks at the US Department of Energy’s Hanford site are planned to be separated into high-level waste and low-activity waste fractions and vitrified separately. Formulating optimized glass compositions that maximize the waste loading in glass is critical for successful and economical treatment and immobilization of nuclear wastes. Glass property-composition models have been developed and applied to formulate glass compositions for various objectives for the past several decades. The property models, with associated uncertainties and combined with composition and property constraints, have been used to develop preliminary glass formulation algorithms designed for vitrification process control and waste form qualification at the planned waste vitrification plant. This paper provides an overview of the current status of glass property-composition models, constraints applicable to Hanford waste vitrification, and glass formulation approaches that have been developed for vitrification of hazardous and highly radioactive wastes stored at the Hanford site.

  19. A distributed predictive control approach for periodic flow-based networks: application to drinking water systems

    NASA Astrophysics Data System (ADS)

    Grosso, Juan M.; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-10-01

    This paper proposes a distributed model predictive control approach designed to work in a cooperative manner for controlling flow-based networks showing periodic behaviours. Under this distributed approach, local controllers cooperate in order to enhance the performance of the whole flow network, avoiding the use of a coordination layer. Alternatively, controllers use both the monolithic model of the network and the given global cost function to optimise the control inputs of the local controllers, while taking into account the effect of their decisions on the remaining subsystems that form the entire network. In this sense, a global (all-to-all) communication strategy is considered. Although Pareto optimality cannot be reached due to the existence of non-sparse coupling constraints, asymptotic convergence to a Nash equilibrium is guaranteed. The resultant strategy is tested and its effectiveness is shown when applied to a large-scale complex flow-based network: the Barcelona drinking water supply system.
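The convergence-to-Nash claim can be illustrated with a toy two-controller best-response iteration on a shared quadratic cost, restricted to a grid of candidate inputs. All numbers are invented and the real DMPC formulation (dynamics, horizons, coupling constraints) is far richer:

```python
def nash_iterate(cost, grid, iters=10):
    """Each controller minimizes the shared cost over its own input in turn."""
    u1, u2 = grid[0], grid[0]
    for _ in range(iters):
        u1 = min(grid, key=lambda u: cost(u, u2))   # controller 1's best response
        u2 = min(grid, key=lambda u: cost(u1, u))   # controller 2's best response
    return u1, u2

# Shared cost: jointly meet a demand of 4 flow units while keeping efforts small.
def cost(u1, u2):
    return (u1 + u2 - 4.0) ** 2 + u1 ** 2 + u2 ** 2

grid = [i * 0.5 for i in range(9)]                  # candidate inputs 0.0 .. 4.0
u1, u2 = nash_iterate(cost, grid)
# settles at a grid-restricted Nash point: neither side gains by moving alone
```

Because each controller optimizes the same global cost while the other's input is held fixed, the iteration mirrors the cooperative all-to-all scheme: no coordination layer, yet the pair settles where unilateral deviation cannot reduce the cost.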

  20. Active shape models unleashed

    NASA Astrophysics Data System (ADS)

    Kirschner, Matthias; Wesarg, Stefan

    2011-03-01

    Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN)-classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
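The DFFS term can be sketched with toy 3-D shape vectors and a single retained PCA mode (the real shapes are high-dimensional point distributions; everything below is illustrative):

```python
def dffs(shape, mean, basis):
    """Distance From Feature Space: squared norm of the residual left after
    projecting (shape - mean) onto the retained PCA basis vectors."""
    d = [s - m for s, m in zip(shape, mean)]
    proj = [0.0] * len(d)
    for b in basis:                        # basis vectors assumed orthonormal
        c = sum(di * bi for di, bi in zip(d, b))
        proj = [p + c * bi for p, bi in zip(proj, b)]
    return sum((di - pi) ** 2 for di, pi in zip(d, proj))

mean = [0.0, 0.0, 0.0]
basis = [[1.0, 0.0, 0.0]]                  # one retained PCA mode
dffs([2.0, 1.0, 0.0], mean, basis)         # → 1.0: off-subspace energy
dffs([5.0, 0.0, 0.0], mean, basis)         # → 0.0: shape lies in the subspace
```

In an energy of the form E = E_image + w * DFFS, shapes outside the PCA subspace are penalized rather than forbidden, which is precisely how the proposed model relaxes the hard subspace constraint of a standard ASM.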

  1. Toward quantitative understanding on microbial community structure and functioning: a modeling-centered approach using degradation of marine oil spills as example

    PubMed Central

    Röling, Wilfred F. M.; van Bodegom, Peter M.

    2014-01-01

    Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches. PMID:24723922

  2. Toward quantitative understanding on microbial community structure and functioning: a modeling-centered approach using degradation of marine oil spills as example.

    PubMed

    Röling, Wilfred F M; van Bodegom, Peter M

    2014-01-01

    Molecular ecology approaches are rapidly advancing our insights into the microorganisms involved in the degradation of marine oil spills and their metabolic potentials. Yet, many questions remain open: how do oil-degrading microbial communities assemble in terms of functional diversity, species abundances and organization and what are the drivers? How do the functional properties of microorganisms scale to processes at the ecosystem level? How does mass flow among species, and which factors and species control and regulate fluxes, stability and other ecosystem functions? Can generic rules on oil-degradation be derived, and what drivers underlie these rules? How can we engineer oil-degrading microbial communities such that toxic polycyclic aromatic hydrocarbons are degraded faster? These types of questions apply to the field of microbial ecology in general. We outline how recent advances in single-species systems biology might be extended to help answer these questions. We argue that bottom-up mechanistic modeling allows deciphering the respective roles and interactions among microorganisms. In particular constraint-based, metagenome-derived community-scale flux balance analysis appears suited for this goal as it allows calculating degradation-related fluxes based on physiological constraints and growth strategies, without needing detailed kinetic information. We subsequently discuss what is required to make these approaches successful, and identify a need to better understand microbial physiology in order to advance microbial ecology. We advocate the development of databases containing microbial physiological data. Answering the posed questions is far from trivial. Oil-degrading communities are, however, an attractive setting to start testing systems biology-derived models and hypotheses as they are relatively simple in diversity and key activities, with several key players being isolated and a high availability of experimental data and approaches.
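
    The constraint-based flux balance analysis highlighted in the two records above reduces, at its core, to a linear program: maximize a growth-related flux subject to steady-state mass balance S·v = 0 and physiological capacity bounds, without kinetic parameters. A toy three-reaction sketch (hypothetical stoichiometry, not a metagenome-derived community model) using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions): S @ v = 0 at steady state.
# Reactions: v0 = uptake of A, v1 = A -> B, v2 = B -> biomass.
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])

bounds = [(0, 10), (0, 8), (0, None)]   # physiological flux bounds
c = np.array([0, 0, -1.0])              # linprog minimizes, so negate biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # optimal flux distribution; limited by the v1 capacity of 8
```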

  3. Robust Design Optimization via Failure Domain Bounding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2007-01-01

    This paper extends and applies the strategies recently developed by the authors for handling constraints under uncertainty to robust design optimization. For the scope of this paper, robust optimization is a methodology aimed at problems for which some parameters are uncertain and are only known to belong to some uncertainty set. This set can be described by either a deterministic or a probabilistic model. In the methodology developed herein, optimization-based strategies are used to bound the constraint violation region using hyper-spheres and hyper-rectangles. By comparing the resulting bounding sets with any given uncertainty model, it can be determined whether the constraints are satisfied for all members of the uncertainty model (i.e., constraints are feasible) or not (i.e., constraints are infeasible). If constraints are infeasible and a probabilistic uncertainty model is available, upper bounds to the probability of constraint violation can be efficiently calculated. The tools developed enable approximating not only the set of designs that make the constraints feasible but also, when required, the set of designs for which the probability of constraint violation is below a prescribed admissible value. When constraint feasibility is possible, several design criteria can be used to shape the uncertainty model of performance metrics of interest. Worst-case, least-second-moment, and reliability-based design criteria are considered herein. Since the problem formulation is generic and the tools derived only require standard optimization algorithms for their implementation, these strategies are easily applicable to a broad range of engineering problems.
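
    The hyper-sphere bounding strategy can be illustrated with a small optimization sketch: for a hypothetical constraint g(p) ≤ 0 (not one of the paper's examples), the largest sphere centered at the nominal parameter point that excludes the violation region is found by minimizing the distance to the constraint boundary. Any uncertainty set contained in that sphere then satisfies the constraint for all of its members.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical constraint: g(p) = p0**2 + p1 - 2 must stay <= 0.
def g(p):
    return p[0] ** 2 + p[1] - 2.0

p_nominal = np.array([0.0, 0.0])

# Smallest distance from the nominal point to the constraint boundary:
# minimize ||p - p_nominal||^2 subject to g(p) = 0.
res = minimize(
    lambda p: np.sum((p - p_nominal) ** 2),
    x0=np.array([0.5, 1.5]),
    constraints=[{"type": "eq", "fun": g}],
)
radius = np.sqrt(res.fun)
print(radius)   # an uncertainty ball smaller than this keeps g <= 0
```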

  4. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Jr , Paul F; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  5. Nursing constraint models for electronic health records: a vision for domain knowledge governance.

    PubMed

    Hovenga, Evelyn; Garde, Sebastian; Heard, Sam

    2005-12-01

    Various forms of electronic health records (EHRs) are currently being introduced in several countries. Nurses are primary stakeholders and need to ensure that their information and knowledge needs are met by such systems, which support information sharing between health care providers and so enable them to improve the quality and efficiency of health care service delivery for all subjects of care. The latest international EHR standards have adopted the openEHR approach of two-level modelling. The first level is a stable information model determining structure, while the second level consists of constraint models or 'archetypes' that reflect the specifications or clinician rules for how clinical information needs to be represented to enable unambiguous data sharing. The current state of play in terms of international health informatics standards development activities is providing the nursing profession with a unique opportunity and challenge. Much work has been undertaken internationally in the area of nursing terminologies and evidence-based practice. This paper argues that to make the most of these emerging technologies and EHRs we must now concentrate on developing a process to identify, document, implement, manage and govern our nursing domain knowledge, as well as contribute to the development of relevant international standards. It is argued that one comprehensive nursing terminology, such as ICNP or SNOMED CT, is simply too complex and too difficult to maintain. As the openEHR archetype approach does not rely heavily on big standardised terminologies, it offers more flexibility during standardisation of clinical concepts and it ensures open, future-proof electronic health records. We conclude that it is highly desirable for the nursing profession to adopt this openEHR approach as a means of documenting and governing the nursing profession's domain knowledge.
It is essential for the nursing profession to develop its domain knowledge constraint models (archetypes) collaboratively in an international context.

  6. Developing Soil Moisture Profiles Utilizing Remotely Sensed MW and TIR Based SM Estimates Through Principle of Maximum Entropy

    NASA Astrophysics Data System (ADS)

    Mishra, V.; Cruise, J. F.; Mecikalski, J. R.

    2015-12-01

    Developing accurate vertical soil moisture profiles with minimum input requirements is important to agricultural as well as land surface modeling. Earlier studies show that the principle of maximum entropy (POME) can be utilized to develop accurate vertical soil moisture profiles (MAE of about 1% for a monotonically dry profile; nearly 2% for monotonically wet profiles and 3.8% for mixed profiles) with minimum constraints (surface, mean and bottom soil moisture contents). In this study, the constraints for the vertical soil moisture profiles were obtained from remotely sensed data. Low resolution (25 km) MW soil moisture estimates (AMSR-E) were downscaled to 4 km using a disaggregation approach based on a soil evaporation efficiency index. The downscaled MW soil moisture estimates served as a surface boundary condition, while 4 km resolution TIR based Atmospheric Land Exchange Inverse (ALEXI) estimates provided the required mean root-zone soil moisture content. Bottom soil moisture content is assumed to be a soil dependent constant. Multi-year (2002-2011) gridded profiles were developed for the southeastern United States using the POME method. The soil moisture profiles were compared to those generated in land surface models (the Land Information System (LIS) and the agricultural model DSSAT) along with available NRCS SCAN sites in the study region. The end product, spatial soil moisture profiles, can be assimilated into agricultural and hydrologic models in lieu of precipitation for data scarce regions.
    Previous studies have shown that the principle of maximum entropy (POME) can be utilized with minimal constraints to develop accurate vertical soil moisture profiles (MAE = 1% for monotonically dry profiles; MAE = 2% for monotonically wet profiles and MAE = 3.8% for mixed profiles) when compared to laboratory and field data. In this study, vertical soil moisture profiles were developed using the POME model to evaluate an irrigation schedule over a maize field in north central Alabama (USA). The model was validated using both field data and a physically based mathematical model. The results demonstrate that a simple two-constraint entropy model under the assumption of a uniform initial soil moisture distribution can simulate most soil moisture profiles within the field area for 6 different soil types. The results of the irrigation simulation demonstrated that the POME model produced a very efficient irrigation strategy, with a loss of only about 1.9% of the total applied irrigation water. However, areas of fine-textured soil (i.e. silty clay) experienced plant stress of nearly 30% of the available moisture content due to an insufficient water supply on the last day of the drying phase of the irrigation cycle. Overall, the POME approach showed promise as a general strategy to guide irrigation in humid environments, with minimum input requirements.
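
    A toy numerical stand-in for the POME profile (the cited studies derive it analytically): discretize the profile and maximize entropy subject to surface, bottom and mean soil moisture constraints. The moisture values and layer count below are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Discretized vertical profile with the three POME constraints: surface value,
# bottom value, and a prescribed column-mean soil moisture content.
n = 20
theta_surface, theta_bottom, theta_mean = 0.10, 0.35, 0.25

def neg_entropy(theta):
    p = theta / theta.sum()               # normalize to a distribution
    return np.sum(p * np.log(p))          # minimize this = maximize entropy

cons = [
    {"type": "eq", "fun": lambda t: t[0] - theta_surface},
    {"type": "eq", "fun": lambda t: t[-1] - theta_bottom},
    {"type": "eq", "fun": lambda t: t.mean() - theta_mean},
]
res = minimize(neg_entropy, x0=np.full(n, theta_mean),
               bounds=[(1e-6, 0.5)] * n, constraints=cons)
profile = res.x                           # maximum-entropy moisture profile
```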

  7. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

    Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proved to be a robust method for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
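
    The computational appeal of a steady-state filter is that the gain is computed once offline, so no per-frame matrix inversion is needed. A simplified scalar sketch with a radioisotope-decay state model, using a steady-state Kalman gain as a stand-in for the paper's H∞ formulation (the decay constant below is F-18's; the noise levels and initial activity are hypothetical):

```python
import numpy as np

# Scalar activity decays radioactively: x_{k+1} = a * x_k, a = exp(-lambda*dt).
lam = np.log(2) / 109.8                   # F-18 decay constant (per minute)
a = np.exp(-lam * 1.0)                    # 1-minute frames
q, r = 1e-4, 0.25                         # process / measurement noise variances

# Iterate the scalar Riccati recursion to a steady state, then fix the gain.
p = 1.0
for _ in range(500):
    p_pred = a * p * a + q                    # predicted covariance
    p = p_pred - p_pred ** 2 / (p_pred + r)   # updated covariance
gain = p_pred / (p_pred + r)              # steady-state (fixed) filter gain

# Run the fixed-gain filter on a noisy decaying activity curve.
rng = np.random.default_rng(1)
x_true, x_hat, errs = 100.0, 100.0, []
for _ in range(200):
    x_true = a * x_true
    y = x_true + rng.normal(scale=r ** 0.5)
    x_hat = a * x_hat + gain * (y - a * x_hat)
    errs.append(abs(x_hat - x_true))
print(np.mean(errs))                      # well below the raw noise level
```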

  8. Restoring Consistency In Subjective Information For Groundwater Driven Health Risk Assessment

    NASA Astrophysics Data System (ADS)

    Ozbek, M. M.; Pinder, G. F.

    2004-12-01

    In an earlier work (Ozbek and Pinder, 2003), we constructed a fuzzy rule-based knowledge base that uses subjective expert opinion to calculate risk-based design constraints (i.e., dose and pattern of exposure) to sustain the groundwater-driven individual health risk at a desired level. Ideally, our system must be capable of producing, for any individual, a meaningful risk result or, for any given risk, a meaningful design constraint, in the sense that the result is neither the empty set nor the whole domain of the variable of interest. Otherwise, we consider the system inconsistent. We present a method based on fuzzy similarity relations to restore consistency in the implicative fuzzy rule-based system used for the risk-based groundwater remediation design problem. Both a global and a local approach are considered. Even though it is straightforward and computationally less demanding, the global approach can affect pieces of knowledge negatively by inducing unwarranted imprecision into the knowledge base. The local approach, on the other hand, given a family of parameterized similarity relations, determines a parameter for each inference such that consistent results are computed, which may not be feasible in real-time applications of the knowledge base. Several scenarios comparing the two approaches suggest that, for specific applications, approaches ranging from completely global to completely local will each be more suitable than others when calculating the design constraints.

  9. Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling.

    PubMed

    Yang, S; Wang, D

    2000-01-01

    This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, one of the NP-complete constraint satisfaction problems. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during its processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to both the quality of solutions and the solving speed.

  10. Using activity-based costing and theory of constraints to guide continuous improvement in managed care.

    PubMed

    Roybal, H; Baxendale, S J; Gupta, M

    1999-01-01

    Activity-based costing and the theory of constraints have been applied successfully in many manufacturing organizations. Recently, those concepts have been applied in service organizations. This article describes the application of activity-based costing and the theory of constraints in a managed care mental health and substance abuse organization. One of the unique aspects of this particular application was the integration of activity-based costing and the theory of constraints to guide process improvement efforts. This article describes the activity-based costing model and the application of the theory of constraint's focusing steps with an emphasis on unused capacities of activities in the organization.

  11. Modeling the Structure of Helical Assemblies with Experimental Constraints in Rosetta.

    PubMed

    André, Ingemar

    2018-01-01

    Determining high-resolution structures of proteins with helical symmetry can be challenging due to limitations in experimental data. In such instances, structure-based protein simulations driven by experimental data can provide a valuable approach for building models of helical assemblies. This chapter describes how the Rosetta macromolecular package can be used to model homomeric protein assemblies with helical symmetry in a range of modeling scenarios including energy refinement, symmetrical docking, comparative modeling, and de novo structure prediction. Data-guided structure modeling of helical assemblies with experimental information from electron density, X-ray fiber diffraction, solid-state NMR, and chemical cross-linking mass spectrometry is also described.

  12. Modeling Multivalent Ligand-Receptor Interactions with Steric Constraints on Configurations of Cell-Surface Receptor Aggregates

    PubMed Central

    Monine, Michael I.; Posner, Richard G.; Savage, Paul B.; Faeder, James R.; Hlavacek, William S.

    2010-01-01

    We use flow cytometry to characterize equilibrium binding of a fluorophore-labeled trivalent model antigen to bivalent IgE-FcεRI complexes on RBL cells. We find that flow cytometric measurements are consistent with an equilibrium model for ligand-receptor binding in which binding sites are assumed to be equivalent and ligand-induced receptor aggregates are assumed to be acyclic. However, this model predicts extensive receptor aggregation at antigen concentrations that yield strong cellular secretory responses, which is inconsistent with the expectation that large receptor aggregates should inhibit such responses. To investigate possible explanations for this discrepancy, we evaluate four rule-based models for interaction of a trivalent ligand with a bivalent cell-surface receptor that relax simplifying assumptions of the equilibrium model. These models are simulated using a rule-based kinetic Monte Carlo approach to investigate the kinetics of ligand-induced receptor aggregation and to study how the kinetics and equilibria of ligand-receptor interaction are affected by steric constraints on receptor aggregate configurations and by the formation of cyclic receptor aggregates. The results suggest that formation of linear chains of cyclic receptor dimers may be important for generating secretory signals. Steric effects that limit receptor aggregation and transient formation of small receptor aggregates may also be important. PMID:20085718

  13. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  14. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
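
    Transforming a probabilistic constraint into an ordinary deterministic one, as described above, can be sketched with a one-dimensional toy problem in which an analytic normal failure probability stands in for the GSS estimate: minimize the cost d subject to P[failure] ≤ 5%.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical limit state: failure occurs when the uncertain load xi ~ N(0,1)
# exceeds the design capacity d, so P_fail(d) = 1 - Phi(d).
def failure_prob(d):
    return 1.0 - norm.cdf(d[0])

p_target = 0.05   # admissible failure probability

# The probabilistic constraint P_fail(d) <= p_target becomes an ordinary
# inequality constraint in a deterministic optimization of the cost d.
res = minimize(lambda d: d[0], x0=np.array([3.0]),
               constraints=[{"type": "ineq",
                             "fun": lambda d: p_target - failure_prob(d)}])
print(res.x[0])   # sits at the 95th percentile of the load distribution
```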

  15. Network approaches for expert decisions in sports.

    PubMed

    Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus

    2012-04-01

    This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neuronal network model and an accumulator model in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were thereby particularly predictive for choices. We conclude that the analyses of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provide perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints

    PubMed Central

    Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.

    2015-01-01

    Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedure- or patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575

  17. A Bayesian approach to the modelling of α Cen A

    NASA Astrophysics Data System (ADS)

    Bazot, M.; Bourguignon, S.; Christensen-Dalsgaard, J.

    2012-12-01

    Determining the physical characteristics of a star is an inverse problem: one estimates the parameters of models for stellar structure and evolution given certain observable quantities. We use a Bayesian approach to solve this problem for α Cen A, which allows us to incorporate prior information on the parameters to be estimated in order to better constrain the problem. Our strategy is based on the use of a Markov chain Monte Carlo (MCMC) algorithm to estimate the posterior probability densities of the stellar parameters: mass, age, initial chemical composition, etc. We use the stellar evolutionary code ASTEC to model the star. To constrain this model, both seismic and non-seismic observations were considered. Several different strategies were tested to fit these values, using either two or five free parameters in ASTEC. We are thus able to show evidence that MCMC methods become more efficient than classical grid-based strategies as the number of parameters increases. The results of our MCMC algorithm allow us to derive estimates for the stellar parameters and robust uncertainties thanks to the statistical analysis of the posterior probability densities. We are also able to compute odds for the presence of a convective core in α Cen A. When using core-sensitive seismic observational constraints, these can rise above ~40 per cent. The comparison of results to previous studies also indicates that these seismic constraints are of critical importance for our knowledge of the structure of this star.
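
    The MCMC machinery behind such posterior estimates can be sketched in a few lines with a toy one-parameter problem using a random-walk Metropolis sampler. All numbers below are hypothetical illustrations, not α Cen A values or ASTEC outputs.

```python
import numpy as np

# Toy stand-in: infer a single "stellar" parameter m from noisy observations
# with a Gaussian prior, using a random-walk Metropolis sampler.
rng = np.random.default_rng(3)
m_true, sigma = 1.1, 0.05
obs = m_true + rng.normal(scale=sigma, size=50)

def log_post(m):
    log_prior = -0.5 * ((m - 1.0) / 0.5) ** 2          # prior: N(1.0, 0.5)
    log_like = -0.5 * np.sum(((obs - m) / sigma) ** 2)  # Gaussian likelihood
    return log_prior + log_like

samples, m = [], 1.0
for _ in range(20000):
    prop = m + rng.normal(scale=0.02)                  # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop                                       # Metropolis accept
    samples.append(m)
post = np.array(samples[5000:])                        # discard burn-in
print(post.mean(), post.std())                         # posterior summary
```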

  18. Rubber airplane: Constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.

  19. Designing optimal stimuli to control neuronal spike timing

    PubMed Central

    Packer, Adam M.; Yuste, Rafael; Paninski, Liam

    2011-01-01

    Recent advances in experimental stimulation methods have raised the following important computational question: how can we choose a stimulus that will drive a neuron to output a target spike train with optimal precision, given physiological constraints? Here we adopt an approach based on models that describe how a stimulating agent (such as an injected electrical current or a laser light interacting with caged neurotransmitters or photosensitive ion channels) affects the spiking activity of neurons. Based on these models, we solve the reverse problem of finding the best time-dependent modulation of the input, subject to hardware limitations as well as physiologically inspired safety measures, that causes the neuron to emit a spike train that with highest probability will be close to a target spike train. We adopt fast convex constrained optimization methods to solve this problem. Our methods can potentially be implemented in real time and may also be generalized to the case of many cells, suitable for neural prosthesis applications. With the use of biologically sensible parameters and constraints, our method finds stimulation patterns that generate very precise spike trains in simulated experiments. We also tested the intracellular current injection method on pyramidal cells in mouse cortical slices, quantifying the dependence of spiking reliability and timing precision on constraints imposed on the applied currents. PMID:21511704

  20. Two-dimensional probabilistic inversion of plane-wave electromagnetic data: methodology, model constraints and joint inversion with electrical resistivity data

    NASA Astrophysics Data System (ADS)

    Rosas-Carbajal, Marina; Linde, Niklas; Kalscheuer, Thomas; Vrugt, Jasper A.

    2014-03-01

    Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.

  1. Towards generalised reference condition models for environmental assessment: a case study on rivers in Atlantic Canada.

    PubMed

    Armanini, D G; Monk, W A; Carter, L; Cote, D; Baird, D J

    2013-08-01

    Evaluation of the ecological status of river sites in Canada is supported by building models using the reference condition approach. However, geography, data scarcity and inter-operability constraints have frustrated attempts to monitor national-scale status and trends. This issue is particularly true in Atlantic Canada, where no ecological assessment system is currently available. Here, we present a reference condition model based on the River Invertebrate Prediction and Classification System approach with regional-scale applicability. To achieve this, we used biological monitoring data collected from wadeable streams across Atlantic Canada together with freely available, nationally consistent geographic information system (GIS) environmental data layers. For the first time, we demonstrated that it is possible to use data generated from different studies, even when collected using different sampling methods, to generate a robust predictive model. This model was successfully generated and tested using GIS-based rather than local habitat variables and showed improved performance when compared to a null model. In addition, ecological quality ratio data derived from the model responded to observed stressors in a test dataset. Implications for future large-scale implementation of river biomonitoring using a standardised approach with global application are presented.

  2. The Robust Software Feedback Model: An Effective Waterfall Model Tailoring for Space SW

    NASA Astrophysics Data System (ADS)

    Tipaldi, Massimo; Gotz, Christoph; Ferraguto, Massimo; Troiano, Luigi; Bruenjes, Bernhard

    2013-08-01

    The selection of the most suitable software life cycle process is of paramount importance in any space SW project. Despite being the preferred choice, the waterfall model is often exposed to some criticism. As a matter of fact, its main assumption, namely that a phase is entered only when the preceding one has been completed and perfected, is not easily attainable under demanding SW schedule constraints. In this paper, a tailoring of the software waterfall model (named the “Robust Software Feedback Model”) is presented. The proposed methodology sorts out these issues by combining a SW waterfall model with a SW prototyping approach. The former is aligned with the SW main production line and is based on the full ECSS-E-ST-40C life-cycle reviews, whereas the latter is carried out ahead of the main SW streamline (so as to inject its lessons learnt into the main streamline) and is based on a lightweight approach.

  3. A new approach for investigating protein flexibility based on Constraint Logic Programming. The first application in the case of the estrogen receptor.

    PubMed

    Dal Palú, Alessandro; Spyrakis, Francesca; Cozzini, Pietro

    2012-03-01

    We describe the potential of a novel method, based on Constraint Logic Programming (CLP), developed for an exhaustive sampling of protein conformational space. The CLP framework proposed here has been tested and applied to the estrogen receptor, whose activity and function are strictly related to its intrinsic, and well known, dynamics. We have investigated in particular the flexibility of H12, focusing on the pathways followed by the helix when moving from one stable crystallographic conformation to the others. Millions of geometrically feasible conformations were generated and selected, and the traces connecting the different forms were determined by using a shortest path algorithm. The preliminary analyses showed a marked agreement between the crystallographic agonist-like, antagonist-like and hypothetical apo forms, and the corresponding conformations identified by the CLP framework. These promising results, together with the short computational time required to perform the analyses, make this constraint-based approach a valuable tool for the study of protein folding prediction. The CLP framework enables one to consider various structural and energetic scenarios without changing the core algorithm. To show the feasibility of the method, we intentionally chose a pure geometric setting, neglecting the energetic evaluation of the poses, in order to be independent from a specific force field and to provide the possibility of comparing different behaviours associated with various energy models. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
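
    The trace-finding stage can be illustrated with a small sketch: a set of sampled conformations is treated as a graph whose edges connect geometrically close pairs, and a shortest-path search recovers the pathway between two end forms. This is a toy stand-in (random points instead of CLP-enumerated conformations, Euclidean distance instead of a real geometric feasibility test), not the authors' implementation:

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)

def shortest_trace(conformations, start, goal, cutoff):
    """Dijkstra over a graph whose nodes are sampled conformations and whose
    edges connect geometrically close pairs (a purely geometric setting:
    no force-field energies are involved, as in the article)."""
    n = len(conformations)
    dist = [np.inf] * n
    prev = [None] * n
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > dist[u]:
            continue
        for v in range(n):
            if v == u:
                continue
            w = np.linalg.norm(conformations[u] - conformations[v])
            if w > cutoff:
                continue  # only geometrically feasible moves become edges
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # reconstruct the trace connecting the two forms
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1], dist[goal]

# toy "conformations": points along a noisy path in 3-D
confs = np.cumsum(0.4 * rng.standard_normal((30, 3)), axis=0)
path, length = shortest_trace(confs, 0, 29, cutoff=2.0)
```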

  4. Satellite land use acquisition and applications to hydrologic planning models

    NASA Technical Reports Server (NTRS)

    Algazi, V. R.; Suk, M.

    1977-01-01

    A developing operational procedure for use by the Corps of Engineers in the acquisition of land use information for hydrologic planning purposes is described. The operational conditions preclude the use of dedicated, interactive image processing facilities. Given these constraints, an approach to land use classification based on clustering seems promising and was explored in detail. The procedure is outlined and examples of its application to two watersheds are given.
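
    As an illustration of the clustering-based classification idea, the sketch below runs a minimal k-means on hypothetical two-band reflectance pixels; the band values, cluster count and data are invented for illustration and have no connection to the Corps of Engineers procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(pixels, k, n_iter=20):
    """Minimal unsupervised clustering of multispectral pixels, in the
    spirit of clustering-based land use classification."""
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to its nearest cluster centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centres from the current assignment
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# toy 2-band reflectance data: a "water" cluster and a "vegetation" cluster
water = rng.normal([0.1, 0.2], 0.01, (100, 2))
veg = rng.normal([0.9, 0.9], 0.01, (100, 2))
pixels = np.vstack([water, veg])
labels, centers = kmeans(pixels, k=2)
```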

  5. A High Precision Approach to Calibrate a Structured Light Vision Sensor in a Robot-Based Three-Dimensional Measurement System.

    PubMed

    Wu, Defeng; Chen, Tianfei; Li, Aiguo

    2016-08-30

    A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system, so a novel calibration approach is proposed to improve the accuracy of the structured light vision sensor. The approach is based on a number of fixed concentric circles manufactured into a calibration target; the concentric circles are employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals left after the application of the RAC method; the hybrid of the pinhole model and the MLPNN therefore represents the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed calibration approach achieves a highly accurate model of the structured light vision sensor.
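
    The two-stage structure (parametric camera model plus learned residual corrector) can be sketched as follows. For brevity the sketch fits a linear least-squares model in place of the RAC pinhole calibration and a polynomial corrector in place of the MLPNN; the synthetic distortion term and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic "calibration points": world coordinates X mapped to image
# coordinates u, with an invented nonlinear lens residual
X = rng.uniform(-1, 1, (200, 2))
u_true = X @ np.array([[400.0, 5.0], [-3.0, 380.0]]) + 320.0
u_obs = u_true + 8.0 * np.sin(2.0 * X)   # nonlinear distortion residual

# stage 1: linear (pinhole-like) model fitted by least squares,
# standing in for the RAC calibration
A = np.hstack([X, np.ones((len(X), 1))])
P, *_ = np.linalg.lstsq(A, u_obs, rcond=None)
resid = u_obs - A @ P

# stage 2: small residual corrector trained on the leftover error,
# a polynomial stand-in for the paper's MLPNN
Phi = np.hstack([X, X**2, X**3, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(Phi, resid, rcond=None)

err_linear = np.sqrt((resid ** 2).mean())            # parametric model alone
err_hybrid = np.sqrt(((resid - Phi @ W) ** 2).mean())  # hybrid model
```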

  6. An adaptive random search for short term generation scheduling with network constraints.

    PubMed

    Marmolejo, J A; Velasco, Jonás; Selley, Héctor J

    2017-01-01

    This paper presents an adaptive random search approach to address short term generation scheduling with network constraints, which determines the startup and shutdown schedules of thermal units over a given planning horizon. In this model, we consider the transmission network through capacity limits and line losses. The mathematical model is stated in the form of a Mixed Integer Non Linear Problem with binary variables. The proposed heuristic is a population-based method that generates a set of new potential solutions via a random search strategy based on the Markov Chain Monte Carlo method. The key feature of the proposed method is that the noise level of the random search is adaptively controlled in order to explore and exploit the entire search space. To further improve the solutions, a local search is coupled into the random search process. Several test systems are presented to evaluate the performance of the proposed heuristic, and a commercial optimizer is used to compare the quality of the solutions provided by the proposed method. The solution of the proposed algorithm showed a significant reduction in computational effort with respect to the full-scale outer approximation commercial solver. Numerical results show the potential and robustness of our approach.
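
    A minimal sketch of the adaptive noise-control idea follows, with a smooth toy objective standing in for the mixed-integer scheduling cost. The actual method is population-based and MCMC-driven; this single-solution sketch shows only how the noise level can be adapted to balance exploration and exploitation:

```python
import numpy as np

rng = np.random.default_rng(4)

def adaptive_random_search(f, x0, n_iter=2000, step0=1.0):
    """Random search whose noise level adapts: widen the step after a
    success (explore), shrink it after a failure (exploit)."""
    x, fx, step = np.array(x0, float), f(x0), step0
    for _ in range(n_iter):
        cand = x + step * rng.standard_normal(x.size)
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
            step *= 1.2                       # success: explore more widely
        else:
            step = max(step * 0.95, 1e-6)     # failure: focus the search
    return x, fx

# toy surrogate for the scheduling cost (smooth, known minimum at [1, 2])
f = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2
x_best, f_best = adaptive_random_search(f, [5.0, -5.0])
```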

  7. Cellular trade-offs and optimal resource allocation during cyanobacterial diurnal growth

    PubMed Central

    Knoop, Henning; Bockmayr, Alexander; Steuer, Ralf

    2017-01-01

    Cyanobacteria are an integral part of Earth’s biogeochemical cycles and a promising resource for the synthesis of renewable bioproducts from atmospheric CO2. Growth and metabolism of cyanobacteria are inherently tied to the diurnal rhythm of light availability. As yet, however, insight into the stoichiometric and energetic constraints of cyanobacterial diurnal growth is limited. Here, we develop a computational framework to investigate the optimal allocation of cellular resources during diurnal phototrophic growth using a genome-scale metabolic reconstruction of the cyanobacterium Synechococcus elongatus PCC 7942. We formulate phototrophic growth as an autocatalytic process and solve the resulting time-dependent resource allocation problem using constraint-based analysis. Based on a narrow and well-defined set of parameters, our approach results in an ab initio prediction of growth properties over a full diurnal cycle. The computational model allows us to study the optimality of metabolite partitioning during diurnal growth. The cyclic pattern of glycogen accumulation, an emergent property of the model, has timing characteristics that are in qualitative agreement with experimental findings. The approach presented here provides insight into the time-dependent resource allocation problem of phototrophic diurnal growth and may serve as a general framework to assess the optimality of metabolic strategies that evolved in phototrophic organisms under diurnal conditions. PMID:28720699
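
    The flavour of the time-dependent allocation problem can be conveyed with a deliberately tiny toy: carbon fixed during light hours is split between immediate growth and a glycogen store that must cover night maintenance, and candidate strategies are scanned for the growth-optimal split. The numbers and the grid scan are invented for illustration and bear no relation to the genome-scale, constraint-based model in the paper:

```python
import numpy as np

def diurnal_growth(storage_frac, light_hours=12, dark_hours=12, m=0.3):
    """Toy diurnal allocation: each light hour fixes 1 unit of carbon,
    split between growth and a glycogen store; each dark hour pays a
    maintenance cost m from the store, with a penalty if it runs out."""
    growth, store = 0.0, 0.0
    for _ in range(light_hours):
        store += storage_frac
        growth += 1.0 - storage_frac
    for _ in range(dark_hours):
        if store >= m:
            store -= m          # maintenance paid from glycogen
        else:
            growth -= 1.0       # penalty: cell deteriorates without reserves
            store = 0.0
    return growth

# scan candidate allocation strategies for the optimum
fracs = np.linspace(0.0, 1.0, 101)
growths = np.array([diurnal_growth(f) for f in fracs])
best_frac = fracs[growths.argmax()]
```

    In this toy the optimum stores just enough glycogen (about 30 per cent of fixed carbon) to cover the night, echoing the cyclic accumulation pattern that emerges from the full model.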

  8. Deterioration, death and the evolution of reproductive restraint in late life.

    PubMed

    McNamara, John M; Houston, Alasdair I; Barta, Zoltan; Scheuerlein, Alexander; Fromhage, Lutz

    2009-11-22

    Explaining why organisms schedule reproduction over their lifetimes in the various ways that they do is an enduring challenge in biology. An influential theoretical prediction states that organisms should increasingly invest in reproduction as they approach the end of their life. An apparent mismatch of empirical data with this prediction has been attributed to age-related constraints on the ability to reproduce. Here we present a general framework for the evolution of age-related reproductive trajectories. Instead of characterizing an organism by its age, we characterize it by its physiological condition. We develop a common currency that, if maximized at each time, guarantees the whole life history is optimal. This currency integrates reproduction, mortality and changes in condition. We predict that under broad conditions it will be optimal for organisms to invest less in reproduction as they age, thus challenging traditional interpretations of age-related traits and renewing debate about the extent to which observed life histories are shaped by constraint versus adaptation. Our analysis gives a striking illustration of the differences between an age-based and a condition-based approach to life-history theory. It also provides a unified account of not only standard life-history models but of related models involving the allocation of limited resources.

  9. Phasor Domain Steady-State Modeling and Design of the DC–DC Modular Multilevel Converter

    DOE PAGES

    Yang, Heng; Qin, Jiangchao; Debnath, Suman; ...

    2016-01-06

    The DC-DC Modular Multilevel Converter (MMC), which originated from the AC-DC MMC, is an attractive converter topology for interconnection of medium-/high-voltage DC grids. This paper presents design considerations for the DC-DC MMC to achieve high efficiency and reduced component sizes. A steady-state mathematical model of the DC-DC MMC in the phasor domain is developed. Based on the developed model, a design approach is proposed to size the components and to select the operating frequency of the converter to satisfy a set of design constraints while achieving high efficiency. The design approach includes sizing of the arm inductor, Sub-Module (SM) capacitor, and phase filtering inductor, along with the selection of the AC operating frequency of the converter. The accuracy of the developed model and the effectiveness of the design approach are validated through simulation studies in the PSCAD/EMTDC software environment. The analysis and developments of this paper can be used as a guideline for design of the DC-DC MMC.

  10. Mixed models, linear dependency, and identification in age-period-cohort models.

    PubMed

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified, without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
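
    The linear dependency and its consequence for identification are easy to demonstrate numerically. The sketch below builds a design matrix with cohort = period − age, confirms its rank deficiency, and applies one just-identifying constraint (dropping the cohort column, i.e. fixing its effect to zero) to recover the remaining coefficients; the data are simulated:

```python
import numpy as np

rng = np.random.default_rng(5)

# age-period-cohort data with the exact dependency cohort = period - age
age = rng.integers(20, 80, 500).astype(float)
period = rng.integers(1980, 2020, 500).astype(float)
cohort = period - age
X = np.column_stack([np.ones(500), age, period, cohort])

# the fixed-effects design matrix is rank deficient: only 3 of its 4
# columns are linearly independent, so the model is unidentified
rank = np.linalg.matrix_rank(X)

# one just-identifying constraint: set the cohort effect to zero
# (drop the cohort column) and estimate the rest by least squares
y = 0.5 + 0.02 * age - 0.01 * period + rng.normal(0.0, 0.1, 500)
beta, *_ = np.linalg.lstsq(X[:, :3], y, rcond=None)
```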

  11. Derived heuristics-based consistent optimization of material flow in a gold processing plant

    NASA Astrophysics Data System (ADS)

    Myburgh, Christie; Deb, Kalyanmoy

    2018-01-01

    Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled in a programming format, involve if-then-else statements. Therefore, the formulation of an optimization problem for such processes becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities offered by the problem is developed. While the evolutionary approach already produced results with comparatively low variance over multiple runs, its performance has been further enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented, and their importance for the quick convergence of the overall algorithm is demonstrated.

  12. Foldover-free shape deformation for biomedicine.

    PubMed

    Yu, Hongchuan; Zhang, Jian J; Lee, Tong-Yee

    2014-04-01

    Shape deformation as a fundamental geometric operation underpins a wide range of applications, from geometric modelling and medical imaging to biomechanics. In medical imaging, for example, to quantify the difference between two corresponding images, 2D or 3D, one needs to find the deformation between the two images. However, such deformations, particularly those of complex volume datasets, are prone to the problem of foldover, i.e. during deformation, the required property of one-to-one mapping no longer holds for some points. Despite numerous research efforts, the construction of a mathematically robust foldover-free solution subject to positional constraints remains open. In this paper, we address this challenge by developing a radial basis function-based deformation method. In particular, we formulate an effective iterative mechanism which ensures the foldover-free property is satisfied at all times. The experimental results suggest that the resulting deformations meet the internal positional constraints. In addition to radial basis functions, this iterative mechanism can also be incorporated into other deformation approaches, e.g. B-spline based FFDs, to develop different deformable approaches for various applications. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.
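
    One simple way to realize an iterative foldover-avoiding mechanism is to apply the requested RBF displacement in several small substeps, so that each substep stays a small, invertible perturbation of the identity. The sketch below (hypothetical handles and weights, Gaussian kernels) illustrates that idea and is not the paper's exact formulation:

```python
import numpy as np

def rbf_displacement(points, centers, weights, eps=1.0):
    """Gaussian RBF displacement field driven by a few handle points."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(eps * d) ** 2) @ weights

def foldover_free_deform(points, centers, weights, n_sub=8):
    """Iterative mechanism sketch: apply the requested displacement in
    several small substeps; each substep is a small perturbation of the
    identity, which keeps the mapping one-to-one (no foldover)."""
    p = points.copy()
    for _ in range(n_sub):
        p = p + rbf_displacement(p, centers, weights) / n_sub
    return p

# a regular grid deformed by two RBF handles pushing toward each other
gx, gy = np.meshgrid(np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.0, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])
centers = np.array([[0.3, 0.5], [0.7, 0.5]])
weights = np.array([[0.15, 0.0], [-0.15, 0.0]])
deformed = foldover_free_deform(grid, centers, weights)
```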

  13. Prognostics of Proton Exchange Membrane Fuel Cells stack using an ensemble of constraints based connectionist networks

    NASA Astrophysics Data System (ADS)

    Javed, Kamran; Gouriveau, Rafael; Zerhouni, Noureddine; Hissel, Daniel

    2016-08-01

    Proton Exchange Membrane Fuel Cell (PEMFC) is considered the most versatile among available fuel cell technologies, which qualify for diverse applications. However, the large-scale industrial deployment of PEMFCs is limited due to their short life span and high exploitation costs. Therefore, ensuring fuel cell service for a long duration is of vital importance, which has led to Prognostics and Health Management of fuel cells. More precisely, prognostics of PEMFC is a major area of focus nowadays, which aims at identifying degradation of the PEMFC stack at early stages and estimating its Remaining Useful Life (RUL) for life cycle management. This paper presents a data-driven approach for prognostics of the PEMFC stack using an ensemble of constraint-based Summation Wavelet-Extreme Learning Machine (SW-ELM) models. This development aims at improving the robustness and applicability of prognostics of PEMFC for an online application, with limited learning data. The proposed approach is applied to real data from two different PEMFC stacks and compared with ensembles of well-known connectionist algorithms. The comparison of long-term prognostics results on both PEMFC stacks validates our proposition.

  14. Robust input design for nonlinear dynamic modeling of AUV.

    PubMed

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to identify a good quality dynamic model of AUVs. In an optimal input design problem, the desired input signal depends on the unknown system which is intended to be identified. In this paper, an input design approach which is robust to uncertainties in model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
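
    The optimizer at the heart of the approach can be sketched generically. Below is a minimal particle swarm optimizer on a box-constrained toy objective standing in for the Bayesian robust design criterion; box constraints on the decision variables are enforced by clipping, which is one simple choice among several:

```python
import numpy as np

rng = np.random.default_rng(6)

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer for a box-constrained problem."""
    lo, hi = bounds
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()        # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)            # enforce input constraints
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# toy design criterion standing in for the Bayesian robust objective
f = lambda z: np.sum((z - 0.5) ** 2)
lo, hi = np.zeros(3), np.ones(3)
z_best, f_best = pso(f, (lo, hi))
```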

  15. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955

  16. Integrated configurable equipment selection and line balancing for mass production with serial-parallel machining systems

    NASA Astrophysics Data System (ADS)

    Battaïa, Olga; Dolgui, Alexandre; Guschinsky, Nikolai; Levin, Genrikh

    2014-10-01

    Solving equipment selection and line balancing problems together allows better line configurations to be reached and avoids local optimal solutions. This article considers jointly these two decision problems for mass production lines with serial-parallel workplaces. This study was motivated by the design of production lines based on machines with rotary or mobile tables. Nevertheless, the results are more general and can be applied to assembly and production lines with similar structures. The designers' objectives and the constraints are studied in order to suggest a relevant mathematical model and an efficient optimization approach to solve it. A real case study is used to validate the model and the developed approach.

  17. Cosmological constraints from the CFHTLenS shear measurements using a new, accurate, and flexible way of predicting non-linear mass clustering

    NASA Astrophysics Data System (ADS)

    Angulo, Raul E.; Hilbert, Stefan

    2015-03-01

    We explore the cosmological constraints from cosmic shear using a new way of modelling the non-linear matter correlation functions. The new formalism extends the method of Angulo & White, which manipulates outputs of N-body simulations to represent the 3D non-linear mass distribution in different cosmological scenarios. We show that predictions from our approach for shear two-point correlations at 1-300 arcmin separations are accurate at the ~10 per cent level, even for extreme changes in cosmology. For moderate changes, with target cosmologies similar to that preferred by analyses of recent Planck data, the accuracy is close to ~5 per cent. We combine this approach with a Monte Carlo Markov chain sampler to explore constraints on a Λ cold dark matter model from the shear correlation functions measured in the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We obtain constraints on the parameter combination σ8(Ωm/0.27)^0.6 = 0.801 ± 0.028. Combined with results from cosmic microwave background data, we obtain marginalized constraints on σ8 = 0.81 ± 0.01 and Ωm = 0.29 ± 0.01. These results are statistically compatible with previous analyses, which supports the validity of our approach. We discuss the advantages of our method and the potential it offers, including a path to model in detail (i) the effects of baryons, (ii) high-order shear correlation functions, and (iii) galaxy-galaxy lensing, among others, in future high-precision cosmological analyses.
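
    The quoted parameter combination is the direction in the (σ8, Ωm) plane that cosmic shear constrains best. The toy below, using mock Gaussian samples in place of a real MCMC chain, shows how such a derived constraint is computed from posterior samples:

```python
import numpy as np

rng = np.random.default_rng(7)

# mock posterior samples standing in for a real chain over (sigma8, Om);
# the marginals are chosen to match the quoted 0.81 +/- 0.01, 0.29 +/- 0.01
sigma8 = rng.normal(0.81, 0.01, 50000)
om = rng.normal(0.29, 0.01, 50000)

# the degenerate combination cosmic shear actually constrains:
s = sigma8 * (om / 0.27) ** 0.6
s_mean, s_std = s.mean(), s.std()
```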

  18. Dynamics and phenomenology of higher order gravity cosmological models

    NASA Astrophysics Data System (ADS)

    Moldenhauer, Jacob Andrew

    2010-10-01

    I present here some new results about a systematic approach to higher-order gravity (HOG) cosmological models. The HOG models are derived from curvature invariants that are more general than the Einstein-Hilbert action. Some of the models exhibit late-time cosmic acceleration without the need for dark energy and fit some current observations. The open question is that there are an infinite number of invariants that one could select, and many of the published papers have stressed the need to find a systematic approach that will allow one to study methodically the various possibilities. We explore a new connection that we made between theorems from the theory of invariants in general relativity and these cosmological models. In summary, the theorems demonstrate that curvature invariants are not all independent from each other and that for a given Ricci Segre type and Petrov type (symmetry classification) of the space-time, there exists a complete minimal set of independent invariants (a basis) in terms of which all the other invariants can be expressed. As an immediate consequence of the proposed approach, the number of invariants to consider is dramatically reduced from infinity to four invariants in the worst case and to only two invariants in the cases of interest, including all Friedmann-Lemaitre-Robertson-Walker metrics. We derive models that pass stability and physical acceptability conditions. We derive dynamical equations and phase portrait analyses that show the promise of the systematic approach. We consider observational constraints from magnitude-redshift Supernovae Type Ia data, distance to the last scattering surface of the Cosmic Microwave Background radiation, and Baryon Acoustic Oscillations. We put observational constraints on general HOG models. We constrain different forms of the Gauss-Bonnet, f(G), modified gravity models with these observations. We show some of these models pass solar system tests. 
We seek to find models that pass physical and observational constraints and give fits to the data that are almost as good as those of the standard Lambda-Cold-Dark-Matter model. Finding accelerating HOG models with late-time acceleration that pass physical acceptability conditions, solar system tests, and cosmological constraints will constitute serious contenders to explain cosmic acceleration.

  19. Integration of gene normalization stages and co-reference resolution using a Markov logic network.

    PubMed

    Dai, Hong-Jie; Chang, Yen-Ching; Tsai, Richard Tzong-Han; Hsu, Wen-Lian

    2011-09-15

    Gene normalization (GN) is the task of normalizing a textual gene mention to a unique gene database ID. Traditional top performing GN systems usually need to consider several constraints to make decisions in the normalization process, including filtering out false positives or disambiguating an ambiguous gene mention, to improve system performance. However, these constraints are usually executed in several separate stages and cannot use each other's input/output interactively. In this article, we propose a novel approach that employs a Markov logic network (MLN) to model the constraints used in the GN task. Firstly, we show how various constraints can be formulated and combined in an MLN. Secondly, we are the first to apply the two main concepts of co-reference resolution, discourse salience in centering theory and transitivity, to GN models. Furthermore, to make our results more relevant to developers of information extraction applications, we adopt the instance-based precision/recall/F-measure (PRF) in addition to the article-wide PRF to assess system performance. Experimental results show that our system outperforms baseline and state-of-the-art systems under two evaluation schemes. Through further analysis, we have found several unexplored challenges in the GN task. Contact: hongjie@iis.sinica.edu.tw. Supplementary data are available at Bioinformatics online.

  20. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
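
    The quantities the operational constraints act on are easy to expose in simulation. The sketch below closes the loop around a toy unstable first-order process with a PI controller and records the process variable, manipulated variable and its rate of change; the gains, limits and process parameters are invented for illustration and are not taken from the paper's design relations:

```python
import numpy as np

# unstable first-order process  dy/dt = a*y + b*u  (a > 0), Euler-discretized
a, b, dt = 0.5, 1.0, 0.01

def simulate_pi(kc, ti, y0=1.0, t_end=10.0):
    """Closed-loop regulatory response under a two-term (PI) controller,
    recording the three constrained quantities: y, u and du/dt."""
    n = int(t_end / dt)
    y, integ, u_prev = y0, 0.0, 0.0
    ys, us, dus = [], [], []
    for _ in range(n):
        e = 0.0 - y                    # regulate the process variable to 0
        integ += e * dt
        u = kc * (e + integ / ti)      # proportional-integral control law
        y += (a * y + b * u) * dt      # process update
        ys.append(y); us.append(u); dus.append((u - u_prev) / dt)
        u_prev = u
    return np.array(ys), np.array(us), np.array(dus)

ys, us, dus = simulate_pi(kc=3.0, ti=2.0)
# operational constraints of the kind the paper designs for
y_ok = np.abs(ys).max() <= 1.5   # process variable limit
u_ok = np.abs(us).max() <= 5.0   # manipulated variable limit
```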

  1. Developments in Stochastic Fuel Efficient Cruise Control and Constrained Control with Applications to Aircraft

    NASA Astrophysics Data System (ADS)

    McDonough, Kevin K.

    The dissertation presents contributions to fuel-efficient control of vehicle speed and constrained control with applications to aircraft. In the first part of this dissertation a stochastic approach to fuel-efficient vehicle speed control is developed. This approach encompasses stochastic modeling of road grade and traffic speed, modeling of fuel consumption through the use of a neural network, and the application of stochastic dynamic programming to generate vehicle speed control policies that are optimized for the trade-off between fuel consumption and travel time. The fuel economy improvements with the proposed policies are quantified through simulations and vehicle experiments. It is shown that the policies lead to the emergence of time-varying vehicle speed patterns that are referred to as time-varying cruise. Through simulations and experiments it is confirmed that these time-varying vehicle speed profiles are more fuel-efficient than driving at a comparable constant speed. Motivated by these results, a simpler implementation strategy that is more appealing for practical implementation is also developed. This strategy relies on a finite state machine and state transition threshold optimization, and its benefits are quantified through model-based simulations and vehicle experiments. Several additional contributions are made to approaches for stochastic modeling of road grade and vehicle speed that include the use of Kullback-Leibler divergence and divergence rate and a stochastic jump-like model for the behavior of the road grade. In the second part of the dissertation, contributions to constrained control with applications to aircraft are described. Recoverable sets and integral safe sets of initial states of constrained closed-loop systems are introduced first and computational procedures of such sets based on linear discrete-time models are given. The use of linear discrete-time models is emphasized as they lead to fast computational procedures.
    Examples of these sets for aircraft longitudinal and lateral dynamics are reported, and it is shown that these sets can be larger in size compared to the more commonly used safe sets. An approach to constrained maneuver planning based on chaining recoverable sets or integral safe sets is described and illustrated with a simulation example. To facilitate the application of this maneuver planning approach in aircraft loss of control (LOC) situations when the model is only identified at the current trim condition but when these sets need to be predicted at other flight conditions, the dependence trends of the safe and recoverable sets on aircraft flight conditions are characterized. The scaling procedure to estimate subsets of safe and recoverable sets at one trim condition based on their knowledge at another trim condition is defined. Finally, two control schemes that exploit integral safe sets are proposed. The first scheme, referred to as the controller state governor (CSG), resets the controller state (typically an integrator) to enforce the constraints and enlarge the set of plant states that can be recovered without constraint violation. The second scheme, referred to as the controller state and reference governor (CSRG), combines the controller state governor with the reference governor control architecture and provides the capability of simultaneously modifying the reference command and the controller state to enforce the constraints. Theoretical results that characterize the response properties of both schemes are presented. Examples are reported that illustrate the operation of these schemes on aircraft flight dynamics models and gas turbine engine dynamic models.
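
    The notion of a safe set of initial states for a constrained linear discrete-time model can be illustrated with a simple membership test: simulate the closed-loop model from a candidate initial state and check the constraint at every step. The system matrix, constraint and Monte-Carlo sampling below are invented for illustration; the dissertation's actual procedures are set-based rather than sampling-based:

```python
import numpy as np

rng = np.random.default_rng(8)

# stable closed-loop discrete-time model x+ = A x, with constraint |x1| <= 1
A = np.array([[0.9, 0.2], [-0.1, 0.8]])

def is_safe(x0, n_steps=100, limit=1.0):
    """Membership test for the output-constrained safe set: simulate the
    linear discrete-time model and check the constraint at every step."""
    x = np.array(x0, float)
    for _ in range(n_steps):
        if abs(x[0]) > limit:
            return False
        x = A @ x
    return True

# Monte-Carlo picture of the safe set over a box of initial states
samples = rng.uniform(-2.0, 2.0, (2000, 2))
safe_mask = np.array([is_safe(s) for s in samples])
safe_fraction = safe_mask.mean()
```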

  2. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  3. Least Squares Solution of Small Sample Multiple-Master PSInSAR System

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ding, Xiao Li; Lu, Zhong

    2010-03-01

    In this paper we propose a least squares based approach for multi-temporal SAR interferometry that allows the deformation rate to be estimated without phase unwrapping. The approach utilizes a series of multi-master wrapped differential interferograms with short baselines and focuses only on the arcs constructed by two nearby points at which there are no phase ambiguities. During the estimation an outlier detector is used to identify and remove the arcs with phase ambiguities, and the pseudoinverse of the a priori variance component matrix is taken as the weight of correlated observations in the model. The parameters at the points can be obtained by an indirect adjustment model with constraints when several reference points are available. The proposed approach is verified by a set of simulated data.
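
    A minimal sketch of the estimation idea, not the paper's code: the rate on one ambiguity-free arc is obtained by weighted least squares on the differential phases, with a simple 3-sigma residual screen standing in for the paper's outlier detector. The wavelength, time spans, and weights below are assumptions.

```python
import math

WAVELENGTH = 0.056            # C-band wavelength (m), a typical SAR value

def arc_rate(time_spans, phases, weights):
    """Weighted least squares for phase = (4*pi/lambda) * dt * rate."""
    a = [4.0 * math.pi / WAVELENGTH * dt for dt in time_spans]
    num = sum(w * ai * yi for w, ai, yi in zip(weights, a, phases))
    den = sum(w * ai * ai for w, ai in zip(weights, a))
    return num / den

def screen_outliers(time_spans, phases, weights, k=3.0):
    """Drop observations whose residual exceeds k standard deviations."""
    rate = arc_rate(time_spans, phases, weights)
    resid = [y - 4.0 * math.pi / WAVELENGTH * dt * rate
             for dt, y in zip(time_spans, phases)]
    sigma = math.sqrt(sum(r * r for r in resid) / len(resid)) or 1e-12
    keep = [abs(r) <= k * sigma for r in resid]
    return ([dt for dt, ok in zip(time_spans, keep) if ok],
            [y for y, ok in zip(phases, keep) if ok],
            [w for w, ok in zip(weights, keep) if ok])

# Synthetic arc: true rate 0.01 m/yr over interferograms spanning 0.1-1.0 yr
spans = [0.1 * i for i in range(1, 11)]
true_rate = 0.01
obs = [4.0 * math.pi / WAVELENGTH * dt * true_rate for dt in spans]
w = [1.0] * len(obs)
est = arc_rate(*screen_outliers(spans, obs, w))
```

    The paper's full formulation additionally weights correlated observations with the pseudoinverse of the a priori variance component matrix, which the scalar weights here only approximate.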

  4. Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2012-01-01

    We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulation difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.
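
    To illustrate one of the four deformation families the platform exposes, here is a schematic free-form (lattice-style) deformation in one dimension; the Bernstein weighting keeps the motion smooth and the endpoints fixed. The control offsets and the normalization assumption are invented for illustration, not taken from the platform.

```python
from math import comb

def bernstein(i, n, u):
    """Bernstein basis polynomial B_{i,n}(u)."""
    return comb(n, i) * u**i * (1 - u) ** (n - i)

def deform(points, control_dz):
    """Shift each (x, z) point using a lattice of vertical control offsets."""
    n = len(control_dz) - 1
    out = []
    for x, z in points:
        u = x                       # assume x is already normalized to [0, 1]
        dz = sum(bernstein(i, n, u) * control_dz[i] for i in range(n + 1))
        out.append((x, z + dz))
    return out

# A flat "airfoil" camber line with a mid-lattice bump applied
airfoil = [(x / 10.0, 0.0) for x in range(11)]
bumped = deform(airfoil, [0.0, 0.05, 0.0])
```

    Because the lattice weights, not the surface points, are the design variables, gradients of the surface with respect to the controls are smooth — the property that makes such deformers attractive for gradient-based shape optimization.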

  5. 3D tracking of laparoscopic instruments using statistical and geometric modeling.

    PubMed

    Wolf, Rémi; Duchateau, Josselin; Cinquin, Philippe; Voros, Sandrine

    2011-01-01

    During laparoscopic surgery, the endoscope can be manipulated by an assistant or a robot. Several teams have worked on the tracking of surgical instruments, based on methods ranging from the development of specific devices to image processing methods. We propose to exploit the instruments' insertion points, which are fixed on the patient's abdominal cavity, as a geometric constraint for the localization of the instruments. A simple geometric model of a laparoscopic instrument is described, as well as a parametrization that exploits a spherical geometric grid, which offers attractive homogeneity and isotropy properties. The general architecture of our proposed approach is based on the probabilistic Condensation algorithm.

  6. Non-fragile observer-based output feedback control for polytopic uncertain system under distributed model predictive control approach

    NASA Astrophysics Data System (ADS)

    Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun

    2017-07-01

    In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under the distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time is saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller is taken into consideration to increase the robustness of the polytopic uncertain system. A sufficient stability criterion is then presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation problem subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.

  7. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography

    PubMed Central

    2013-01-01

    Background The conventional strain-based algorithm has been widely utilized in clinical practice, but it can only provide relative information about tissue stiffness. However, exact information about tissue stiffness would be valuable for clinical diagnosis and treatment. Methods In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through a single filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are estimated simultaneously through an extended Kalman filter (EKF). Results In the following experiments the accuracy and robustness of this filtering framework is first evaluated on synthetic data under controlled conditions, and the performance of the framework is then evaluated on real data collected from an elastography phantom and from patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. Conclusions The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint.
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise in our framework, and the absolute values of Young's modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy affect the performance, i.e., the convergence rate and computational cost. PMID:23937814
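
    The filtering idea can be illustrated with a deliberately tiny scalar EKF, an invented stand-in for the paper's full displacement/velocity/modulus framework: estimate a constant Young's modulus E from noisy strain readings under a known stress, using the nonlinear measurement model strain = stress / E. All numbers are arbitrary.

```python
import random

def ekf_modulus(strains, stress, E0=50e3, P0=1e9, R=1e-8, Q=0.0):
    """Scalar EKF for a constant parameter E with h(E) = stress / E."""
    E, P = E0, P0
    for z in strains:
        P = P + Q                        # predict (E assumed constant)
        H = -stress / (E * E)            # Jacobian of h(E) = stress / E
        S = H * P * H + R                # innovation covariance
        K = P * H / S                    # Kalman gain
        E = E + K * (z - stress / E)     # update with the innovation
        P = (1.0 - K * H) * P
    return E

random.seed(0)
TRUE_E, STRESS = 30e3, 100.0             # arbitrary units
meas = [STRESS / TRUE_E + random.gauss(0.0, 1e-5) for _ in range(200)]
E_hat = ekf_modulus(meas, STRESS)
```

    Because the measurement is nonlinear in E, the filter relinearizes at each step; this mirrors, in one dimension, how the paper's EKF recovers absolute modulus values rather than only relative strain contrast.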

  8. Reconstruction of elasticity: a stochastic model-based approach in ultrasound elastography.

    PubMed

    Lu, Minhua; Zhang, Heye; Wang, Jun; Yuan, Jinwei; Hu, Zhenghui; Liu, Huafeng

    2013-08-10

    The conventional strain-based algorithm has been widely utilized in clinical practice, but it can only provide relative information about tissue stiffness. However, exact information about tissue stiffness would be valuable for clinical diagnosis and treatment. In this study we propose a reconstruction strategy to recover the mechanical properties of the tissue. After the discrepancies between the biomechanical model and the data are modeled as process noise, and the biomechanical model constraint is transformed into a state space representation, the reconstruction of elasticity can be accomplished through a single filtering identification process, which recursively estimates the material properties and kinematic functions from ultrasound data according to the minimum mean square error (MMSE) criterion. In the implementation of this model-based algorithm, linear isotropic elasticity is adopted as the biomechanical constraint. The kinematic functions (i.e., the full displacement and velocity fields) and the distribution of Young's modulus are estimated simultaneously through an extended Kalman filter (EKF). In the following experiments the accuracy and robustness of this filtering framework is first evaluated on synthetic data under controlled conditions, and the performance of the framework is then evaluated on real data collected from an elastography phantom and from patients using the ultrasound system. Quantitative analysis verifies that strain fields estimated by our filtering strategy are closer to the ground truth. The distribution of Young's modulus is also well estimated. Further, the effects of measurement noise and process noise have been investigated as well. The advantage of this model-based algorithm over the conventional strain-based algorithm is its potential to provide the distribution of elasticity under a proper biomechanical model constraint.
We address the model-data discrepancy and measurement noise by introducing process noise and measurement noise in our framework, and the absolute values of Young's modulus are then estimated through the EKF in the MMSE sense. However, the initial conditions and the mesh strategy affect the performance, i.e., the convergence rate and computational cost.

  9. Optimization of composite box-beam structures including effects of subcomponent interactions

    NASA Technical Reports Server (NTRS)

    Ragon, Scott A.; Guerdal, Zafer; Starnes, James H., Jr.

    1995-01-01

    Minimum mass designs are obtained for a simple box beam structure subject to bending, torque and combined bending/torque load cases. These designs are obtained subject to point strain and linear buckling constraints. The present work differs from previous efforts in that special attention is paid to including the effects of subcomponent panel interaction in the optimal design process. Two different approaches are used to impose the buckling constraints. When the global approach is used, buckling constraints are imposed on the global structure via a linear eigenvalue analysis. This approach allows the subcomponent panels to interact in a realistic manner. The results obtained using this approach are compared to results obtained using a traditional, less expensive approach, called the local approach. When the local approach is used, in-plane loads are extracted from the global model and used to impose buckling constraints on each subcomponent panel individually. In the global cases, it is found that there can be significant interaction between skin, spar, and rib design variables. This coupling is weak or nonexistent in the local designs. It is determined that weight savings of up to 7% may be obtained by using the global approach instead of the local approach to design these structures. Several of the designs obtained using the linear buckling analysis are subjected to a geometrically nonlinear analysis. For the designs which were subjected to bending loads, the innermost rib panel begins to collapse at less than half the intended design load and in a mode different from that predicted by linear analysis. The discrepancy between the predicted linear and nonlinear responses is attributed to the effects of the nonlinear rib crushing load, and the parameter which controls this rib collapse failure mode is shown to be the rib thickness.
The rib collapse failure mode may be avoided by increasing the rib thickness above the value obtained from the (linear analysis based) optimizer. It is concluded that it would be necessary to include geometric nonlinearities in the design optimization process if the true optimum in this case were to be found.

  10. Fuzzy Constraint Based Model for Efficient Management of Dynamic Purchasing Environments

    NASA Astrophysics Data System (ADS)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    2007-12-01

    This paper considers the application of a fuzzy constraint based model for handling dynamic environments where only one of possibly many bundles of items must be purchased and quotes for items open and close over time. Simulation results are presented and compared with the optimal solution.

  11. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated with the proposed approach applies constraints and guided search operators, both derived from mined rules, to guide the GA search toward desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving the GA to generate good solutions for affective design.
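
    A schematic of the guided-search idea (all fitness values, rules and parameters below are invented; the article mines its rules from affective-design data on mobile phones): a mined rule is applied as a repair operator on offspring so the GA concentrates on rule-consistent regions of the attribute space.

```python
import random

random.seed(1)
N_ATTR = 8

def fitness(x):
    # Hypothetical affective score: attributes 0 and 2 interact positively.
    return sum(x) + 3.0 * (x[0] and x[2])

def mined_rule_repair(x):
    """Guided-search operator enforcing a mined rule: attr0=1 => attr2=1."""
    if x[0] == 1:
        x[2] = 1
    return x

def evolve(generations=40, pop_size=30, guided=True):
    pop = [[random.randint(0, 1) for _ in range(N_ATTR)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_ATTR)
            child = a[:cut] + b[cut:]           # one-point crossover
            i = random.randrange(N_ATTR)
            child[i] ^= 1                       # bit-flip mutation
            if guided:
                child = mined_rule_repair(child)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

    Running `evolve(guided=False)` and comparing convergence would mirror, in miniature, the article's validation test of guided versus unguided search.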

  12. Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security

    NASA Astrophysics Data System (ADS)

    Salloum, Ahmed

    Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price-cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. System performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index and each category was assigned a different penalty price in order to avoid real-time overloads on high-risk lines.
This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models. Various scenarios were investigated to capture and compare between the impacts of base-case and post-contingency relaxations on real-time system performance, including the presence of both relaxations simultaneously. The effect of penalty prices on the number and magnitude of relaxations was investigated as well.

  13. A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems

    NASA Astrophysics Data System (ADS)

    Hoag, Lucy M.

    The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. 
The approach was tested during the design of USC's Aeneas nanosatellite project, and a case study was performed to assess the advantages of the new approach over past procedural approaches. It was found that use of the declarative approach improved design accuracy through exhaustive tradespace search and provable optimality; decreased design time through improved model generation, faster run time, and reduction in time and number of iteration cycles; and enabled modular and extensible code. Observed weaknesses included non-intuitive model abstraction; increased debugging time; and difficulty of data extrapolation and analysis.
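
    The declarative, constraint-based style described above can be sketched in a few lines: the design space is declared as attribute domains plus constraints, and a generic solver enumerates every feasible candidate instead of iterating subsystem by subsystem. The component names, domains, and power/battery numbers below are made up for illustration and are not from the Aeneas project.

```python
import itertools

# Declared domains for three hypothetical design attributes
DOMAINS = {
    "solar_panels": [2, 4, 6],            # panel count
    "battery_wh":   [20, 40, 80],         # battery capacity (Wh)
    "radio":        ["uhf", "sband"],
}

def constraints(d):
    """Declarative rules: state what must hold, not how to satisfy it."""
    power_in = 1.5 * d["solar_panels"]          # W per panel, assumed
    power_out = 2.0 if d["radio"] == "sband" else 1.0
    yield power_in >= power_out + 1.0           # power margin
    yield d["battery_wh"] >= 20 * power_out     # eclipse energy reserve

def solve():
    """Generic solver: implicit tradespace, exhaustive feasibility check."""
    keys = list(DOMAINS)
    for values in itertools.product(*(DOMAINS[k] for k in keys)):
        cand = dict(zip(keys, values))
        if all(constraints(cand)):
            yield cand

feasible = list(solve())
```

    Adding a rule means adding a `yield` line, not restructuring the search — the extensibility property the dissertation attributes to the declarative approach. A real solver would prune rather than enumerate, but the contract is the same.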

  14. A practical application of the geometrical theory on fibered manifolds to an autonomous bicycle motion in mechanical system with nonholonomic constraints

    NASA Astrophysics Data System (ADS)

    Haddout, Soufiane

    2018-01-01

    The equations of motion of a bicycle are highly nonlinear, and the rolling of its wheels without slipping can only be expressed by nonholonomic constraint equations. A geometrical theory of general nonholonomic constrained systems on fibered manifolds and their jet prolongations, based on so-called Chetaev-type constraint forces, was proposed and developed by O. Krupková (Rossi) in the 1990s. Her approach is suitable for the study of all kinds of mechanical systems, without restriction to Lagrangian, time-independent, or regular ones, and is applicable to arbitrary constraints (holonomic, semiholonomic, linear, nonlinear or general nonholonomic). The goal of this paper is to apply Krupková's geometric theory of nonholonomic mechanical systems to a concrete problem in nonlinear nonholonomic dynamics, namely an autonomous bicycle. The dynamical model is preserved in simulations in its original nonlinear form without any simplification. The results of numerical solutions of the constrained equations of motion, derived within the theory, are in good agreement with measurements, and thus they open the possibility of direct application of the theory to practical situations.

  15. Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.

    PubMed

    Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai

    2017-07-15

    Parsimony, including sparsity and low-rank structure, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear norm constraints. However, the results obtained by convex optimization are usually suboptimal relative to solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. The resulting affinity graph can better capture the local geometric structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is shown to be more effective and robust compared to five existing algorithms.

  16. A Method for Retrieving Ground Flash Fraction from Satellite Lightning Imager Data

    NASA Technical Reports Server (NTRS)

    Koshak, William J.

    2009-01-01

    A general theory for retrieving the fraction of ground flashes in N lightning observed by a satellite-based lightning imager is provided. An "exponential model" is applied as a physically reasonable constraint to describe the measured optical parameter distributions, and population statistics (i.e., mean, variance) are invoked to add additional constraints to the retrieval process. The retrieval itself is expressed in terms of a Bayesian inference, and the Maximum A Posteriori (MAP) solution is obtained. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The ability to retrieve ground flash fraction has important benefits to the atmospheric chemistry community. For example, using the method to partition the existing satellite global lightning climatology into separate ground and cloud flash climatologies will improve estimates of lightning nitrogen oxides (NOx) production; this in turn will improve both regional air quality and global chemistry/climate model predictions.
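
    A stripped-down illustration of why the exponential-model constraint makes the fraction identifiable (this is a moment-matching shortcut, not the paper's Bayesian MAP retrieval): if ground and cloud flash optical amplitudes are each exponentially distributed with known means, the mean of a mixed sample pins down the ground-flash fraction. The two means below are arbitrary assumptions.

```python
import random

MU_GROUND, MU_CLOUD = 10.0, 4.0        # assumed exponential means (arb. units)

def retrieve_fraction(samples):
    """Solve mean = f*mu_g + (1-f)*mu_c for the ground-flash fraction f."""
    m = sum(samples) / len(samples)
    f = (m - MU_CLOUD) / (MU_GROUND - MU_CLOUD)
    return min(1.0, max(0.0, f))       # clamp to a physical fraction

# Synthetic mixed population with a true ground-flash fraction of 0.3
random.seed(2)
true_f, n = 0.3, 20000
mix = [random.expovariate(1.0 / (MU_GROUND if random.random() < true_f
                                 else MU_CLOUD)) for _ in range(n)]
f_hat = retrieve_fraction(mix)
```

    The paper goes further by also using higher population statistics and casting the retrieval as a MAP estimate, which this single-moment version does not attempt.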

  17. A Role Calculus for ORM

    NASA Astrophysics Data System (ADS)

    Curland, Matthew; Halpin, Terry; Stirewalt, Kurt

    A conceptual schema of an information system specifies the fact structures of interest as well as related business rules that are either constraints or derivation rules. Constraints restrict the possible or permitted states or state transitions, while derivation rules enable some facts to be derived from others. Graphical languages are commonly used to specify conceptual schemas, but often need to be supplemented by more expressive textual languages to capture additional business rules, as well as conceptual queries that enable conceptual models to be queried directly. This paper describes research to provide a role calculus to underpin textual languages for Object-Role Modeling (ORM), to enable business rules and queries to be formulated in a language intelligible to business users. The role-based nature of this calculus, which exploits the attribute-free nature of ORM, appears to offer significant advantages over other proposed approaches, especially in the area of semantic stability.

  18. Advances in modeling trait-based plant community assembly.

    PubMed

    Laughlin, Daniel C; Laughlin, David E

    2013-10-01

    In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.
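
    The Maxent idea in the review can be sketched concretely: find the most even species-abundance distribution whose community-weighted mean trait equals a target, which gives p_i proportional to exp(lambda * t_i), with lambda found here by bisection. The trait values and target are invented for illustration.

```python
import math

def maxent(traits, target, lo=-50.0, hi=50.0):
    """Max-entropy abundances under a community-weighted mean constraint."""
    def cwm(lam):
        w = [math.exp(lam * t) for t in traits]
        s = sum(w)
        return sum(wi * t for wi, t in zip(w, traits)) / s
    # cwm(lam) is monotone increasing in lam, so bisect for the target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cwm(mid) < target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * t) for t in traits]
    s = sum(w)
    return [wi / s for wi in w]

traits = [1.0, 2.0, 3.0, 4.0]           # e.g. a leaf trait, arbitrary values
p = maxent(traits, target=3.2)
```

    With a target above the unweighted mean, the solution up-weights high-trait species while staying as even as the constraint allows — the environmental-filtering-over-limiting-similarity behavior the review discusses.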

  19. Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview

    PubMed Central

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696

  20. Conformational sampling in template-free protein loop structure modeling: an overview.

    PubMed

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized.

  1. Image-optimized Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-08-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  2. Image-Optimized Coronal Magnetic Field Models

    NASA Technical Reports Server (NTRS)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.

    2017-01-01

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  3. Distribution of model uncertainty across multiple data streams

    NASA Astrophysics Data System (ADS)

    Wutzler, Thomas

    2014-05-01

    When confronting biogeochemical models with a diversity of observational data streams, we are faced with the problem of weighting the data streams. Without weighting, or with multiple blocked cost functions, model uncertainty is allocated to the sparse data streams, and possible bias in processes that are strongly constrained is exported to processes that are constrained only by sparse data streams. In this study we propose an approach that aims at making model uncertainty a factor of observation uncertainty that is constant over all data streams. Further, we propose an implementation based on Markov chain Monte Carlo sampling combined with simulated annealing that is able to determine this variance factor. The method is exemplified with very simple models and artificial data, and with an inversion of the DALEC ecosystem carbon model against multiple observations of Howland forest. We argue that the presented approach can help resolve, or at least mitigate, the problem of bias export to sparse data streams.
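
    The stream-weighting idea can be sketched as a cost function in which each data stream's squared residuals are scaled by a per-stream variance factor, so that dense streams do not swamp sparse ones (a minimal Python sketch; the function name and interface are hypothetical, not taken from the study):

```python
import math

def weighted_cost(residuals_by_stream, variance_factors):
    """Gaussian negative log-likelihood (up to a constant) in which each
    stream's squared residuals are scaled by that stream's variance factor.
    Illustrative only: the study estimates such factors by Markov chain
    Monte Carlo sampling combined with simulated annealing."""
    cost = 0.0
    for stream, residuals in residuals_by_stream.items():
        s2 = variance_factors[stream]      # variance factor of this stream
        n = len(residuals)
        # Squared-error term plus the log-variance penalty that stops a
        # sampler from simply inflating the variance of sparse streams:
        cost += 0.5 * sum(r * r for r in residuals) / s2 + 0.5 * n * math.log(s2)
    return cost
```

    Treating the variance factors as free parameters lets a sampler trade goodness of fit against the log-variance penalty stream by stream, rather than letting the densest stream dominate the total cost.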

  4. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of an object needs to be collected, so a galvanometric laser scanner is designed using a one-mirror galvanometer element as the mechanical device that drives the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometric laser scanner without any position assumptions, and a model-driven calibration procedure is then proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. The repeatability and accuracy of the galvanometric laser scanner are evaluated on an automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields measurement performance similar to that of a look-up-table calibration method. PMID:28098844

  5. Estimating Water and Heat Fluxes with a Four-dimensional Weak-constraint Variational Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Bateni, S. M.; Xu, T.

    2015-12-01

    Accurate estimation of water and heat fluxes is required for irrigation scheduling, weather prediction, and water resources planning and management. A weak-constraint variational data assimilation (WC-VDA) scheme is developed to estimate water and heat fluxes by assimilating sequences of land surface temperature (LST) observations. The commonly used strong-constraint VDA systems adversely affect the accuracy of water and heat flux estimates, as they assume the model is perfect. The WC-VDA approach accounts for structural and model errors and generates more accurate results by adding a model error term to the surface energy balance equation. The two key unknown parameters of the WC-VDA system (i.e., CHN, the bulk heat transfer coefficient, and EF, the evaporative fraction) and the model error term are optimized by minimizing the cost function. The WC-VDA model was tested at two sites with contrasting hydrological and vegetative conditions: the Daman site (a wet site located in an oasis area and covered by seeded corn) and the Huazhaizi site (a dry site located in a desert area and covered by sparse grass), in the middle reaches of the Heihe River basin, northwest China. Compared to the strong-constraint VDA system, the WC-VDA method generates more accurate estimates of water and energy fluxes over the desert and oasis sites under dry and wet conditions.

  6. On continuous and discontinuous approaches for modeling groundwater flow in heterogeneous media using the Numerical Manifold Method: Model development and comparison

    NASA Astrophysics Data System (ADS)

    Hu, Mengsu; Wang, Yuan; Rutqvist, Jonny

    2015-06-01

    One major challenge in modeling groundwater flow within heterogeneous geological media is that of modeling arbitrarily oriented or intersecting boundaries and inner material interfaces. The Numerical Manifold Method (NMM) has recently emerged as a promising method for such modeling, owing to its ability to handle boundaries, its flexibility in constructing physical cover functions (continuous or with gradient jump), its meshing efficiency with a fixed mathematical mesh (covers), its convenience for enhancing approximation precision, and its integration precision, achieved by simplex integration. In this paper, we report on developing and comparing two new approaches for boundary constraints using the NMM, namely a continuous approach with jump functions and a discontinuous approach with Lagrange multipliers. In the discontinuous Lagrange multiplier method (LMM), the material interfaces are regarded as discontinuities which divide mathematical covers into different physical covers. We define and derive stringent forms of Lagrange multipliers to link the divided physical covers, thus satisfying the continuity requirement of the refraction law. In the continuous Jump Function Method (JFM), the material interfaces are regarded as inner interfaces contained within physical covers. We define jump terms to represent the discontinuity of the head gradient across an interface to satisfy the refraction law. We then make a theoretical comparison between the two approaches in terms of global degrees of freedom, treatment of multiple material interfaces, treatment of small areas, treatment of moving interfaces, feasibility of coupling with mechanical analysis, and applicability to other numerical methods. The newly derived boundary-constraint approaches are coded into an NMM model for groundwater flow analysis, and tested for precision and efficiency on different simulation examples. 
We first test the LMM for a Dirichlet boundary and then test both the LMM and JFM on an idealized heterogeneous model, comparing the numerical results with analytical solutions. We then test both approaches on a heterogeneous model and compare the results for hydraulic head and specific discharge. We show that both approaches are suitable for modeling material boundaries, given their high accuracy for boundary constraints, their capability to deal with arbitrarily oriented or complexly intersected boundaries, and their efficiency with a fixed mathematical mesh.

  7. PlanWorks: A Debugging Environment for Constraint Based Planning Systems

    NASA Technical Reports Server (NTRS)

    Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will

    2005-01-01

    Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support queries for specialized views that display different forms of data (e.g. constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA(sub 2)) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA(sub 2). We then describe PlanWorks' principal components, discuss each component in detail, and describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.

  8. Knowledge-based versus experimentally acquired distance and angle constraints for NMR structure refinement.

    PubMed

    Cui, Feng; Jernigan, Robert; Wu, Zhijun

    2008-04-01

    Nuclear Overhauser effect (NOE) distance constraints and torsion angle constraints are the major conformational constraints for nuclear magnetic resonance (NMR) structure refinement. In particular, the number of NOE constraints has been considered an important determinant of the quality of NMR structures. Of course, the availability of torsion angle constraints is also critical for the formation of correct local conformations. In our recent work, we have shown how a set of knowledge-based short-range distance constraints can also be utilized for NMR structure refinement, as a complementary set of conformational constraints to the NOE and torsion angle constraints. In this paper, we show the results from a series of structure refinement experiments using different types of conformational constraints--NOE, torsion angle, or knowledge-based constraints--or their combinations, and make a quantitative assessment of how the experimentally acquired constraints contribute to the quality of structural models and whether or not they can be combined with or substituted by the knowledge-based constraints. We have carried out the experiments on a small set of NMR structures. Our preliminary calculations reveal that the torsion angle constraints contribute substantially to the quality of the structures, but need to be combined with the NOE constraints to be fully effective. The knowledge-based constraints can be functionally as crucial as the torsion angle constraints, although, being statistical constraints, they are not meant to replace the latter.

  9. Optimizing decentralized production-distribution planning problem in a multi-period supply chain network under uncertainty

    NASA Astrophysics Data System (ADS)

    Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi

    2017-09-01

    Decentralized supply chain management is significantly relevant in today's competitive markets. Production and distribution planning is an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model under uncertainty. The imprecision related to uncertain parameters, such as demand and the price of the final product, is represented with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Due to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also presented.

  10. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
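
    The E-step/M-step core that such GEM segmenters build on can be sketched for a two-component, one-dimensional Gaussian mixture (an illustrative sketch only: the bias-field correction and MRF stages of the proposed method are omitted, and the function name is hypothetical):

```python
import math

def em_gmm_1d(xs, iters=50):
    """Plain EM for a two-component 1-D Gaussian mixture -- the E-step /
    M-step core that GEM-style segmenters extend with bias correction and
    MRF priors.  Illustrative sketch, not the paper's algorithm."""
    k = 2
    mus = [min(xs), max(xs)]        # deterministic spread-out initialisation
    sigmas = [1.0, 1.0]
    pis = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            ws = [pis[j] * math.exp(-0.5 * ((x - mus[j]) / sigmas[j]) ** 2)
                  / (sigmas[j] * math.sqrt(2.0 * math.pi)) for j in range(k)]
            s = sum(ws) or 1e-300
            resp.append([w / s for w in ws])
        # M-step: re-estimate mixture weights, means and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp) or 1e-300
            pis[j] = nj / len(xs)
            mus[j] = sum(r[j] * x for r, x in zip(resp, xs)) / nj
            var = sum(r[j] * (x - mus[j]) ** 2 for r, x in zip(resp, xs)) / nj
            sigmas[j] = math.sqrt(max(var, 1e-6))
    return mus, sigmas, pis
```

    In a segmenter, the responsibilities computed in the E-step become per-voxel tissue probabilities; the MRF stage then modifies these priors using the labels of neighbouring voxels.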

  11. Multi-scale exploration of the technical, economic, and environmental dimensions of bio-based chemical production.

    PubMed

    Zhuang, Kai H; Herrgård, Markus J

    2015-09-01

    In recent years, bio-based chemicals have gained traction as a sustainable alternative to petrochemicals. However, despite rapid advances in metabolic engineering and synthetic biology, there remain significant economic and environmental challenges. In order to maximize the impact of research investment in a new bio-based chemical industry, there is a need for assessing the technological, economic, and environmental potentials of combinations of biomass feedstocks, biochemical products, bioprocess technologies, and metabolic engineering approaches in the early phase of development of cell factories. To address this issue, we have developed a comprehensive Multi-scale framework for modeling Sustainable Industrial Chemicals production (MuSIC), which integrates modeling approaches for cellular metabolism, bioreactor design, upstream/downstream processes and economic impact assessment. We demonstrate the use of the MuSIC framework in a case study where two major polymer precursors (1,3-propanediol and 3-hydroxypropionic acid) are produced from two biomass feedstocks (corn-based glucose and soy-based glycerol) through 66 proposed biosynthetic pathways in two host organisms (Escherichia coli and Saccharomyces cerevisiae). The MuSIC framework allows exploration of tradeoffs and interactions between economy-scale objectives (e.g. profit maximization, emission minimization), constraints (e.g. land-use constraints) and process- and cell-scale technology choices (e.g. strain design or oxygenation conditions). We demonstrate that economy-scale assessment can be used to guide specific strain design decisions in metabolic engineering, and that these design decisions can be affected by non-intuitive dependencies across multiple scales. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  12. A Graph Based Backtracking Algorithm for Solving General CSPs

    NASA Technical Reports Server (NTRS)

    Pang, Wanlin; Goodwin, Scott D.

    2003-01-01

    Many AI tasks can be formalized as constraint satisfaction problems (CSPs), which involve finding values for variables subject to constraints. While solving a CSP is an NP-complete task in general, tractable classes of CSPs have been identified based on the structure of the underlying constraint graphs. Much effort has been spent on exploiting structural properties of the constraint graph to improve the efficiency of finding a solution. These efforts have contributed to the development of a class of CSP solving algorithms called decomposition algorithms. The strength of CSP decomposition is that its worst-case complexity depends on the structural properties of the constraint graph and is usually better than the worst-case complexity of search methods. Its practical application is limited, however, since it cannot be applied if the CSP is not decomposable. In this paper, we propose a graph-based backtracking algorithm called omega-CDBT, which shares the merits of both the decomposition and search approaches while overcoming their weaknesses.
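
    The baseline search that structure-exploiting algorithms improve upon is plain chronological backtracking, which can be sketched as follows (a generic sketch, not the omega-CDBT algorithm itself; the interface is hypothetical):

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Chronological backtracking for a binary CSP.  `constraints` is a list
    of (u, v, pred) triples, where pred(value_u, value_v) must hold.  Returns
    a complete assignment dict, or None if the CSP has no solution."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Check every constraint whose endpoints are both assigned.
        consistent = all(pred(assignment[u], assignment[v])
                         for u, v, pred in constraints
                         if u in assignment and v in assignment)
        if consistent:
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Example: 3-colour a triangle graph -- every pair of variables must differ.
ne = lambda a, b: a != b
solution = solve_csp(["A", "B", "C"], {v: [0, 1, 2] for v in "ABC"},
                     [("A", "B", ne), ("B", "C", ne), ("A", "C", ne)])
```

    Decomposition methods exploit the shape of the constraint graph to bound how deep such a search must backtrack, which is where the worst-case advantage described in the abstract comes from.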

  13. Multivariable optimization of an auto-thermal ammonia synthesis reactor using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Anh-Nga, Nguyen T.; Tuan-Anh, Nguyen; Tien-Dung, Vu; Kim-Trung, Nguyen

    2017-09-01

    The ammonia synthesis system is an important chemical process used in the manufacture of fertilizers, chemicals, explosives, fibers, plastics, and refrigerants. In the literature, many works on the modeling, simulation and optimization of an auto-thermal ammonia synthesis reactor can be found. However, they focus only on the optimization of the reactor length while keeping the other parameters constant. In this study, other parameters are also considered in the optimization problem, such as the temperature of the feed gas entering the catalyst zone. The optimization problem requires the maximization of a multivariable objective function subject to a number of equality constraints involving the solution of coupled differential equations, as well as inequality constraints. The solution of an optimization problem can be found through, among others, deterministic or stochastic approaches. Stochastic methods such as evolutionary algorithms (EA), which are based on natural phenomena, can overcome drawbacks such as requiring derivatives of the objective function and/or constraints, or inefficiency on non-differentiable or discontinuous problems. The genetic algorithm (GA), a class of EA, is exceptionally simple, robust at numerical optimization, and more likely to find a true global optimum. In this study, the genetic algorithm is employed to find the optimum profit of the process. The inequality constraints were treated using the penalty method. The coupled differential equation system was solved using the 4th-order Runge-Kutta method. The results showed that the presented numerical method can be applied to model the ammonia synthesis reactor. The optimum economic profit obtained in this study is also compared to results from the literature. It suggests that the process should be operated at a higher feed gas temperature in the catalyst zone and with a slightly longer reactor.
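
    The penalty-method treatment of inequality constraints described above can be sketched with a tiny real-coded GA on a stand-in objective (a hedged sketch: the objective, constraint, bounds and GA parameters here are illustrative placeholders, not the reactor model of the study):

```python
import random

def ga_penalty_max(f, ineq_constraints, bounds, pop_size=40, gens=60, seed=0):
    """Tiny real-coded GA maximising f(x) subject to g(x) <= 0 for every g
    in `ineq_constraints`, handled with a quadratic penalty.  Generic sketch
    of the constraint handling only -- not the study's reactor model."""
    rng = random.Random(seed)
    lo, hi = bounds

    def fitness(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in ineq_constraints)
        return f(x) - 1e3 * penalty        # infeasibility is penalised

    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        # Tournament selection of parents, then blend crossover + mutation.
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        pop = []
        for i in range(0, pop_size, 2):
            a, b = parents[i], parents[i + 1]
            w = rng.random()
            for child in (w * a + (1 - w) * b, (1 - w) * a + w * b):
                child += rng.gauss(0.0, 0.05 * (hi - lo))
                pop.append(min(hi, max(lo, child)))
    return max(pop, key=fitness)

# Stand-in problem: maximise -(x - 2)^2 subject to x <= 1.5; the constrained
# optimum lies at approximately x = 1.5.
best = ga_penalty_max(lambda x: -(x - 2) ** 2, [lambda x: x - 1.5], (0.0, 4.0))
```

    In the study's setting, evaluating the objective would itself involve integrating the reactor's coupled differential equations (e.g. with a Runge-Kutta scheme) for each candidate parameter vector.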

  14. Explosives Detection: Exploitation of the Physical Signatures

    NASA Astrophysics Data System (ADS)

    Atkinson, David

    2010-10-01

    Explosives-based terrorism is an ongoing threat that is evolving with respect to implementation, configuration and materials used. There are a variety of devices designed to detect explosive devices; however, each technology has limitations and operational constraints. A full understanding of the signatures available for detection, coupled with the array of detection choices, can be used to develop a conceptual model of an explosives screening operation. Physics-based sensors provide a robust approach to explosives detection, typically through the identification of anomalies, and are currently used for screening in airports around the world. The next generation of detectors for explosives detection will need to be more sensitive and selective, as well as integrate seamlessly with devices focused on chemical signatures. An appreciation for the details of physical signature exploitation in cluttered environments with time, space, and privacy constraints is necessary for effective explosives screening of people, luggage, cargo, and vehicles.

  15. A trust region-based approach to optimize triple response systems

    NASA Astrophysics Data System (ADS)

    Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen

    2014-05-01

    This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
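
    The Hooke-Jeeves search used in TRSALG to determine the Lagrange multipliers of the secondary functions is a derivative-free pattern search; a simplified exploratory-move version (the pattern-move acceleration is omitted, and the objective here is a stand-in quadratic, not a TRS) can be sketched as:

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_sweeps=10000):
    """Derivative-free exploratory search in the spirit of Hooke-Jeeves
    (minimisation).  Simplified sketch: only axis-wise exploratory moves,
    with the step shrunk whenever no axis improves."""
    x = list(x0)
    fx = f(x)
    sweeps = 0
    while step > tol and sweeps < max_sweeps:
        sweeps += 1
        improved = False
        for i in range(len(x)):            # exploratory moves along each axis
            for d in (step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    improved = True
                    break
        if not improved:
            step *= shrink                 # no axis improved: shrink the step
    return x, fx

# Stand-in objective (not a TRS): minimum at (1, -2).
xmin, fmin = hooke_jeeves(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                          [0.0, 0.0])
```

    Because no gradients are needed, the same routine can search over Lagrange multiplier values where each evaluation requires solving an inner optimization, as in the trust-region setting of the article.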

  16. Multi-objective Optimization Design of Gear Reducer Based on Adaptive Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Li, Rui; Chang, Tian; Wang, Jianwei; Wei, Xiaopeng; Wang, Jinming

    2008-11-01

    An adaptive Genetic Algorithm (GA) is introduced to solve the multi-objective optimized design of a gear reducer. Firstly, according to the structure, strength, and other characteristics of a reducer, a multi-objective optimization model of the helical gear reducer is established. Then an adaptive GA based on a fuzzy controller is introduced, aimed at the multi-objective, multi-parameter, multi-constraint nature of the problem. Finally, a numerical example is presented to show the advantages of this approach and the effectiveness of an adaptive genetic algorithm for the optimized design of a reducer.

  17. Hard and Soft Constraints in Reliability-Based Design Optimization

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2006-01-01

    This paper proposes a framework for the analysis and design optimization of models subject to parametric uncertainty where design requirements in the form of inequality constraints are present. Emphasis is given to uncertainty models prescribed by norm-bounded perturbations from a nominal parameter value and by sets of componentwise bounded uncertain variables. These models, which often arise in engineering problems, allow for sharp mathematical manipulation. Constraints can be implemented in the hard sense, i.e., constraints must be satisfied for all parameter realizations in the uncertainty model, and in the soft sense, i.e., constraints can be violated by some realizations of the uncertain parameter. In regard to hard constraints, this methodology allows the designer (i) to determine if a hard constraint can be satisfied for a given uncertainty model and constraint structure, (ii) to generate conclusive, formally verifiable reliability assessments that allow for unprejudiced comparisons of competing design alternatives, and (iii) to identify the critical combination of uncertain parameters leading to constraint violations. In regard to soft constraints, the methodology allows the designer (i) to use probabilistic uncertainty models, (ii) to calculate upper bounds on the probability of constraint violation, and (iii) to efficiently estimate failure probabilities via a hybrid method. This method integrates the upper bounds, for which closed-form expressions are derived, with conditional sampling. In addition, an l(sub infinity) formulation for the efficient manipulation of hyper-rectangular sets is also proposed.
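
    The sampling side of estimating a soft-constraint violation probability can be sketched as a plain Monte Carlo estimator (a generic sketch; the paper's hybrid method combines closed-form upper bounds with conditional sampling, which is not reproduced here):

```python
import random

def violation_probability(g, sample_parameter, n=20000, seed=0):
    """Monte Carlo estimate of the soft-constraint violation probability
    P[g(p) > 0], where the uncertain parameter p is drawn from a
    probabilistic uncertainty model.  Generic sketch only."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if g(sample_parameter(rng)) > 0)
    return hits / n

# Stand-in model: p uniform on [-1, 1] and constraint g(p) = p - 0.5, so the
# true violation probability is 0.25.
prob = violation_probability(lambda p: p - 0.5,
                             lambda rng: rng.uniform(-1.0, 1.0))
```

    A closed-form upper bound, where available, lets the sampling be concentrated on the region where violations can actually occur, which is the efficiency gain the hybrid method targets.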

  18. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models, all of which are linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is better. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
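
    The sum-to-one combination admits a simple closed form in the special case of independent sub-model errors: each selected sub-model is weighted by the inverse of its mean-squared error over the recent window (a sketch of that special case only; the paper derives the general closed-form solution):

```python
def sum_to_one_weights(errors_by_model):
    """Combination weights under the sum-to-one constraint, in the special
    case of independent sub-model errors: weight each sub-model by the
    inverse of its mean-squared error over the recent data window."""
    inv_mse = []
    for errs in errors_by_model:
        mse = sum(e * e for e in errs) / len(errs)
        inv_mse.append(1.0 / max(mse, 1e-12))   # guard against a zero MSE
    total = sum(inv_mse)
    return [w / total for w in inv_mse]         # normalised: weights sum to one

def combined_prediction(weights, predictions):
    """Multi-model output: weighted sum of the selected sub-model outputs."""
    return sum(w * p for w, p in zip(weights, predictions))
```

    For example, a sub-model with window MSE 1 receives four times the weight of one with MSE 4, and the weights always sum to one, so the combined output stays on the same scale as the sub-model predictions.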

  19. Experimental and Numerical Investigations of Constraint Effect on Deformation Behavior of Tailor-Welded Blanks

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Lin, Jianping

    2015-08-01

    Tailor-welded blanks (TWBs) have been considered a productive sheet-forming method in the automotive industry. However, the formability of TWBs is reduced due to the different properties or thicknesses of the blanks, and is a challenge for manufacturing designers. The plastic capacity of TWBs is decreased even when the material and thickness are the same. The constraint effect of the laser weld (including the weld and heat-affected zone) material in the forming process of similar TWBs is a key problem to be solved in the research, development and application of thin-sheet TWBs. In this paper, uniaxial tensile tests with full-field strain measurement by digital image correlation, together with Erichsen tests, are performed to investigate the constraint effect on deformation behavior and to explore the mechanism of the decreased formability of similar TWBs. In addition, finite element models are developed in ABAQUS to further reveal the phenomenological behavior of the constraint effect. The results for the base material and welded blanks are compared to characterize the differences. Furthermore, in order to better understand this mechanism, theoretical and numerical investigations are employed and compared to interpret the constraint effect of the laser weld on the deformation behavior of TWBs. An index is proposed to quantify the constraint effect. Results show that the constraint effect of the laser weld appears in both stretch forming and drawing of TWBs. Strain paths approach the plane strain condition, as compared to the monolithic blank, due to the constraint effect. The constraint effect is a major factor affecting the formability of TWBs when failure occurs away from the weld seam.

  20. Rational Adaptation under Task and Processing Constraints: Implications for Testing Theories of Cognition and Action

    ERIC Educational Resources Information Center

    Howes, Andrew; Lewis, Richard L.; Vera, Alonso

    2009-01-01

    The authors assume that individuals adapt rationally to a utility function given constraints imposed by their cognitive architecture and the local task environment. This assumption underlies a new approach to modeling and understanding cognition--cognitively bounded rational analysis--that sharpens the predictive acuity of general, integrated…

  1. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is the key component of many hydrological and hydraulic designs in which the dependence of rainfall and runoff needs to be studied. The convenient bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive the entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions, which includes two steps, (a) using the nonparametric statistics approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in the semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff. 
The results for the entropy-based joint distribution indicate that (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff, and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions recover the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structure that cannot be captured by the convenient bivariate joint distributions. [Figure: joint rainfall-runoff entropy-based PDF, with corresponding marginal PDFs and histograms, for the W12 watershed. Table: K-S test results and RMSE for the univariate distributions derived from the maximum-entropy joint probability distribution.]
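
    The entropy-maximization step can be illustrated in a toy discrete setting: given a finite support and a mean constraint, the maximum-entropy distribution has the exponential form p_i proportional to exp(lambda * x_i), with lambda chosen so the constraint holds (a discrete analogue for illustration only, not the paper's continuous bivariate estimator; names are hypothetical):

```python
import math

def maxent_with_mean(support, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Maximum-entropy probabilities on a finite support subject to a mean
    constraint: p_i is proportional to exp(lam * x_i), with the Lagrange
    multiplier lam found by bisection (the mean is increasing in lam).
    `target_mean` must lie strictly between min(support) and max(support)."""
    def mean_for(lam):
        ws = [math.exp(lam * x) for x in support]
        z = sum(ws)
        return sum(w * x for w, x in zip(ws, support)) / z
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    ws = [math.exp(lam * x) for x in support]
    z = sum(ws)
    return [w / z for w in ws]
```

    With only a mean constraint and target mean equal to the midpoint of a symmetric support, the solution is the uniform distribution, as symmetry suggests; adding more moment constraints (as the paper does with log-moments and a covariance term) adds one Lagrange multiplier per constraint.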

  2. Integrating models with data in ecology and palaeoecology: advances towards a model-data fusion approach.

    PubMed

    Peng, Changhui; Guiot, Joel; Wu, Haibin; Jiang, Hong; Luo, Yiqi

    2011-05-01

    It is increasingly being recognized that global ecological research requires novel methods and strategies with which to combine process-based ecological models and data in cohesive, systematic ways. Model-data fusion (MDF) is an emerging area of research in ecology and palaeoecology. It provides a new quantitative approach that offers a high level of empirical constraint on model predictions based on observations, using inverse modelling and data assimilation (DA) techniques. Increasing demand to integrate model and data methods over the past decade has led to MDF utilization in palaeoecology, ecology and earth system sciences. This paper reviews key features and principles of MDF and highlights different approaches with regard to DA. After providing a critical evaluation of the numerous benefits of MDF and its current applications in palaeoecology (i.e., palaeoclimatic reconstruction, palaeovegetation and palaeocarbon storage) and ecology (i.e., parameter and uncertainty estimation, model error identification, remote sensing and ecological forecasting), the paper discusses method limitations, current challenges and future research directions. In the ongoing data-rich era of today's world, MDF could become an important diagnostic and prognostic tool with which to improve our understanding of ecological processes while testing ecological theory and hypotheses and forecasting changes in ecosystem structure, function and services. © 2011 Blackwell Publishing Ltd/CNRS.

  3. Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs

    NASA Astrophysics Data System (ADS)

    Shim, Hyeonjoon; Liu, Brent J.; Taira, Ricky K.; Hall, Theodore R.

    1997-04-01

The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base where all model contours and other information, such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray-level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. Then an interpretation tree is used as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments serve as constraints from the shape model. These constraints aid in generating a list of candidate matches. The candidate match with the highest confidence is chosen to be the current intermediate result. Verification of intermediate results is performed by a feedback control loop.
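The arc-length orientation transform mentioned above maps a closed contour to edge orientation as a function of cumulative arc length, so that jumps mark corners and constant runs mark straight segments. A minimal sketch (the paper's exact implementation is not given; the polyline contour representation is an assumption):

```python
from math import atan2, hypot, degrees

def orientation_transform(contour):
    """Map a closed polygonal contour to (cumulative arc length,
    edge orientation in degrees) pairs. Jumps in orientation mark
    corners; constant runs mark straight segments."""
    pairs = []
    s = 0.0
    n = len(contour)
    for i in range(n):
        (x0, y0), (x1, y1) = contour[i], contour[(i + 1) % n]
        pairs.append((s, degrees(atan2(y1 - y0, x1 - x0))))
        s += hypot(x1 - x0, y1 - y0)
    return pairs

# A unit square yields four edges oriented at 0, 90, 180 and -90 degrees,
# one per unit of arc length.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
profile = orientation_transform(square)
```

Segmenting the profile where the orientation changes sharply recovers the elementary segments and curves the abstract refers to.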

  4. The effects of perceived leisure constraints among Korean university students

    Treesearch

    Sae-Sook Oh; Sei-Yi Oh; Linda L. Caldwell

    2002-01-01

    This study is based on Crawford, Jackson, and Godbey's model of leisure constraints (1991), and examines the relationships between the influences of perceived constraints, frequency of participation, and health status in the context of leisure-time outdoor activities. The study was based on a sample of 234 Korean university students. This study provides further...

  5. An interactive approach based on a discrete differential evolution algorithm for a class of integer bilevel programming problems

    NASA Astrophysics Data System (ADS)

    Li, Hong; Zhang, Li; Jiao, Yong-Chang

    2016-07-01

This paper presents an interactive approach based on a discrete differential evolution algorithm to solve a class of integer bilevel programming problems, in which integer decision variables are controlled by an upper-level decision maker and real-valued or continuous decision variables are controlled by a lower-level decision maker. Using the Karush-Kuhn-Tucker optimality conditions in the lower-level programming, the original discrete bilevel formulation can be converted into a discrete single-level nonlinear programming problem with complementarity constraints, and then a smoothing technique is applied to handle the complementarity constraints. Finally, a discrete single-level nonlinear programming problem is obtained and solved by an interactive approach. In each iteration, for each given upper-level discrete variable, a system of nonlinear equations including the lower-level variables and Lagrange multipliers is solved first, and then a discrete nonlinear programming problem with only inequality constraints is handled by using a discrete differential evolution algorithm. Simulation results show the effectiveness of the proposed approach.
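A discrete differential evolution step can be sketched as standard DE/rand/1 mutation and binomial crossover with rounding to keep variables on the integer lattice. This toy version (the function name and parameters are illustrative, not the authors' exact algorithm) minimizes an integer objective under box constraints:

```python
import random

def discrete_de(f, bounds, pop_size=20, gens=100, F=0.7, CR=0.9, seed=1):
    """Differential evolution over integer variables: DE/rand/1 mutation,
    binomial crossover, rounding to integers, greedy selection."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    dim = len(bounds)
    pop = [[rng.randint(lo[j], hi[j]) for j in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            trial = []
            jr = rng.randrange(dim)
            for j in range(dim):
                if rng.random() < CR or j == jr:
                    v = round(a[j] + F * (b[j] - c[j]))  # mutate, snap to integers
                else:
                    v = pop[i][j]
                trial.append(min(hi[j], max(lo[j], v)))  # clip to box bounds
            ft = f(trial)
            if ft <= fit[i]:  # greedy selection: population never worsens
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

x, fx = discrete_de(lambda v: (v[0] - 3) ** 2 + (v[1] - 5) ** 2,
                    bounds=[(0, 10), (0, 10)])
```

In the paper's scheme each evaluation of `f` would itself involve solving the lower-level KKT system for the fixed upper-level integers.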

  6. Building a Progressive-Situational Model of Post-Diagnosis Information Seeking for Parents of Individuals With Down Syndrome

    PubMed Central

    Gibson, Amelia N.

    2016-01-01

This grounded theory study used in-depth, semi-structured interviews to examine the information-seeking behaviors of 35 parents of children with Down syndrome. Emergent themes include a progressive pattern of behavior including information overload and avoidance, passive attention, and active information seeking; varying preferences between tacit and explicit information at different stages; and selection of information channels and sources that varied based on personal and situational constraints. Based on the findings, the author proposes a progressive model of health information seeking and a framework for using this model to collect data in practice. The author also discusses the practical and theoretical implications of a responsive, progressive approach to understanding parents’ health information–seeking behavior. PMID:28462351

  7. The capability and constraint model of recoverability: An integrated theory of continuity planning.

    PubMed

    Lindstedt, David

    2017-01-01

    While there are best practices, good practices, regulations and standards for continuity planning, there is no single model to collate and sort their various recommended activities. To address this deficit, this paper presents the capability and constraint model of recoverability - a new model to provide an integrated foundation for business continuity planning. The model is non-linear in both construct and practice, thus allowing practitioners to remain adaptive in its application. The paper presents each facet of the model, outlines the model's use in both theory and practice, suggests a subsequent approach that arises from the model, and discusses some possible ramifications to the industry.

  8. An Efficient Numerical Approach for Nonlinear Fokker-Planck equations

    NASA Astrophysics Data System (ADS)

    Otten, Dustin; Vedula, Prakash

    2009-03-01

Fokker-Planck equations that are nonlinear in their probability densities arise in many nonequilibrium systems, including mean-field interaction models, plasmas, and classical fermions and bosons, and can be challenging to solve numerically. To address some underlying challenges in obtaining numerical solutions, we propose a quadrature-based moment method for efficient and accurate determination of transient (and stationary) solutions of nonlinear Fokker-Planck equations. In this approach the distribution function is represented as a collection of Dirac delta functions with corresponding quadrature weights and locations, which are in turn determined from constraints based on the evolution of generalized moments. Properties of the distribution function can be obtained by solution of transport equations for quadrature weights and locations. We will apply this computational approach to study a wide range of problems, including the Desai-Zwanzig Model (for nonlinear muscular contraction) and multivariate nonlinear Fokker-Planck equations describing classical fermions and bosons, and will also demonstrate good agreement with results obtained from Monte Carlo and other standard numerical methods.
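The core step, representing the density by Dirac deltas whose locations and weights reproduce a set of moments, can be illustrated with the classical moments-to-quadrature construction: Cholesky-factor the Hankel matrix of moments, form the Jacobi matrix, and take its eigendecomposition. This is a generic sketch of that construction, not the authors' solver:

```python
import numpy as np

def quadrature_from_moments(m):
    """Given moments m[0..2n] of a density, return n delta locations and
    weights that reproduce the first 2n moments (Golub-Welsch via the
    Cholesky factor of the Hankel moment matrix)."""
    n = (len(m) - 1) // 2
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                      # upper-triangular factor
    alpha = [R[0, 1] / R[0, 0]]                      # recurrence coefficients
    b = []                                           # off-diagonal (= sqrt(beta_k))
    for k in range(1, n):
        alpha.append(R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1])
        b.append(R[k, k] / R[k - 1, k - 1])
    J = np.diag(alpha) + np.diag(b, 1) + np.diag(b, -1)  # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Standard-normal moments 1, 0, 1, 0, 3 yield the two-point rule
# with nodes at -1 and +1 and weights 1/2 each.
nodes, weights = quadrature_from_moments([1.0, 0.0, 1.0, 0.0, 3.0])
```

In the moment method, transport equations then evolve these nodes and weights in time instead of the full density.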

  9. Study of a Terrain-Based Motion Estimation Model to Predict the Position of a Moving Target to Enhance Weapon Probability of Kill

    DTIC Science & Technology

    2017-09-01

target is modeled based on the kinematic constraints for the type of vehicle and the type of path on which it is traveling. The discrete-time position...

  10. Enhancement Approach of Object Constraint Language Generation

    NASA Astrophysics Data System (ADS)

    Salemi, Samin; Selamat, Ali

    2018-01-01

OCL is the most prevalent language for documenting system constraints annotated in UML. Writing OCL specifications is not an easy task due to the complexity of the OCL syntax; therefore, an approach that helps and assists developers in writing OCL specifications is needed. There are two existing approaches: first, creating OCL specifications with a tool called COPACABANA; second, an MDA-based approach that generates OCL specifications automatically with a tool called NL2OCLviaSBVR. This study presents another MDA-based approach, called En2OCL, whose objective is twofold: (1) to improve the precision of the existing works; and (2) to present a benchmark of these approaches. The benchmark shows that the accuracy of COPACABANA, NL2OCLviaSBVR, and En2OCL is 69.23%, 84.64%, and 88.40%, respectively.

  11. Rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
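Constraint-based iterative repair can be sketched as a loop that finds a violated constraint and applies a local move that fixes it while perturbing the schedule as little as possible. This toy unary-resource version (the task names and the shift-right repair move are illustrative, not the paper's system) repairs overlaps until the schedule is conflict-free:

```python
def find_conflict(schedule, durations):
    """Return a pair of tasks that overlap on the single shared resource,
    or None if the schedule is conflict-free."""
    tasks = sorted(schedule, key=schedule.get)
    for t1, t2 in zip(tasks, tasks[1:]):
        if schedule[t1] + durations[t1] > schedule[t2]:
            return t1, t2
    return None

def iterative_repair(schedule, durations, max_iters=100):
    """Start from a flawed schedule and repeatedly repair one violation,
    shifting the later task just past the earlier one (a minimal-
    perturbation move)."""
    schedule = dict(schedule)
    for _ in range(max_iters):
        conflict = find_conflict(schedule, durations)
        if conflict is None:
            return schedule
        t1, t2 = conflict
        schedule[t2] = schedule[t1] + durations[t1]
    return schedule

# Three tasks initially piled up near t=0 spread out into a feasible order.
durations = {"a": 3, "b": 2, "c": 4}
repaired = iterative_repair({"a": 0, "b": 0, "c": 1}, durations)
```

A real repair system would score several candidate moves (constraint satisfaction, optimization, perturbation) rather than always shifting right, which is what gives the approach its anytime character.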

  12. Ontology and modeling patterns for state-based behavior representation

    NASA Technical Reports Server (NTRS)

    Castet, Jean-Francois; Rozek, Matthew L.; Ingham, Michel D.; Rouquette, Nicolas F.; Chung, Seung H.; Kerzhner, Aleksandr A.; Donahue, Kenneth M.; Jenkins, J. Steven; Wagner, David A.; Dvorak, Daniel L.; hide

    2015-01-01

    This paper provides an approach to capture state-based behavior of elements, that is, the specification of their state evolution in time, and the interactions amongst them. Elements can be components (e.g., sensors, actuators) or environments, and are characterized by state variables that vary with time. The behaviors of these elements, as well as interactions among them are represented through constraints on state variables. This paper discusses the concepts and relationships introduced in this behavior ontology, and the modeling patterns associated with it. Two example cases are provided to illustrate their usage, as well as to demonstrate the flexibility and scalability of the behavior ontology: a simple flashlight electrical model and a more complex spacecraft model involving instruments, power and data behaviors. Finally, an implementation in a SysML profile is provided.

  13. Constraints and Approach for Selecting the Mars Surveyor '01 Landing Site

    NASA Technical Reports Server (NTRS)

    Golombek, M.; Bridges, N.; Gilmore, M.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.; Smith, J.; Weitz, C.

    1999-01-01

    There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough and defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.

  14. Constraints, Approach and Present Status for Selecting the Mars Surveyor 2001 Landing Site

    NASA Technical Reports Server (NTRS)

    Golombek, M.; Anderson, F.; Bridges, N.; Briggs, G.; Gilmore, M.; Gulick, V.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.; hide

    1999-01-01

    There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough, defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.

  15. Image size invariant visual cryptography for general access structures subject to display quality constraints.

    PubMed

    Lee, Kai-Hui; Chiu, Pei-Ling

    2013-10-01

    Conventional visual cryptography (VC) suffers from a pixel-expansion problem, or an uncontrollable display quality problem for recovered images, and lacks a general approach to construct visual secret sharing schemes for general access structures. We propose a general and systematic approach to address these issues without sophisticated codebook design. This approach can be used for binary secret images in non-computer-aided decryption environments. To avoid pixel expansion, we design a set of column vectors to encrypt secret pixels rather than using the conventional VC-based approach. We begin by formulating a mathematic model for the VC construction problem to find the column vectors for the optimal VC construction, after which we develop a simulated-annealing-based algorithm to solve the problem. The experimental results show that the display quality of the recovered image is superior to that of previous papers.
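The simulated-annealing search over candidate encodings can be sketched generically: flip one element of the current solution, and accept worsening moves with a probability that decays with temperature. The cost function and cooling schedule below are placeholders for the paper's display-quality objective, not its actual formulation:

```python
import random
from math import exp

def simulated_annealing(cost, n_bits, T0=1.0, cooling=0.95, steps=500, seed=7):
    """Minimize cost(bit_vector) by flipping one random bit per step and
    accepting worse moves with probability exp(-delta/T)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    cx = cost(x)
    best, cbest = list(x), cx
    T = T0
    for _ in range(steps):
        i = rng.randrange(n_bits)
        x[i] ^= 1                      # neighbor move: flip one bit
        cn = cost(x)
        if cn <= cx or rng.random() < exp((cx - cn) / T):
            cx = cn                    # accept the move
            if cn < cbest:
                best, cbest = list(x), cn
        else:
            x[i] ^= 1                  # reject: undo the flip
        T *= cooling                   # geometric cooling schedule
    return best, cbest

# Toy cost: Hamming distance to a target pattern, standing in for the
# display-quality constraints on the encryption column vectors.
target = [1, 0, 1, 1, 0, 0, 1, 0]
best, c = simulated_annealing(lambda v: sum(a != b for a, b in zip(v, target)), 8)
```

In the paper the state would be a set of column vectors and the cost would combine contrast and access-structure feasibility terms.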

  16. Data Collection for Mobile Group Consumption: An Asynchronous Distributed Approach.

    PubMed

    Zhu, Weiping; Chen, Weiran; Hu, Zhejie; Li, Zuoyou; Liang, Yue; Chen, Jiaojiao

    2016-04-06

    Mobile group consumption refers to consumption by a group of people, such as a couple, a family, colleagues and friends, based on mobile communications. It differs from consumption only involving individuals, because of the complex relations among group members. Existing data collection systems for mobile group consumption are centralized, which has the disadvantages of being a performance bottleneck, having single-point failure and increasing business and security risks. Moreover, these data collection systems are based on a synchronized clock, which is often unrealistic because of hardware constraints, privacy concerns or synchronization cost. In this paper, we propose the first asynchronous distributed approach to collecting data generated by mobile group consumption. We formally built a system model thereof based on asynchronous distributed communication. We then designed a simulation system for the model for which we propose a three-layer solution framework. After that, we describe how to detect the causality relation of two/three gathering events that happened in the system based on the collected data. Various definitions of causality relations based on asynchronous distributed communication are supported. Extensive simulation results show that the proposed approach is effective for data collection relating to mobile group consumption.
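Causality between events in an asynchronous system, without a synchronized clock, is conventionally established with the happened-before relation over vector clocks. A standard sketch (the paper's exact detection mechanism is not specified here):

```python
def merge(vc_a, vc_b):
    """Component-wise maximum: the receive rule for vector clocks."""
    return [max(a, b) for a, b in zip(vc_a, vc_b)]

def happened_before(vc_a, vc_b):
    """Event A causally precedes event B iff A's clock is component-wise
    <= B's and strictly smaller in at least one component."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def concurrent(vc_a, vc_b):
    """Neither event causally precedes the other."""
    return not happened_before(vc_a, vc_b) and not happened_before(vc_b, vc_a)

# Two processes: P0 sends after its first event; P1 merges on receive
# and then increments its own entry.
send = [1, 0]
recv = merge(send, [0, 1])
recv[1] += 1                  # recv is now [1, 2]
later = [2, 0]                # an independent later event on P0
```

With such clocks attached to gathering events, the causal or concurrent relation of any two or three events can be decided offline from the collected data.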

  17. Data Collection for Mobile Group Consumption: An Asynchronous Distributed Approach †

    PubMed Central

    Zhu, Weiping; Chen, Weiran; Hu, Zhejie; Li, Zuoyou; Liang, Yue; Chen, Jiaojiao

    2016-01-01

    Mobile group consumption refers to consumption by a group of people, such as a couple, a family, colleagues and friends, based on mobile communications. It differs from consumption only involving individuals, because of the complex relations among group members. Existing data collection systems for mobile group consumption are centralized, which has the disadvantages of being a performance bottleneck, having single-point failure and increasing business and security risks. Moreover, these data collection systems are based on a synchronized clock, which is often unrealistic because of hardware constraints, privacy concerns or synchronization cost. In this paper, we propose the first asynchronous distributed approach to collecting data generated by mobile group consumption. We formally built a system model thereof based on asynchronous distributed communication. We then designed a simulation system for the model for which we propose a three-layer solution framework. After that, we describe how to detect the causality relation of two/three gathering events that happened in the system based on the collected data. Various definitions of causality relations based on asynchronous distributed communication are supported. Extensive simulation results show that the proposed approach is effective for data collection relating to mobile group consumption. PMID:27058544

  18. Enriching mission planning approach with state transition graph heuristics for deep space exploration

    NASA Astrophysics Data System (ADS)

    Jin, Hao; Xu, Rui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2017-10-01

To support China's Mars exploration mission, automated mission planning is required to enhance the security and robustness of the deep space probe. Deep space mission planning requires modeling of complex operations constraints and focuses on the temporal state transitions of the subsystems involved. State transitions are ubiquitous in physical systems but have been elusive to capture in knowledge descriptions. We introduce a modeling approach that copes with these difficulties by taking state transitions into consideration. The key techniques we build on are the notions of extended states and state transition graphs. Furthermore, a heuristic based on state transition graphs is proposed to avoid redundant work. Finally, we run comprehensive experiments on selected domains, in which our techniques show excellent performance.
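One way to realize a state-transition-graph heuristic is the shortest-path distance from a subsystem's current state to its goal state in its transition graph, which never overestimates the number of transitions still required. A minimal breadth-first sketch (the camera subsystem graph below is illustrative, not from the paper):

```python
from collections import deque

def stg_heuristic(graph, state, goal):
    """Breadth-first distance from state to goal in a state transition
    graph {state: [successor states]}; returns None if unreachable."""
    frontier = deque([(state, 0)])
    seen = {state}
    while frontier:
        s, d = frontier.popleft()
        if s == goal:
            return d
        for nxt in graph.get(s, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

# Toy instrument subsystem: off -> warming -> standby -> imaging.
camera = {
    "off": ["warming"],
    "warming": ["standby", "off"],
    "standby": ["imaging", "off"],
    "imaging": ["standby"],
}
h = stg_heuristic(camera, "off", "imaging")
```

Summing such per-subsystem distances gives a planner an admissible estimate of remaining work, pruning branches that would revisit redundant transition sequences.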

  19. Towards accurate modelling of galaxy clustering on small scales: testing the standard ΛCDM + halo model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-07-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter haloes. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the `accurate' regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard Λ cold dark matter (ΛCDM) + halo model against the clustering of Sloan Digital Sky Survey (SDSS) seventh data release (DR7) galaxies. Specifically, we use the projected correlation function, group multiplicity function, and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir haloes) matches the clustering of low-luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the `standard' halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  20. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
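The core linear-algebra move in such PDE-constrained solvers is a Schur-complement factorization of the saddle-point (KKT) system that couples the optimization variables to the simulation constraint. A dense toy version is sketched below; the reverse-Schur algorithm itself differs in which block it eliminates and is not reproduced here:

```python
import numpy as np

def solve_saddle_point(A, C, b, d):
    """Solve [[A, C], [C.T, 0]] [x; y] = [b; d] by eliminating x first:
    S = C.T A^-1 C (the Schur complement), then back-substitute.
    Assumes A is symmetric positive definite."""
    Ainv_C = np.linalg.solve(A, C)
    Ainv_b = np.linalg.solve(A, b)
    S = Ainv_C.T @ C * 0 + C.T @ Ainv_C        # Schur complement C.T A^-1 C
    y = np.linalg.solve(S, C.T @ Ainv_b - d)   # solve the reduced system
    x = Ainv_b - Ainv_C @ y                    # recover the eliminated block
    return x, y

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)                    # SPD "simulation" block
C = rng.standard_normal((4, 2))                # constraint coupling block
b, d = rng.standard_normal(4), rng.standard_normal(2)
x, y = solve_saddle_point(A, C, b, d)
```

The speedup reported in the abstract comes from never forming these inverses explicitly and instead reusing the simulation solver inside the optimization loop.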

  1. A “Reverse-Schur” Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design

    PubMed Central

    Bardhan, Jaydeep P.; Altman, Michael D.

    2009-01-01

We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule’s electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method. PMID:23055839

  2. Multi-View Budgeted Learning under Label and Feature Constraints Using Label-Guided Graph-Based Regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Symons, Christopher T; Arel, Itamar

    2011-01-01

Budgeted learning under constraints on both the amount of labeled information and the availability of features at test time pertains to a large number of real world problems. Ideas from multi-view learning, semi-supervised learning, and even active learning have applicability, but a common framework whose assumptions fit these problem spaces is non-trivial to construct. We leverage ideas from these fields based on graph regularizers to construct a robust framework for learning from labeled and unlabeled samples in multiple views that are non-independent and include features that are inaccessible at the time the model would need to be applied. We describe examples of applications that fit this scenario, and we provide experimental results to demonstrate the effectiveness of knowledge carryover from training-only views. As learning algorithms are applied to more complex applications, relevant information can be found in a wider variety of forms, and the relationships between these information sources are often quite complex. The assumptions that underlie most learning algorithms do not readily or realistically permit the incorporation of many of the data sources that are available, despite an implicit understanding that useful information exists in these sources. When multiple information sources are available, they are often partially redundant, highly interdependent, and contain noise as well as other information that is irrelevant to the problem under study. In this paper, we are focused on a framework whose assumptions match this reality, as well as the reality that labeled information is usually sparse. Most significantly, we are interested in a framework that can also leverage information in scenarios where many features that would be useful for learning a model are not available when the resulting model will be applied. As with constraints on labels, there are many practical limitations on the acquisition of potentially useful features.
A key difference in the case of feature acquisition is that the same constraints often don't pertain to the training samples. This difference provides an opportunity to allow features that are impractical in an applied setting to nevertheless add value during the model-building process. Unfortunately, there are few machine learning frameworks built on assumptions that allow effective utilization of features that are only available at training time. In this paper we formulate a knowledge carryover framework for the budgeted learning scenario with constraints on features and labels. The approach is based on multi-view and semi-supervised learning methods that use graph-encoded regularization. Our main contributions are the following: (1) we propose and provide justification for a methodology for ensuring that changes in the graph regularizer using alternate views are performed in a manner that is target-concept specific, allowing value to be obtained from noisy views; and (2) we demonstrate how this general set-up can be used to effectively improve models by leveraging features unavailable at test time. The rest of the paper is structured as follows. In Section 2, we outline real-world problems to motivate the approach and describe relevant prior work. Section 3 describes the graph construction process and the learning methodologies that are employed. Section 4 provides preliminary discussion regarding theoretical motivation for the method. In Section 5, effectiveness of the approach is demonstrated in a series of experiments employing modified versions of two well-known semi-supervised learning algorithms. Section 6 concludes the paper.
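Graph-encoded regularization of this kind typically minimizes a fit term on the labeled nodes plus lambda * f^T L f, where L is the graph Laplacian, so that predictions vary smoothly over the similarity graph. The closed-form solution on a toy chain graph (a generic single-view sketch, not the paper's multi-view construction):

```python
import numpy as np

def laplacian_ssl(W, labels, lam=1.0):
    """Minimize sum over labeled i of (f_i - y_i)^2 + lam * f^T L f.
    W: symmetric adjacency matrix; labels: {node index: +1/-1}.
    Closed form: (M + lam*L) f = M y, where M masks the labeled nodes."""
    n = len(W)
    L = np.diag(W.sum(axis=1)) - W          # graph Laplacian
    M = np.zeros((n, n))
    y = np.zeros(n)
    for i, v in labels.items():
        M[i, i] = 1.0
        y[i] = v
    return np.linalg.solve(M + lam * L, M @ y)

# Chain 0-1-2-3 with node 0 labeled +1 and node 3 labeled -1:
# the labels diffuse smoothly to the unlabeled interior nodes.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
f = laplacian_ssl(W, {0: +1, 3: -1})
```

In the multi-view setting, the edge weights in W would be re-estimated from the training-only views, which is where the knowledge carryover enters.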

  3. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. 
These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
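Under the union bound, a joint chance constraint can be decomposed into individual constraints, each tightened by a Gaussian quantile of its allocated risk. A minimal sketch using the standard-normal inverse CDF (risk is allocated uniformly here, unlike the optimized allocation and branch-and-bound machinery of the actual method):

```python
from statistics import NormalDist

def tighten(limits, sigmas, total_risk):
    """Decompose P(any y_i > c_i) <= total_risk via Boole's inequality:
    allocate risk uniformly, then replace each stochastic constraint
    y_i <= c_i, with y_i ~ N(x_i, sigma_i^2), by the deterministic
    margin x_i <= c_i - sigma_i * Phi^{-1}(1 - delta_i)."""
    delta = total_risk / len(limits)
    q = NormalDist().inv_cdf(1 - delta)     # standard-normal quantile
    return [c - s * q for c, s in zip(limits, sigmas)]

# Two position limits at 10 m with 1 m and 2 m of Gaussian uncertainty,
# overall failure probability capped at 1%.
margins = tighten([10.0, 10.0], [1.0, 2.0], total_risk=0.01)
```

The resulting deterministic margins are a sufficient condition for the original chance constraint, which is what makes the decomposition sound inside branch and bound.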

  4. Optimization Control of the Color-Coating Production Process for Model Uncertainty

    PubMed Central

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
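The iterative learning control step described above updates the whole control trajectory between batches using the previous batch's tracking error. A toy P-type sketch on a first-order plant (the plant, gains, and reference are illustrative, not the coating-line model):

```python
def run_plant(u, a=0.2, b=1.0):
    """First-order plant y[t+1] = a*y[t] + b*u[t], zero initial state."""
    y = [0.0]
    for t in range(len(u)):
        y.append(a * y[-1] + b * u[t])
    return y

def ilc(reference, gamma=0.5, iterations=50):
    """P-type iterative learning control: after each batch, correct the
    input trajectory with the shifted tracking error, u[t] += gamma*e[t+1]."""
    T = len(reference) - 1
    u = [0.0] * T
    for _ in range(iterations):
        y = run_plant(u)
        e = [r - yt for r, yt in zip(reference, y)]
        u = [u[t] + gamma * e[t + 1] for t in range(T)]
    return u, max(abs(v) for v in e[1:])

# Track a step reference; batch-to-batch error contracts because
# |1 - gamma*b| < 1 for this plant.
ref = [0.0] + [1.0] * 10
u, err = ilc(ref)
```

In the paper the same batch-to-batch update is used to drive the constrained film thickness, recast as a tracked target, toward its setting despite model uncertainty.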

  5. Optimization Control of the Color-Coating Production Process for Model Uncertainty.

    PubMed

    He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong

    2016-01-01

    Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results.

  6. Risk analysis for renewable energy projects due to constraints arising

    NASA Astrophysics Data System (ADS)

    Prostean, G.; Vasar, C.; Prostean, O.; Vartosu, A.

    2016-02-01

    Starting from the European Union (EU) binding target of 20% renewable energy in final energy consumption by 2020, this article illustrates the identification of risks in implementing wind energy projects in Romania, risks with complex technical, social and administrative implications. For the specific projects analyzed in this paper, critical bottlenecks in the future wind power supply chain were identified, along with the reasonable time periods in which they may arise. Renewable energy technologies face a number of constraints that delay scaling up their production processes, their transport processes, their equipment reliability, etc., so implementing these types of projects requires complex specialized teams whose coordination also involves specific risks. The research team applied an analytical risk approach to identify the major risks encountered within a wind farm project developed in isolated regions of Romania with different particularities, configured for different geographical areas (hill and mountain locations). Identification of major risks was based on a conceptual model set up for the entire project implementation process, throughout which the specific constraints of that process were identified. Integration risks were examined in an empirical study based on the HAZOP (Hazard and Operability) method. The discussion analyzes these results in the implementation context of renewable energy projects in Romania and creates a framework for assessing energy supply from renewable sources to any entity.

  7. From Network Analysis to Functional Metabolic Modeling of the Human Gut Microbiota.

    PubMed

    Bauer, Eugen; Thiele, Ines

    2018-01-01

    An important hallmark of the human gut microbiota is its species diversity and complexity. Various diseases have been associated with a decreased diversity leading to reduced metabolic functionalities. Common approaches to investigate the human microbiota include high-throughput sequencing with subsequent correlative analyses. However, to understand the ecology of the human gut microbiota and consequently design novel treatments for diseases, it is important to represent the different interactions between microbes with their associated metabolites. Computational systems biology approaches can give further mechanistic insights by constructing data- or knowledge-driven networks that represent microbe interactions. In this minireview, we will discuss current approaches in systems biology to analyze the human gut microbiota, with a particular focus on constraint-based modeling. We will discuss various community modeling techniques with their advantages and differences, as well as their application to predict the metabolic mechanisms of intestinal microbial communities. Finally, we will discuss future perspectives and current challenges of simulating realistic and comprehensive models of the human gut microbiota.
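
    At its core, the constraint-based modeling this minireview discusses reduces to flux balance analysis: a linear program over reaction fluxes v that maximizes a biomass objective subject to the steady-state constraint S v = 0 and flux bounds. A minimal sketch over a hypothetical three-reaction network, with `scipy.optimize.linprog` standing in for dedicated COBRA toolkits:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): rows = metabolites (A, B), columns = reactions.
#   R1: -> A (nutrient uptake), R2: A -> B, R3: B -> (biomass drain)
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]  # uptake capped at 10 flux units

# Flux balance analysis: maximize the biomass flux v3 subject to the
# steady-state constraint S @ v = 0. linprog minimizes, so negate c.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
biomass_flux = -res.fun  # optimal growth-proxy flux, limited here by uptake
```

    Community modeling techniques extend this picture by coupling such linear programs across many microbial genome-scale models and shared exchange metabolites.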

  8. Supporting Collaborative Learning and Problem-Solving in a Constraint-Based CSCL Environment for UML Class Diagrams

    ERIC Educational Resources Information Center

    Baghaei, Nilufar; Mitrovic, Antonija; Irwin, Warwick

    2007-01-01

    We present COLLECT-UML, a constraint-based intelligent tutoring system (ITS) that teaches object-oriented analysis and design using Unified Modelling Language (UML). UML is easily the most popular object-oriented modelling technology in current practice. While teaching how to design UML class diagrams, COLLECT-UML also provides feedback on…

  9. Transfer and Use of Training Technology: A Model for Matching Training Approaches with Training Settings. Technical Report No. 74-24.

    ERIC Educational Resources Information Center

    Haverland, Edgar M.

    The report describes a project designed to facilitate the transfer and utilization of training technology by developing a model for evaluating training approaches or innovations in relation to the requirements, resources, and constraints of specific training settings. The model consists of two parallel sets of open-ended questions--one set…

  10. Supervised variational model with statistical inference and its application in medical image segmentation.

    PubMed

    Li, Changyang; Wang, Xiuying; Eberl, Stefan; Fulham, Michael; Yin, Yong; Dagan Feng, David

    2015-01-01

    Automated and general medical image segmentation can be challenging because the foreground and the background may have complicated and overlapping density distributions in medical imaging. Conventional region-based level set algorithms often assume piecewise-constant or piecewise-smooth intensities for segments, which is implausible for general medical image segmentation. Furthermore, low contrast and noise make identification of the boundaries between foreground and background difficult for edge-based level set algorithms. Thus, to address these problems, we propose a supervised variational level set segmentation model that harnesses a statistical region energy functional with a weighted probability approximation. Our approach models the region density distributions using a mixture-of-mixtures Gaussian model to better approximate real intensity distributions and distinguish statistical intensity differences between foreground and background. The region-based statistical model in our algorithm can intuitively provide better performance on noisy images. We constructed a weighted probability map on graphs to incorporate spatial indications from user input with a contextual constraint based on the minimization of a contextual graph energy functional. We measured the performance of our approach on ten noisy synthetic images and 58 medical datasets with heterogeneous intensities and ill-defined boundaries, and compared our technique to the Chan-Vese region-based level set model, the geodesic active contour model with distance regularization, and the random walker model. Our method consistently achieved the highest Dice similarity coefficient when compared to the other methods.
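
    The mixture-of-mixtures idea, in which each region's intensity distribution is itself a Gaussian mixture so the two regions may be multi-modal and overlapping, can be sketched as follows. The mixture parameters and intensity values are invented for illustration; the paper embeds this density comparison inside a variational level set framework rather than per-pixel classification:

```python
import math

def gauss(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def mixture_density(x, components):
    """components: list of (weight, mu, sigma) tuples; weights sum to 1."""
    return sum(w * gauss(x, mu, s) for w, mu, s in components)

# Hypothetical region models: each region is itself a Gaussian mixture,
# so regions can have overlapping, multi-modal intensity profiles.
foreground = [(0.6, 120.0, 10.0), (0.4, 160.0, 15.0)]
background = [(0.7, 60.0, 20.0), (0.3, 140.0, 25.0)]

def classify_pixel(intensity):
    """Assign a pixel intensity to the region with the larger modeled density."""
    fg = mixture_density(intensity, foreground)
    bg = mixture_density(intensity, background)
    return "fg" if fg > bg else "bg"
```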

  11. Deep Drawing Simulations With Different Polycrystalline Models

    NASA Astrophysics Data System (ADS)

    Duchêne, Laurent; de Montleau, Pierre; Bouvier, Salima; Habraken, Anne Marie

    2004-06-01

    The goal of this research is to study the anisotropic material behavior during forming processes, represented by both complex yield loci and kinematic-isotropic hardening models. A first part of this paper describes the main concepts of the `Stress-strain interpolation' model that has been implemented in the non-linear finite element code Lagamine. This model consists of a local description of the yield locus based on the texture of the material through the full constraints Taylor's model. The texture evolution due to plastic deformations is computed throughout the FEM simulations. This `local yield locus' approach was initially linked to the classical isotropic Swift hardening law. Recently, a more complex hardening model was implemented: the physically-based microstructural model of Teodosiu. It takes into account intergranular heterogeneity due to the evolution of dislocation structures, that affects isotropic and kinematic hardening. The influence of the hardening model is compared to the influence of the texture evolution thanks to deep drawing simulations.

  12. Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Pan, Feng

    2007-09-01

    In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and moving pattern of the nodes, as well as the link conditions between neighboring nodes. Then, for a given multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we develop three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical results demonstrate that, using the routes discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.
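
    The objective here, choosing the route that maximizes the probability of meeting a delay deadline, can be sketched under a simplifying assumption of independent Gaussian link delays (so a path's total delay is also Gaussian). The graph, delay figures, and function names are hypothetical, and exhaustive path enumeration stands in for the paper's centralized reference approach:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical link model: (mean delay, delay variance) per directed link.
links = {
    ("S", "A"): (10.0, 4.0), ("A", "D"): (10.0, 4.0),
    ("S", "B"): (8.0, 25.0), ("B", "D"): (8.0, 25.0),
}

def all_paths(src, dst, path=None):
    """Enumerate simple paths from src to dst."""
    path = [src] if path is None else path
    if src == dst:
        yield path
        return
    for (u, v) in links:
        if u == src and v not in path:
            yield from all_paths(v, dst, path + [v])

def best_route(src, dst, deadline):
    """Route maximizing P(total delay <= deadline) under the Gaussian model."""
    def prob(path):
        mu = sum(links[(u, v)][0] for u, v in zip(path, path[1:]))
        var = sum(links[(u, v)][1] for u, v in zip(path, path[1:]))
        return normal_cdf((deadline - mu) / math.sqrt(var))
    return max(all_paths(src, dst), key=prob)
```

    With a tight deadline the lower-mean but noisier route wins; with a loose deadline the low-variance route becomes preferable, which is the soft-QoS trade-off the formulation captures.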

  13. The Galactic Isotropic γ-ray Background and Implications for Dark Matter

    NASA Astrophysics Data System (ADS)

    Campbell, Sheldon S.; Kwa, Anna; Kaplinghat, Manoj

    2018-06-01

    We present an analysis of the radial angular profile of the galacto-isotropic (GI) γ-ray flux, i.e. the statistically uniform flux in angular annuli centred on the Galactic centre. Two different approaches are used to measure the GI flux profile in 85 months of Fermi-LAT data: the BDS statistical method which identifies spatial correlations, and a new Poisson ordered-pixel method which identifies non-Poisson contributions. Both methods produce similar GI flux profiles. The GI flux profile is well-described by an existing model of bremsstrahlung, π0 production, inverse Compton scattering, and the isotropic background. Discrepancies with data in our full-sky model are not present in the GI component, and are therefore due to mis-modelling of the non-GI emission. Dark matter annihilation constraints based solely on the observed GI profile are close to the thermal WIMP cross section below 100 GeV, for fixed models of the dark matter density profile and astrophysical γ-ray foregrounds. Refined measurements of the GI profile are expected to improve these constraints by a factor of a few.

  14. Radiation dose constraints for organs at risk in neuro-oncology; the European Particle Therapy Network consensus.

    PubMed

    Lambrecht, Maarten; Eekers, Daniëlle B P; Alapetite, Claire; Burnet, Neil G; Calugaru, Valentin; Coremans, Ida E M; Fossati, Piero; Høyer, Morten; Langendijk, Johannes A; Romero, Alejandra Méndez; Paulsen, Frank; Perpar, Ana; Renard, Laurette; de Ruysscher, Dirk; Timmermann, Beate; Vitek, Pavel; Weber, Damien C; van der Weide, Hiske L; Whitfield, Gillian A; Wiggenraad, Ruud; Roelofs, Erik; Nyström, Petra Witt; Troost, Esther G C

    2018-05-17

    For unbiased comparison of different radiation modalities and techniques, consensus on delineation of radiation sensitive organs at risk (OARs) and on their dose constraints is warranted. Following the publication of a digital, online atlas for OAR delineation in neuro-oncology by the same group, we assessed the brain OAR-dose constraints in a follow-up study. We performed a comprehensive search to identify the current papers on OAR dose constraints for normofractionated photon and particle therapy in PubMed, Ovid Medline, Cochrane Library, Embase and Web of Science. Moreover, the included articles' reference lists were cross-checked for potential studies that met the inclusion criteria. Consensus was reached among 20 radiation oncology experts in the field of neuro-oncology. For the OARs published in the neuro-oncology literature, we summarized the available literature and recommended dose constraints associated with certain levels of normal tissue complication probability (NTCP) according to the recent ICRU recommendations. For those OARs with lacking or insufficient NTCP data, a proposal for effective and efficient data collection is given. The use of the European Particle Therapy Network-consensus OAR dose constraints summarized in this article is recommended for the model-based approach comparing photon and proton beam irradiation as well as for prospective clinical trials including novel radiation techniques and/or modalities. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery.

    PubMed

    Sakamoto, Takuto

    2016-01-01

    Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level.

  16. Computational Research on Mobile Pastoralism Using Agent-Based Modeling and Satellite Imagery

    PubMed Central

    Sakamoto, Takuto

    2016-01-01

    Dryland pastoralism has long attracted considerable attention from researchers in diverse fields. However, rigorous formal study is made difficult by the high level of mobility of pastoralists as well as by the sizable spatio-temporal variability of their environment. This article presents a new computational approach for studying mobile pastoralism that overcomes these issues. Combining multi-temporal satellite images and agent-based modeling allows a comprehensive examination of pastoral resource access over a realistic dryland landscape with unpredictable ecological dynamics. The article demonstrates the analytical potential of this approach through its application to mobile pastoralism in northeast Nigeria. Employing more than 100 satellite images of the area, extensive simulations are conducted under a wide array of circumstances, including different land-use constraints. The simulation results reveal complex dependencies of pastoral resource access on these circumstances along with persistent patterns of seasonal land use observed at the macro level. PMID:26963526

  17. Modeling helical proteins using residual dipolar couplings, sparse long-range distance constraints and a simple residue-based force field

    PubMed Central

    Eggimann, Becky L.; Vostrikov, Vitaly V.; Veglia, Gianluigi; Siepmann, J. Ilja

    2013-01-01

    We present a fast and simple protocol to obtain moderate-resolution backbone structures of helical proteins. This approach utilizes a combination of sparse backbone NMR data (residual dipolar couplings and paramagnetic relaxation enhancements) or EPR data with a residue-based force field and Monte Carlo/simulated annealing protocol to explore the folding energy landscape of helical proteins. By using only backbone NMR data, which are relatively easy to collect and analyze, and strategically placed spin relaxation probes, we show that it is possible to obtain protein structures with correct helical topology and backbone RMS deviations well below 4 Å. This approach offers promising alternatives for the structural determination of proteins in which nuclear Overhauser effect data are difficult or impossible to assign and produces initial models that will speed up the high-resolution structure determination by NMR spectroscopy. PMID:24639619

  18. Quantification and Segmentation of Brain Tissues from MR Images: A Probabilistic Neural Network Approach

    PubMed Central

    Wang, Yue; Adalı, Tülay; Kung, Sun-Yuan; Szabo, Zsolt

    2007-01-01

    This paper presents a probabilistic neural network based technique for unsupervised quantification and segmentation of brain tissues from magnetic resonance images. It is shown that this problem can be solved by distribution learning and relaxation labeling, resulting in an efficient method that may be particularly useful in quantifying and segmenting abnormal brain tissues where the number of tissue types is unknown and the distributions of tissue types heavily overlap. The new technique uses suitable statistical models for both the pixel and context images and formulates the problem in terms of model-histogram fitting and global consistency labeling. The quantification is achieved by probabilistic self-organizing mixtures and the segmentation by a probabilistic constraint relaxation network. The experimental results show the efficient and robust performance of the new algorithm and that it outperforms the conventional classification based approaches. PMID:18172510

  19. Market-Based Coordination of Thermostatically Controlled Loads—Part I: A Mechanism Design Formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    This paper focuses on the coordination of a population of Thermostatically Controlled Loads (TCLs) with unknown parameters to achieve group objectives. The problem involves designing the bidding and market clearing strategy to motivate self-interested users to realize efficient energy allocation subject to a peak power constraint. Using the mechanism design approach, we propose a market-based coordination framework, which can effectively incorporate heterogeneous load dynamics, systematically deal with user preferences, account for the unknown load model parameters, and enable the real-world implementation with limited communication resources. This paper is divided into two parts. Part I presents a mathematical formulation of the problem and develops a coordination framework using the mechanism design approach. Part II presents a learning scheme to account for the unknown load model parameters, and evaluates the proposed framework through realistic simulations.
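
    A toy uniform-price clearing under a peak power constraint gives a feel for the market side of the problem. This is not the paper's actual mechanism, which additionally handles load dynamics, user preferences, and incentive properties; the bid values and names below are invented:

```python
def clear_market(bids, peak_power):
    """Greedy uniform-price clearing: admit loads in order of willingness to
    pay until the peak power constraint binds; the first rejected (marginal)
    bid sets the clearing price.  bids: list of (load_id, price, power)."""
    ordered = sorted(bids, key=lambda b: b[1], reverse=True)
    admitted, used = [], 0.0
    clearing_price = 0.0
    for load_id, price, power in ordered:
        if used + power <= peak_power:
            admitted.append(load_id)
            used += power
        else:
            clearing_price = price  # marginal rejected bid sets the price
            break
    return admitted, clearing_price

# Hypothetical TCL bids: (id, price willing to pay, power demand).
bids = [("tcl1", 0.30, 3.0), ("tcl2", 0.25, 4.0), ("tcl3", 0.20, 5.0)]
admitted, price = clear_market(bids, peak_power=8.0)
```

    The greedy stop at the first rejection is a deliberate simplification; a real mechanism would also consider smaller loads beyond the marginal bid and reason about truthful bidding.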

  20. Expanding Metabolic Engineering Algorithms Using Feasible Space and Shadow Price Constraint Modules

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2014-01-01

    While numerous computational methods have been developed that use genome-scale models to propose mutants for the purpose of metabolic engineering, they generally compare mutants based on a single criterion (e.g., production rate at a mutant’s maximum growth rate). As such, these approaches remain limited in their ability to include multiple complex engineering constraints. To address this shortcoming, we have developed feasible space and shadow price constraint (FaceCon and ShadowCon) modules that can be added to existing mixed integer linear adaptive evolution metabolic engineering algorithms, such as OptKnock and OptORF. These modules allow strain designs to be identified amongst a set of multiple metabolic engineering algorithm solutions that are capable of high chemical production while also satisfying additional design criteria. We describe the various module implementations and their potential applications to the field of metabolic engineering. We then incorporated these modules into the OptORF metabolic engineering algorithm. Using an Escherichia coli genome-scale model (iJO1366), we generated different strain designs for the anaerobic production of ethanol from glucose, thus demonstrating the tractability and potential utility of these modules in metabolic engineering algorithms. PMID:25478320

  1. Final Report - Regulatory Considerations for Adaptive Systems

    NASA Technical Reports Server (NTRS)

    Wilkinson, Chris; Lynch, Jonathan; Bharadwaj, Raj

    2013-01-01

    This report documents the findings of a preliminary research study into new approaches to the software design assurance of adaptive systems. We suggest a methodology to overcome the software validation and verification difficulties posed by the underlying assumption of non-adaptive software in the requirements-based-testing verification methods in RTCA/DO-178B and C. An analysis of the relevant RTCA/DO-178B and C objectives is presented showing the reasons for the difficulties that arise in showing satisfaction of the objectives and suggested additional means by which they could be satisfied. We suggest that the software design assurance problem for adaptive systems is principally one of developing correct and complete high level requirements and system level constraints that define the necessary system functional and safety properties to assure the safe use of adaptive systems. We show how analytical techniques such as model based design, mathematical modeling and formal or formal-like methods can be used to both validate the high level functional and safety requirements, establish necessary constraints and provide the verification evidence for the satisfaction of requirements and constraints that supplements conventional testing. Finally the report identifies the follow-on research topics needed to implement this methodology.

  2. Genetic constraints on adaptation: a theoretical primer for the genomics era.

    PubMed

    Connallon, Tim; Hall, Matthew D

    2018-06-01

    Genetic constraints are features of inheritance systems that slow or prohibit adaptation. Several population genetic mechanisms of constraint have received sustained attention within the field since they were first articulated in the early 20th century. This attention is now reflected in a rich, and still growing, theoretical literature on the genetic limits to adaptive change. In turn, empirical research on constraints has seen a rapid expansion over the last two decades in response to changing interests of evolutionary biologists, along with new technologies, expanding data sets, and creative analytical approaches that blend mathematical modeling with genomics. Indeed, one of the most notable and exciting features of recent progress in genetic constraints is the close connection between theoretical and empirical research. In this review, we discuss five major population genetic contexts of genetic constraint: genetic dominance, pleiotropy, fitness trade-offs between types of individuals of a population, sign epistasis, and genetic linkage between loci. For each, we outline historical antecedents of the theory, specific contexts where constraints manifest, and their quantitative consequences for adaptation. From each of these theoretical foundations, we discuss recent empirical approaches for identifying and characterizing genetic constraints, each grounded and motivated by this theory, and outline promising areas for future work. © 2018 New York Academy of Sciences.

  3. Laplace-Beltrami operator and exact solutions for branes

    NASA Astrophysics Data System (ADS)

    Zheltukhin, A. A.

    2013-02-01

    Proposed is a new approach to finding exact solutions of nonlinear p-brane equations in D-dimensional Minkowski space, based on the use of various initial value constraints. It is shown that the constraints Δx = 0 and Δx = -Λ(t, σ^r) x, where Δ is the Laplace-Beltrami operator acting on the brane embedding vector x(t, σ^r), give two sets of exact solutions.

  4. Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil

    1992-01-01

    This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
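
    The repair loop described above, starting from an inconsistent assignment and repeatedly moving a conflicted variable to its min-conflicts value, is easy to sketch for n-queens. A minimal version with random tie-breaking (step limit and seed are arbitrary choices, not from the paper):

```python
import random

def min_conflicts_nqueens(n, max_steps=100_000, seed=1):
    """Min-conflicts repair for n-queens: begin with a random (inconsistent)
    assignment, then repeatedly move a conflicted queen to the row in its
    column that minimizes the number of constraint violations."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]  # rows[c] = row of queen in column c

    def conflicts(col, row):
        """Violations if the queen in `col` were placed at `row`."""
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows  # consistent solution found
        col = rng.choice(conflicted)
        # min-conflicts value-ordering heuristic, ties broken randomly
        rows[col] = min(range(n), key=lambda r: (conflicts(col, r), rng.random()))
    return None  # step limit reached (rare for n-queens)
```

    In practice this repair strategy solves very large n-queens instances in a near-linear number of steps, which is the "orders of magnitude better than backtracking" behavior the abstract reports.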

  5. Kinetic analysis of reactions of Si-based epoxy resins by near-infrared spectroscopy, 13C NMR and soft-hard modelling.

    PubMed

    Garrido, Mariano; Larrechi, Maria Soledad; Rius, F Xavier; Mercado, Luis Adolfo; Galià, Marina

    2007-02-05

    A soft- and hard-modelling strategy was applied to near-infrared spectroscopy data obtained from monitoring the reaction between glycidyloxydimethylphenyl silane, a silicon-based epoxy monomer, and aniline. On the basis of the pure soft-modelling approach and previous chemical knowledge, a kinetic model for the reaction was proposed. Then, multivariate curve resolution-alternating least squares optimization was carried out under a hard constraint that compels the concentration profiles to fulfil the proposed kinetic model at each iteration of the optimization process. In this way, the concentration profiles of each species and the corresponding kinetic rate constants of the reaction, unpublished until now, were obtained. The results obtained were contrasted with 13C NMR data. The joint interval test of slope and intercept for detecting bias was not significant (alpha = 5%).

  6. Development and validation of a computational model to study the effect of foot constraint on ankle injury due to external rotation.

    PubMed

    Wei, Feng; Hunley, Stanley C; Powell, John W; Haut, Roger C

    2011-02-01

    Recent studies, using two different manners of foot constraint, potted and taped, document altered failure characteristics in the human cadaver ankle under controlled external rotation of the foot. The posterior talofibular ligament (PTaFL) was commonly injured when the foot was constrained in potting material, while the frequency of deltoid ligament injury was higher for the taped foot. In this study an existing multibody computational modeling approach was validated to include the influence of foot constraint, determine the kinematics of the joint under external foot rotation, and consequently obtain strains in various ligaments. It was hypothesized that the location of ankle injury due to excessive levels of external foot rotation is a function of foot constraint. The results from this model simulation supported this hypothesis and helped to explain the mechanisms of injury in the cadaver experiments. An excessive external foot rotation might generate a PTaFL injury for a rigid foot constraint, and an anterior deltoid ligament injury for a pliant foot constraint. The computational models may be further developed and modified to simulate the human response for different shoe designs, as well as on various athletic shoe-surface interfaces, so as to provide a computational basis for optimizing athletic performance with minimal injury risk.

  7. An Approximation Solution to Refinery Crude Oil Scheduling Problem with Demand Uncertainty Using Joint Constrained Programming

    PubMed Central

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations that takes demand uncertainty into consideration. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints usually exhibit strong nonlinearity and are consequently still hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation. PMID:24757433
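
    What a joint chance constraint asks for can be made concrete with a Monte Carlo check, separate from the paper's relax-and-tight linearization: a supply plan is feasible only if all product demands are met simultaneously with at least the specified probability. The demand distribution and numbers below are invented for illustration:

```python
import numpy as np

def joint_satisfaction_prob(supply, mean, cov, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(supply_i >= demand_i for ALL products i),
    with demands drawn from a correlated multivariate normal distribution."""
    rng = np.random.default_rng(seed)
    demand = rng.multivariate_normal(mean, cov, size=n_samples)
    return float(np.mean(np.all(demand <= supply, axis=1)))

# Hypothetical two-product demands with positive correlation.
mean = np.array([100.0, 80.0])
cov = np.array([[100.0, 60.0],
                [60.0,  64.0]])
supply = np.array([115.0, 92.0])  # each margin is 1.5 standard deviations

p_joint = joint_satisfaction_prob(supply, mean, cov)
```

    With positive correlation the joint satisfaction probability exceeds the product of the marginal probabilities, which is why ignoring the demand correlation, as the case study notes, leads to unnecessarily conservative (costlier) plans.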

  8. An approximation solution to refinery crude oil scheduling problem with demand uncertainty using joint constrained programming.

    PubMed

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations that takes demand uncertainty into consideration. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints usually exhibit strong nonlinearity and are consequently still hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case without considering the demand correlation.

  9. Masked areas in shear peak statistics. A forward modeling approach

    DOE PAGES

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  10. MASKED AREAS IN SHEAR PEAK STATISTICS: A FORWARD MODELING APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bard, D.; Kratochvil, J. M.; Dawson, W., E-mail: djbard@slac.stanford.edu

    2016-03-10

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.
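    A toy illustration of why masking must enter the forward model: the same peak-counting statistic run with and without a survey mask yields different counts, so simulated maps must carry the same mask as the data. The grid, mask, and peak definition below are hypothetical simplifications, not the record's aperture-mass pipeline.

```python
def masked_peak_count(kappa, mask):
    """Count local maxima of a small convergence-like map, treating masked
    pixels (mask == 0) as unusable, a stand-in for applying the survey mask
    identically to data and simulation in a forward-modeling comparison."""
    rows, cols = len(kappa), len(kappa[0])
    peaks = 0
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:
                continue
            # Unmasked 8-neighborhood values around pixel (i, j).
            neighbors = [kappa[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if not (di == 0 and dj == 0)
                         and 0 <= i + di < rows and 0 <= j + dj < cols
                         and mask[i + di][j + dj]]
            if neighbors and all(kappa[i][j] > v for v in neighbors):
                peaks += 1
    return peaks

kappa = [[0.0, 0.1, 0.0, 0.0],
         [0.1, 0.9, 0.1, 0.0],
         [0.0, 0.1, 0.0, 0.5],
         [0.0, 0.0, 0.1, 0.0]]
unmasked = [[1] * 4 for _ in range(4)]
masked = [row[:] for row in unmasked]
masked[1][1] = 0                      # a "bright star" mask hides a peak
n_full = masked_peak_count(kappa, unmasked)
n_masked = masked_peak_count(kappa, masked)
```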

  11. Use of 13Cα Chemical-Shifts in Protein Structure Determination

    PubMed Central

    Vila, Jorge A.; Ripoll, Daniel R.; Scheraga, Harold A.

    2008-01-01

    A physics-based method, aimed at determining protein structures by using NOE-derived distances together with observed and computed 13C chemical shifts, is proposed. The approach makes use of 13Cα chemical shifts, computed at the density functional level of theory, to obtain torsional constraints for all backbone and side-chain torsional angles without making a priori use of the occupancy of any region of the Ramachandran map by the amino acid residues. The torsional constraints are not fixed but are changed dynamically in each step of the procedure, following an iterative self-consistent approach intended to identify a set of conformations for which the computed 13Cα chemical shifts match the experimental ones. A test is carried out on a 76-amino acid all-α-helical protein, namely the B. subtilis acyl carrier protein. It is shown that, starting from randomly generated conformations, the final protein models are more accurate than an existing NMR-derived structure model of this protein, in terms of both the agreement between predicted and observed 13Cα chemical shifts and some stereochemical quality indicators, and of similar accuracy as one of the protein models solved at a high level of resolution. The results provide evidence that this methodology can be used not only for structure determination but also for additional protein structure refinement of NMR-derived models deposited in the Protein Data Bank. PMID:17516673
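    The iterative self-consistent loop described in this record can be caricatured in one dimension: recompute the shift for the current torsion value, then move the torsional constraint toward agreement with the observation. Here `compute_shift` and `update_torsion` are hypothetical scalar stand-ins for the DFT shift calculation and the dynamic constraint update; the numbers are invented.

```python
def self_consistent_refine(observed, compute_shift, update_torsion, torsion0,
                           tol=1e-6, max_iter=200):
    """Iterate until the computed chemical shift matches the observed one.
    A scalar caricature of the record's per-residue self-consistent loop."""
    torsion = torsion0
    computed = compute_shift(torsion)
    for _ in range(max_iter):
        computed = compute_shift(torsion)
        if abs(computed - observed) < tol:
            break
        torsion = update_torsion(torsion, observed - computed)
    return torsion, computed

# Toy "shift surface": linear in the torsion angle (purely illustrative).
def shift(t):
    return 48.0 + 0.05 * t

def step(t, err):
    return t + 10.0 * err        # damped correction toward agreement

torsion, computed = self_consistent_refine(55.0, shift, step, torsion0=0.0)
```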

  12. Random walks based multi-image segmentation: Quasiconvexity results and GPU-based solutions

    PubMed Central

    Collins, Maxwell D.; Xu, Jia; Grady, Leo; Singh, Vikas

    2012-01-01

    We recast the Cosegmentation problem using Random Walker (RW) segmentation as the core segmentation algorithm, rather than the traditional MRF approach adopted in the literature so far. Our formulation is similar to previous approaches in the sense that it also permits Cosegmentation constraints (which impose consistency between the extracted objects from ≥ 2 images) using a nonparametric model. However, several previous nonparametric cosegmentation methods have the serious limitation that they require adding one auxiliary node (or variable) for every pair of pixels that are similar (which effectively limits such methods to describing only those objects that have high entropy appearance models). In contrast, our proposed model completely eliminates this restrictive dependence; the resulting improvements are quite significant. Our model further allows an optimization scheme exploiting quasiconvexity for model-based segmentation with no dependence on the scale of the segmented foreground. Finally, we show that the optimization can be expressed in terms of linear algebra operations on sparse matrices which are easily mapped to GPU architecture. We provide a highly specialized CUDA library for Cosegmentation exploiting this special structure, and report experimental results showing these advantages. PMID:25278742
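    The random walker core reduces to solving a linear system in the unseeded pixels. A minimal dense version for a four-pixel chain (unit edge weights, one foreground seed and one background seed) is sketched below; the sparse GPU-mapped solvers of the record are replaced by a few lines of Gaussian elimination, and the graph is invented.

```python
def gauss_solve(A, b):
    """Tiny dense Gauss-Jordan solver with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# 4-pixel chain: pixel 0 seeded foreground, pixel 3 seeded background.
# For the unseeded pixels 1 and 2, solve L_u x = b, where L_u is the
# unseeded block of the graph Laplacian and b couples to the seeds.
L_u = [[2.0, -1.0],
       [-1.0, 2.0]]
rhs = [1.0, 0.0]        # unit-weight edge to the foreground seed
prob = gauss_solve(L_u, rhs)   # probability of reaching the fg seed first
```

The solution interpolates seed labels along the chain, which is exactly the random walker's "probability of first reaching a foreground seed".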

  13. Development of a method for comprehensive water quality forecasting and its application in Miyun reservoir of Beijing, China.

    PubMed

    Zhang, Lei; Zou, Zhihong; Shan, Wei

    2017-06-01

    Water quality forecasting is an essential part of water resource management. Spatiotemporal variations of water quality and their inherent constraints make it very complex. This study explored a data-based method for short-term water quality forecasting. Predictions of water quality indicators, including dissolved oxygen, chemical oxygen demand by KMnO4, and ammonia nitrogen, obtained with support vector machines were taken as inputs to a particle swarm algorithm based optimal wavelet neural network to forecast the overall water quality status index. The Gubeikou monitoring section of Miyun reservoir in Beijing, China was taken as the study case to examine the effectiveness of this approach. The experimental results also revealed that the proposed model has advantages in stability and time reduction in comparison with other data-driven models, including the traditional BP neural network model, the wavelet neural network model and the Gradient Boosting Decision Tree model. It can be used as an effective approach for short-term comprehensive water quality prediction. Copyright © 2016. Published by Elsevier B.V.
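    The two-stage structure (per-indicator predictors feeding a combiner that outputs the overall status index) can be sketched as follows. The persistence forecaster and fixed weights are placeholders for the record's support vector machines and PSO-tuned wavelet network, and the indicator values and weights are invented.

```python
def stage_one_forecast(history):
    """Per-indicator one-step forecast. A trivial persistence model stands
    in here for the support vector machines used in the record."""
    return history[-1]

def stage_two_index(indicator_forecasts, weights):
    """Combine indicator forecasts into a single water-quality index.
    A fixed weighted sum stands in for the PSO-tuned wavelet network."""
    return sum(w * f for w, f in zip(weights, indicator_forecasts))

# Made-up short histories for three indicators (mg/L).
series = {
    "DO":    [8.1, 8.0, 7.9],    # dissolved oxygen
    "CODMn": [2.4, 2.5, 2.6],    # chemical oxygen demand by KMnO4
    "NH3N":  [0.30, 0.32, 0.31], # ammonia nitrogen
}
forecasts = [stage_one_forecast(v) for v in series.values()]
index = stage_two_index(forecasts, weights=[0.5, 0.3, 0.2])
```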

  14. An LMI approach to design H(infinity) controllers for discrete-time nonlinear systems based on unified models.

    PubMed

    Liu, Meiqin; Zhang, Senlin

    2008-10-01

    A unified neural network model termed the standard neural network model (SNNM) is advanced. Based on the robust L(2) gain (i.e. robust H(infinity) performance) analysis of the SNNM with external disturbances, a state-feedback control law is designed for the SNNM to stabilize the closed-loop system and eliminate the effect of external disturbances. The control design constraints are shown to be a set of linear matrix inequalities (LMIs), which can be easily solved by various convex optimization algorithms (e.g. interior-point algorithms) to determine the control law. Most discrete-time recurrent neural networks (RNNs) and discrete-time nonlinear systems modelled by neural networks or Takagi and Sugeno (T-S) fuzzy models can be transformed into SNNMs, so that robust H(infinity) performance analysis and robust H(infinity) controller synthesis can be carried out in a unified SNNM framework. Finally, some examples are presented to illustrate the wide applicability of the SNNMs to nonlinear systems, and the proposed approach is compared with related methods reported in the literature.
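    A finite-dimensional flavor of the stability analysis behind such LMI conditions: a discrete Lyapunov equation solved by fixed-point iteration, with positive definiteness of P certifying stability of a toy closed-loop matrix. A real H(infinity) synthesis would hand the LMIs to an SDP solver; this sketch only shows the certificate idea, and the matrices are invented.

```python
def discrete_lyapunov(A, Q, iters=200):
    """Solve P = A^T P A + Q by fixed-point iteration (converges when A is
    Schur stable). A positive definite P certifies closed-loop stability,
    the simplest relative of the feasibility checks behind LMI design."""
    def mat_mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    def transpose(X):
        return [[X[j][i] for j in range(2)] for i in range(2)]
    P = [row[:] for row in Q]
    for _ in range(iters):
        AtPA = mat_mul(transpose(A), mat_mul(P, A))
        P = [[AtPA[i][j] + Q[i][j] for j in range(2)] for i in range(2)]
    return P

A = [[0.5, 0.1],
     [0.0, 0.4]]          # toy Schur-stable closed-loop matrix
Q = [[1.0, 0.0],
     [0.0, 1.0]]
P = discrete_lyapunov(A, Q)
pos_def = P[0][0] > 0 and P[0][0] * P[1][1] - P[0][1] * P[1][0] > 0
```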

  15. Spectrum sensing and resource allocation for multicarrier cognitive radio systems under interference and power constraints

    NASA Astrophysics Data System (ADS)

    Dikmese, Sener; Srinivasan, Sudharsan; Shaat, Musbah; Bader, Faouzi; Renfors, Markku

    2014-12-01

    Multicarrier waveforms have been commonly recognized as strong candidates for cognitive radio. In this paper, we study the dynamics of spectrum sensing and spectrum allocation functions in cognitive radio context using very practical signal models for the primary users (PUs), including the effects of power amplifier nonlinearities. We start by sensing the spectrum with an energy detection-based wideband multichannel spectrum sensing algorithm and continue by investigating optimal resource allocation methods. Along the way, we examine the effects of spectral regrowth due to the inevitable power amplifier nonlinearities of the PU transmitters. The signal model includes frequency selective block-fading channel models for both secondary and primary transmissions. Filter bank-based wideband spectrum sensing techniques are applied for detecting spectral holes and filter bank-based multicarrier (FBMC) modulation is selected for transmission as an alternative multicarrier waveform to avoid the disadvantage of limited spectral containment of orthogonal frequency-division multiplexing (OFDM)-based multicarrier systems. The optimization technique used for the resource allocation approach considered in this study utilizes the information obtained through spectrum sensing and knowledge of spectrum leakage effects of the underlying waveforms, including a practical power amplifier model for the PU transmitter. This study utilizes a computationally efficient algorithm to maximize the SU link capacity with power and interference constraints. It is seen that the SU transmission capacity depends critically on the spectral containment of the PU waveform, and these effects are quantified in a case study using an 802.11-g WLAN scenario.
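    The energy-detection front end mentioned in this record reduces to comparing per-band average sample energy against a threshold. A minimal sketch (no filter bank; a constant offset stands in for a primary-user transmission, and the threshold and band sizes are invented):

```python
import random

def energy_detect(samples_per_band, threshold):
    """Flag sub-bands whose average sample energy exceeds a threshold:
    the basic energy detector behind wideband multichannel sensing."""
    return [sum(s * s for s in band) / len(band) > threshold
            for band in samples_per_band]

random.seed(1)

def noise(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

# Band 1 carries a primary-user "signal" (a constant offset) over the
# unit-variance noise floor; bands 0 and 2 are vacant spectral holes.
bands = [noise(500),
         [x + 2.0 for x in noise(500)],
         noise(500)]
occupied = energy_detect(bands, threshold=2.0)
```

Vacant bands sit near the noise energy of 1, the occupied band near 5, so a threshold of 2 separates them comfortably in this toy setting.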

  16. Next Generation Safeguards Initiative research to determine the Pu mass in spent fuel assemblies: Purpose, approach, constraints, implementation, and calibration

    NASA Astrophysics Data System (ADS)

    Tobin, S. J.; Menlove, H. O.; Swinhoe, M. T.; Schear, M. A.

    2011-10-01

    The Next Generation Safeguards Initiative (NGSI) of the U.S. Department of Energy has funded a multi-lab/multi-university collaboration to quantify the plutonium mass in spent nuclear fuel assemblies and to detect the diversion of pins from them. The goal of this research effort is to quantify the capability of various non-destructive assay (NDA) technologies as well as to train a future generation of safeguards practitioners. This research is "technology driven" in the sense that we will quantify the capabilities of a wide range of safeguards technologies of interest to regulators and policy makers; a key benefit to this approach is that the techniques are being tested in a unified manner. When the results of the Monte Carlo modeling are evaluated and integrated, practical constraints are part of defining the potential context in which a given technology might be applied. This paper organizes the commercial spent fuel safeguard needs into four facility types in order to identify any constraints on the NDA system design. These four facility types are the following: future reprocessing plants, current reprocessing plants, once-through spent fuel repositories, and any other sites that store individual spent fuel assemblies (reactor sites are the most common facility type in this category). Dry storage is not of interest since individual assemblies are not accessible. This paper will overview the purpose and approach of the NGSI spent fuel effort and describe the constraints inherent in commercial fuel facilities. It will conclude by discussing implementation and calibration of measurement systems. This report will also provide some motivation for considering a couple of other safeguards concepts (base measurement and fingerprinting) that might meet the safeguards need but not require the determination of plutonium mass.

  17. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

    This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. Therefore, the rendezvous process naturally includes two steps: the first step is to transfer the chaser into the LOS cone and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework, termed Mixed MPC (M-MPC), is proposed, which combines the Variable-Horizon MPC (VH-MPC) and Fixed-Instant MPC (FI-MPC) frameworks. The M-MPC framework enables the optimization for the two steps to be carried out jointly rather than artificially separated, and its computational workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints can be satisfied with a prescribed probability. It improves the robust technique proposed by Gavilan et al., because it eliminates unnecessary conservativeness by explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.
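    The receding-horizon pattern underlying any MPC variant, including the M-MPC of this record, can be shown with a brute-force scalar toy: optimize a short control sequence, apply only the first move, then re-plan from the new state. LOS-cone and chance constraints are omitted; the dynamics, cost weights, and horizon are invented.

```python
from itertools import product

def mpc_step(state, target, horizon, controls):
    """One receding-horizon step: enumerate all control sequences over the
    horizon, score them, and return only the first move of the best one.
    A brute-force single-integrator toy, not the record's M-MPC machinery."""
    def cost(seq):
        x, c = state, 0.0
        for u in seq:
            x += u                             # x_{k+1} = x_k + u_k
            c += (x - target) ** 2 + 0.01 * u * u
        return c
    best = min(product(controls, repeat=horizon), key=cost)
    return best[0]

x, target = 0.0, 5.0
trajectory = [x]
for _ in range(10):
    u = mpc_step(x, target, horizon=3, controls=(-1.0, 0.0, 1.0))
    x += u
    trajectory.append(x)
```

Re-planning at every step is what lets MPC absorb disturbances; robust variants such as the record's additionally tighten constraints against forecast uncertainty.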

  18. Glass Property Models and Constraints for Estimating the Glass to be Produced at Hanford by Implementing Current Advanced Glass Formulation Efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.

    2013-07-01

    Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. Plus the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to try to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimate of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties that has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.

  19. Metamodeling and the Critic-based approach to multi-level optimization.

    PubMed

    Werbos, Ludmilla; Kozma, Robert; Silva-Lugo, Rodrigo; Pazienza, Giovanni E; Werbos, Paul J

    2012-08-01

    Large-scale networks with hundreds of thousands of variables and constraints are becoming more and more common in logistics, communications, and distribution domains. Traditionally, the utility functions defined on such networks are optimized using some variation of Linear Programming, such as Mixed Integer Programming (MIP). Despite enormous progress both in hardware (multiprocessor systems and specialized processors) and software (Gurobi) we are reaching the limits of what these tools can handle in real time. Modern logistic problems, for example, call for expanding the problem both vertically (from one day up to several days) and horizontally (combining separate solution stages into an integrated model). The complexity of such integrated models calls for alternative methods of solution, such as Approximate Dynamic Programming (ADP), which provide a further increase in the performance necessary for the daily operation. In this paper, we present the theoretical basis and related experiments for solving the multistage decision problems based on the results obtained for shorter periods, as building blocks for the models and the solution, via Critic-Model-Action cycles, where various types of neural networks are combined with traditional MIP models in a unified optimization system. In this system architecture, fast and simple feed-forward networks are trained to reasonably initialize more complicated recurrent networks, which serve as approximators of the value function (Critic). The combination of interrelated neural networks and optimization modules allows for multiple queries for the same system, providing flexibility and optimizing performance for large-scale real-life problems. A MATLAB implementation of our solution procedure for a realistic set of data and constraints shows promising results, compared to the iterative MIP approach. Copyright © 2012 Elsevier Ltd. All rights reserved.
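    The Critic idea in its simplest form is tabular TD(0) value estimation. The sketch below trains a value function on a five-state chain, far below the neural-network Critics combined with MIP models in this record, but it shows concretely what "approximating the value function" means; the chain, rewards, and learning parameters are invented.

```python
import random

def td_critic(episodes=2000, alpha=0.1, gamma=0.95):
    """TD(0) value estimation on a 5-state chain with a reward at the right
    end: the simplest instance of training a Critic. Each transition moves
    right with probability 0.9 (else back), and the Critic V is updated
    toward the bootstrapped target r + gamma * V(s')."""
    n = 5
    V = [0.0] * (n + 1)             # state n is terminal, value 0
    random.seed(2)
    for _ in range(episodes):
        s = 0
        while s < n:
            s_next = s + 1 if random.random() < 0.9 else max(s - 1, 0)
            r = 1.0 if s_next == n else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])
            s = s_next
    return V[:n]

values = td_critic()
```

States nearer the reward end up with higher estimated value, which is the signal an Action component would exploit in a Critic-Model-Action cycle.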

  20. Route constraints model based on polychromatic sets

    NASA Astrophysics Data System (ADS)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, the fields of its application are constantly expanding. The mission planning of UAV is especially important, and the planning result directly influences whether the UAV can accomplish the task. In order to make the results of mission planning for unmanned aerial vehicle more realistic, it is necessary to consider not only the physical properties of the aircraft, but also the constraints among the various equipment on the UAV. However, constraints among the equipment of UAV are complex, and the equipment has strong diversity and variability, which makes these constraints difficult to be described. In order to solve the above problem, this paper, referring to the polychromatic sets theory used in the advanced manufacturing field to describe complex systems, presents a mission constraint model of UAV based on polychromatic sets.

  1. Image-optimized Coronal Magnetic Field Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov

    We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.

  2. The Betelgeuse Project: Constraints from Rotation

    NASA Astrophysics Data System (ADS)

    Diaz, Manuel; Nance, Sarafina; Sullivan, James; Wheeler, J. Craig

    2017-01-01

    In order to constrain the evolutionary state of the red supergiant Betelgeuse, we have produced a suite of models with ZAMS masses from 15 to 25 Msun in intervals of 1 Msun including the effects of rotation computed with the stellar evolutionary code MESA. For non--rotating models we find results that are similar to other work. It is somewhat difficult to find models that agree within 1 σ of the observed values of R, Teff and L, but modestly easy within 3 σ uncertainty. Incorporating the nominal observed rotational velocity, ~15 km/s, yields significantly different, and challenging, constraints. This velocity constraint is only matched when the models first approach the base of the red supergiant branch (RSB), having crossed the Hertzsprung gap, but not yet having ascended the RSB and most violate even generous error bars on R, Teff and L. Models at the tip of the RSB typically rotate at only ~0.1 km/s, independent of any reasonable choice of initial rotation. We discuss the possible uncertainties in our modeling and the observations, including the distance to Betelgeuse, the rotation velocity, and model parameters. We summarize various options to account for the rotational velocity and suggest that one possibility is that Betelgeuse merged with a companion star of about 1 Msun as it ascended the RSB, in the process producing the ring structure observed at about 7' away. A past coalescence would complicate attempts to understand the evolutionary history and future of Betelgeuse. To that end, we also present asteroseismology models with acoustic waves driven by inner convective regions that could elucidate the inner structure and evolutionary state.

  3. Investigations into Generalization of Constraint-Based Scheduling Theories with Applications to Space Telescope Observation Scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Steven S.

    1996-01-01

    This final report summarizes research performed under NASA contract NCC 2-531 toward generalization of constraint-based scheduling theories and techniques for application to space telescope observation scheduling problems. Our work into theories and techniques for solution of this class of problems has led to the development of the Heuristic Scheduling Testbed System (HSTS), a software system for integrated planning and scheduling. Within HSTS, planning and scheduling are treated as two complementary aspects of the more general process of constructing a feasible set of behaviors of a target system. We have validated the HSTS approach by applying it to the generation of observation schedules for the Hubble Space Telescope. This report summarizes the HSTS framework and its application to the Hubble Space Telescope domain. First, the HSTS software architecture is described, indicating (1) how the structure and dynamics of a system is modeled in HSTS, (2) how schedules are represented at multiple levels of abstraction, and (3) the problem solving machinery that is provided. Next, the specific scheduler developed within this software architecture for detailed management of Hubble Space Telescope operations is presented. Finally, experimental performance results are given that confirm the utility and practicality of the approach.

  4. Functional Characterization of Alternate Optimal Solutions of Escherichia coli's Transcriptional and Translational Machinery

    PubMed Central

    Thiele, Ines; Fleming, Ronan M.T.; Bordbar, Aarash; Schellenberger, Jan; Palsson, Bernhard Ø.

    2010-01-01

    The constraint-based reconstruction and analysis approach has recently been extended to describe Escherichia coli's transcriptional and translational machinery. Here, we introduce the concept of reaction coupling to represent the dependency between protein synthesis and utilization. These coupling constraints lead to a significant contraction of the feasible set of steady-state fluxes. The subset of alternate optimal solutions (AOS) consistent with maximal ribosome production was calculated. The majority of transcriptional and translational reactions were active for all of these AOS, showing that the network has a low degree of redundancy. Furthermore, all calculated AOS contained the qualitative expression of at least 92% of the known essential genes. Principal component analysis of AOS demonstrated that energy currencies (ATP, GTP, and phosphate) dominate the network's capability to produce ribosomes. Additionally, we identified regulatory control points of the network, which include the transcription reactions of σ70 (RpoD) as well as that of a degradosome component (Rne) and of tRNA charging (ValS). These reactions contribute significant variance among AOS. These results show that constraint-based modeling can be applied to gain insight into the systemic properties of E. coli's transcriptional and translational machinery. PMID:20483314
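    The basic feasibility checks of constraint-based modeling, a steady-state condition S v = 0 plus a coupling constraint tying a machinery-use flux to its synthesis flux, can be written directly. The three-reaction network and the coupling coefficient below are invented for illustration.

```python
def steady_state_residual(S, v):
    """Residual of S v = 0 for stoichiometric matrix S and flux vector v;
    a zero residual means every metabolite is balanced at steady state."""
    return [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(S))]

# Toy network: uptake -> A (v0), A -> B (synthesis, v1), B -> out (use, v2).
#   row 0, metabolite A: produced by v0, consumed by v1
#   row 1, metabolite B: produced by v1, consumed by v2
S = [[1, -1, 0],
     [0, 1, -1]]
v = [2.0, 2.0, 2.0]
residual = steady_state_residual(S, v)

# Coupling constraint in the spirit of the record: a machinery-use flux
# cannot exceed a multiple (here 1.0, made up) of its synthesis flux.
coupling_ok = v[2] <= 1.0 * v[1]
```

In real reconstructions these constraints bound a linear program; here we only verify that one candidate flux vector satisfies them.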

  5. Analysis of conserved noncoding DNA in Drosophila reveals similar constraints in intergenic and intronic sequences.

    PubMed

    Bergman, C M; Kreitman, M

    2001-08-01

    Comparative genomic approaches to gene and cis-regulatory prediction are based on the principle that differential DNA sequence conservation reflects variation in functional constraint. Using this principle, we analyze noncoding sequence conservation in Drosophila for 40 loci with known or suspected cis-regulatory function encompassing >100 kb of DNA. We estimate the fraction of noncoding DNA conserved in both intergenic and intronic regions and describe the length distribution of ungapped conserved noncoding blocks. On average, 22%-26% of noncoding sequences surveyed are conserved in Drosophila, with median block length approximately 19 bp. We show that point substitution in conserved noncoding blocks exhibits transition bias as well as lineage effects in base composition, and occurs more than an order of magnitude more frequently than insertion/deletion (indel) substitution. Overall, patterns of noncoding DNA structure and evolution differ remarkably little between intergenic and intronic conserved blocks, suggesting that the effects of transcription per se contribute minimally to the constraints operating on these sequences. The results of this study have implications for the development of alignment and prediction algorithms specific to noncoding DNA, as well as for models of cis-regulatory DNA sequence evolution.
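    Extracting ungapped conserved blocks and the conserved fraction from a pairwise alignment, the quantities analyzed in this record, is a short exercise in run-length scanning. The sequences below are made up, and the minimum block length of 3 bp is an arbitrary cutoff, not the record's criterion.

```python
def conserved_blocks(seq_a, seq_b, min_len=3):
    """Lengths of ungapped conserved blocks between two aligned sequences:
    maximal runs of identical non-gap columns at least `min_len` long."""
    blocks, run = [], 0
    for a, b in zip(seq_a, seq_b):
        if a == b and a != "-":
            run += 1
        else:
            if run >= min_len:
                blocks.append(run)
            run = 0
    if run >= min_len:
        blocks.append(run)
    return blocks

a = "ACGTACGT--ACCGTTACGA"
b = "ACGTTCGT--ACCGATACGA"
blocks = conserved_blocks(a, b)
# Conserved fraction relative to the ungapped length of sequence a.
fraction = sum(blocks) / sum(1 for x in a if x != "-")
```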

  6. Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trebotich, D

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  7. Modeling complex biological flows in multi-scale systems using the APDEC framework

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
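    The rigid-constraint half of a bead-rod update can be sketched as SHAKE-style projection sweeps that restore fixed bond lengths after the beads are displaced. Hydrodynamic coupling to the solvent and the actual constrained dynamics are omitted; the bead positions are invented.

```python
def project_bonds(beads, bond_len=1.0, sweeps=50):
    """Iteratively project neighbouring beads back onto fixed bond lengths
    (SHAKE-style Gauss-Seidel sweeps). Each pass fixes one bond exactly by
    moving both endpoints half the correction along the bond direction."""
    pts = [list(p) for p in beads]
    for _ in range(sweeps):
        for i in range(len(pts) - 1):
            dx = pts[i + 1][0] - pts[i][0]
            dy = pts[i + 1][1] - pts[i][1]
            d = (dx * dx + dy * dy) ** 0.5
            corr = 0.5 * (d - bond_len) / d
            pts[i][0] += corr * dx
            pts[i][1] += corr * dy
            pts[i + 1][0] -= corr * dx
            pts[i + 1][1] -= corr * dy
    return pts

# Three beads knocked off their unit bonds by a "solvent kick".
beads = [(0.0, 0.0), (1.3, 0.2), (2.1, -0.4)]
fixed = project_bonds(beads)

def bond(i):
    dx = fixed[i + 1][0] - fixed[i][0]
    dy = fixed[i + 1][1] - fixed[i][1]
    return (dx * dx + dy * dy) ** 0.5

lengths = [bond(0), bond(1)]
```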

  8. Integrated Model Reduction and Control of Aircraft with Flexible Wings

    NASA Technical Reports Server (NTRS)

    Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.

    2013-01-01

    This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite dimensional state-space form. Given a set of desired output covariance, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated against the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun

    Purpose: Efficient and accurate 3D liver segmentations from contrast-enhanced computed tomography (CT) images play an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiplemore » subregions, a geodesic distance based appearance selection scheme is introduced to utilize proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method only calls initialization inside the liver surface. No additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing set, and local dataset. On MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applying their method to MICCAI training set and local dataset, it yielded a mean Dice similarity coefficient (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. 
These results demonstrated the accuracy of the method when applied to different computed tomography (CT) datasets. In addition, user operator variability experiments showed its good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated to segment liver. This approach does not require prior model construction and so eliminates the burdens associated with model construction and matching. The proposed method provides comparable results with state-of-the-art methods. Validation results suggest that it may be suitable for the clinical use.« less

  11. Group-Based Learning in an Authoritarian Setting? Novel Extension Approaches in Vietnam's Northern Uplands

    ERIC Educational Resources Information Center

    Schad, Iven; Roessler, Regina; Neef, Andreas; Zarate, Anne Valle; Hoffmann, Volker

    2011-01-01

    This study aims to analyze the potential and constraints of group-based extension approaches as an institutional innovation in the Vietnamese agricultural extension system. Our analysis therefore unfolds around the challenges of how to foster this kind of approach within the hierarchical extension policy setting and how to effectively shape and…

  12. Human motion analysis with detection of subpart deformations

    NASA Astrophysics Data System (ADS)

    Wang, Juhui; Lorette, Guy; Bouthemy, Patrick

    1992-06-01

    One essential constraint used in 3-D motion estimation from optical projections is the rigidity assumption. Because of muscle deformations in human motion, this rigidity requirement is often violated for some regions on the human body. Global methods usually fail to bring stable solutions. This paper presents a model-based approach to combating the effect of muscle deformations in human motion analysis. The approach developed is based on two main stages. In the first stage, the human body is partitioned into different areas, where each area is consistent with a general motion model (not necessarily corresponding to a physical existing motion pattern). In the second stage, the regions are eliminated under the hypothesis that they are not induced by a specific human motion pattern. Each hypothesis is generated by making use of specific knowledge about human motion. A global method is used to estimate the 3-D motion parameters in basis of valid segments. Experiments based on a cycling motion sequence are presented.

  13. Geometric approach to segmentation and protein localization in cell culture assays.

    PubMed

    Raman, S; Maxwell, C A; Barcellos-Hoff, M H; Parvin, B

    2007-01-01

    Cell-based fluorescence imaging assays are heterogeneous and require the collection of a large number of images for detailed quantitative analysis. Complexities arise as a result of variation in spatial nonuniformity, shape, overlapping compartments and scale (size). A new technique and methodology has been developed and tested for delineating subcellular morphology and partitioning overlapping compartments at multiple scales. This system is packaged as an integrated software platform for quantifying images that are obtained through fluorescence microscopy. Proposed methods are model based, leveraging geometric shape properties of subcellular compartments and corresponding protein localization. From the morphological perspective, convexity constraint is imposed to delineate and partition nuclear compartments. From the protein localization perspective, radial symmetry is imposed to localize punctate protein events at submicron resolution. Convexity constraint is imposed against boundary information, which are extracted through a combination of zero-crossing and gradient operator. If the convexity constraint fails for the boundary then positive curvature maxima are localized along the contour and the entire blob is partitioned into disjointed convex objects representing individual nuclear compartment, by enforcing geometric constraints. Nuclear compartments provide the context for protein localization, which may be diffuse or punctate. Punctate signal are localized through iterative voting and radial symmetries for improved reliability and robustness. The technique has been tested against 196 images that were generated to study centrosome abnormalities. Corresponding computed representations are compared against manual counts for validation.

  14. Line-of-sight effects in strong lensing: putting theory into practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birrer, Simon; Welschen, Cyril; Amara, Adam

    2017-04-01

    We present a simple method to accurately infer line of sight (LOS) integrated lensing effects for galaxy scale strong lens systems through image reconstruction. Our approach enables us to separate weak lensing LOS effects from the main strong lens deflector. We test our method using mock data and show that strong lens systems can be accurate probes of cosmic shear with a precision on the shear terms of ± 0.003 (statistical error) for an HST-like dataset. We apply our formalism to reconstruct the lens COSMOS 0038+4133 and its LOS. In addition, we estimate the LOS properties with a halo-rendering estimatemore » based on the COSMOS field galaxies and a galaxy-halo connection. The two approaches are independent and complementary in their information content. We find that when estimating the convergence at the strong lens system, performing a joint analysis improves the measure by a factor of two compared to a halo model only analysis. Furthermore the constraints of the strong lens reconstruction lead to tighter constraints on the halo masses of the LOS galaxies. Joint constraints of multiple strong lens systems may add valuable information to the galaxy-halo connection and may allow independent weak lensing shear measurement calibrations.« less

  15. Io's Heat Flow: A Model Including "Warm" Polar Regions

    NASA Astrophysics Data System (ADS)

    Veeder, G. J.; Matson, D. L.; Johnson, T. V.; Davies, A. G.; Blaney, D. L.

    2002-12-01

    Some 90 percent of Io's surface is thermally "passive" material. It is separate from the sites of active volcanic eruptions. Though "passive", its thermal behavior continues to be a challenge for modelers. The usual approach is to take albedo, average daytime temperature, temperature as a function of time of day, etc., and attempt to match these constraints with a uniform surface with a single value of thermal inertia. Io is a case where even globally averaged observations are inconsistent with a single-thermal-inertia model approach. The Veeder et al. (1994) model for "passive" thermal emission addressed seven constraints derived from a decade of ground-based, global observations - average albedo plus infrared fluxes at three separate wavelengths (4.8, 8.7, and 20 microns) for both daytime and eclipsed conditions. This model has only two components - a unit of infinite thermal inertia and a unit of zero thermal inertia. The free parameters are the areal coverage ratio of the two units and their relative albedos (constrained to match the known average albedo). This two-parameter model agreed with the global radiometric data and also predicted significantly higher non-volcanic nighttime temperatures than traditional ("lunar-like") single thermal inertia models. Recent observations from the Galileo infrared radiometer show relatively uniform minimum-night-time temperatures. In particular, they show little variation with either latitude or time of night (Spencer et al., 2000; Rathbun et al., 2002). Additionally, detailed analyses of Io's scattering properties and reflectance variations have led to the interesting conclusion that Io's albedo at regional scales varies little with latitude (Simonelli, et al., 2001). This effectively adds four new observational constraints - lack of albedo variation with latitude, average minimum nighttime temperature and lack of variation of temperature with either latitude or longitude. 
We have made the fewest modifications necessary for the Veeder et al. model to match these new constrains - we added two model parameters to characterize the volcanically heated high-latitude units. These are the latitude above which the unit exists and its nighttime temperature. The resulting four-parameter model is the first that encompasses all of the available observations of Io's thermal emission and that quantitatively satisfies all eleven observational constraints. While no model is unique, this model is significant because it is the first to accommodate widespread polar regions that are relatively "warm". This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.

  16. HOROPLAN: computer-assisted nurse scheduling using constraint-based programming.

    PubMed

    Darmoni, S J; Fajner, A; Mahé, N; Leforestier, A; Vondracek, M; Stelian, O; Baldenweck, M

    1995-01-01

    Nurse scheduling is a difficult and time consuming task. The schedule has to determine the day to day shift assignments of each nurse for a specified period of time in a way that satisfies the given requirements as much as possible, taking into account the wishes of nurses as closely as possible. This paper presents a constraint-based, artificial intelligence approach by describing a prototype implementation developed with the Charme language and the first results of its use in the Rouen University Hospital. Horoplan implements a non-cyclical constraint-based scheduling, using some heuristics. Four levels of constraints were defined to give a maximum of flexibility: French level (e.g. number of worked hours in a year), hospital level (e.g. specific day-off), department level (e.g. specific shift) and care unit level (e.g. specific pattern for week-ends). Some constraints must always be verified and can not be overruled and some constraints can be overruled at a certain cost. Rescheduling is possible at any time specially in case of an unscheduled absence.

  17. A Variational Assimilation Method for Satellite and Conventional Data: Development of Basic Model for Diagnosis of Cyclone Systems

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Scott, Robert W.; Chen, J.

    1991-01-01

    A summary is presented of the progress toward the completion of a comprehensive diagnostic objective analysis system based upon the calculus of variations. The approach was to first develop the objective analysis subject to the constraints that the final product satisfies the five basic primitive equations for a dry inviscid atmosphere: the two nonlinear horizontal momentum equations, the continuity equation, the hydrostatic equation, and the thermodynamic equation. Then, having derived the basic model, there would be added to it the equations for moist atmospheric processes and the radiative transfer equation.

  18. 3-D model-based vehicle tracking.

    PubMed

    Lou, Jianguang; Tan, Tieniu; Hu, Weiming; Yang, Hao; Maybank, Steven J

    2005-10-01

    This paper aims at tracking vehicles from monocular intensity image sequences and presents an efficient and robust approach to three-dimensional (3-D) model-based vehicle tracking. Under the weak perspective assumption and the ground-plane constraint, the movements of model projection in the two-dimensional image plane can be decomposed into two motions: translation and rotation. They are the results of the corresponding movements of 3-D translation on the ground plane (GP) and rotation around the normal of the GP, which can be determined separately. A new metric based on point-to-line segment distance is proposed to evaluate the similarity between an image region and an instantiation of a 3-D vehicle model under a given pose. Based on this, we provide an efficient pose refinement method to refine the vehicle's pose parameters. An improved EKF is also proposed to track and to predict vehicle motion with a precise kinematics model. Experimental results with both indoor and outdoor data show that the algorithm obtains desirable performance even under severe occlusion and clutter.

  19. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    PubMed Central

    Aburahma, Mona Hassan

    2015-01-01

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated. PMID:28975906

  20. Multi-criteria dynamic decision under uncertainty: a stochastic viability analysis and an application to sustainable fishery management.

    PubMed

    De Lara, M; Martinet, V

    2009-02-01

    Managing natural resources in a sustainable way is a hard task, due to uncertainties, dynamics and conflicting objectives (ecological, social, and economical). We propose a stochastic viability approach to address such problems. We consider a discrete-time control dynamical model with uncertainties, representing a bioeconomic system. The sustainability of this system is described by a set of constraints, defined in practice by indicators - namely, state, control and uncertainty functions - together with thresholds. This approach aims at identifying decision rules such that a set of constraints, representing various objectives, is respected with maximal probability. Under appropriate monotonicity properties of dynamics and constraints, having economic and biological content, we characterize an optimal feedback. The connection is made between this approach and the so-called Management Strategy Evaluation for fisheries. A numerical application to sustainable management of Bay of Biscay nephrops-hakes mixed fishery is given.

  1. A simple approach to estimate daily loads of total, refractory, and labile organic carbon from their seasonal loads in a watershed.

    PubMed

    Ouyang, Ying; Grace, Johnny M; Zipperer, Wayne C; Hatten, Jeff; Dewey, Janet

    2018-05-22

    Loads of naturally occurring total organic carbons (TOC), refractory organic carbon (ROC), and labile organic carbon (LOC) in streams control the availability of nutrients and the solubility and toxicity of contaminants and affect biological activities through absorption of light and complex metals with production of carcinogenic compounds. Although computer models have become increasingly popular in understanding and management of TOC, ROC, and LOC loads in streams, the usefulness of these models hinges on the availability of daily data for model calibration and validation. Unfortunately, these daily data are usually insufficient and/or unavailable for most watersheds due to a variety of reasons, such as budget and time constraints. A simple approach was developed here to calculate daily loads of TOC, ROC, and LOC in streams based on their seasonal loads. We concluded that the predictions from our approach adequately match field measurements based on statistical comparisons between model calculations and field measurements. Our approach demonstrates that an increase in stream discharge results in increased stream TOC, ROC, and LOC concentrations and loads, although high peak discharge did not necessarily result in high peaks of TOC, ROC, and LOC concentrations and loads. The approach developed herein is a useful tool to convert seasonal loads of TOC, ROC, and LOC into daily loads in the absence of measured daily load data.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data- driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flowmore » equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.« less

  3. Toward a model-based cognitive neuroscience of mind wandering.

    PubMed

    Hawkins, G E; Mittner, M; Boekel, W; Heathcote, A; Forstmann, B U

    2015-12-03

    People often "mind wander" during everyday tasks, temporarily losing track of time, place, or current task goals. In laboratory-based tasks, mind wandering is often associated with performance decrements in behavioral variables and changes in neural recordings. Such empirical associations provide descriptive accounts of mind wandering - how it affects ongoing task performance - but fail to provide true explanatory accounts - why it affects task performance. In this perspectives paper, we consider mind wandering as a neural state or process that affects the parameters of quantitative cognitive process models, which in turn affect observed behavioral performance. Our approach thus uses cognitive process models to bridge the explanatory divide between neural and behavioral data. We provide an overview of two general frameworks for developing a model-based cognitive neuroscience of mind wandering. The first approach uses neural data to segment observed performance into a discrete mixture of latent task-related and task-unrelated states, and the second regresses single-trial measures of neural activity onto structured trial-by-trial variation in the parameters of cognitive process models. We discuss the relative merits of the two approaches, and the research questions they can answer, and highlight that both approaches allow neural data to provide additional constraint on the parameters of cognitive models, which will lead to a more precise account of the effect of mind wandering on brain and behavior. We conclude by summarizing prospects for mind wandering as conceived within a model-based cognitive neuroscience framework, highlighting the opportunities for its continued study and the benefits that arise from using well-developed quantitative techniques to study abstract theoretical constructs. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  4. The AMSR2 Satellite-based Microwave Snow Algorithm (SMSA) to estimate regional to global snow depth and snow water equivalent

    NASA Astrophysics Data System (ADS)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2017-12-01

    With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. 
Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.

  5. Parametrized modified gravity and the CMB bispectrum

    NASA Astrophysics Data System (ADS)

    Di Valentino, Eleonora; Melchiorri, Alessandro; Salvatelli, Valentina; Silvestri, Alessandra

    2012-09-01

    We forecast the constraints on modified theories of gravity from the cosmic microwave background (CMB) anisotropies bispectrum that arises from correlations between lensing and the Integrated Sachs-Wolfe effect. In models of modified gravity the evolution of the metric potentials is generally altered and the contribution to the CMB bispectrum signal can differ significantly from the one expected in the standard cosmological model. We adopt a parametrized approach and focus on three different classes of models: Linder’s growth index, Chameleon-type models, and f(R) theories. We show that the constraints on the parameters of the models will significantly improve with future CMB bispectrum measurements.

  6. Data-based Non-Markovian Model Inference

    NASA Astrophysics Data System (ADS)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lokta-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. 
This work is based on a close collaboration with M.D. Chekroun, D. Kondrashov, S. Kravtsov and A.W. Robertson.

  7. Risk-based water resources planning: Coupling water allocation and water quality management under extreme droughts

    NASA Astrophysics Data System (ADS)

    Mortazavi-Naeini, M.; Bussi, G.; Hall, J. W.; Whitehead, P. G.

    2016-12-01

    The main aim of water companies is to have a reliable and safe water supply system. To fulfil their duty the water companies have to consider both water quality and quantity issues and challenges. Climate change and population growth will have an impact on water resources both in terms of available water and river water quality. Traditionally, a distinct separation between water quality and abstraction has existed. However, water quality can be a bottleneck in a system since water treatment works can only treat water if it meets certain standards. For instance, high turbidity and large phytoplankton content can increase sharply the cost of treatment or even make river water unfit for human consumption purposes. It is vital for water companies to be able to characterise the quantity and quality of water under extreme weather events and to consider the occurrence of eventual periods when water abstraction has to cease due to water quality constraints. This will give them opportunity to decide on water resource planning and potential changes to reduce the system failure risk. We present a risk-based approach for incorporating extreme events, based on future climate change scenarios from a large ensemble of climate model realisations, into integrated water resources model through combined use of water allocation (WATHNET) and water quality (INCA) models. The annual frequency of imposed restrictions on demand is considered as measure of reliability. We tested our approach on Thames region, in the UK, with 100 extreme events. The results show increase in frequency of imposed restrictions when water quality constraints were considered. This indicates importance of considering water quality issues in drought management plans.

  8. A Decision Support System for Concrete Bridge Maintenance

    NASA Astrophysics Data System (ADS)

    Rashidi, Maria; Lemass, Brett; Gibson, Peter

    2010-05-01

    The maintenance of bridges as a key element in transportation infrastructure has become a major concern for asset managers and society due to increasing traffic volumes, deterioration of existing bridges and well-publicised bridge failures. A pivotal responsibility for asset managers in charge of bridge remediation is to identify the risks and assess the consequences of remediation programs to ensure that the decisions are transparent and lead to the lowest predicted losses in recognized constraint areas. The ranking of bridge remediation treatments can be quantitatively assessed using a weighted constraint approach to structure the otherwise ill-structured phases of problem definition, conceptualization and embodiment [1]. This Decision Support System helps asset managers in making the best decision with regards to financial limitations and other dominant constraints imposed upon the problem at hand. The risk management framework in this paper deals with the development of a quantitative intelligent decision support system for bridge maintenance which has the ability to provide a source for consistent decisions through selecting appropriate remediation treatments based upon cost, service life, product durability/sustainability, client preferences, legal and environmental constraints. Model verification and validation through industry case studies is ongoing.

  9. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes and probabilities defined for all events. A reformulation in terms of propositions allows to use the maximum entropy method to assign the probabilities taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, choosing the constraints and making the probability assignment by the maximum entropy method. This approach shows, how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related with well-known classical distributions. The relation between the conditional probability density, given some averages as constraints and the appropriate ensemble is elucidated.

  10. New learning based super-resolution: use of DWT and IGMRF prior.

    PubMed

    Gajjar, Prakash P; Joshi, Manjunath V

    2010-05-01

    In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF), and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on gray-scale as well as color images. The method is compared with the standard interpolation technique and also with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where memory, transmission bandwidth, and camera cost are the main constraints.
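    A heavily simplified one-dimensional sketch of the MAP step may help make the regularization framework concrete: the data term treats the LR signal as a blurred, decimated HR signal, and a homogeneous quadratic smoothness prior stands in for the paper's inhomogeneous IGMRF. All sizes and parameters here are invented:

```python
import numpy as np

# Minimal 1-D sketch of MAP super-resolution by gradient descent:
# data term ||y - A x||^2 plus a quadratic smoothness prior standing in
# for the paper's IGMRF (homogeneous here for brevity). A models blur
# and decimation.
n, factor = 32, 2
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
# A: average each pair of HR samples -> one LR sample (blur + downsample)
A = np.zeros((n // factor, n))
for i in range(n // factor):
    A[i, factor * i:factor * i + factor] = 1.0 / factor
y = A @ x_true + 0.01 * rng.standard_normal(n // factor)

D = (np.eye(n) - np.eye(n, k=1))[:-1]   # first-difference operator
lam, step = 0.1, 0.1
x = np.zeros(n)
for _ in range(2000):                   # gradient descent on the MAP cost
    grad = 2 * A.T @ (A @ x - y) + 2 * lam * D.T @ (D @ x)
    x -= step * grad
print(np.mean((x - x_true) ** 2))
```

The prior resolves the within-pair ambiguity that the decimation leaves in the data term, which is exactly the role the IGMRF plays in the paper, except that there the prior weights vary spatially.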

  11. Finite-element approach to Brownian dynamics of polymers.

    PubMed

    Cyron, Christian J; Wall, Wolfgang A

    2009-12-01

    In recent decades, simulation tools for Brownian dynamics of polymers have attracted more and more interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost, on the basis of a well-developed mathematical background.

  12. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
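    The dualization idea can be caricatured on a toy problem whose stages are independent: the joint chance constraint is priced into the cost through a multiplier, which is raised until the risk bound is satisfied. The stage costs, failure probabilities, risk bound, and the crude linear search below are all invented stand-ins for the paper's DP machinery:

```python
# Toy sketch of the dualization idea: a joint chance constraint
# sum_t P(fail_t) <= Delta is moved into the cost with multiplier lam,
# and lam is raised until the risk bound holds. Stage data are invented.
stages = [
    [(1.0, 0.20), (3.0, 0.01)],   # (cost, failure probability) per action
    [(0.5, 0.15), (2.0, 0.02)],
    [(2.0, 0.10), (4.0, 0.01)],
]
Delta = 0.05  # user-specified bound on total mission risk

def solve(lam):
    """Minimize the Lagrangian stage by stage (stages are independent here)."""
    plan = [min(acts, key=lambda a: a[0] + lam * a[1]) for acts in stages]
    return plan, sum(p for _, p in plan)

lam = 0.0
plan, risk = solve(lam)
while risk > Delta:        # crude dual search: raise the price of risk
    lam += 1.0
    plan, risk = solve(lam)
print(plan, risk, lam)
```

In the paper the inner minimization is a full dynamic program over coupled stages rather than this per-stage argmin, but the structure, an unconstrained Lagrangian minimization inside a search over the multiplier, is the same.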

  13. Communication: Introducing prescribed biases in out-of-equilibrium Markov models

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.

    2018-03-01

    Markov models are often used in modeling complex out-of-equilibrium chemical and biochemical systems. However, their predictions often do not agree with experiments. We need a systematic framework to update existing Markov models to make them consistent with constraints that are derived from experiments. Here, we present a framework based on the principle of maximum relative path entropy (minimum Kullback-Leibler divergence) to update Markov models using stationary state and dynamical trajectory-based constraints. We illustrate the framework using a biochemical model network of growth factor-based signaling. We also show how to find the closest detailed balanced Markov model to a given Markov model. Further applications and generalizations are discussed.
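    One standard construction behind such minimum-KL (maximum-caliber) updates is exponential tilting of the prior kernel followed by a Doob transform, with the multiplier set so that the constraint is met. The two-state prior, observable, and target below are invented, and the sketch enforces only a single stationary-average constraint, not the paper's full trajectory-based constraints:

```python
import numpy as np
from scipy.optimize import brentq

# Hedged sketch of a minimum relative path entropy update: tilt a prior
# Markov kernel P so that the stationary average of an observable f
# matches a target. P, f, and the target are invented.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = np.array([0.0, 1.0])   # observable: occupancy of state 1
target = 0.5               # constraint on the stationary average of f

def tilted(lam):
    W = P * np.exp(lam * f)[None, :]          # W_ij = P_ij exp(lam f_j)
    vals, vecs = np.linalg.eig(W)
    k = np.argmax(vals.real)
    rho, r = vals[k].real, np.abs(vecs[:, k].real)
    Q = W * r[None, :] / (rho * r[:, None])   # Doob transform: rows sum to 1
    return Q

def stat_avg(lam):
    Q = tilted(lam)
    vals, vecs = np.linalg.eig(Q.T)
    pi = np.abs(vecs[:, np.argmax(vals.real)].real)
    pi /= pi.sum()
    return float(pi @ f)

lam = brentq(lambda l: stat_avg(l) - target, -20, 20)
Q = tilted(lam)
print(Q, stat_avg(lam))
```

The Doob transform renormalizes the tilted kernel back into a proper stochastic matrix, which is what keeps the update the closest Markov model to the prior in the relative-path-entropy sense.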

  14. An approach of traffic signal control based on NLRSQP algorithm

    NASA Astrophysics Data System (ADS)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

    This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective function of the model is to minimize the total weighted queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how the initial solution of the algorithm should be set to reach a better local optimal solution more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.

  15. Black hole thermodynamics from Euclidean horizon constraints.

    PubMed

    Carlip, S

    2007-07-13

    To explain black hole thermodynamics in quantum gravity, one must introduce constraints to ensure that a black hole is actually present. I show that for a large class of black holes, such "horizon constraints" allow the use of conformal field theory techniques to compute the density of states, reproducing the Bekenstein-Hawking entropy in a nearly model-independent manner. One standard string theory approach to black hole entropy arises as a special case, lending support to the claim that the mechanism may be "universal." I argue that the relevant degrees of freedom are Goldstone-boson-like excitations arising from the weak breaking of symmetry by the constraints.

  16. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    PubMed

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  17. A Random Walk Approach to Query Informative Constraints for Clustering.

    PubMed

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the appealing property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
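    Commute time itself has a closed form in the Laplacian eigenspectrum: C(i, j) = vol(G)(L⁺_ii + L⁺_jj − 2L⁺_ij), where L⁺ is the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal sketch on an invented four-node graph:

```python
import numpy as np

# Commute time from the graph Laplacian pseudoinverse:
# C(i,j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij). Toy undirected graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian
Lp = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse
vol = A.sum()                       # graph volume (twice the edge count)
d = np.diag(Lp)
C = vol * (d[:, None] + d[None, :] - 2 * Lp)
print(C)
```

Nodes 0 and 1 are connected both directly and through node 2, so their commute time is smaller than between node 0 and the pendant node 3, illustrating the "more short paths, more similar" property the abstract relies on.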

  18. Constraints on the magnetic fields in galaxies implied by the infrared-to-radio correlation

    NASA Technical Reports Server (NTRS)

    Helou, George; Bicay, M. D.

    1990-01-01

    A physical model is proposed for understanding the tight correlation between far-IR and nonthermal radio luminosities in star-forming galaxies. The approach suggests that the only constraint implied by the correlation is a universal relation whereby magnetic field strength scales with gas density to a power beta between 1/3 and 2/3, inclusive.

  19. Fluid-structure interaction including volumetric coupling with homogenised subdomains for modeling respiratory mechanics.

    PubMed

    Yoshihara, Lena; Roth, Christian J; Wall, Wolfgang A

    2017-04-01

    In this article, a novel approach is presented for combining standard fluid-structure interaction with additional volumetric constraints to model fluid flow into and from homogenised solid domains. The proposed algorithm is particularly interesting for investigations in the field of respiratory mechanics as it enables the mutual coupling of airflow in the conducting part and local tissue deformation in the respiratory part of the lung by means of a volume constraint. In combination with a classical monolithic fluid-structure interaction approach, a comprehensive model of the human lung can be established that will be useful to gain new insights into respiratory mechanics in health and disease. To illustrate the validity and versatility of the novel approach, three numerical examples including a patient-specific lung model are presented. The proposed algorithm proves its capability of computing clinically relevant airflow distribution and tissue strain data at a level of detail that is not yet achievable with either current imaging techniques or existing computational models. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  1. Thematic Approaches to Teaching Rhetorical Criticism.

    ERIC Educational Resources Information Center

    Henry, David; Sharp, Harry, Jr.

    1989-01-01

    Argues that a thematic approach to teaching criticism--based on frequent, integrated writing tasks--accommodates the constraints found in the typical undergraduate course on rhetorical criticism. Illustrates this approach with reference to two themes: Ronald Reagan's discourse and the rhetoric of war and peace. (MM)

  2. Solar system and equivalence principle constraints on f(R) gravity by the chameleon approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Capozziello, Salvatore; Tsujikawa, Shinji

    2008-05-15

    We study constraints on f(R) dark energy models from solar system experiments combined with experiments on the violation of the equivalence principle. When the mass of an equivalent scalar field degree of freedom is heavy in a region with high density, a spherically symmetric body has a thin shell so that an effective coupling of the fifth force is suppressed through a chameleon mechanism. We place experimental bounds on the cosmologically viable models recently proposed in the literature that have an asymptotic form f(R) = R − λR_c[1 − (R_c/R)^(2n)] in the regime R ≫ R_c. From the solar system constraints on the post-Newtonian parameter γ, we derive the bound n > 0.5, whereas the constraints from the violations of the weak and strong equivalence principles give the bound n > 0.9. This leaves open the possibility of finding deviations from the Λ-cold dark matter (ΛCDM) cosmological model. For the model f(R) = R − λR_c(R/R_c)^p with 0

  3. Integration of a constraint-based metabolic model of Brassica napus developing seeds with 13C-metabolic flux analysis

    PubMed Central

    Hay, Jordan O.; Shi, Hai; Heinzel, Nicolas; Hebbelmann, Inga; Rolletschek, Hardy; Schwender, Jorg

    2014-01-01

    The use of large-scale or genome-scale metabolic reconstructions for modeling and simulation of plant metabolism and integration of those models with large-scale omics and experimental flux data is becoming increasingly important in plant metabolic research. Here we report an updated version of bna572, a bottom-up reconstruction of oilseed rape (Brassica napus L.; Brassicaceae) developing seeds with emphasis on representation of biomass-component biosynthesis. New features include additional seed-relevant pathways for isoprenoid, sterol, phenylpropanoid, flavonoid, and choline biosynthesis. Being now based on standardized data formats and procedures for model reconstruction, bna572+ is available as a COBRA-compliant Systems Biology Markup Language (SBML) model and conforms to the Minimum Information Requested in the Annotation of Biochemical Models (MIRIAM) standards for annotation of external data resources. Bna572+ contains 966 genes, 671 reactions, and 666 metabolites distributed among 11 subcellular compartments. It is referenced to the Arabidopsis thaliana genome, with gene-protein-reaction (GPR) associations resolving subcellular localization. Detailed mass and charge balancing and confidence scoring were applied to all reactions. Using B. napus seed specific transcriptome data, expression was verified for 78% of bna572+ genes and 97% of reactions. Alongside bna572+ we also present a revised carbon centric model for 13C-Metabolic Flux Analysis (13C-MFA) with all its reactions being referenced to bna572+ based on linear projections. By integration of flux ratio constraints obtained from 13C-MFA and by elimination of infinite flux bounds around thermodynamically infeasible loops based on COBRA loopless methods, we demonstrate improvements in predictive power of Flux Variability Analysis (FVA). Using this combined approach we characterize the difference in metabolic flux of developing seeds of two B. napus genotypes contrasting in starch and oil content. 
PMID:25566296

  4. Constraint-based modeling in microbial food biotechnology

    PubMed Central

    Rau, Martin H.

    2018-01-01

    Genome-scale metabolic network reconstruction offers a means to leverage the value of the exponentially growing genomics data and integrate it with other biological knowledge in a structured format. Constraint-based modeling (CBM) enables both the qualitative and quantitative analyses of the reconstructed networks. The rapid advancements in these areas can benefit both the industrial production of microbial food cultures and their application in food processing. CBM provides several avenues for improving our mechanistic understanding of physiology and genotype–phenotype relationships. This is essential for the rational improvement of industrial strains, which can further be facilitated through various model-guided strain design approaches. CBM of microbial communities offers a valuable tool for the rational design of defined food cultures, where it can catalyze hypothesis generation and provide unintuitive rationales for the development of enhanced community phenotypes and, consequently, novel or improved food products. In the industrial-scale production of microorganisms for food cultures, CBM may enable a knowledge-driven bioprocess optimization by rationally identifying strategies for growth and stability improvement. Through these applications, we believe that CBM can become a powerful tool for guiding the areas of strain development, culture development and process optimization in the production of food cultures. Nevertheless, in order to make the correct choice of the modeling framework for a particular application and to interpret model predictions in a biologically meaningful manner, one should be aware of the current limitations of CBM. PMID:29588387

  5. Text Mining for Protein Docking

    PubMed Central

    Badal, Varsha D.; Kundrotas, Petras J.; Vakser, Ilya A.

    2015-01-01

    The rapidly growing amount of publicly available information from biomedical research is readily accessible on the Internet, providing a powerful resource for predictive biomolecular modeling. The accumulated data on experimentally determined structures transformed structure prediction of proteins and protein complexes. Instead of exploring the enormous search space, predictive tools can simply proceed to the solution based on similarity to the existing, previously determined structures. A similar major paradigm shift is emerging due to the rapidly expanding amount of information, other than experimentally determined structures, that can still be used as constraints in biomolecular structure prediction. Automated text mining has been widely used in recreating protein interaction networks, as well as in detecting small ligand binding sites on protein structures. Combining and expanding these two well-developed areas of research, we applied text mining to structural modeling of protein-protein complexes (protein docking). Protein docking can be significantly improved when constraints on the docking mode are available. We developed a procedure that retrieves published abstracts on a specific protein-protein interaction and extracts information relevant to docking. The procedure was assessed on protein complexes from Dockground (http://dockground.compbio.ku.edu). The results show that correct information on binding residues can be extracted for about half of the complexes. The amount of irrelevant information was reduced by conceptual analysis of a subset of the retrieved abstracts, based on the bag-of-words (features) approach. Support Vector Machine models were trained and validated on the subset. The remaining abstracts were filtered by the best-performing models, which decreased the irrelevant information for ~25% of the complexes in the dataset.
The extracted constraints were incorporated in the docking protocol and tested on the Dockground unbound benchmark set, significantly increasing the docking success rate. PMID:26650466

  6. Spatial and Spin Symmetry Breaking in Semidefinite-Programming-Based Hartree-Fock Theory.

    PubMed

    Nascimento, Daniel R; DePrince, A Eugene

    2018-05-08

    The Hartree-Fock problem was recently recast as a semidefinite optimization over the space of rank-constrained two-body reduced-density matrices (RDMs) [Phys. Rev. A 2014, 89, 010502(R)]. This formulation of the problem transfers the nonconvexity of the Hartree-Fock energy functional to the rank constraint on the two-body RDM. We consider an equivalent optimization over the space of positive semidefinite one-electron RDMs (1-RDMs) that retains the nonconvexity of the Hartree-Fock energy expression. The optimized 1-RDM satisfies ensemble N-representability conditions, and ensemble spin-state conditions may be imposed as well. The spin-state conditions place additional linear and nonlinear constraints on the 1-RDM. We apply this RDM-based approach to several molecular systems and explore its spatial (point group) and spin (Ŝ² and Ŝ₃) symmetry breaking properties. When imposing Ŝ² and Ŝ₃ symmetry but relaxing point group symmetry, the procedure often locates spatial-symmetry-broken solutions that are difficult to identify using standard algorithms. For example, the RDM-based approach yields a smooth, spatial-symmetry-broken potential energy curve for the well-known BeH₂ insertion pathway. We also demonstrate numerically that, upon relaxation of Ŝ² and Ŝ₃ symmetry constraints, the RDM-based approach is equivalent to real-valued generalized Hartree-Fock theory.

  7. Robust extraction of basis functions for simultaneous and proportional myoelectric control via sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Lin, Chuang; Wang, Binghui; Jiang, Ning; Farina, Dario

    2018-04-01

    Objective. This paper proposes a novel simultaneous and proportional multiple degree of freedom (DOF) myoelectric control method for active prostheses. Approach. The approach is based on non-negative matrix factorization (NMF) of surface EMG signals with the inclusion of sparseness constraints. By applying a sparseness constraint to the control signal matrix, it is possible to extract the basis information from arbitrary movements (quasi-unsupervised approach) for multiple DOFs concurrently. Main Results. In online testing based on target hitting, able-bodied subjects reached a greater throughput (TP) when using sparse NMF (SNMF) than with classic NMF or with linear regression (LR). Accordingly, the completion time (CT) was shorter for SNMF than NMF or LR. The same observations were made in two patients with unilateral limb deficiencies. Significance. The addition of sparseness constraints to NMF allows for a quasi-unsupervised approach to myoelectric control with superior results with respect to previous methods for the simultaneous and proportional control of multi-DOF. The proposed factorization algorithm allows robust simultaneous and proportional control, is superior to previous supervised algorithms, and, because of minimal supervision, paves the way to online adaptation in myoelectric control.
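    The effect of a sparseness constraint can be sketched with plain multiplicative-update NMF in which an L1 penalty on the coefficient matrix H simply enters the update's denominator. The synthetic "EMG envelope" data and the penalty weight below are invented, and this is a generic sketch rather than the authors' exact SNMF algorithm:

```python
import numpy as np

# Minimal NMF with an L1 (sparseness) penalty on the coefficient matrix H,
# via multiplicative updates. EMG-like data and the penalty weight are synthetic.
rng = np.random.default_rng(0)
n_channels, n_samples, n_dof = 8, 200, 2
W_true = rng.random((n_channels, n_dof))
H_true = np.abs(rng.standard_normal((n_dof, n_samples)))
V = W_true @ H_true                      # nonnegative "EMG envelope" matrix

W = rng.random((n_channels, n_dof)) + 1e-3
H = rng.random((n_dof, n_samples)) + 1e-3
lam = 0.1                                # sparseness weight on H
for _ in range(500):
    # the L1 term adds lam to the denominator, shrinking H toward sparsity
    H *= (W.T @ V) / (W.T @ W @ H + lam + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)
```

The multiplicative form keeps both factors nonnegative throughout, which is what makes the columns of W interpretable as muscle-synergy-like basis vectors.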

  8. An investigation of constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and the physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
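    The multi-directional character of constraint propagation, as opposed to one-way assignment, can be sketched with a toy product constraint that solves for whichever variable is still unknown. The cell and constraint classes and the wing-geometry example below are invented, not taken from Rubber Airplane:

```python
# Toy multi-directional constraint propagation: a product constraint
# a*b = c solves for whichever variable is still unknown, so declarations
# behave as relationships, not one-way assignments. All names are invented.
class Cell:
    def __init__(self):
        self.value, self.users = None, []
    def set(self, v):
        if self.value is None:          # set once, then notify constraints
            self.value = v
            for c in self.users:
                c.propagate()

class Product:                          # constraint: a * b == c
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.users.append(self)
    def propagate(self):
        a, b, c = self.a, self.b, self.c
        if a.value is not None and b.value is not None:
            c.set(a.value * b.value)
        elif a.value is not None and c.value is not None:
            b.set(c.value / a.value)
        elif b.value is not None and c.value is not None:
            a.set(c.value / b.value)

# wing area = span * chord; fixing any two determines the third
span, chord, area = Cell(), Cell(), Cell()
Product(span, chord, area)
area.set(20.0)
span.set(10.0)
print(chord.value)   # solved "backwards" from area and span
```

Because the constraint fires whenever any of its cells becomes known, the same declaration computes forward (area from span and chord) or backward (chord from area and span), which is the behavior the abstract contrasts with conventional assignment.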

  9. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, the modeling analysis yields a “single answer” that includes uncertainty. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.

  10. Adaptive and neuroadaptive control for nonnegative and compartmental dynamical systems

    NASA Astrophysics Data System (ADS)

    Volyanskyy, Kostyantyn Y.

    Neural networks have been extensively used for adaptive system identification as well as adaptive and neuroadaptive control of highly uncertain systems. The goal of adaptive and neuroadaptive control is to achieve system performance without excessive reliance on system models. To improve the robustness and the speed of adaptation of adaptive and neuroadaptive controllers, several controller architectures have been proposed in the literature. In this dissertation, we develop a new neuroadaptive control architecture for nonlinear uncertain dynamical systems. The proposed framework involves a novel controller architecture with additional terms in the update laws that are constructed using a moving window of the integrated system uncertainty. These terms can be used to identify the ideal system weights of the neural network as well as effectively suppress system uncertainty. Linear and nonlinear parameterizations of the system uncertainty are considered, and state and output feedback neuroadaptive controllers are developed. Furthermore, we extend the developed framework to discrete-time dynamical systems. To illustrate the efficacy of the proposed approach, we apply our results to an aircraft model with wing rock dynamics, a spacecraft model with unknown moment of inertia, and an unmanned combat aerial vehicle undergoing actuator failures, and compare our results with standard neuroadaptive control methods. Nonnegative systems are essential in capturing the behavior of a wide range of dynamical systems involving dynamic states whose values are nonnegative. A subclass of nonnegative dynamical systems are compartmental systems. These systems are derived from mass and energy balance considerations and are composed of homogeneous interconnected microscopic subsystems or compartments which exchange variable quantities of material via intercompartmental flow laws.
    In this dissertation, we develop a direct adaptive and neuroadaptive control framework for stabilization, disturbance rejection and noise suppression for nonnegative and compartmental dynamical systems with noise and exogenous system disturbances. We then use the developed framework to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of continuing hemorrhage and hemodilution. Critical care patients, whether undergoing surgery or recovering in intensive care units, require drug administration to regulate physiological variables such as blood pressure, cardiac output, heart rate, and degree of consciousness. The rate of infusion of each administered drug is critical, requiring constant monitoring and frequent adjustments. In this dissertation, we develop a neuroadaptive output feedback control framework for nonlinear uncertain nonnegative and compartmental systems with nonnegative control inputs and noisy measurements. The proposed framework is Lyapunov-based and guarantees ultimate boundedness of the error signals. In addition, the neuroadaptive controller guarantees that the physical system states remain in the nonnegative orthant of the state space. Finally, the developed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for surgery in the face of noisy electroencephalographic (EEG) measurements. Clinical trials demonstrate excellent regulation of unconsciousness, allowing for a safe and effective administration of the anesthetic agent propofol. Furthermore, a neuroadaptive output feedback control architecture for nonlinear nonnegative dynamical systems with input amplitude and integral constraints is developed.
Specifically, the neuroadaptive controller guarantees that the imposed amplitude and integral input constraints are satisfied and the physical system states remain in the nonnegative orthant of the state space. The proposed approach is used to control the infusion of the anesthetic drug propofol for maintaining a desired constant level of depth of anesthesia for noncardiac surgery in the face of infusion rate constraints and a drug dosing constraint over a specified period. In addition, the aforementioned control architecture is used to control lung volume and minute ventilation with input pressure constraints that also accounts for spontaneous breathing by the patient. Specifically, we develop a pressure- and work-limited neuroadaptive controller for mechanical ventilation based on a nonlinear multi-compartmental lung model. The control framework does not rely on any averaged data and is designed to automatically adjust the input pressure to the patient's physiological characteristics capturing lung resistance and compliance modeling uncertainty. Moreover, the controller accounts for input pressure constraints as well as work of breathing constraints. The effect of spontaneous breathing is incorporated within the lung model and the control framework. Finally, a neural network hybrid adaptive control framework for nonlinear uncertain hybrid dynamical systems is developed. The proposed hybrid adaptive control framework is Lyapunov-based and guarantees partial asymptotic stability of the closed-loop hybrid system; that is, asymptotic stability with respect to part of the closed-loop system states associated with the hybrid plant states. A numerical example is provided to demonstrate the efficacy of the proposed hybrid adaptive stabilization approach.

  11. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data.

    PubMed

    Perez, S Ivan; Tejedor, Marcelo F; Novo, Nelson M; Aristide, Leandro

    2013-01-01

The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular-based approaches to date the initial divergence of the platyrrhine clade: Bayesian estimation under a relaxed-clock model, and substitution rate corrected by generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27 to 31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21 and 29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well-supported fossil information gives a more robust divergence time estimate of a clade.
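The substitution-rate approach can be reduced to back-of-envelope arithmetic. The following sketch uses invented numbers and a strict clock without the paper's generation-time and body-size corrections: under a constant rate r per lineage, a pairwise genetic distance d implies a divergence time t = d / (2r).

```python
# Strict molecular-clock arithmetic (illustrative numbers, not the paper's
# data): two lineages diverging t years ago accumulate d = 2*r*t
# substitutions per site between them, so t = d / (2*r).
d = 0.058        # hypothetical pairwise distance, substitutions per site
r = 1.0e-9       # hypothetical rate, substitutions per site per year per lineage
t_years = d / (2 * r)
t_ma = t_years / 1e6   # millions of years (Ma)
```

With these illustrative values the estimate lands at 29 Ma; rate corrections for generation time and body size, as in the paper, shift such estimates substantially.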

  12. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data

    PubMed Central

    Perez, S. Ivan; Tejedor, Marcelo F.; Novo, Nelson M.; Aristide, Leandro

    2013-01-01

The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular-based approaches to date the initial divergence of the platyrrhine clade: Bayesian estimation under a relaxed-clock model, and substitution rate corrected by generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27 to 31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21 and 29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well-supported fossil information gives a more robust divergence time estimate of a clade. 
PMID:23826358

  13. Sybil--efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
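The flux-balance analysis (FBA) that sybil accelerates is, at its core, a linear program: maximize an objective flux subject to the steady-state constraint S v = 0 and flux bounds. The sketch below solves a toy two-metabolite, three-reaction network with SciPy; the network, bounds, and objective are invented for illustration, and this is not sybil's R API.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> export).
# Rows of S are metabolites A and B; columns are reactions R1..R3.
S = np.array([[1.0, -1.0,  0.0],   # A: produced by R1, consumed by R2
              [0.0,  1.0, -1.0]])  # B: produced by R2, consumed by R3
c = [0.0, 0.0, -1.0]               # linprog minimizes, so negate "max v3"
bounds = [(0.0, 10.0),             # uptake capped at 10
          (0.0, 1000.0),
          (0.0, 1000.0)]
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
optimal_flux = -res.fun            # maximal export flux, limited by uptake
```

At steady state all three fluxes must be equal, so the uptake bound of 10 caps the objective; genome-scale models simply have thousands of such rows and columns.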

  14. Optimal Chemotherapy for Leukemia: A Model-Based Strategy for Individualized Treatment

    PubMed Central

    Jayachandran, Devaraj; Rundell, Ann E.; Hannemann, Robert E.; Vik, Terry A.; Ramkrishna, Doraiswami

    2014-01-01

Acute Lymphoblastic Leukemia, commonly known as ALL, is a predominant form of cancer during childhood. With the advent of modern healthcare support, the 5-year survival rate has been impressive in the recent past. However, long-term ALL survivors battle several treatment-related medical and socio-economic complications due to excessive and inordinate chemotherapy doses received during treatment. In this work, we present a model-based approach to personalize 6-Mercaptopurine (6-MP) treatment for childhood ALL with a provision for incorporating the pharmacogenomic variations among patients. Semi-mechanistic mathematical models were developed and validated for i) 6-MP metabolism, ii) red blood cell mean corpuscular volume (MCV) dynamics, a surrogate marker for treatment efficacy, and iii) leukopenia, a major side-effect. Given the constraint of limited data from clinics, a global-sensitivity-analysis-based model reduction technique was employed to reduce the parameter space arising from the semi-mechanistic models. The reduced, sensitive parameters were used to individualize the average patient model to a specific patient so as to minimize the model uncertainty. The models fit the data well and mimic the diverse behavior observed among patients with a minimal number of parameters. The model was validated with real patient data obtained from the literature and from Riley Hospital for Children in Indianapolis. Patient models were used to optimize the dose for an individual patient through nonlinear model predictive control. The implementation of our approach in clinical practice is realizable with routinely measured complete blood counts (CBC) and a few additional metabolite measurements. The proposed approach promises to achieve model-based individualized treatment for a specific patient, as opposed to a standard dose for all, and to prescribe an optimal dose for a desired outcome with minimum side-effects. PMID:25310465
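The model-predictive dosing idea can be shown in miniature. The sketch below uses a hypothetical one-compartment model and made-up parameters, not the paper's 6-MP pharmacokinetics: a one-step horizon picks the candidate dose whose predicted next-day concentration is closest to a target.

```python
# One-step model-predictive dosing sketch (hypothetical parameters).
k_elim = 0.3          # fractional daily elimination (invented)
target = 4.0          # desired concentration (invented units)
x = 1.0               # current measured concentration

def predict(x, dose):
    # Toy one-compartment update: retain (1 - k_elim) of today's drug, add dose.
    return (1 - k_elim) * x + dose

candidates = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
best_dose = min(candidates, key=lambda d: (predict(x, d) - target) ** 2)
```

A real nonlinear MPC would optimize a dose sequence over a multi-day horizon, under a patient-individualized model and side-effect constraints, but the receding-horizon selection step has this shape.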

  15. Estimation of cardiac conductivities in ventricular tissue by a variational approach

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Veneziani, Alessandro

    2015-11-01

The bidomain model is the current standard model to simulate cardiac potential propagation. The numerical solution of this system of partial differential equations strongly depends on the model parameters and in particular on the cardiac conductivities. Unfortunately, it is quite problematic to measure these parameters in vivo, and even more so in clinical practice, resulting in no common agreement in the literature. In this paper we consider a variational data assimilation approach to estimating those parameters. We consider the parameters as control variables to minimize the mismatch between the computed and the measured potentials under the constraint of the bidomain system. The existence of a minimizer of the misfit function is proved with the phenomenological Rogers-McCulloch ionic model, which completes the bidomain system. We significantly improve on the numerical approaches in the literature by resorting to a derivative-based optimization method while addressing some challenges due to discontinuity. The improvement in computational efficiency is confirmed by a 2D test as a direct comparison with approaches in the literature. The core of our numerical results is in 3D, on both idealized and real geometries, with the minimal ionic model. We demonstrate the reliability and the stability of the conductivity estimation approach in the presence of noise and with an imperfect knowledge of other model parameters.
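The variational structure, parameters as control variables minimizing a misfit between computed and measured signals, can be shown on a toy problem. This is not the bidomain system; it is a scalar decay model with a synthetic "measurement", fitted with SciPy's bounded scalar minimizer.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy variational estimation: recover the decay constant k of u(t) = exp(-k*t)
# by minimizing the misfit between the model output and synthetic data.
t = np.linspace(0.0, 2.0, 50)
k_true = 2.0
observed = np.exp(-k_true * t)          # noise-free synthetic measurements

def misfit(k):
    # Least-squares mismatch between computed and "measured" signals.
    return np.sum((np.exp(-k * t) - observed) ** 2)

res = minimize_scalar(misfit, bounds=(0.1, 10.0), method="bounded")
k_est = res.x
```

In the paper the forward model is a PDE system, so each misfit evaluation is a full bidomain solve and derivative-based optimization becomes essential; the toy preserves only the control-variable structure.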

  16. Discounting in cost-utility analysis of healthcare interventions: reassessing current practice.

    PubMed

    Cohen, Brian J

    2003-01-01

Cost-utility analysis (CUA) is a technique that can potentially be used as a guide to allocating healthcare resources so as to obtain the maximum health benefits possible under a given budget constraint. However, it is not clear that current practice captures societal preferences regarding health benefits. In analyses of healthcare interventions providing survival benefits, the market rate of interest is the sole empirical variable that reflects societal preferences. This approach is based on the assumptions that: (i) healthcare interventions should be ranked using cost-effectiveness (CE) ratios; (ii) the discount rate for costs in CUA should be equal to that used in cost-benefit analysis (CBA); (iii) the discount rate in CBA should be the market rate of interest on long-term government bonds; and (iv) the Keeler-Cretin paradox is applicable to CUA of healthcare interventions, so that the discount rate for benefits in CUA should be set equal to the discount rate for costs. This approach ignores a fundamental difference between CBA and CUA, namely that CUA assumes that a budget constraint has been specified prior to the analysis. It starts with the assumption that a given amount of funds has been withdrawn from the economy to fund healthcare, so there is no opportunity cost to consider. For that reason, the principles on which the choice of discount rate rests differ between the two techniques. Furthermore, the use of CE ratios to rank interventions assumes that the budget constraint can be expressed as a single constraint. But healthcare budgets are multiyear budgets that are roughly constant from year to year. A more realistic model would involve multiple constraints and would require linear programming for its solution. This can be reduced to a series of single constraints, thereby allowing use of the simpler CE ratio approach, if we assume that the budget being allocated is intended for one cohort at a time, i.e. 
all people for whom a new funding decision must be made in a given year. In general, we assume that future cohorts will be allotted comparable funding. However, the Keeler-Cretin paradox depends on the assumption that cohorts are competing with each other for resources, and is therefore not applicable to CUA of healthcare. Other approaches are therefore needed to assign utilities to healthcare interventions providing survival benefits. Methods should be developed that allow analyses to reflect a range of philosophical approaches through sensitivity analysis.
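The discounting operation at the center of this debate is a one-line formula: the present value of a benefit stream b_t at rate r is PV = sum over t of b_t / (1 + r)^t. The numbers below are illustrative, not from the article.

```python
# Present value of a stream of health benefits under discounting
# (illustrative numbers): one QALY per year for 10 years at a 3% rate.
r = 0.03
benefits = [1.0] * 10
pv = sum(b / (1 + r) ** t for t, b in enumerate(benefits, start=1))
```

Ten undiscounted QALYs shrink to roughly 8.53 discounted QALYs at 3%, which is why the choice of rate for benefits, the point the Keeler-Cretin argument turns on, materially changes intervention rankings.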

  17. Credibilistic multi-period portfolio optimization based on scenario tree

    NASA Astrophysics Data System (ADS)

    Mohebbi, Negin; Najafi, Amir Abbas

    2018-02-01

In this paper, we consider a multi-period fuzzy portfolio optimization model that accounts for transaction costs and the possibility of risk-free investment. We formulate a bi-objective mean-VaR portfolio selection model based on the integration of fuzzy credibility theory and a scenario tree in order to deal with market uncertainty. The scenario tree is also a proper method for modeling multi-period portfolio problems, given the length and continuity of their horizon. We take return and risk as well as cardinality, threshold, class, and liquidity constraints into consideration for further compliance of the model with reality. Then, an interactive dynamic programming method, which is based on a two-phase fuzzy interactive approach, is employed to solve the proposed model. In order to verify the proposed model, we present an empirical application in the NYSE under different circumstances. The results show that the consideration of data uncertainty and other real-world assumptions leads to more practical and efficient solutions.
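The VaR side of the mean-VaR objective is easy to ground with a plain empirical example. The paper uses credibilistic VaR under fuzzy returns; the sketch below is the ordinary historical analogue on made-up daily returns.

```python
import numpy as np

# Historical value-at-risk (VaR) sketch with invented daily returns:
# VaR at confidence alpha is the loss not exceeded with probability alpha,
# i.e. the negated (1 - alpha) quantile of the return distribution.
returns = np.array([0.01, -0.02, 0.005, -0.015, 0.02, -0.03, 0.01, 0.0,
                    -0.01, 0.015])
alpha = 0.90
var_90 = -np.quantile(returns, 1 - alpha)
```

The credibilistic version replaces the empirical quantile with a quantile of a fuzzy credibility distribution over scenario-tree outcomes, but the risk measure plays the same role in the bi-objective trade-off.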

  18. Modelling the spreading rate of controlled communicable epidemics through an entropy-based thermodynamic model

    NASA Astrophysics Data System (ADS)

    Wang, WenBin; Wu, ZiNiu; Wang, ChunFeng; Hu, RuiFeng

    2013-11-01

A model based on a thermodynamic approach is proposed for predicting the dynamics of communicable epidemics assumed to be governed by controlling efforts of multiple scales, so that an entropy is associated with the system. All the epidemic details are factored into a single, time-dependent coefficient; the functional form of this coefficient is found through four constraints, including notably the existence of an inflexion point and a maximum. The model is solved to give a log-normal distribution for the spread rate, for which a Shannon entropy can be defined. The only parameter, which characterizes the width of the distribution function, is uniquely determined by maximizing the rate of entropy production. This entropy-based thermodynamic (EBT) model predicts the number of hospitalized cases with reasonable accuracy for SARS in the year 2003. The EBT model can be of use for potential epidemics such as avian influenza and H7N9 in China.
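The Shannon entropy attached to the log-normal spread rate has a closed form. For a log-normal distribution with parameters mu and sigma, the differential entropy is H = mu + 1/2 + ln(sigma * sqrt(2*pi)); the parameter values below are illustrative, not fitted to the SARS data.

```python
import math

# Differential (Shannon) entropy of a log-normal(mu, sigma) distribution,
# the quantity whose production rate the EBT model maximizes to pin down
# the width parameter. Values of mu and sigma are invented for illustration.
mu, sigma = 0.0, 0.5
H = mu + 0.5 + math.log(sigma * math.sqrt(2 * math.pi))
```

Because H grows monotonically with sigma, maximizing the rate of entropy production yields a well-defined optimal width rather than a degenerate one, which is what makes the model's single parameter identifiable.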

  19. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across the different processors of the machine. Most compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach to the problem of automatic data partitioning for numeric programs, the constraint-based approach, is presented. In this approach, the compiler identifies desirable requirements on the distribution of the various arrays referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is made by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, which accepts Fortran 77 programs and specifies the partitioning scheme to be used for each array in the program. We have obtained results on programs taken from the Linpack and Eispack libraries and the Perfect Benchmarks. 
These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.
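The constraint-combining step can be caricatured in a few lines. The schemes, constraints, and quality values below are invented, and Paradigm's actual resolution is far more elaborate, but the shape is the same: each candidate distribution scheme accumulates the quality measures of the constraints favoring it, and the scheme with the highest total wins.

```python
# Toy constraint combination (hypothetical schemes and quality measures):
# each tuple is (distribution scheme favored, quality measure from static
# performance estimation). Conflicts are resolved by total quality.
constraints = [
    ("row-block", 5.0),   # e.g. a stencil access pattern, high importance
    ("col-block", 2.0),   # e.g. a column sweep elsewhere in the program
    ("row-block", 1.5),
]
scores = {}
for scheme, quality in constraints:
    scores[scheme] = scores.get(scheme, 0.0) + quality
best_scheme = max(scores, key=scores.get)
```

Here the two row-block constraints outweigh the single col-block one, so the array would be distributed row-block despite the conflict.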

  20. Finite Element Method-Based Kinematics and Closed-Loop Control of Soft, Continuum Manipulators.

    PubMed

    Bieze, Thor Morales; Largilliere, Frederick; Kruszewski, Alexandre; Zhang, Zhongkai; Merzouki, Rochdi; Duriez, Christian

    2018-06-01

This article presents a modeling methodology and experimental validation for soft manipulators to obtain the forward kinematic model (FKM) and inverse kinematic model (IKM) under quasi-static conditions (in the literature, these manipulators are usually classified as continuum robots; however, their main characteristic of interest in this article is that they create motion by deformation, as opposed to the classical use of articulations). It offers a way to obtain the kinematic characteristics of this type of soft robot that is suitable for offline path planning and position control. The modeling methodology presented relies on continuum mechanics, which does not provide analytic solutions in the general case. Our approach proposes a real-time numerical integration strategy based on the finite element method, with a numerical optimization based on Lagrange multipliers, to obtain the FKM and IKM. To reduce the dimension of the problem, at each step a projection of the model onto the constraint space (gathering actuators, sensors, and end-effector) is performed to obtain the smallest possible number of mathematical equations to be solved. This methodology is applied to obtain the kinematics of two different manipulators with complex structural geometry. An experimental comparison is also performed on one of the robots between two other geometric approaches and the approach showcased in this article. A closed-loop controller based on a state estimator is proposed. The controller is experimentally validated and its robustness is evaluated using the Lyapunov stability method.
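The Lagrange-multiplier optimization at the heart of this pipeline can be illustrated on a tiny equality-constrained quadratic program. The problem below is invented and unrelated to any specific robot: minimize (1/2)||x||^2 subject to A x = b, solved by assembling and solving the KKT system directly.

```python
import numpy as np

# Equality-constrained QP via Lagrange multipliers (toy analogue of
# projecting an FEM model onto a small constraint space):
#   minimize 1/2 ||x||^2   subject to   A x = b.
# Stationarity of the Lagrangian gives the KKT system
#   [ I   A^T ] [ x   ]   [ 0 ]
#   [ A   0   ] [ lam ] = [ b ].
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
n, m = A.shape[1], A.shape[0]
K = np.block([[np.eye(n), A.T],
              [A, np.zeros((m, m))]])
rhs = np.concatenate([np.zeros(n), b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]
```

Here the minimum-norm point on the line x1 + x2 = 2 is (1, 1). In the article's setting the quadratic comes from the FEM stiffness and the constraints gather actuators, sensors, and the end-effector, but the saddle-point structure is the same.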
