Sample records for model-driven constraint programming

  1. RSM 1.0 user's guide: A resupply scheduler using integer optimization

    NASA Technical Reports Server (NTRS)

    Viterna, Larry A.; Green, Robert D.; Reed, David M.

    1991-01-01

    The Resupply Scheduling Model (RSM) is a PC-based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user-defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes properties such as mass and volume and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.
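
    The abstract describes the optimization only verbally; purely as an illustration (not RSM's actual equations), a replacement-scheduling integer program with a per-period mass budget might be written as below, where x_{i,t} = 1 if component i is replaced in period t, L_i is its life in periods, m_i its mass, M_t the period's mass limit, and c_i an illustrative weighting factor:

      \begin{aligned}
      \min_{x}\quad & \sum_{i}\sum_{t} c_{i}\,x_{i,t} \\
      \text{s.t.}\quad & \sum_{t'=t}^{t+L_{i}} x_{i,t'} \;\ge\; 1 \quad \text{for every component } i \text{ and period } t \quad \text{(replace on or before life expires)} \\
      & \sum_{i} m_{i}\,x_{i,t} \;\le\; M_{t} \quad \text{for every period } t \quad \text{(transportation mass limit)} \\
      & x_{i,t} \in \{0,1\}.
      \end{aligned}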

  2. Real-Time MENTAT programming language and architecture

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Silberman, Ami; Liu, Jane W. S.

    1989-01-01

    Real-time MENTAT, a programming environment designed to simplify the task of programming real-time applications in distributed and parallel environments, is described. It is based on the same data-driven computation model and object-oriented programming paradigm as MENTAT. It provides an easy-to-use mechanism to exploit parallelism, language constructs for the expression and enforcement of timing constraints, and run-time support for scheduling and executing real-time programs. The real-time MENTAT programming language is an extended C++. The extensions are added to facilitate automatic detection of data flow and generation of data flow graphs, to express the timing constraints of individual granules of computation, and to provide scheduling directives for the runtime system. A high-level view of the real-time MENTAT system architecture and programming language constructs is provided.

  3. Modeling and Simulation Behavior Validation Methodology and Extension Model Validation for the Individual Soldier

    DTIC Science & Technology

    2015-03-01

    domains. Major model functions include: • Ground combat: Light and heavy forces. • Air mobile forces. • Future forces. • Fixed-wing and rotary-wing...Constraints: • Study must be completed no later than 31 December 2014. • Entity behavior limited to select COMBATXXI Mobility, Unmanned Aerial System...and SQL backend, as well as any open application programming interface (API). • Allows data transparency and data-driven navigation through the model

  4. Reanalysis of Clause Boundaries in Japanese as a Constraint-Driven Process.

    ERIC Educational Resources Information Center

    Miyamoto, Edson T.

    2003-01-01

    Reports on two experiments that focus on clause boundaries in Japanese that suggest that minimal change restriction is unnecessary to characterize reanalysis. Proposes that the data and previous observations are more naturally explained by a constraint-driven model in which revisions are performed only when required by parsing constraints.…

  5. Robust Synchronization Models for Presentation System Using SMIL-Driven Approach

    ERIC Educational Resources Information Center

    Asnawi, Rustam; Ahmad, Wan Fatimah Wan; Rambli, Dayang Rohaya Awang

    2013-01-01

    Current common Presentation System (PS) models are slide-based and lack synchronization analysis with either temporal or spatial constraints. Such models, in fact, tend to lead to synchronization problems, particularly in parallel synchronization with spatial constraints between multimedia element presentations. However, parallel…

  6. Model-based control strategies for systems with constraints of the program type

    NASA Astrophysics Data System (ADS)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material; the latter are referred to as program constraints. The program constraint equations represent tasks imposed on system motions; they can be differential equations of order higher than one or two, and they may be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner that generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method makes it possible to combine material and program constraints and merge them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they apply to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.

  7. Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition

    PubMed Central

    Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.

    2015-01-01

    In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601

  8. SATware: A Semantic Approach for Building Sentient Spaces

    NASA Astrophysics Data System (ADS)

    Massaguer, Daniel; Mehrotra, Sharad; Vaisenberg, Ronen; Venkatasubramanian, Nalini

    This chapter describes the architecture of a semantic-based middleware environment for building sensor-driven sentient spaces. The proposed middleware explicitly models sentient space semantics (i.e., entities, spaces, activities) and supports mechanisms to map sensor observations to the state of the sentient space. We argue that such a semantic approach provides a powerful programming environment for building sensor spaces. In addition, the approach provides natural ways to exploit semantics for a variety of purposes, including scheduling under resource constraints and sensor recalibration.

  9. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial increase in the complexity of optimization problems that can be handled efficiently.

  10. Constraint Logic Programming approach to protein structure prediction.

    PubMed

    Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico

    2004-11-30

    The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. In the test implementation, their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique, molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored, methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting all the advances made in this research area, e.g. in constraint propagation and its use for pruning the huge search space.
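
    Purely as an illustrative sketch of casting a simplified lattice protein model as a constrained search — the paper uses Constraint Logic Programming on the face-centered cube lattice, whereas the toy below brute-forces an HP-style chain on a 2-D square lattice — one might write:

      from itertools import product

      MOVES = {"R": (1, 0), "L": (-1, 0), "U": (0, 1), "D": (0, -1)}

      def best_fold(seq):
          # Enumerate self-avoiding placements of the chain and maximise the number
          # of non-bonded H-H contacts (a stand-in for minimising a contact energy).
          best_contacts, best_moves = -1, None
          for moves in product(MOVES, repeat=len(seq) - 1):
              coords, ok = [(0, 0)], True
              for m in moves:
                  nxt = (coords[-1][0] + MOVES[m][0], coords[-1][1] + MOVES[m][1])
                  if nxt in coords:          # self-avoidance constraint violated
                      ok = False
                      break
                  coords.append(nxt)
              if not ok:
                  continue
              contacts = sum(
                  1
                  for i in range(len(seq)) for j in range(i + 2, len(seq))
                  if seq[i] == seq[j] == "H"
                  and abs(coords[i][0] - coords[j][0]) + abs(coords[i][1] - coords[j][1]) == 1)
              if contacts > best_contacts:
                  best_contacts, best_moves = contacts, "".join(moves)
          return best_contacts, best_moves

      print(best_fold("HPHPPHHPH"))   # (max H-H contacts, one optimal move string)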

  11. Scholarly Concentration Program Development: A Generalizable, Data-Driven Approach.

    PubMed

    Burk-Rafel, Jesse; Mullan, Patricia B; Wagenschutz, Heather; Pulst-Korenberg, Alexandra; Skye, Eric; Davis, Matthew M

    2016-11-01

    Scholarly concentration programs-also known as scholarly projects, pathways, tracks, or pursuits-are increasingly common in U.S. medical schools. However, systematic, data-driven program development methods have not been described. The authors examined scholarly concentration programs at U.S. medical schools that U.S. News & World Report ranked as top 25 for research or primary care (n = 43 institutions), coding concentrations and mission statements. Subsequently, the authors conducted a targeted needs assessment via a student-led, institution-wide survey, eliciting learners' preferences for 10 "Pathways" (i.e., concentrations) and 30 "Topics" (i.e., potential content) augmenting core curricula at their institution. Exploratory factor analysis (EFA) and a capacity optimization algorithm characterized best institutional options for learner-focused Pathway development. The authors identified scholarly concentration programs at 32 of 43 medical schools (74%), comprising 199 distinct concentrations (mean concentrations per program: 6.2, mode: 5, range: 1-16). Thematic analysis identified 10 content domains; most common were "Global/Public Health" (30 institutions; 94%) and "Clinical/Translational Research" (26 institutions; 81%). The institutional needs assessment (n = 468 medical students; response rate 60% overall, 97% among first-year students) demonstrated myriad student preferences for Pathways and Topics. EFA of Topic preferences identified eight factors, systematically related to Pathway preferences, informing content development. Capacity modeling indicated that offering six Pathways could guarantee 95% of first-year students (162/171) their first- or second-choice Pathway. This study demonstrates a generalizable, data-driven approach to scholarly concentration program development that reflects student preferences and institutional strengths, while optimizing program diversity within capacity constraints.

  12. Numerical model for learning concepts of streamflow simulation

    USGS Publications Warehouse

    DeLong, L.L.; ,

    1993-01-01

    Numerical models are useful for demonstrating principles of open-channel flow. Such models can allow experimentation with cause-and-effect relations, testing concepts of physics and numerical techniques. Four PT is a numerical model written primarily as a teaching supplement for a course in one-dimensional stream-flow modeling. Four PT options particularly useful in training include selection of governing equations, boundary-value perturbation, and user-programmable constraint equations. The model can simulate non-trivial concepts such as flow in complex interconnected channel networks, meandering channels with variable effective flow lengths, hydraulic structures defined by unique three-parameter relations, and density-driven flow. The model is coded in FORTRAN 77, and data encapsulation is used extensively to simplify maintenance and modification and to enhance the use of Four PT modules by other programs and programmers.

  13. Defense AT and L. Volume 43, Number 2. March-April 2014

    DTIC Science & Technology

    2016-09-16

    are in the public domain and may be reprinted or posted on the Internet. When reprinting, please credit the author and Defense AT&L. Some photos...driven strategies. The next section describes examples of programs initiated with schedule-driven constraints, while the following section discusses...get to full-rate production (FRP), because those numbers can be very high yet require significant post-production costs to repair or add capability

  14. Constraint-Driven Software Design: An Escape from the Waterfall Model.

    ERIC Educational Resources Information Center

    de Hoog, Robert; And Others

    1994-01-01

    Presents the principles of a development methodology for software design based on a nonlinear, product-driven approach that integrates quality aspects. Two examples are given to show that the flexibility needed for building high quality systems leads to integrated development environments in which methodology, product, and tools are closely…

  15. A mathematical model for maximizing the value of phase 3 drug development portfolios incorporating budget constraints and risk.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa

    2013-05-10

    We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
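
    A minimal sketch of just the budget-constrained selection step — leaving out the paper's sample-size optimization, trial scheduling, and stochastic extensions, and assuming the open-source PuLP modeller with made-up trial data:

      from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

      # Hypothetical candidate phase 3 trials: expected NPV and cost in the same (arbitrary) units.
      trials = {"A": {"enpv": 120, "cost": 60}, "B": {"enpv": 90, "cost": 55},
                "C": {"enpv": 70, "cost": 30}, "D": {"enpv": 40, "cost": 25}}
      budget = 100

      model = LpProblem("phase3_portfolio", LpMaximize)
      x = {t: LpVariable(f"run_{t}", cat=LpBinary) for t in trials}
      model += lpSum(trials[t]["enpv"] * x[t] for t in trials)             # maximise expected NPV
      model += lpSum(trials[t]["cost"] * x[t] for t in trials) <= budget   # stay within the budget
      model.solve()
      print([t for t in trials if x[t].value() == 1])   # e.g. ['A', 'C'] for these made-up numbers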

  16. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    An important technique in linear programming is the modelling and solving of practical optimization problems. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the unnecessary calculations associated with solving an associated linear programming problem. Many methods have been proposed for the identification of redundant constraints. This paper presents a comparison of a Heuristic method and Llewellyn’s rules for the identification of redundant constraints.
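
    Neither method is spelled out in the abstract; as background, a common linear-programming test (illustrated below with SciPy and made-up data, not the paper's examples) declares constraint k of A x ≤ b redundant when maximising its left-hand side over the remaining constraints still cannot exceed b_k:

      import numpy as np
      from scipy.optimize import linprog

      def is_redundant(A, b, k):
          # Maximise a_k . x subject to the other constraints and x >= 0;
          # constraint k is redundant if that maximum cannot exceed b_k.
          mask = np.arange(len(b)) != k
          res = linprog(c=-A[k], A_ub=A[mask], b_ub=b[mask],
                        bounds=[(0, None)] * A.shape[1], method="highs")
          return res.success and -res.fun <= b[k] + 1e-9

      # Toy system: x + y <= 4 is implied by x <= 2 and y <= 2.
      A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
      b = np.array([2.0, 2.0, 4.0])
      print(is_redundant(A, b, k=2))   # True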

  17. RSM 1.0 - A RESUPPLY SCHEDULER USING INTEGER OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    RSM, Resupply Scheduling Modeler, is a fully menu-driven program that uses integer programming techniques to determine an optimum schedule for replacing components on or before the end of a fixed replacement period. Although written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. RSM is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more computationally intensive, integer programming was required for accuracy when modeling systems with small quantities of components. Input values for component life can be real numbers; RSM converts them to integers by dividing the lifetime by the period duration, then reducing the result to the next lowest integer. For each component, there is a set of constraints that ensure that it is replaced before its lifetime expires. RSM includes user-defined constraints such as transportation mass and volume limits, as well as component life, available repair crew time and assembly sequences. A weighting factor allows the program to minimize factors such as cost. The program then performs an iterative analysis, which is displayed during the processing. A message gives the first period in which resources are being exceeded on each iteration. If the scheduling problem is infeasible, the final message will also indicate the first period in which resources were exceeded. RSM is written in APL2 for IBM PC series computers and compatibles. A stand-alone executable version of RSM is provided; however, this is a "packed" version of RSM which can only utilize the memory within the 640K DOS limit. This executable requires at least 640K of memory and DOS 3.1 or higher. Source code for an APL2/PC workspace version is also provided. This version of RSM can make full use of any installed extended memory but must be run with the APL2 interpreter; and it requires an 80486 based microcomputer or an 80386 based microcomputer with an 80387 math coprocessor, at least 2Mb of extended memory, and DOS 3.3 or higher. The standard distribution medium for this package is one 5.25 inch 360K MS-DOS format diskette. RSM was developed in 1991. APL2 and IBM PC are registered trademarks of International Business Machines Corporation. MS-DOS is a registered trademark of Microsoft Corporation.

  18. Algebraic reasoning for the enhancement of data-driven building reconstructions

    NASA Astrophysics Data System (ADS)

    Meidow, Jochen; Hammer, Horst

    2016-04-01

    Data-driven approaches for the reconstruction of buildings feature the flexibility needed to capture objects of arbitrary shape. To recognize man-made structures, geometric relations such as orthogonality or parallelism have to be detected. These constraints are typically formulated as sets of multivariate polynomials. For the enforcement of the constraints within an adjustment process, a set of independent and consistent geometric constraints has to be determined. Gröbner bases are an ideal tool to identify such sets exactly. A complete workflow for geometric reasoning is presented to obtain boundary representations of solids based on given point clouds. The constraints are formulated in homogeneous coordinates, which results in simple polynomials suitable for the successful derivation of Gröbner bases for algebraic reasoning. Strategies for the reduction of the algebraic complexity are presented. To enforce the constraints, an adjustment model is introduced, which is able to cope with homogeneous coordinates along with their singular covariance matrices. The feasibility and the potential of the approach are demonstrated by the analysis of a real data set.

  19. Reliable fuzzy H∞ control for active suspension of in-wheel motor driven electric vehicles with dynamic damping

    NASA Astrophysics Data System (ADS)

    Shao, Xinxin; Naghdy, Fazel; Du, Haiping

    2017-03-01

    A fault-tolerant fuzzy H∞ control design approach for active suspension of in-wheel motor driven electric vehicles in the presence of sprung mass variation, actuator faults and control input constraints is proposed. The controller is designed based on the quarter-car active suspension model with a dynamic-damping-in-wheel-motor-driven-system, in which the suspended motor is operated as a dynamic absorber. The Takagi-Sugeno (T-S) fuzzy model is used to model this suspension with possible sprung mass variation. The parallel-distributed compensation (PDC) scheme is deployed to derive a fault-tolerant fuzzy controller for the T-S fuzzy suspension model. In order to reduce the motor wear caused by the dynamic force transmitted to the in-wheel motor, the dynamic force is taken as an additional controlled output besides the traditional optimization objectives such as sprung mass acceleration, suspension deflection and actuator saturation. The H∞ performance of the proposed controller is derived as linear matrix inequalities (LMIs) comprising three equality constraints which are solved efficiently by means of MATLAB LMI Toolbox. The proposed controller is applied to an electric vehicle suspension and its effectiveness is demonstrated through computer simulation.

  20. A novel approach for inventory problem in the pharmaceutical supply chain.

    PubMed

    Candan, Gökçe; Yazgan, Harun Reşit

    2016-02-24

    In pharmaceutical enterprises, keeping up with global market conditions is possible with properly selected supply chain management policies. Generally, a demand-driven classical supply chain model is used in the pharmaceutical industry. In this study, a new mathematical model is developed to solve an inventory problem in the pharmaceutical supply chain. Unlike previous studies in the literature, the shelf life and product transition time constraints are considered simultaneously, for the first time, in the pharmaceutical production inventory problem. The problem is formulated as a mixed-integer linear programming (MILP) model with a hybrid time representation. The objective is to maximize total net profit. The effectiveness of the proposed model is illustrated by considering a classical and a vendor managed inventory (VMI) supply chain in an experimental study covering two supply chain policies (classical and VMI), planning horizons of 24 and 30 months, and 10 and 15 different cephalosporin products. Finally, the mathematical model is compared to another model from the literature, and the results show that the proposed model is superior. This study suggests a novel approach for solving the pharmaceutical inventory problem. The developed model maximizes total net profit while determining the optimal production plan under shelf life and product transition constraints in the pharmaceutical industry, and we believe the proposed model is much closer to real life than other studies in the literature.

  1. Genetic Programming for Automatic Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Chadalawada, Jayashree; Babovic, Vladan

    2017-04-01

    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, increasing volume of data and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used in the integration of alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is influenced by prior understanding and data include the choice of the technique for the induction of knowledge from data, identification of alternative structural hypotheses, definition of rules and constraints for meaningful, intelligent combination of model component hypotheses, and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs based on a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow duration curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).

  2. Conviction, Confrontation, and Risk in New Teachers' Advocating for Equity

    ERIC Educational Resources Information Center

    Athanases, Steven Z.; de Oliveira, Luciana C.

    2007-01-01

    Despite frustration with school constraints, new teachers who graduated from a program focused on advocacy for equity spoke for students in need in school forums and spoke up about issues of equity. Speaking for students, driven by convictions about equitable access to resources and a responsibility to act, often helped garner support and affected…

  3. Negative differential mobility in interacting particle systems

    NASA Astrophysics Data System (ADS)

    Chatterjee, Amit Kumar; Basu, Urna; Mohanty, P. K.

    2018-05-01

    Driven particles in the presence of a crowded environment, obstacles, or kinetic constraints often exhibit negative differential mobility (NDM) due to their decreased dynamical activity. Based on empirical studies of the conserved lattice gas model, the two-species exclusion model and other interacting particle systems, we propose a new mechanism for complex many-particle systems where slowing down of certain non-driven degrees of freedom by the external field can give rise to NDM. To prove that the slowing down of the non-driven degrees is indeed the underlying cause, we consider several driven diffusive systems, including two-species exclusion models and the misanthrope process, and show from the exact steady state results that NDM indeed appears when some non-driven modes are slowed down deliberately. For clarity, we also provide a simple pedagogical example of two interacting random walkers on a ring which conforms to the proposed scenario.

  4. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
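
    A minimal sketch of ingredient (ii), random sequential adsorption on a 1-D surface; the fixed attempt rate and sticking probability below are assumed stand-ins for the paper's diffusion-derived coupling between RSA steps and physical time:

      import random

      def rsa_coverage(surface_len=100.0, mol_size=1.0,
                       attempts_per_time=50.0, t_max=200.0, p_react=0.3):
          adsorbed = []                                  # left edges of adsorbed molecules
          for _ in range(int(attempts_per_time * t_max)):
              if random.random() > p_react:              # surface reaction step fails
                  continue
              x = random.uniform(0.0, surface_len - mol_size)
              if all(abs(x - y) >= mol_size for y in adsorbed):   # geometric (excluded-area) constraint
                  adsorbed.append(x)
          return len(adsorbed) * mol_size / surface_len  # fraction of the surface covered

      print(rsa_coverage())   # slowly approaches the 1-D jamming coverage of about 0.75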

  5. Multi-Objective Programming for Lot-Sizing with Quantity Discount

    NASA Astrophysics Data System (ADS)

    Kang, He-Yau; Lee, Amy H. I.; Lai, Chun-Mei; Kang, Mei-Sung

    2011-11-01

    Multi-objective programming (MOP) is one of the popular methods for decision making in a complex environment. In a MOP, decision makers try to optimize two or more objectives simultaneously under various constraints. A complete optimal solution seldom exists, and a Pareto-optimal solution is usually used. Some methods, such as the weighting method, which assigns priorities to the objectives and sets aspiration levels for the objectives, are used to derive a compromise solution. The ɛ-constraint method is a modified weighting method: one of the objective functions is optimized while the other objective functions are treated as constraints and are incorporated in the constraint part of the model. This research considers a stochastic lot-sizing problem with multiple suppliers and quantity discounts. The model is then transformed into a mixed integer programming (MIP) model based on the ɛ-constraint method. An illustrative example is used to demonstrate the practicality of the proposed model. The results demonstrate that the model is an effective and accurate tool for determining the replenishment of a manufacturer from multiple suppliers over multiple periods.
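
    To make the ɛ-constraint idea concrete, a toy two-objective LP (not the paper's stochastic lot-sizing model), assuming SciPy: profit 3x + 2y is maximised while the second objective, cost x + 2y, is moved into the constraints and capped at a level eps that is swept to trace Pareto-optimal points.

      from scipy.optimize import linprog

      def eps_constraint(eps):
          # Maximise f1 = 3x + 2y  (linprog minimises, hence the sign flip)
          # subject to capacity x + y <= 10 and the epsilon-constraint f2 = x + 2y <= eps.
          res = linprog(c=[-3.0, -2.0],
                        A_ub=[[1.0, 1.0], [1.0, 2.0]],
                        b_ub=[10.0, eps],
                        bounds=[(0, None), (0, None)], method="highs")
          return -res.fun, res.x

      for eps in (5.0, 10.0, 15.0):      # sweeping eps yields different Pareto-optimal trade-offs
          f1, x = eps_constraint(eps)
          print(f"eps={eps}: f1={f1:.1f}, x={x.round(2)}")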

  6. Pressure-Driven Poiseuille Flow: A Major Component of the Torque-Balance Governing Pacific Plate Motion

    NASA Astrophysics Data System (ADS)

    Stotz, I. L.; Iaffaldano, G.; Davies, D. R.

    2018-01-01

    The Pacific Plate is thought to be driven mainly by slab pull, associated with subduction along the Aleutians-Japan, Marianas-Izu-Bonin, and Tonga-Kermadec trenches. This implies that viscous flow within the sub-Pacific asthenosphere is mainly generated by overlying plate motion (i.e., Couette flow) and that the associated shear stresses at the lithosphere's base are resisting such motion. Recent studies on glacial isostatic adjustment and lithosphere dynamics provide tighter constraints on the viscosity and thickness of Earth's asthenosphere and, therefore, on the amount of shear stress that asthenosphere and lithosphere mutually exchange, by virtue of Newton's third law of motion. In light of these constraints, the notion that subduction is the main driver of present-day Pacific Plate motion becomes somewhat unviable, as the pulling force that would be required by slabs exceeds the maximum available from their negative buoyancy. Here we use coupled global models of mantle and lithosphere dynamics to show that the sub-Pacific asthenosphere features a significant component of pressure-driven (i.e., Poiseuille) flow and that this has driven at least 50% of the Pacific Plate motion since, at least, 15 Ma. A corollary of our models is that a sublithospheric pressure difference as high as ±50 MPa is required across the Pacific domain.

  7. Constraints on long-term carbon-climate feedbacks from spatially resolved CO2 growth rate fluctuations linked to temperature and precipitation

    NASA Astrophysics Data System (ADS)

    Keppel-Aleks, G.; Hoffman, F. M.

    2014-12-01

    Feedbacks between the global carbon cycle and climate represent one of the largest uncertainties in climate prediction. A promising method for reducing uncertainty in predictions of carbon-climate feedbacks is based on identifying an "emergent constraint" that leverages correlations between mechanistically linked long-term feedbacks and short-term variations within the model ensemble. By applying contemporary observations to evaluate model skill in simulating short-term variations, we may be able to better assess the probability of simulated long-term feedbacks. We probed the constraint on long-term terrestrial carbon stocks provided by climate-driven fluctuations in the atmospheric CO2 growth rate at contemporary timescales. We considered the impact of both temperature and precipitation anomalies on terrestrial ecosystem exchange and further separated the direct influence of fire where possible. When we explicitly considered the role of atmospheric transport in smoothing the imprint of climate-driven flux anomalies on atmospheric CO2 patterns, we found that the extent of temporal averaging of both the observations and ESM output leads to estimates for the long-term climate sensitivity of tropical land carbon storage that are different by a factor of two. In the context of these results, we discuss strategies for applying emergent constraints for benchmarking biogeochemical feedbacks in ESMs. Specifically, our results underscore the importance of selecting appropriate observational benchmarks and, for future model intercomparison projects, outputting fields that most closely correspond to available observational datasets.

  8. A satellite-driven, client-server hydro-economic model prototype for agricultural water management

    NASA Astrophysics Data System (ADS)

    Maneta, Marco; Kimball, John; He, Mingzhu; Payton Gardner, W.

    2017-04-01

    Anticipating agricultural water demand, land reallocation, and impact on farm revenues associated with different policy or climate constraints is a challenge for water managers and for policy makers. While current integrated decision support systems based on programming methods provide estimates of farmer reaction to external constraints, they have important shortcomings such as the high cost of data collection surveys necessary to calibrate the model, biases associated with inadequate farm sampling, infrequent model updates and recalibration, model overfitting, or their deterministic nature, among other problems. In addition, the administration of water supplies and the generation of policies that promote sustainable agricultural regions depend on more than one bureau or office. Unfortunately, managers from local and regional agencies often use different datasets of variable quality, which complicates coordinated action. To overcome these limitations, we present a client-server, integrated hydro-economic modeling and observation framework driven by satellite remote sensing and other ancillary information from regional monitoring networks. The core of the framework is a stochastic data assimilation system that sequentially ingests remote sensing observations and corrects the parameters of the hydro-economic model at unprecedented spatial and temporal resolutions. An economic model of agricultural production, based on mathematical programming, requires information on crop type and extent, crop yield, crop transpiration and irrigation technology. A regional hydro-climatologic model provides biophysical constraints to an economic model of agricultural production with a level of detail that permits the study of the spatial impact of large- and small-scale water use decisions. Crop type and extent are obtained from the Cropland Data Layer (CDL), which is a multi-sensor operational classification of crops maintained by the United States Department of Agriculture. Because this product is only available for the conterminous United States, the framework is currently only applicable in this region. To obtain information on crop phenology, productivity and transpiration at adequate spatial and temporal frequencies, we blend high spatial resolution Landsat information with high temporal fidelity MODIS imagery. The result is a 30 m, 8-day fused dataset of crop greenness that is subsequently transformed into productivity and transpiration by adapting existing forest productivity and transpiration algorithms for agricultural applications. To ensure all involved agencies work with identical information and that end-users are sheltered from the computational burden of storing and processing remote sensing data, this modeling framework is integrated in a client-server architecture based on the Hydra platform (www.hydraplatform.org). Assimilation and processing of resource-intensive remote sensing information, as well as hydrologic and other ancillary data, occur on the server side. With this architecture, our decision support system becomes a lightweight 'app' that connects to the server to retrieve the latest information regarding water demands, land use, yields and hydrologic information required to run different management scenarios. This architecture ensures that all agencies and teams involved in water management use the same, up-to-date information in their simulations.

  9. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    PubMed

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  10. Hybridized Kibble-Zurek scaling in the driven critical dynamics across an overlapping critical region

    NASA Astrophysics Data System (ADS)

    Zhai, Liang-Jun; Wang, Huai-Yu; Yin, Shuai

    2018-04-01

    The conventional Kibble-Zurek scaling describes the scaling behavior in the driven dynamics across a single critical region. In this paper, we study the driven dynamics across an overlapping critical region, in which a critical region (Region A) is overlaid by another critical region (Region B). We develop a hybridized Kibble-Zurek scaling (HKZS) to characterize the scaling behavior in the driven process. According to the HKZS, the driven dynamics in the overlapping region can be described by the critical theories for both Region A and Region B simultaneously. This results in a constraint on the scaling function in the overlapping critical region. We take the quantum Ising chain in an imaginary longitudinal field as an example. In this model, the critical region of the Yang-Lee edge singularity and the critical region of the ferromagnetic-paramagnetic phase transition overlap with each other. We numerically confirm the HKZS by simulating the driven dynamics in this overlapping critical region. The HKZSs in other models are also discussed.
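
    For orientation only (the standard single-region result, not this paper's hybridized form): for a linear drive at rate R across one critical point with exponents ν and z, conventional Kibble-Zurek/finite-time scaling gives a frozen length scale and observable scaling of roughly

      \hat{\xi} \;\propto\; R^{-1/(z+1/\nu)}, \qquad
      Q(g,R) \;=\; R^{\Delta_Q/(z+1/\nu)}\; f\!\left(g\, R^{-1/[\nu\,(z+1/\nu)]}\right),

    and the hybridized scaling of the abstract constrains the scaling function f to be compatible with such forms for the exponents of both overlapping critical regions at once.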

  11. A model of the human in a cognitive prediction task.

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1973-01-01

    The human decision maker's behavior when predicting future states of discrete linear dynamic systems driven by zero-mean Gaussian processes is modeled. The task is on a slow enough time scale that physiological constraints are insignificant compared with cognitive limitations. The model is basically a linear regression system identifier with a limited memory and noisy observations. Experimental data are presented and compared to the model.
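
    A minimal sketch of the model class described — an assumed first-order, limited-memory least-squares identifier with noisy observations, not the paper's fitted model:

      import numpy as np

      def predict_next(observations, memory=10):
          # Fit y[k+1] ~ a*y[k] + b by least squares over the last `memory` noisy
          # observations, then extrapolate one step ahead.
          y = np.asarray(observations[-(memory + 1):])
          X = np.column_stack([y[:-1], np.ones(len(y) - 1)])    # regressors: previous value, bias
          coeff, *_ = np.linalg.lstsq(X, y[1:], rcond=None)     # limited-memory identification
          return coeff @ np.array([y[-1], 1.0])                 # one-step-ahead prediction

      rng = np.random.default_rng(0)
      state = [1.0]
      for _ in range(30):                                       # first-order system driven by noise
          state.append(0.9 * state[-1] + rng.normal(scale=0.1))
      noisy = [s + rng.normal(scale=0.05) for s in state]       # what the "human" actually observes
      print(predict_next(noisy))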

  12. Feedback-Driven Dynamic Invariant Discovery

    NASA Technical Reports Server (NTRS)

    Zhang, Lingming; Yang, Guowei; Rungta, Neha S.; Person, Suzette; Khurshid, Sarfraz

    2014-01-01

    Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fix-point is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher quality invariants compared to the initial set of candidate invariants in a small number of iterations.
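
    The feedback loop can be sketched schematically as follows; run_daikon, instrument, and symbolic_execute are hypothetical stand-ins for the Daikon and symbolic-execution tooling, not real APIs:

      def idiscovery(program, tests, run_daikon, instrument, symbolic_execute):
          # run_daikon(program, tests)      -> set of candidate invariants
          # instrument(program, invariants) -> program with invariants turned into assertions
          # symbolic_execute(program)       -> set of new test cases exercising those assertions
          invariants = run_daikon(program, tests)
          while True:
              new_tests = symbolic_execute(instrument(program, invariants))
              tests = tests | new_tests
              refined = run_daikon(program, tests)
              if refined == invariants:     # fix-point: the candidate set has stopped changing
                  return invariants
              invariants = refined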

  13. Parallelizing Data-Centric Programs

    DTIC Science & Technology

    2013-09-25

    results than current techniques, such as ImageWebs [HGO+10], given the same budget of matches performed. 4.2 Scalable Parallel Similarity Search The work...algorithms. 5 Data-Driven Applications in the Cloud In this project, we investigated what happens when data-centric software is moved from expensive custom ...returns appropriate answer tuples. Figure 9 (b) shows the mutual constraint satisfaction that takes place in answering for 122. The intent is that

  14. A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative.

    PubMed

    Kaboski, Joseph P; Townsend, Robert M

    2011-09-01

    This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using pre-program data, we evaluate the model's ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 20 percent more than the sum of these benefits.

  15. Precision orbit raising trajectories. [solar electric propulsion orbital transfer program

    NASA Technical Reports Server (NTRS)

    Flanagan, P. F.; Horsewood, J. L.; Pines, S.

    1975-01-01

    A precision trajectory program has been developed to serve as a test bed for geocentric orbit raising steering laws. The steering laws to be evaluated have been developed using optimization methods employing averaging techniques. This program provides the capability of testing the steering laws in a precision simulation. The principal system models incorporated in the program are described, including the radiation environment, the solar array model, the thrusters and power processors, the geopotential, and the solar system. Steering and array orientation constraints are discussed, and the impact of these constraints on program design is considered.

  16. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.

  17. Space station pressurized laboratory safety guidelines

    NASA Technical Reports Server (NTRS)

    Mcgonigal, Les

    1990-01-01

    Before technical safety guidelines and requirements are established, a common understanding of their origin and importance must be shared between Space Station Program Management, the User Community, and the Safety organizations involved. Safety guidelines and requirements are driven by the nature of the experiments, and the degree of crew interaction. Hazard identification; development of technical safety requirements; operating procedures and constraints; provision of training and education; conduct of reviews and evaluations; and emergency preplanning are briefly discussed.

  18. Improving Systematic Constraint-driven Analysis Using Incremental and Parallel Techniques

    DTIC Science & Technology

    2012-05-01

    and modeling latency of a cloud based subsystem. Members of my research group provided useful comments and ideas on my work in group meetings and...One structurally complex argument...Multiple independent arguments...Subject Tools...JPF — Model Checker...Alloy — Using a SAT

  19. Solving Constraint-Satisfaction Problems with Distributed Neocortical-Like Neuronal Networks.

    PubMed

    Rutishauser, Ueli; Slotine, Jean-Jacques; Douglas, Rodney J

    2018-05-01

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven through the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.

  20. The wind of EG Andromedae is not dust driven

    NASA Technical Reports Server (NTRS)

    Van Buren, Dave; Dgani, Ruth; Noriega-Crespo, Alberto

    1994-01-01

    The symbiotic star EG Andromedae has recently been the subject of several studies investigating its wind properties. Late-type giants are usually considered to have winds driven by radiation pressure on dust. Indeed, the derived wind velocity for EG Andromedae is consistent with this model. We point out here that there is no appreciable dust opacity in the wind of EG Andromedae using constraints on extinction limits from International Ultraviolet Explorer (IUE) and far infrared fluxes from Infrared Astronomy Satellite (IRAS). An alternate mechanism must operate in this star. We suggest that the wind can be driven by radiation pressure on molecular lines.

  1. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  2. A Structural Evaluation of a Large-Scale Quasi-Experimental Microfinance Initiative

    PubMed Central

    Kaboski, Joseph P.; Townsend, Robert M.

    2010-01-01

    This paper uses a structural model to understand, predict, and evaluate the impact of an exogenous microcredit intervention program, the Thai Million Baht Village Fund program. We model household decisions in the face of borrowing constraints, income uncertainty, and high-yield indivisible investment opportunities. After estimation of parameters using pre-program data, we evaluate the model’s ability to predict and interpret the impact of the village fund intervention. Simulations from the model mirror the data in yielding a greater increase in consumption than credit, which is interpreted as evidence of credit constraints. A cost-benefit analysis using the model indicates that some households value the program much more than its per household cost, but overall the program costs 20 percent more than the sum of these benefits. PMID:22162594

  3. Content and Workflow Management for Library Websites: Case Studies

    ERIC Educational Resources Information Center

    Yu, Holly, Ed.

    2005-01-01

    Using database-driven web pages or web content management (WCM) systems to manage increasingly diverse web content and to streamline workflows is a commonly practiced solution recognized in libraries today. However, limited library web content management models and funding constraints prevent many libraries from purchasing commercially available…

  4. Situations, Interaction, Process and Affordances: An Ecological Psychology Perspective.

    ERIC Educational Resources Information Center

    Young, Michael F.; DePalma, Andrew; Garrett, Steven

    2002-01-01

    From an ecological psychology perspective, a full analysis of any learning context must acknowledge the complex nonlinear dynamics that unfold as an intentionally-driven learner interacts with a technology-based purposefully designed learning environment. A full situation model would need to incorporate constraints from the environment and also…

  5. An Ethnographic Study of a Developing Virtual Organization in Education

    ERIC Educational Resources Information Center

    Couch, Stephanie R.

    2012-01-01

    This ethnographic study answers calls for research into the ways that virtual organizations (or innovation-driven collaborative teams) form and develop, what supports and constraints their development, and the leadership models that support the organizations' work. The study examines how a virtual organization emerged from an intersegmental…

  6. A frictionally and hydraulically constrained model of the convectively driven mean flow in partially enclosed seas

    NASA Astrophysics Data System (ADS)

    Maxworthy, T.

    1997-08-01

    A simple three-layer model of the dynamics of partially enclosed seas, driven by a surface buoyancy flux, is presented. It contains two major elements, a hydraulic constraint at the exit contraction and friction in the interior of the main body of the sea; both together determine the vertical structure and magnitudes of the interior flow variables, i.e. velocity and density. Application of the model to the large-scale dynamics of the Red Sea gives results that are not in disagreement with observation once the model is also applied to predict the dense outflow from the Gulf of Suez. The latter appears to be the agent responsible for the formation of dense bottom water in this system. The model is also reasonably successful in predicting the density of the outflow from the Persian Gulf, and can be applied to any number of other examples of convectively driven flow in long, narrow channels, with or without sills and constrictions at their exits.

  7. The Physics and Physical Chemistry of Molecular Machines.

    PubMed

    Astumian, R Dean; Mukherjee, Shayantani; Warshel, Arieh

    2016-06-17

    The concept of a "power stroke"-a free-energy releasing conformational change-appears in almost every textbook that deals with the molecular details of muscle, the flagellar rotor, and many other biomolecular machines. Here, it is shown by using the constraints of microscopic reversibility that the power stroke model is incorrect as an explanation of how chemical energy is used by a molecular machine to do mechanical work. Instead, chemically driven molecular machines operating under thermodynamic constraints imposed by the reactant and product concentrations in the bulk function as information ratchets in which the directionality and stopping torque or stopping force are controlled entirely by the gating of the chemical reaction that provides the fuel for the machine. The gating of the chemical free energy occurs through chemical state dependent conformational changes of the molecular machine that, in turn, are capable of generating directional mechanical motions. In strong contrast to this general conclusion for molecular machines driven by catalysis of a chemical reaction, a power stroke may be (and often is) an essential component for a molecular machine driven by external modulation of pH or redox potential or by light. This difference between optical and chemical driving properties arises from the fundamental symmetry difference between the physics of optical processes, governed by the Bose-Einstein relations, and the constraints of microscopic reversibility for thermally activated processes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Optimization of Regional Geodynamic Models for Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Knepley, M.; Isaac, T.; Jadamec, M. A.

    2016-12-01

    The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.

  9. Building flexible real-time systems using the Flex language

    NASA Technical Reports Server (NTRS)

    Kenny, Kevin B.; Lin, Kwei-Jay

    1991-01-01

    The design and implementation of a real-time programming language called Flex, which is a derivative of C++, are presented. It is shown how different types of timing requirements might be expressed and enforced in Flex, how they might be fulfilled in a flexible way using different program models, and how the programming environment can help in making binding and scheduling decisions. The timing constraint primitives in Flex are easy to use yet powerful enough to define both independent and relative timing constraints. Program models like imprecise computation and performance polymorphism can be used to implement flexible real-time programs. In addition, programmers can use a performance measurement tool that produces statistically correct timing models to predict the expected execution time of a program and to help make binding decisions. A real-time programming environment is also presented.

  10. The wind of the M-type AGB star RT Virginis probed by VLTI/MIDI

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Ramstedt, S.; Höfner, S.; Olofsson, H.; Bladh, S.; Eriksson, K.; Aringer, B.; Klotz, D.; Maercker, M.

    2013-03-01

    Aims: We study the circumstellar environment of the M-type AGB star RT Vir using mid-infrared high spatial resolution observations from the ESO-VLTI focal instrument MIDI. The aim of this study is to provide observational constraints on the theoretical prediction that the winds of M-type AGB objects can be driven by photon scattering on iron-free silicate grains located in the close environment (about 2 to 3 stellar radii) of the star. Methods: We interpreted spectro-interferometric data, first using wavelength-dependent geometric models. We then used a self-consistent dynamic model atmosphere containing a time-dependent description of grain growth for pure forsterite dust particles to reproduce the photometric, spectrometric, and interferometric measurements of RT Vir. Since the hydrodynamic computation needs stellar parameters as input, a considerable effort was first made to determine these parameters. Results: MIDI differential phases reveal the presence of an asymmetry in the stellar vicinity. Results from the geometrical modeling give us clues to the presence of aluminum and silicate dust in the close circumstellar environment (<5 stellar radii). Comparison between spectro-interferometric data and a self-consistent dust-driven wind model reveals that silicate dust has to be present in the region between 2 and 3 stellar radii to reproduce the 59 and 63 m baseline visibility measurements around 9.8 μm. This gives additional observational evidence in favor of winds driven by photon scattering on iron-free silicate grains located in the close vicinity of an M-type star. However, other sources of opacity are clearly missing, since the model does not reproduce the 10-13 μm visibility measurements for all baselines. Conclusions: This study is a first attempt to understand the wind mechanism of M-type AGB stars by comparing photometric, spectrometric, and interferometric measurements with state-of-the-art, self-consistent dust-driven wind models. The agreement of the dynamic model atmosphere with interferometric measurements in the 8-10 μm spectral region gives additional observational evidence that the winds of M-type stars can be driven by photon scattering on iron-free silicate grains. Finally, a larger statistical study and progress in advanced self-consistent 3D modeling are still required to solve the remaining problems. Based on observations made with the Very Large Telescope Interferometer at Paranal Observatory under programs 083.D-0234 and 086.D-0737 (Open Time Observations).

  11. A cost constraint alone has adverse effects on food selection and nutrient density: an analysis of human diets by linear programming.

    PubMed

    Darmon, Nicole; Ferguson, Elaine L; Briend, André

    2002-12-01

    Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of modeled diets, notably the lowest cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet (i.e., <25% and 10% of the mean density, respectively). These results indicate that a simple cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
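
    As a rough illustration of the kind of model described above (not the study's actual food database, nutrient requirements, or constraint set), the sketch below builds a small cost-constrained diet linear program with scipy.optimize.linprog: departure from an observed diet is minimized subject to energy, vitamin C, and budget constraints, and the budget is progressively tightened. All food groups, prices, and nutrient values are made-up placeholders.

```python
# Minimal sketch of a cost-constrained diet linear program (illustrative data,
# not the study's actual food database or nutrient requirements).
import numpy as np
from scipy.optimize import linprog

foods = ["fruit_veg", "meat_dairy", "cereals", "added_fat_sweets"]
observed = np.array([300.0, 250.0, 350.0, 100.0])   # g/day in the "mean observed diet" (assumed)
cost = np.array([0.45, 0.80, 0.15, 0.20])           # euros per 100 g (assumed)
energy = np.array([50.0, 180.0, 350.0, 450.0])      # kcal per 100 g (assumed)
vit_c = np.array([30.0, 1.0, 0.0, 0.0])             # mg per 100 g (assumed)

def cheapest_diet(max_cost):
    # Decision variables: x (amount of each food, in 100 g units) plus d+ and d-
    # (positive/negative departures from the observed diet); minimizing
    # sum(d+ + d-) keeps the modeled diet as close as possible to the observed one.
    n = len(foods)
    c = np.concatenate([np.zeros(n), np.ones(2 * n)])          # minimize total departure
    # Equality: x - d+ + d- = observed (defines the departure variables)
    A_eq = np.hstack([np.eye(n), -np.eye(n), np.eye(n)])
    b_eq = observed / 100.0
    # Inequalities: budget, energy kept near 2000 kcal, vitamin C requirement
    A_ub = np.vstack([
        np.concatenate([cost, np.zeros(2 * n)]),               #  cost   <= max_cost
        np.concatenate([energy, np.zeros(2 * n)]),             #  energy <= 2100 kcal
        np.concatenate([-energy, np.zeros(2 * n)]),            #  energy >= 1900 kcal
        np.concatenate([-vit_c, np.zeros(2 * n)]),             #  vit C  >= 60 mg
    ])
    b_ub = np.array([max_cost, 2100.0, -1900.0, -60.0])
    return linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))

for budget in (6.0, 4.0, 3.0):
    r = cheapest_diet(budget)
    if r.success:
        grams = 100.0 * r.x[:len(foods)]
        print(f"budget {budget:.2f} EUR/day:", dict(zip(foods, np.round(grams))))
```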

  12. Abstract knowledge versus direct experience in processing of binomial expressions

    PubMed Central

    Morgan, Emily; Levy, Roger

    2016-01-01

    We ask whether word order preferences for binomial expressions of the form A and B (e.g. bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in question. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. PMID:27776281

  13. Rubber airplane: Constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.

  14. National Space Agencies vs. Commercial Space: Towards Improved Space Safety

    NASA Astrophysics Data System (ADS)

    Pelton, J.

    2013-09-01

    Traditional space policies as developed at the national level include many elements, but they are most typically driven by economic and political objectives. Legislatively administered programs apportion limited public funds to achieve "gains" that can involve employment, stimulus to the economy, national defense or other advancements. Yet political advantage is seldom far from the picture. Within the context of traditional space policies, safety issues cannot truly be described as "afterthoughts", but they are usually, at best, a secondary or even tertiary consideration. "Space safety" is often simply assumed to be "in there" somewhere. The current key question is whether "safety and risk minimization" within new commercial space programs can actually be elevated in importance and effectively be "designed in" at the outset. This has long been the case with commercial aviation and there is at least reasonable hope that this could also be the case for the commercial space industry in coming years. The insurance industry, which has played a cooperative role for centuries in shipping and for decades in aviation, can perhaps now play a constructive role in risk minimization in the commercial space domain as well. This paper begins by examining two historical case studies in the context of traditional national space policy development to see how major space policy decisions involving "manned space programs" have given undue primacy to "political considerations" over "safety" and other factors. The specific case histories examined here are, first, the decision to undertake the Space Shuttle Program (1970-1972) and, second, the International Space Station. In both cases the key and overarching decisions were driven by political, schedule and cost considerations, and safety seems absent as a prime consideration. In publicly funded space programs—whether in the United States, Europe, Russia, Japan, China, India or elsewhere—it seems realistic to assume that this condition will not change. This seems particularly true for high profile, multi-billion dollar programs. The second part of the paper focuses on new commercial space programs that appear to be undertaken in a less restrictive manner; i.e. outside the constraints of politically-driven national space policies. Here the drivers—even within international consortia—seem to be reliable performance and commercial return. Since sustained accident-free performance is critical to commercial programs' very existence and profitability, the inherent role of safety in the commercial space industry would seem clear. The question of prime interest for this paper is whether or not it might be possible for smaller and more focused commercial space entities, free from space agencies' organizational and political constraints, to be more "risk averse" and thus more nimble in designing "safe" vehicles. If so, how can this "safety first" corporate philosophy and management practice be detected and even objectively measured? Could, in the future, risk reduction at the level of design, quality verification, etc., be objectively measured?

  15. Origin of the main r-process elements

    NASA Astrophysics Data System (ADS)

    Otsuki, K.; Truran, J.; Wiescher, M.; Gorres, J.; Mathews, G.; Frekers, D.; Mengoni, A.; Bartlett, A.; Tostevin, J.

    2006-07-01

    The r-process is supposed to be a primary process which assembles heavy nuclei from a photo-dissociated nucleon gas. Hence, the reaction flow through light elements can be important as a constraint on the conditions for the r-process. We have studied the impact of di-neutron capture and the neutron-capture of light (Z<10) elements on r-process nucleosynthesis in three different environments: neutrino-driven winds in Type II supernovae; the prompt explosion of low mass supernovae; and neutron star mergers. Although the effect of di-neutron capture is not significant for the neutrino-driven wind model or low-mass supernovae, it becomes significant in the neutron-star merger model. The neutron-capture of light elements, which has been studied extensively for neutrino-driven wind models, also impacts the other two models. We show that it may be possible to identify the astrophysical site for the main r-process if the nuclear physics uncertainties in current r-process calculations could be reduced.

  16. A Mars Exploration Discovery Program

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Paige, D. A.

    2000-07-01

    The Mars Exploration Program should consider following the Discovery Program model. In the Discovery Program a team of scientists led by a PI develop the science goals of their mission, decide what payload achieves the necessary measurements most effectively, and then choose a spacecraft with the capabilities needed to carry the payload to the desired target body. The primary constraints associated with the Discovery missions are time and money. The proposer must convince reviewers that their mission has scientific merit and is feasible. Every Announcement of Opportunity has resulted in a collection of creative ideas that fit within advertised constraints. Following this model, a "Mars Discovery Program" would issue an Announcement of Opportunity for each launch opportunity with schedule constraints dictated by the launch window and fiscal constraints in accord with the program budget. All else would be left to the proposer to choose, based on the science the team wants to accomplish, consistent with the program theme of "Life, Climate and Resources". A proposer could propose a lander, an orbiter, a fleet of SCOUT vehicles or penetrators, an airplane, a balloon mission, a large rover, a small rover, etc. depending on what made the most sense for the science investigation and payload. As in the Discovery program, overall feasibility relative to cost, schedule and technology readiness would be evaluated and be part of the selection process.

  17. A Mars Exploration Discovery Program

    NASA Technical Reports Server (NTRS)

    Hansen, C. J.; Paige, D. A.

    2000-01-01

    The Mars Exploration Program should consider following the Discovery Program model. In the Discovery Program a team of scientists led by a PI develop the science goals of their mission, decide what payload achieves the necessary measurements most effectively, and then choose a spacecraft with the capabilities needed to carry the payload to the desired target body. The primary constraints associated with the Discovery missions are time and money. The proposer must convince reviewers that their mission has scientific merit and is feasible. Every Announcement of Opportunity has resulted in a collection of creative ideas that fit within advertised constraints. Following this model, a "Mars Discovery Program" would issue an Announcement of Opportunity for each launch opportunity with schedule constraints dictated by the launch window and fiscal constraints in accord with the program budget. All else would be left to the proposer to choose, based on the science the team wants to accomplish, consistent with the program theme of "Life, Climate and Resources". A proposer could propose a lander, an orbiter, a fleet of SCOUT vehicles or penetrators, an airplane, a balloon mission, a large rover, a small rover, etc. depending on what made the most sense for the science investigation and payload. As in the Discovery program, overall feasibility relative to cost, schedule and technology readiness would be evaluated and be part of the selection process.

  18. Constraints on single-field inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirtskhalava, David; Santoni, Luca; Trincherini, Enrico

    2016-06-28

    Many alternatives to canonical slow-roll inflation have been proposed over the years, one of the main motivations being to have a model capable of generating observable values of non-Gaussianity. In this work, we (re-)explore the physical implications of a great majority of such models within a single, effective field theory framework (including novel models with large non-Gaussianity discussed for the first time below). The constraints we apply — both theoretical and experimental — are found to be rather robust, determined to a great extent by just three parameters: the coefficients of the quadratic EFT operators (δN)² and δNδE, and the slow-roll parameter ε. This allows us to significantly limit the majority of single-field alternatives to canonical slow-roll inflation. While the existing data still leaves some room for most of the considered models, the situation would change dramatically if the current upper limit on the tensor-to-scalar ratio decreased down to r < 10⁻². Apart from inflationary models driven by plateau-like potentials, the single-field model that would have a chance of surviving this bound is the recently proposed slow-roll inflation with weakly-broken galileon symmetry. In contrast to canonical slow-roll inflation, the latter model can support r < 10⁻² even if driven by a convex potential, as well as generate observable values for the amplitude of non-Gaussianity.

  19. Insights into asthenospheric anisotropy and deformation in Mainland China

    NASA Astrophysics Data System (ADS)

    Zhu, Tao

    2018-03-01

    Seismic anisotropy can provide direct constraints on asthenospheric deformation which also can be induced by the inherent mantle flow within our planet. Mantle flow calculations thus have been an effective tool to probe asthenospheric anisotropy. To explore the source of seismic anisotropy, asthenospheric deformation and the effects of mantle flow on seismic anisotropy in Mainland China, mantle flow models driven by plate motion (plate-driven) and by a combination of plate motion and mantle density heterogeneity (plate-density-driven) are used to predict the fast polarization direction of shear wave splitting. Our results indicate that: (1) plate-driven or plate-density-driven mantle flow significantly affects the predicted fast polarization direction when compared with simple asthenospheric flow commonly used in interpreting the asthenospheric source of seismic anisotropy, and thus new insights are presented; (2) plate-driven flow controls the fast polarization direction while thermal mantle flow affects asthenospheric deformation rate and local deformation direction significantly; (3) asthenospheric flow is an assignable contributor to seismic anisotropy, and the asthenosphere is undergoing low, large or moderate shear deformation controlled by the strain model, the flow plane/flow direction model or both in most regions of central and eastern China; and (4) the asthenosphere is under more rapid extension deformation in eastern China than in western China.

  20. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive input from users in the optimizing process of feature selection. This study investigates this question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with the constraints from both literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
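
    The sketch below illustrates the general idea of treating user-chosen features as hard constraints on feature selection; it is a generic greedy forward selection on synthetic data, not the fsCoP algorithm or its optimization model.

```python
# Generic sketch of constraint-aware forward feature selection: user-chosen
# features are forced into the subset, and remaining slots are filled greedily.
# Illustrates user-input features as constraints; not the fsCoP algorithm.
import numpy as np

def r2_score_subset(X, y, subset):
    """Goodness of a feature subset: R^2 of an ordinary least-squares fit."""
    A = np.column_stack([X[:, subset], np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

def constrained_forward_selection(X, y, k, fixed=()):
    """Select k features; the indices in `fixed` are always included."""
    selected = list(fixed)
    remaining = [j for j in range(X.shape[1]) if j not in selected]
    while len(selected) < k and remaining:
        scores = [(r2_score_subset(X, y, selected + [j]), j) for j in remaining]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected, r2_score_subset(X, y, selected)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.5 * X[:, 12] + rng.normal(scale=0.5, size=200)

print(constrained_forward_selection(X, y, k=4))              # unconstrained
print(constrained_forward_selection(X, y, k=4, fixed=(5,)))  # feature 5 forced in
```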

  1. Impact of a cost constraint on nutritionally adequate food choices for French women: an analysis by linear programming.

    PubMed

    Darmon, Nicole; Ferguson, Elaine L; Briend, André

    2006-01-01

    To predict, for French women, the impact of a cost constraint on the food choices required to provide a nutritionally adequate diet. Isocaloric daily diets fulfilling both palatability and nutritional constraints were modeled in linear programming, using different cost constraint levels. For each modeled diet, total departure from an observed French population's average food group pattern ("mean observed diet") was minimized. To achieve the nutritional recommendations without a cost constraint, the modeled diet provided more energy from fish, fresh fruits and green vegetables and less energy from animal fats and cheese than the "mean observed diet." Introducing and strengthening a cost constraint decreased the energy provided by meat, fresh vegetables, fresh fruits, vegetable fat, and yogurts and increased the energy from processed meat, eggs, offal, and milk. For the lowest cost diet (ie, 3.18 euros/d), marked changes from the "mean observed diet" were required, including a marked reduction in the amount of energy from fresh fruits (-85%) and green vegetables (-70%), and an increase in the amount of energy from nuts, dried fruits, roots, legumes, and fruit juices. Nutrition education for low-income French women must emphasize these affordable food choices.

  2. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
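
    SPF itself operates on Java bytecode; the following language-agnostic sketch (assuming the z3-solver Python package is available) only illustrates the underlying idea of symbolic execution: branch conditions over symbolic inputs are collected as constraints and handed to an off-the-shelf solver to produce concrete test inputs for a chosen path.

```python
# Illustration of the core idea behind symbolic execution (not SPF itself):
# treat inputs as symbols, collect branch conditions as constraints, and ask
# a solver for concrete inputs that drive execution down a chosen path.
from z3 import Ints, Solver, And, sat

x, y = Ints("x y")

# Path condition for the program
#   if (x > 0) { if (x + y == 10) { /* target branch */ } }
path_condition = And(x > 0, x + y == 10)

s = Solver()
s.add(path_condition)
if s.check() == sat:
    m = s.model()
    print("test input reaching target branch:", m[x], m[y])
else:
    print("path is infeasible")
```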

  3. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand-sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities. Copyright © 2012 Elsevier Ltd. All rights reserved.
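
    The sketch below shows only the piecewise-linearization building block: a nonlinear economies-of-scale cost curve (an assumed power-law form, not the study's cost functions) is approximated by linear chords between breakpoints. Embedding a concave cost of this kind in a cost-minimization model additionally requires binary segment indicators, consistent with the mixed-integer character of the IPFP model; that step is not shown here.

```python
# Sketch of the piecewise-linearization building block: approximate a nonlinear
# economies-of-scale cost curve C(x) = a * x**b (b < 1, assumed form) by linear
# segments between breakpoints, and report the approximation error.
import numpy as np

def eos_cost(x, a=120.0, b=0.8):
    return a * np.power(x, b)

def piecewise_linearize(f, x_min, x_max, n_segments):
    """Return (breakpoints, slopes, intercepts) of a chord approximation."""
    xs = np.linspace(x_min, x_max, n_segments + 1)
    ys = f(xs)
    slopes = np.diff(ys) / np.diff(xs)
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def evaluate_pw(x, xs, slopes, intercepts):
    """Evaluate the piecewise-linear approximation at points x."""
    seg = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(slopes) - 1)
    return slopes[seg] * x + intercepts[seg]

xs, slopes, intercepts = piecewise_linearize(eos_cost, 1.0, 500.0, n_segments=6)
grid = np.linspace(1.0, 500.0, 1000)
err = np.abs(evaluate_pw(grid, xs, slopes, intercepts) - eos_cost(grid))
print("max absolute approximation error with 6 segments:", round(err.max(), 2))
```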

  4. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    NASA Astrophysics Data System (ADS)

    Shaw, Amelia R.; Smith Sawyer, Heather; LeBoeuf, Eugene J.; McDonald, Mark P.; Hadjerioua, Boualem

    2017-11-01

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. The reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.
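
    A toy sketch of the overall workflow follows: sample an "expensive" model, train neural-network surrogates, and run a simple genetic algorithm on the surrogates with a penalty for violating a dissolved-oxygen limit. The stand-in model, its coefficients, and the DO relationship are invented for illustration and bear no relation to CE-QUAL-W2 or the study's reservoir.

```python
# Toy sketch of surrogate-assisted optimization: neural-network surrogates of an
# "expensive" model, then a simple genetic algorithm with a DO-constraint penalty.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def expensive_model(release):
    """Stand-in for a high-fidelity model: power and DO (mg/L) versus release."""
    power = 10.0 * release - 0.2 * release**2
    do = 9.0 - 0.15 * release
    return power, do

# 1. Sample the "expensive" model and train a surrogate for each output.
X = rng.uniform(0.0, 30.0, size=(300, 1))
power_y, do_y = expensive_model(X[:, 0])
power_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(X, power_y)
do_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(X, do_y)

# 2. Simple genetic algorithm on the surrogates, with a DO >= 5 mg/L penalty.
def fitness(pop, do_limit=5.0, penalty=1e3):
    p = power_net.predict(pop.reshape(-1, 1))
    do = do_net.predict(pop.reshape(-1, 1))
    return p - penalty * np.maximum(0.0, do_limit - do)

pop = rng.uniform(0.0, 30.0, size=50)
for _ in range(100):
    fit = fitness(pop)
    parents = pop[np.argsort(fit)[-25:]]                  # keep the fitter half
    children = parents + rng.normal(scale=0.5, size=25)   # mutate to create offspring
    pop = np.clip(np.concatenate([parents, children]), 0.0, 30.0)

best = pop[np.argmax(fitness(pop))]
print("suggested release:", round(float(best), 2),
      "surrogate DO estimate:", round(float(do_net.predict([[best]])[0]), 2))
```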

  5. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE PAGES

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.; ...

    2017-10-24

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  6. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Amelia R.; Sawyer, Heather Smith; LeBoeuf, Eugene J.

    Hydropower operations optimization subject to environmental constraints is limited by challenges associated with dimensionality and spatial and temporal resolution. The need for high-fidelity hydrodynamic and water quality models within optimization schemes is driven by improved computational capabilities, increased requirements to meet specific points of compliance with greater resolution, and the need to optimize operations of not just single reservoirs but systems of reservoirs. This study describes an important advancement for computing hourly power generation schemes for a hydropower reservoir using high-fidelity models, surrogate modeling techniques, and optimization methods. The predictive power of the high-fidelity hydrodynamic and water quality model CE-QUAL-W2 is successfully emulated by an artificial neural network, then integrated into a genetic algorithm optimization approach to maximize hydropower generation subject to constraints on dam operations and water quality. This methodology is applied to a multipurpose reservoir near Nashville, Tennessee, USA. The model successfully reproduced high-fidelity reservoir information while enabling 6.8% and 6.6% increases in hydropower production value relative to actual operations for dissolved oxygen (DO) limits of 5 and 6 mg/L, respectively, while witnessing an expected decrease in power generation at more restrictive DO constraints. Exploration of simultaneous temperature and DO constraints revealed capability to address multiple water quality constraints at specified locations. Here, the reduced computational requirements of the new modeling approach demonstrated an ability to provide decision support for reservoir operations scheduling while maintaining high-fidelity hydrodynamic and water quality information as part of the optimization decision support routines.

  7. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2

    2012-06-15

    Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It's the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand-sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.

  8. Constraint Programming to Solve Maximal Density Still Life

    NASA Astrophysics Data System (ADS)

    Chu, Geoffrey; Petrie, Karen Elizabeth; Yorke-Smith, Neil

    The Maximum Density Still Life problem fills a finite Game of Life board with a stable pattern of cells that has as many live cells as possible. Although simple to state, this problem is computationally challenging for any but the smallest sizes of board. It is especially difficult to prove that the maximum number of live cells has been found. Various approaches have been employed. The most successful are approaches based on Constraint Programming (CP). We describe the Maximum Density Still Life problem, introduce the concept of constraint programming, give an overview of how the problem can be modelled and solved with CP, and report on best-known results for the problem.
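
    A minimal sketch of such a CP model is given below using the OR-Tools CP-SAT solver (an assumption; the article does not prescribe a specific solver). For brevity it only enforces stability for cells on the board and ignores possible births just outside the board, so it is a relaxed variant of the full Maximum Density Still Life problem, not the authors' model.

```python
# Relaxed sketch of a maximum-density still-life CP model with OR-Tools CP-SAT:
# every live cell must have 2 or 3 live neighbours, every dead cell must not
# have exactly 3, and the number of live cells is maximized.
from ortools.sat.python import cp_model

def still_life_sketch(n=6):
    m = cp_model.CpModel()
    cell = {(i, j): m.NewBoolVar(f"c{i}_{j}") for i in range(n) for j in range(n)}

    def neighbours(i, j):
        return [cell[(i + di, j + dj)]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0) and (i + di, j + dj) in cell]

    for i in range(n):
        for j in range(n):
            s = sum(neighbours(i, j))
            # A live cell survives only with 2 or 3 live neighbours.
            m.Add(s >= 2).OnlyEnforceIf(cell[(i, j)])
            m.Add(s <= 3).OnlyEnforceIf(cell[(i, j)])
            # A dead cell must not have exactly 3 live neighbours (no births):
            # encode "s != 3" as "s <= 2 or s >= 4" with two helper booleans.
            le2 = m.NewBoolVar(f"le2_{i}_{j}")
            ge4 = m.NewBoolVar(f"ge4_{i}_{j}")
            m.Add(s <= 2).OnlyEnforceIf(le2)
            m.Add(s >= 4).OnlyEnforceIf(ge4)
            m.AddBoolOr([cell[(i, j)], le2, ge4])

    m.Maximize(sum(cell.values()))
    solver = cp_model.CpSolver()
    if solver.Solve(m) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        for i in range(n):
            print("".join("#" if solver.Value(cell[(i, j)]) else "." for j in range(n)))
        print("live cells:", int(solver.ObjectiveValue()))

still_life_sketch()
```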

  9. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved a near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1-2 angstroms from the native end state structures.
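
    The following sketch shows only the core fitting step on toy data: the displacement is written as a linear combination of a few modes, the pairwise distance constraints are linearized about the current structure, and the mode amplitudes are obtained by least squares. The iterative reorientation of the modes described in the abstract is omitted, and the coordinates, modes, and constraints are random placeholders.

```python
# Sketch of the core fitting step: displace a structure along a few normal
# modes so that selected pairwise distances move toward target values. Distance
# changes are linearized around the current structure and the mode amplitudes
# are obtained by least squares. Toy coordinates and modes only.
import numpy as np

rng = np.random.default_rng(2)
n_atoms, n_modes = 30, 10
coords = rng.normal(scale=5.0, size=(n_atoms, 3))             # current structure
modes = rng.normal(size=(n_modes, n_atoms, 3))                 # lowest-frequency modes (toy)
constraints = [(0, 5, 8.0), (2, 17, 12.0), (4, 25, 6.0)]       # (i, j, target distance)

def fit_mode_amplitudes(coords, modes, constraints):
    """Least-squares mode amplitudes so linearized distances match the targets."""
    A = np.zeros((len(constraints), len(modes)))
    b = np.zeros(len(constraints))
    for row, (i, j, target) in enumerate(constraints):
        rij = coords[i] - coords[j]
        dist = np.linalg.norm(rij)
        unit = rij / dist
        b[row] = target - dist                                 # required distance change
        for k, mode in enumerate(modes):
            # d(dist)/d(amplitude_k) = unit . (mode_i - mode_j)
            A[row, k] = unit @ (mode[i] - mode[j])
    amps, *_ = np.linalg.lstsq(A, b, rcond=None)
    return amps

amps = fit_mode_amplitudes(coords, modes, constraints)
new_coords = coords + np.tensordot(amps, modes, axes=1)
for i, j, target in constraints:
    new_d = np.linalg.norm(new_coords[i] - new_coords[j])
    print(f"pair ({i},{j}): target {target:.1f}, fitted {new_d:.2f}")
```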

  10. Mock Trials versus Management or Litigation-Driven Models of Business Law Instruction

    ERIC Educational Resources Information Center

    Gershuny, Pamela; McAllister, Charles; Rainey, Carolyn

    2012-01-01

    This study was designed to gain a greater understanding of the learning outcomes associated with the mock trial as an active teaching method. Participating in a product liability mock trial presents students with the complex interplay of administrative regulations and common law. As in real life, the harsh constraints of time pressures, less than…

  11. Latest Results from MINOS and MINOS+

    DOE PAGES

    De Rijck, Simon

    2017-07-27

    The MINOS and MINOS+ experiments collected accelerator beam neutrino and antineutrino data for 11 years corresponding to peak energies of 3 GeV and 6 GeV, respectively, over a baseline of 735 km. These proceedings report on new limits and constraints set by MINOS and MINOS+ in and beyond the standard three-flavor paradigm. The atmospheric neutrino mass splitting in the three-flavor model is measured to be (2.42 ± 0.09) × 10⁻³ eV² for normal mass ordering and −(2.48 +0.09/−0.11) × 10⁻³ eV² for inverted mass ordering. Constraints are set on sterile neutrinos and antineutrinos in the four-flavor model by looking for sterile-driven ν_μ and ν̄_μ …

  12. Behavior generation strategy of artificial behavioral system by self-learning paradigm for autonomous robot tasks

    NASA Astrophysics Data System (ADS)

    Dağlarli, Evren; Temeltaş, Hakan

    2008-04-01

    In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combining them to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level and an Emotion-Motivation Level. The last two levels use Hidden Markov Models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel driven and four-wheel steered mobile robot with constraints in a simulation environment, and results are obtained successfully.

  13. A Monthly Water-Balance Model Driven By a Graphical User Interface

    USGS Publications Warehouse

    McCabe, Gregory J.; Markstrom, Steven L.

    2007-01-01

    This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
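
    A highly simplified sketch of a Thornthwaite-type monthly computation is shown below: monthly potential evapotranspiration from mean temperature via the annual heat index (in its standard unadjusted form, without the day-length correction), followed by a single soil-moisture bucket. It is not the USGS program, and the site data are invented.

```python
# Simplified sketch of a Thornthwaite-style monthly water balance: PET from
# monthly mean temperature via the annual heat index, then a soil-moisture
# "bucket". Day-length correction and snow storage are omitted.
import numpy as np

def thornthwaite_pet(temps_c):
    """Unadjusted Thornthwaite PET (mm/month) from 12 monthly mean temperatures (deg C)."""
    t = np.clip(np.asarray(temps_c, dtype=float), 0.0, None)
    heat_index = np.sum((t / 5.0) ** 1.514)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return 16.0 * (10.0 * t / heat_index) ** a

def monthly_water_balance(temps_c, precip_mm, soil_capacity=150.0):
    """Return per-month PET, actual ET, runoff, and end-of-month soil storage."""
    pet = thornthwaite_pet(temps_c)
    storage, rows = soil_capacity, []
    for p, e in zip(precip_mm, pet):
        if p >= e:                       # wet month: meet PET, refill soil, spill runoff
            aet, storage = e, storage + (p - e)
            runoff, storage = max(0.0, storage - soil_capacity), min(storage, soil_capacity)
        else:                            # dry month: draw down soil moisture
            draw = min(storage, e - p)
            aet, storage, runoff = p + draw, storage - draw, 0.0
        rows.append((e, aet, runoff, storage))
    return np.array(rows)

temps = [2, 4, 8, 12, 16, 20, 23, 22, 18, 12, 7, 3]            # deg C (assumed site)
precip = [90, 70, 80, 60, 50, 40, 30, 35, 55, 75, 95, 100]     # mm (assumed site)
print(monthly_water_balance(temps, precip).round(1))
```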

  14. Water-resources optimization model for Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    A simulation-optimization model has been developed for the optimal management of the city of Santa Barbara's water resources during a drought. The model, which links groundwater simulation with linear programming, has a planning horizon of 5 years. The objective is to minimize the cost of water supply subject to: water demand constraints, hydraulic head constraints to control seawater intrusion, and water capacity constraints. The decision variables are monthly water deliveries from surface water and groundwater. The state variables are hydraulic heads. The drought of 1947-51 is the city's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. The simulation-optimization model was applied using three reservoir operation rules. In addition, the model's sensitivity to demand, carry over (the storage of water in one year for use in later years), head constraints, and capacity constraints was tested.

  15. Reducing usage of the computational resources by event driven approach to model predictive control

    NASA Astrophysics Data System (ADS)

    Misik, Stefan; Bradac, Zdenek; Cela, Arben

    2017-08-01

    This paper deals with real-time optimal control of dynamic systems while also considering the constraints to which these systems might be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented and the performance of the proposed method is evaluated in a simulated environment.

  16. Constraints on single entity driven inflationary and radiation eras

    NASA Astrophysics Data System (ADS)

    Bouhmadi-López, Mariam; Chen, Pisin; Liu, Yen-Wei

    2012-07-01

    We present a model that attempts to fuse the inflationary era and the subsequent radiation dominated era under a unified framework so as to provide a smooth transition between the two. The model is based on a modification of the generalized Chaplygin gas. We constrain the model observationally by mapping the primordial power spectrum of the scalar perturbations to the latest data of WMAP7. We compute as well the spectrum of the primordial gravitational waves as would be measured today.

  17. ON THE MIGRATION OF JUPITER AND SATURN: CONSTRAINTS FROM LINEAR MODELS OF SECULAR RESONANT COUPLING WITH THE TERRESTRIAL PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agnor, Craig B.; Lin, D. N. C.

    We examine how the late divergent migration of Jupiter and Saturn may have perturbed the terrestrial planets. Using a modified secular model we have identified six secular resonances between the ν₅ frequency of Jupiter and Saturn and the four apsidal eigenfrequencies of the terrestrial planets (g₁-g₄). We derive analytic upper limits on the eccentricity and orbital migration timescale of Jupiter and Saturn when these resonances were encountered to avoid perturbing the eccentricities of the terrestrial planets to values larger than the observed ones. Because of the small amplitudes of the j = 2, 3 terrestrial eigenmodes the g₂ - ν₅ and g₃ - ν₅ resonances provide the strongest constraints on giant planet migration. If Jupiter and Saturn migrated with eccentricities comparable to their present-day values, smooth migration with exponential timescales characteristic of planetesimal-driven migration (τ ≈ 5-10 Myr) would have perturbed the eccentricities of the terrestrial planets to values greatly exceeding the observed ones. This excitation may be mitigated if the eccentricity of Jupiter was small during the migration epoch, migration was very rapid (e.g., τ ≲ 0.5 Myr perhaps via planet-planet scattering or instability-driven migration) or the observed small eccentricity amplitudes of the j = 2, 3 terrestrial modes result from low probability cancellation of several large amplitude contributions. Results of orbital integrations show that very short migration timescales (τ < 0.5 Myr), characteristic of instability-driven migration, may also perturb the terrestrial planets' eccentricities by amounts comparable to their observed values. We discuss the implications of these constraints for the relative timing of terrestrial planet formation, giant planet migration, and the origin of the so-called Late Heavy Bombardment of the Moon 3.9 ± 0.1 Ga ago. We suggest that the simplest way to satisfy these dynamical constraints may be for the bulk of any giant planet migration to be complete in the first 30-100 Myr of solar system history.

  18. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

    rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an... (repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis, legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes

  19. A tool for efficient, model-independent management optimization under uncertainty

    USGS Publications Warehouse

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
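
    The chance-constraint idea can be sketched in a few lines: the FOSM standard deviation of a model-derived constraint follows from its sensitivity vector and the parameter covariance, and the constraint bound is shifted by a risk-dependent multiple of that standard deviation. The numbers below are toy values, not PEST++ output or its implementation.

```python
# Sketch of the FOSM chance-constraint idea: the standard deviation of a
# model-derived constraint g(x) is estimated from its parameter sensitivities
# and the parameter covariance, and the bound is tightened accordingly.
import numpy as np
from scipy.stats import norm

# Sensitivities of one constraint (e.g., simulated streamflow depletion) to
# three uncertain model parameters, and the parameter covariance (assumed).
jacobian = np.array([0.8, -0.3, 1.5])
param_cov = np.diag([0.04, 0.09, 0.01])

constraint_value = 9.2        # simulated g(x) at the candidate solution
upper_bound = 10.0            # original deterministic bound: g(x) <= 10

sigma = float(np.sqrt(jacobian @ param_cov @ jacobian))   # FOSM constraint std. dev.

for risk in (0.5, 0.75, 0.95):
    # risk = 0.5 reproduces the deterministic bound; higher risk values demand
    # the constraint hold with higher probability, tightening the bound.
    tightened = upper_bound - norm.ppf(risk) * sigma
    status = "satisfied" if constraint_value <= tightened else "violated"
    print(f"risk {risk:.2f}: effective bound {tightened:.2f} -> {status}")
```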

  20. WIDOWAC (Wing Design Optimization With Aeroelastic Constraints): Program manual

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Starnes, J. H., Jr.

    1974-01-01

    User and programmer documentation for the WIDOWAC programs is given. WIDOWAC may be used for the design of minimum mass wing structures subjected to flutter, strength, and minimum gage constraints. The wing structure is modeled by finite elements, flutter conditions may be both subsonic and supersonic, and mathematical programming methods are used for the optimization procedure. The user documentation gives general directions on how the programs may be used and describes their limitations; in addition, program input and output are described, and example problems are presented. A discussion of computational algorithms and flow charts of the WIDOWAC programs and major subroutines is also given.

  1. Four-dimensional electrical conductivity monitoring of stage-driven river water intrusion: Accounting for water table effects using a transient mesh boundary and conditional inversion constraints

    DOE PAGES

    Johnson, Tim; Versteeg, Roelof; Thomle, Jon; ...

    2015-08-01

    Our paper describes and demonstrates two methods of providing a priori information to the surface-based time-lapse three-dimensional electrical resistivity tomography (ERT) problem for monitoring stage-driven or tide-driven surface water intrusion into aquifers. First, a mesh boundary is implemented that conforms to the known location of the water table through time, thereby enabling the inversion to place a sharp bulk conductivity contrast at that boundary without penalty. Moreover, a nonlinear inequality constraint is used to allow only positive or negative transient changes in EC to occur within the saturated zone, dependent on the relative contrast in fluid electrical conductivity between surface water and groundwater. A 3-D field experiment demonstrates that time-lapse imaging results using traditional smoothness constraints are unable to delineate river water intrusion. The water table and inequality constraints provide the inversion with the additional information necessary to resolve the spatial extent of river water intrusion through time.

  2. Four-dimensional electrical conductivity monitoring of stage-driven river water intrusion: Accounting for water table effects using a transient mesh boundary and conditional inversion constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Tim; Versteeg, Roelof; Thomle, Jon

    Our paper describes and demonstrates two methods of providing a priori information to the surface-based time-lapse three-dimensional electrical resistivity tomography (ERT) problem for monitoring stage-driven or tide-driven surface water intrusion into aquifers. First, a mesh boundary is implemented that conforms to the known location of the water table through time, thereby enabling the inversion to place a sharp bulk conductivity contrast at that boundary without penalty. Moreover, a nonlinear inequality constraint is used to allow only positive or negative transient changes in EC to occur within the saturated zone, dependent on the relative contrast in fluid electrical conductivity between surface water and groundwater. A 3-D field experiment demonstrates that time-lapse imaging results using traditional smoothness constraints are unable to delineate river water intrusion. The water table and inequality constraints provide the inversion with the additional information necessary to resolve the spatial extent of river water intrusion through time.

  3. Resource Allocation Modelling in Vocational Rehabilitation: A Prototype Developed with the Michigan and Rhode Island VR Agencies.

    ERIC Educational Resources Information Center

    Leff, H. Stephen; Turner, Ralph R.

    This report focuses on the use of linear programming models to address the issues of how vocational rehabilitation (VR) resources should be allocated in order to maximize program efficiency within given resource constraints. A general introduction to linear programming models is first presented that describes the major types of models available,…

  4. Motion Pattern Encapsulation for Data-Driven Constraint-Based Motion Editing

    NASA Astrophysics Data System (ADS)

    Carvalho, Schubert R.; Boulic, Ronan; Thalmann, Daniel

    The growth of motion capture systems has contributed to the proliferation of human motion databases, mainly because human motion is important in many applications, ranging from games, entertainment, and films to sports and medicine. However, the captured motions normally address specific needs. In an effort to adapt and reuse captured human motions in new tasks and environments and to improve the animator's work, we present and discuss a new data-driven constraint-based animation system for interactive human motion editing. This method offers the compelling advantage that it provides faster deformations and more natural-looking motion results compared to goal-directed constraint-based methods found in the literature.

  5. Constraint-based component-modeling for knowledge-based design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1992-01-01

    The paper describes the application of various advanced programming techniques derived from artificial intelligence research to the development of flexible design tools for conceptual design. Special attention is given to two techniques which appear to be readily applicable to such design tools: constraint propagation and object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
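
    A toy sketch of the constraint-propagation idea behind such tools is given below (it is not Rubber Airplane itself): design variables are cells linked by relations, and assigning enough of them lets the network derive the rest, in whichever direction the known values allow.

```python
# Tiny sketch of constraint propagation for conceptual design: cells hold
# design variables, relations fill in whichever variable is missing, and each
# newly derived value triggers further propagation.
class Cell:
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, value):
        if self.value is None:
            self.value = value
            for c in self.constraints:      # notify constraints touching this cell
                c.propagate()

class Product:
    """Constraint out = a * b; derives whichever cell is still unknown."""
    def __init__(self, a, b, out):
        self.a, self.b, self.out = a, b, out
        for cell in (a, b, out):
            cell.constraints.append(self)

    def propagate(self):
        a, b, out = self.a.value, self.b.value, self.out.value
        if a is not None and b is not None and out is None:
            self.out.set(a * b)
        elif out is not None and a is not None and b is None:
            self.b.set(out / a)
        elif out is not None and b is not None and a is None:
            self.a.set(out / b)

# Wing sizing toy: area = span * chord, lift = (0.5*rho*V^2*CL) * area
span, chord, area = Cell("span"), Cell("chord"), Cell("area")
aero_term, lift = Cell("0.5*rho*V^2*CL"), Cell("lift")
Product(span, chord, area)
Product(area, aero_term, lift)

lift.set(12000.0)       # required lift (N), assumed
aero_term.set(600.0)    # assumed combined aerodynamic term (N/m^2)
span.set(10.0)          # chosen span (m)
print("derived area:", area.value, "m^2, derived chord:", chord.value, "m")
```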

  6. Mantle Flow in the Western United States Constrained by Seismic Anisotropy

    NASA Astrophysics Data System (ADS)

    Niday, W.; Humphreys, E.

    2017-12-01

    Shear wave splitting, caused by the lattice preferred orientation (LPO) of olivine crystals under shear deformation, provides a useful constraint on numerical models of mantle flow. Although it is sometimes assumed that shear wave splitting fast directions correspond with mantle flow directions, this is only true in simple shear flows that do not vary strongly with space or time. Observed shear wave splitting in the western United States is complex and inconsistent with simple shear driven by North American and Pacific plate motion, suggesting that the effects of time-dependent subduction history and spatial heterogeneity are important. Liu and Stegman (2011) reproduce the pattern of fast seismic anomalies below the western US from Farallon subduction history, and Chaparro and Stegman (2017) reproduce the circular anisotropy field below the Great Basin. We extend this to consider anisotropic structure outside the Great Basin and evaluate the density and viscosity of seismic anomalies such as slabs and Yellowstone. We use the mantle convection code ASPECT to simulate 3D buoyancy-driven flow in the mantle below the western US, and predict LPO using the modeled flow fields. We present results from a suite of models varying the sub-lithospheric structures of the western US and constraints on density and viscosity variations in the upper mantle.

  7. A model for the wind of the M supergiant VX Sagittarii

    NASA Astrophysics Data System (ADS)

    Pijpers, F. P.

    1990-11-01

    The velocity distribution of the stellar wind from the M supergiant VX Sgr deduced from interferometric measurements of maser lines by Chapman and Cohen (1986) has been modeled using the linearized theory of stellar winds driven by short period sound waves proposed by Pijpers and Hearn (1989) and the theory of stellar winds driven by short period shocks proposed by Pijpers and Habing (1989). The effect of the radiative forces on the dust formed in the wind is included in a simple way. Good agreement with the observations is obtained for a range of parameters in the theory. A series of observations of the maser lines at intervals of one or a few days may provide additional constraints on the interpretation.

  8. Using diagnostic experiences in experience-based innovative design

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sattiraju; Goel, Ashok K.

    1992-03-01

    Designing a novel class of devices requires innovation. Often, the design knowledge of these devices does not identify and address the constraints that are required for their performance in the real world operating environment. So any new design adapted from these devices tends to be similarly sketchy. In order to address this problem, we propose a case-based reasoning method called performance-driven innovation (PDI). We model the design as a dynamic process, arrive at a design by adaptation from the known designs, generate failures for this design for some new constraints, and then use this failure knowledge to generate the required design knowledge for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of the failure knowledge into design knowledge for a constraint. Each case in PDI has two components: design and failure knowledge. Both of them are represented using a substance-behavior-function model. Failure knowledge has internal device failure behaviors and external environmental behaviors. For a given constraint, the environmental behavior interacts with the design behaviors to produce the internal failure behavior. The failure adaptation strategy generates, from the failure knowledge, functions that can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.

  9. Microscopic origin and macroscopic implications of lane formation in mixtures of oppositely-driven particles

    NASA Astrophysics Data System (ADS)

    Whitelam, Stephen

    Colloidal particles of two types, driven in opposite directions, can segregate into lanes. I will describe some results on this phenomenon obtained by simple physical arguments and computer simulations. Laning results from rectification of diffusion on the scale of a particle diameter: oppositely-driven particles must, in the time taken to encounter each other in the direction of the drive, diffuse in the perpendicular direction by about one particle diameter. This geometric constraint implies that the diffusion constant of a particle, in the presence of those of the opposite type, grows approximately linearly with Peclet number, a prediction confirmed by our numerics. Such environment-dependent diffusion is statistically similar to an effective interparticle attraction; consistent with this observation, we find that oppositely-driven colloids display features characteristic of the simplest model system possessing both interparticle attractions and persistent motion, the driven Ising lattice gas. Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  10. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.

  11. Dynamic optimization of metabolic networks coupled with gene expression.

    PubMed

    Waldherr, Steffen; Oyarzún, Diego A; Bockmayr, Alexander

    2015-01-21

    The regulation of metabolic activity by tuning enzyme expression levels is crucial to sustain cellular growth in changing environments. Metabolic networks are often studied at steady state using constraint-based models and optimization techniques. However, metabolic adaptations driven by changes in gene expression cannot be analyzed by steady state models, as these do not account for temporal changes in biomass composition. Here we present a dynamic optimization framework that integrates the metabolic network with the dynamics of biomass production and composition. An approximation by a timescale separation leads to a coupled model of quasi-steady state constraints on the metabolic reactions, and differential equations for the substrate concentrations and biomass composition. We propose a dynamic optimization approach to determine reaction fluxes for this model, explicitly taking into account enzyme production costs and enzymatic capacity. In contrast to the established dynamic flux balance analysis, our approach allows predicting dynamic changes in both the metabolic fluxes and the biomass composition during metabolic adaptations. Discretization of the optimization problems leads to a linear program that can be efficiently solved. We applied our algorithm in two case studies: a minimal nutrient uptake network, and an abstraction of core metabolic processes in bacteria. In the minimal model, we show that the optimized uptake rates reproduce the empirical Monod growth for bacterial cultures. For the network of core metabolic processes, the dynamic optimization algorithm predicted commonly observed metabolic adaptations, such as a diauxic switch with a preference ranking for different nutrients, re-utilization of waste products after depletion of the original substrate, and metabolic adaptation to an impending nutrient depletion. These examples illustrate how dynamic adaptations of enzyme expression can be predicted solely from an optimization principle. Copyright © 2014 Elsevier Ltd. All rights reserved.
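    As a rough illustration of the quasi-steady-state constraints described above, the sketch below poses a single discretized time step as a linear program with SciPy; the toy stoichiometry, the enzymatic capacity bound, and all numbers are invented for illustration and are not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): one discretized step of a
# dynamic FBA-like problem.  A toy network with two internal metabolites and
# three fluxes is kept at quasi-steady state while an enzyme capacity bound
# limits the total flux.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: internal metabolites, cols: fluxes v1..v3).
S = np.array([[1.0, -1.0, 0.0],
              [0.0,  1.0, -1.0]])

c = np.array([0.0, 0.0, -1.0])        # maximize biomass flux v3 (minimize -v3)
enzyme_capacity = 10.0                 # illustrative enzymatic capacity bound
A_ub = np.array([[1.0, 1.0, 1.0]])     # total flux limited by enzyme level
b_ub = np.array([enzyme_capacity])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=S, b_eq=np.zeros(S.shape[0]),    # quasi-steady state: S v = 0
              bounds=[(0, 5.0), (0, None), (0, None)])
print("optimal fluxes:", res.x, "growth proxy:", -res.fun)
```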

  12. Data-driven Modelling for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Angria S, Layla; Dwi Sari, Yunita; Zarlis, Muhammad; Tulus

    2018-01-01

    Issues of uncertainty in decision making have become a topic of intense discussion in operations research. Many models have been presented, one of which is data-driven modelling (DDM). The purpose of this paper is to extract and recognize patterns in data, and to find the best model for decision-making problems under uncertainty, using a data-driven modelling approach with linear programming, linear and nonlinear differential equations, and a Bayesian approach. The model criteria are tested to determine the smallest error, and the model with the smallest error is taken as the best model to use.

  13. Investment portfolio of a pension fund: Stochastic model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch-Princep, M.; Fontanals-Albiol, H.

    1994-12-31

    This paper presents a stochastic programming model that aims at obtaining the optimal investment portfolio of a pension fund. The model has been designed bearing in mind the liabilities of the fund to its members. The essential characteristic of the objective function and the constraints is the randomness of the coefficients and the right-hand side of the constraints, so it is necessary to use techniques of stochastic mathematical programming to get information about the amount of money that should be assigned to each sort of investment. It is important to know the risk attitude of the person who has to make decisions. The model incorporates the relation between the different coefficients of the objective function and constraints of each period of the temporal horizon, through linear and discrete random processes. Likewise, it includes hypotheses related to Spanish law concerning Pension Funds.

  14. A Maximin Model for Test Design with Practical Constraints. Project Psychometric Aspects of Item Banking No. 25. Research Report 87-10.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Boekkooi-Timminga, Ellen

    A "maximin" model for item response theory based test design is proposed. In this model only the relative shape of the target test information function is specified. It serves as a constraint subject to which a linear programming algorithm maximizes the information in the test. In the practice of test construction there may be several…

  15. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.

  16. Using Blended Learning as an Innovative Delivery Model for an In-House Language Program

    ERIC Educational Resources Information Center

    Gadbois, Manon; Quildon, Denise

    2013-01-01

    This paper reports on the development and implementation in 2012 of McGill University's French at Work program for McGill employees, using a blended learning model. The program is an example of how a reduction in face-to-face teaching presents one solution to employees' scheduling constraints and how this model might offer suggestions for the…

  17. Simplified phenomenology for colored dark sectors

    NASA Astrophysics Data System (ADS)

    El Hedri, Sonia; Kaminska, Anna; de Vries, Maikel; Zurita, Jose

    2017-04-01

    We perform a general study of the relic density and LHC constraints on simplified models where the dark matter coannihilates with a strongly interacting particle X. In these models, the dark matter depletion is driven by the self-annihilation of X to pairs of quarks and gluons through the strong interaction. The phenomenology of these scenarios therefore only depends on the dark matter mass and the mass splitting between dark matter and X as well as the quantum numbers of X. In this paper, we consider simplified models where X can be either a scalar, a fermion or a vector, as well as a color triplet, sextet or octet. We compute the dark matter relic density constraints taking into account Sommerfeld corrections and bound state formation. Furthermore, we examine the restrictions from thermal equilibrium, the lifetime of X and the current and future LHC bounds on X pair production. All constraints are comprehensively presented in the mass splitting versus dark matter mass plane. While the relic density constraints can lead to upper bounds on the dark matter mass ranging from 2 TeV to more than 10 TeV across our models, the prospective LHC bounds range from 800 to 1500 GeV. A full coverage of the strongly coannihilating dark matter parameter space would therefore require hadron colliders with significantly higher center-of-mass energies.

  18. WMAP7 constraints on oscillations in the primordial power spectrum

    NASA Astrophysics Data System (ADS)

    Meerburg, P. Daniel; Wijers, Ralph A. M. J.; van der Schaar, Jan Pieter

    2012-03-01

    We use the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) data to place constraints on oscillations supplementing an almost scale-invariant primordial power spectrum. Such oscillations are predicted by a variety of models, some of which amount to assuming that there is some non-trivial choice of the vacuum state at the onset of inflation. In this paper, we will explore data-driven constraints on two distinct models of initial state modifications. In both models, the frequency, phase and amplitude are degrees of freedom of the theory for which the theoretical bounds are rather weak: both the amplitude and frequency have allowed values ranging over several orders of magnitude. This requires many computationally expensive evaluations of the model cosmic microwave background (CMB) spectra and their goodness of fit, even in a Markov chain Monte Carlo (MCMC), normally the most efficient fitting method for such a problem. To search more efficiently, we first run a densely-spaced grid, with only three varying parameters: the frequency, the amplitude and the baryon density. We obtain the optimal frequency and run an MCMC at the best-fitting frequency, randomly varying all other relevant parameters. To reduce the computational time of each power spectrum computation, we adjust both comoving momentum integration and spline interpolation (in l) as a function of frequency and amplitude of the primordial power spectrum. Applying this to the WMAP7 data allows us to improve existing constraints on the presence of oscillations. We confirm earlier findings that certain frequencies can improve the fitting over a model without oscillations. For those frequencies we compute the posterior probability, allowing us to put some constraints on the primordial parameter space of both models.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
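    A minimal sketch of the Chebyshev-based tightening described above is given below, using made-up forecast-error samples and an assumed linear voltage sensitivity; it shows only how a sample mean and covariance translate a chance constraint into a deterministic margin, not the paper's full AC-OPF formulation.

```python
# Minimal sketch (assumed data, not the paper's code): tightening a linear voltage
# constraint a^T (x + e) <= v_max with a distributionally robust Chebyshev bound,
# using the sample mean and covariance of forecast errors e.
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal(0.0, 0.02, size=(500, 3))   # logged forecast-error samples
mu = errors.mean(axis=0)
Sigma = np.cov(errors, rowvar=False)

a = np.array([0.4, 0.3, 0.3])   # assumed sensitivity of a bus voltage to injections
eps = 0.05                       # allowed violation probability
margin = np.sqrt((1 - eps) / eps) * np.sqrt(a @ Sigma @ a)

v_max = 1.05
tightened_limit = v_max - a @ mu - margin
print("deterministic constraint: a^T x <=", tightened_limit)
```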

  20. An investigation of constraint-based component-modeling for knowledge representation in computer-aided conceptual design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1990-01-01

    Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function- and subroutine-calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both geometry and physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane. The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
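    The following toy class sketches the constraint-propagation behavior described above for a single relation A = B * C: assigning any two variables propagates a value to the third, rather than treating the equation as a one-way assignment. The class and variable names are illustrative and are not taken from Rubber Airplane.

```python
# Minimal sketch (illustrative, not Rubber Airplane): a multi-directional
# constraint A = B * C.  Setting any two variables propagates a value to the
# third instead of treating the equation as a one-way assignment.
class ProductConstraint:
    def __init__(self):
        self.values = {"A": None, "B": None, "C": None}

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        a, b, c = (self.values[k] for k in "ABC")
        if a is None and b is not None and c is not None:
            self.values["A"] = b * c
        elif b is None and a is not None and c not in (None, 0):
            self.values["B"] = a / c
        elif c is None and a is not None and b not in (None, 0):
            self.values["C"] = a / b

wing = ProductConstraint()      # e.g. area = span * chord
wing.set("A", 20.0)             # desired wing area
wing.set("B", 10.0)             # span; chord is inferred by propagation
print(wing.values)              # {'A': 20.0, 'B': 10.0, 'C': 2.0}
```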

  1. Enforcement of entailment constraints in distributed service-based business processes.

    PubMed

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation-level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.

  2. Basic design considerations for free-electron lasers driven by electron beams from RF accelerators

    NASA Astrophysics Data System (ADS)

    Gover, A.; Freund, H.; Granatstein, V. L.; McAdoo, J. H.; Tang, C.-M.

    A design procedure and design criteria are derived for free-electron lasers driven by electron beams from RF accelerators. The procedure and criteria permit an estimate of the oscillation-buildup time and the laser output power of various FEL schemes: with waveguide resonator or open resonator, with initial seed-radiation injection or with spontaneous-emission radiation source, with a linear wiggler or with a helical wiggler. Expressions are derived for computing the various FEL parameters, allowing for the design and optimization of the FEL operational characteristics under ideal conditions or with nonideal design parameters that may be limited by technological or practical constraints. The design procedure enables one to derive engineering curves and scaling laws for the FEL operating parameters. This can be done most conveniently with a computer program based on flowcharts given in the appendices.

  3. Exploring galaxy evolution with latent space walks

    NASA Astrophysics Data System (ADS)

    Schawinski, Kevin; Turp, Dennis; Zhang, Ce

    2018-01-01

    We present a new approach using artificial intelligence to perform data-driven forward models of astrophysical phenomena. We describe how a variational autoencoder can be used to encode galaxies to latent space, independently manipulate properties such as the specific star formation rate, and return it to real space. Such transformations can be used for forward modeling phenomena using data as the only constraints. We demonstrate the utility of this approach using the question of the quenching of star formation in galaxies.
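    The sketch below illustrates the encode-manipulate-decode workflow described above with PCA standing in for the trained variational autoencoder, applied to synthetic feature vectors; it is only an analogy for the latent-space walk, not the authors' model.

```python
# Minimal sketch (PCA as a stand-in for the trained variational autoencoder):
# encode synthetic "galaxy" feature vectors to a latent space, shift one latent
# coordinate (a proxy for a property such as specific star formation rate), and
# decode back to feature space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
galaxies = rng.normal(size=(1000, 10))        # placeholder feature vectors

encoder = PCA(n_components=4).fit(galaxies)
z = encoder.transform(galaxies)               # latent representation
z_shifted = z.copy()
z_shifted[:, 0] += 1.0                        # walk along one latent direction
reconstructed = encoder.inverse_transform(z_shifted)
print(reconstructed.shape)
```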

  4. An Integer Programming Model for the Management of a Forest in the North of Portugal

    NASA Astrophysics Data System (ADS)

    Cerveira, Adelaide; Fonseca, Teresa; Mota, Artur; Martins, Isabel

    2011-09-01

    This study aims to develop an approach for the management of a forest of maritime pine located in the north region of Portugal. The forest is classified into five public lands, the so-called baldios, extending over 4432 ha. These baldios are co-managed by the Official Forest Services and the local communities mainly for timber production purposes. The forest planning involves non-spatial and spatial constraints. Spatial constraints dictate a maximum clearcut area and an exclusion time. An integer programming model is presented and the computational results are discussed.
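    A toy version of such a harvest-scheduling integer program is sketched below with SciPy's MILP interface; the yields, adjacency pairs, and the simplified "adjacent stands cannot be cut in the same period" rule are invented stand-ins for the paper's maximum clearcut area and exclusion-time constraints.

```python
# Minimal sketch (toy data, not the paper's model): harvest scheduling as an
# integer program.  x[s, t] = 1 if stand s is clearcut in period t; each stand is
# cut at most once, and two adjacent stands may not be cut in the same period.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

volume = np.array([[30, 35], [25, 32], [40, 44]])   # stand x period timber yield
n_stands, n_periods = volume.shape
adjacent = [(0, 1), (1, 2)]                          # adjacency pairs of stands

def idx(s, t):            # flatten (stand, period) to a variable index
    return s * n_periods + t

c = -volume.ravel().astype(float)                    # milp minimizes, so negate
rows, lbs, ubs = [], [], []
for s in range(n_stands):                            # each stand cut at most once
    row = np.zeros(n_stands * n_periods)
    row[[idx(s, t) for t in range(n_periods)]] = 1
    rows.append(row); lbs.append(0); ubs.append(1)
for (s1, s2) in adjacent:                            # adjacency rule, per period
    for t in range(n_periods):
        row = np.zeros(n_stands * n_periods)
        row[idx(s1, t)] = row[idx(s2, t)] = 1
        rows.append(row); lbs.append(0); ubs.append(1)

res = milp(c, constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=np.ones(n_stands * n_periods),
           bounds=Bounds(0, 1))
print("harvest plan:\n", res.x.reshape(n_stands, n_periods))
```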

  5. Novel characterization of capsule x-ray drive at the National Ignition Facility.

    PubMed

    MacLaren, S A; Schneider, M B; Widmann, K; Hammer, J H; Yoxall, B E; Moody, J D; Bell, P M; Benedetti, L R; Bradley, D K; Edwards, M J; Guymer, T M; Hinkel, D E; Hsing, W W; Kervin, M L; Meezan, N B; Moore, A S; Ralph, J E

    2014-03-14

    Indirect drive experiments at the National Ignition Facility are designed to achieve fusion by imploding a fuel capsule with x rays from a laser-driven hohlraum. Previous experiments have been unable to determine whether a deficit in measured ablator implosion velocity relative to simulations is due to inadequate models of the hohlraum or ablator physics. ViewFactor experiments allow for the first time a direct measure of the x-ray drive from the capsule point of view. The experiments show a 15%-25% deficit relative to simulations and thus explain nearly all of the disagreement with the velocity data. In addition, the data from this open geometry provide much greater constraints on a predictive model of laser-driven hohlraum performance than the nominal ignition target.

  6. A flexible computer aid for conceptual design based on constraint propagation and component-modeling. [of aircraft in three dimensions

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1988-01-01

    The Rubber Airplane program, which combines two symbolic processing techniques with a component-based database of design knowledge, is proposed as a computer aid for conceptual design. Using object-oriented programming, programs are organized around the objects and behavior to be simulated, and using constraint propagation, declarative statements designate mathematical relationships among all the equation variables. It is found that the additional level of organizational structure resulting from the arrangement of the design information in terms of design components provides greater flexibility and convenience.

  7. Aerodynamically and acoustically driven modes of vibration in a physical model of the vocal folds.

    PubMed

    Zhang, Zhaoyan; Neubauer, Juergen; Berry, David A

    2006-11-01

    In a single-layered, isotropic, physical model of the vocal folds, distinct phonation types were identified based on the medial surface dynamics of the vocal fold. For acoustically driven phonation, a single, in-phase, x-10 like eigenmode captured the essential dynamics, and coupled with one of the acoustic resonances of the subglottal tract. Thus, the fundamental frequency appeared to be determined primarily by a subglottal acoustic resonance. In contrast, aerodynamically driven phonation did not naturally appear in the single-layered model, but was facilitated by the introduction of a vertical constraint. For this phonation type, fundamental frequency was relatively independent of the acoustic resonances, and two eigenmodes were required to capture the essential dynamics of the vocal fold, including an out-of-phase x-11 like eigenmode and an in-phase x-10 like eigenmode, as described in earlier theoretical work. The two eigenmodes entrained to the same frequency, and were decoupled from subglottal acoustic resonances. With this independence from the acoustic resonances, vocal fold dynamics appeared to be determined primarily by near-field, fluid-structure interactions.

  8. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.

  9. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Fu; Leyffer, Sven; Munson, Todd

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
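    The constraint-aggregation step described in the two preceding records can be illustrated in a few lines: partition the rows of A x <= b into groups and replace each group by a convex combination (here a plain average), giving a smaller, relaxed system. Violated original constraints would then be added back iteratively, as the abstract describes. The numbers below are illustrative only.

```python
# Minimal sketch (illustrative numbers): coarsening constraints by partitioning
# them into groups and taking a convex combination (a simple average) of each
# group.  The aggregated system A_c x <= b_c is a relaxation of A x <= b.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 4.0, 3.0, 3.0])
groups = [[0, 1], [2, 3]]                 # prespecified partition of the rows

A_coarse = np.array([A[g].mean(axis=0) for g in groups])
b_coarse = np.array([b[g].mean() for g in groups])
print(A_coarse, b_coarse)
```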

  10. Data-driven non-Markovian closure models

    NASA Astrophysics Data System (ADS)

    Kondrashov, Dmitri; Chekroun, Mickaël D.; Ghil, Michael

    2015-03-01

    This paper has two interrelated foci: (i) obtaining stable and efficient data-driven closure models by using a multivariate time series of partial observations from a large-dimensional system; and (ii) comparing these closure models with the optimal closures predicted by the Mori-Zwanzig (MZ) formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a generalization and a time-continuous limit of existing multilevel, regression-based approaches to closure in a data-driven setting; these approaches include empirical model reduction (EMR), as well as more recent multi-layer modeling. It is shown that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the MZ formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are derived on the structure of the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a broad class of MSM applications, a class that includes non-polynomial predictors and nonlinearities that do not necessarily preserve quadratic energy invariants. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. It is shown that the resulting closure model with energy-conserving nonlinearities efficiently captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The challenges here include the rarity of strange attractors in the model's parameter space and the existence of multiple attractor basins with fractal boundaries. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up.
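    A cartoon of the multilevel regression idea behind EMR-type closures is sketched below on a synthetic scalar time series: the main level regresses increments on the observed variable, and a second level models the residuals as an additional hidden layer. This is only a schematic of the layering, not the MSM formulation of the paper.

```python
# Minimal sketch (synthetic data): two-level regression in the spirit of EMR-type
# closure models.  The main level regresses increments of an observed variable on
# the variable itself; a second level regresses the residual increments on the
# state and the residuals, playing the role of a hidden layer.
import numpy as np

rng = np.random.default_rng(3)
T = 2000
x = np.zeros(T)
for t in range(T - 1):                               # toy partially observed series
    x[t + 1] = 0.95 * x[t] + 0.1 * rng.normal()

dx = np.diff(x)
X_main = np.column_stack([x[:-1], np.ones(T - 1)])
coef_main, *_ = np.linalg.lstsq(X_main, dx, rcond=None)
r = dx - X_main @ coef_main                          # main-level residuals

X_layer1 = np.column_stack([x[:-2], r[:-1], np.ones(T - 2)])
coef_layer1, *_ = np.linalg.lstsq(X_layer1, np.diff(r), rcond=None)
print("main-level coefficients:", coef_main)
print("residual-layer coefficients:", coef_layer1)
```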

  11. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  12. Lockheed Martin approach to a Reusable Launch Vehicle (RLV)

    NASA Astrophysics Data System (ADS)

    Elvin, John D.

    1996-03-01

    This paper discusses Lockheed Martin's perspective on the development of a cost effective Reusable Launch Vehicle (RLV). Critical to a successful Single Stage To Orbit (SSTO) program are: an economic development plan sensitive to fiscal constraints; a vehicle concept satisfying present and future US launch needs; and an operations concept commensurate with a market driven program. Participation in the economic plan by government, industry, and the commercial sector is a key element of integrating our development plan and funding profile. The RLV baseline concept design, development evolution and several critical trade studies illustrate the superior performance achieved by our innovative approach to the problem of SSTO. Findings from initial aerodynamic and aerothermodynamic wind tunnel tests and trajectory analyses on this concept confirm the superior characteristics of the lifting body shape combined with the Linear Aerospike rocket engine. This Aero Ballistic Rocket (ABR) concept captures the essence of The Skunk Works approach to SSTO RLV technology integration and system engineering. These programmatic and concept development topics chronicle the key elements to implementing an innovative market driven next generation RLV.

  13. Life Support Systems for Lunar Landers

    NASA Technical Reports Server (NTRS)

    Anderson, Molly

    2008-01-01

    Engineers designing life support systems for NASA's next Lunar Landers face unique challenges. As with any vehicle that enables human spaceflight, the needs of the crew drive most of the lander requirements. The lander is also a key element of the architecture NASA will implement in the Constellation program. Many requirements, constraints, or optimization goals will be driven by interfaces with other projects, like the Crew Exploration Vehicle, the Lunar Surface Systems, and the Extravehicular Activity project. Other challenges in the life support system will be driven by the unique location of the vehicle in the environments encountered throughout the mission. This paper examines several topics that may be major design drivers for the lunar lander life support system. There are several functional requirements for the lander that may be different from previous vehicles or programs and recent experience. Some of the requirements or design drivers will change depending on the overall Lander configuration. While the configuration for a lander design is not fixed, designers can examine how these issues would impact their design and be prepared for the quick design iterations required to optimize a spacecraft.

  14. Use of an uncertainty analysis for genome-scale models as a prediction tool for microbial growth processes in subsurface environments.

    PubMed

    Klier, Christine

    2012-03-06

    The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.

  15. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.

  16. A chance-constrained stochastic approach to intermodal container routing problems

    PubMed Central

    Zhao, Yi; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389
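    The deterministic-equivalent idea behind such chance constraints can be shown in a few lines: if the total travel time is assumed normal, requiring on-time delivery with probability beta reduces to keeping the mean plus z_beta standard deviations within the deadline. The leg times below are invented, and the normality assumption is mine, not the paper's.

```python
# Minimal sketch (assumed normal travel times, not the paper's heuristic): turning
# an on-time-delivery chance constraint into a deterministic one.  If total travel
# time is normal with mean mu and variance sigma^2, requiring on-time arrival with
# probability beta means mu + z_beta * sigma <= deadline.
from scipy.stats import norm

leg_means = [30.0, 48.0, 12.0]        # rail, sea, transfer times (hours), made up
leg_vars = [4.0, 25.0, 1.0]           # independent leg variances, made up
deadline, beta = 100.0, 0.95

mu = sum(leg_means)
sigma = sum(leg_vars) ** 0.5
z = norm.ppf(beta)
print("route satisfies the chance constraint:", mu + z * sigma <= deadline)
```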

  17. Modeling damaged wings: Element selection and constraint specification

    NASA Technical Reports Server (NTRS)

    Stronge, W. J.

    1975-01-01

    The NASTRAN analytical program was used for structural design, and no problems were anticipated in applying this program to a damaged structure as long as the deformations were small and the strains remained within the elastic range. In this context, NASTRAN was used to test three-dimensional analytical models of a damaged aircraft wing under static loads. A comparison was made of calculated and experimentally measured strains on primary structural components of an RF-84F wing. This comparison brought out two sensitive areas in modeling semimonocoque structures. The calculated strains were strongly affected by the type of elements used adjacent to the damaged region and by the choice of multipoint constraints sets on the damaged boundary.

  18. Model-Driven Engineering of Machine Executable Code

    NASA Astrophysics Data System (ADS)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as target of code generation, using XML-Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint like tool. Finally, we report on the use of Prolog for writing model transformations.

  19. Describing teacher-student interactions: a qualitative assessment of teacher implementation of the 7th grade keepin' it REAL substance use intervention.

    PubMed

    Pettigrew, Jonathan; Miller-Day, Michelle; Shin, Youngju; Hecht, Michael L; Krieger, Janice L; Graham, John W

    2013-03-01

    Variations in the delivery of school-based substance use prevention curricula affect students' acquisition of the lesson content and program outcomes. Although adaptation is sometimes viewed as a lack of fidelity, it is unclear what types of variations actually occur in the classroom. This observational study investigated teacher and student behaviors during implementation of a middle school-based drug prevention curriculum in 25 schools across two Midwestern states. Trained observers coded videos of 276 lessons, reflecting a total of 31 predominantly Caucasian teachers (10 males and 21 females) in 73 different classes. Employing qualitative coding procedures, the study provides a working typology of implementation patterns based on varying levels of teacher control and student participation. These patterns are fairly consistent across lessons and across classes of students, suggesting a teacher-driven delivery model where teachers create a set of constraints within which students vary their engagement. Findings provide a descriptive basis grounded in observation of classroom implementation that can be used to test models of implementation fidelity and quality as well as impact training and other dissemination research.

  20. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  1. JWST Operations and the Phase I and II Process

    NASA Astrophysics Data System (ADS)

    Beck, Tracy L.

    2010-07-01

    The JWST operations and Phase I and Phase II process will build upon our knowledge of the current system in use for HST. The primary observing overheads associated with JWST observations, both direct and indirect, are summarized. While some key operations constraints for JWST may cause deviations from the HST model for proposal planning, the overall interface to JWST planning will use the APT and will appear similar to the HST interface. The requirement is to have a proposal planning model similar to HST, where proposals submitted to the TAC must have at least the minimum amount of information necessary for assessment of the strength of the science. However, a goal of the JWST planning process is to have the submitted Phase I proposal in executable form, and as complete as possible for many programs. JWST will have significant constraints on the spacecraft pointing and orient, so it is beneficial for the planning process to have these scheduling constraints on programs defined as early as possible. The guide field of JWST is also much smaller than the HST guide field, so searches for available guide stars for JWST science programs must be done at the Phase I deadline. The long range observing plan for each JWST cycle will be generated initially from the TAC accepted programs at the Phase I deadline, and the LRP will be refined after the Phase II deadline when all scheduling constraints are defined.

  2. Network-driven design principles for neuromorphic systems.

    PubMed

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They are guaranteeing faithful reproduction of the model on chip, while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify usability of the connectivity resources in these systems.

  3. Network-driven design principles for neuromorphic systems

    PubMed Central

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They are guaranteeing faithful reproduction of the model on chip, while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify usability of the connectivity resources in these systems. PMID:26539079

  4. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  5. Modeling the Structure of Helical Assemblies with Experimental Constraints in Rosetta.

    PubMed

    André, Ingemar

    2018-01-01

    Determining high-resolution structures of proteins with helical symmetry can be challenging due to limitations in experimental data. In such instances, structure-based protein simulations driven by experimental data can provide a valuable approach for building models of helical assemblies. This chapter describes how the Rosetta macromolecular package can be used to model homomeric protein assemblies with helical symmetry in a range of modeling scenarios including energy refinement, symmetrical docking, comparative modeling, and de novo structure prediction. Data-guided structure modeling of helical assemblies with experimental information from electron density, X-ray fiber diffraction, solid-state NMR, and chemical cross-linking mass spectrometry is also described.

  6. A Model of Magnetic Braking of Solar Rotation that Satisfies Observational Constraints

    NASA Astrophysics Data System (ADS)

    Denissenkov, Pavel A.

    2010-08-01

    The model of magnetic braking of solar rotation considered by Charbonneau & MacGregor has been modified so that it is able to reproduce for the first time the rotational evolution of both the fastest and slowest rotators among solar-type stars in open clusters of different ages, without coming into conflict with other observational constraints, such as the time evolution of the atmospheric Li abundance in solar twins and the thinness of the solar tachocline. This new model assumes that rotation-driven turbulent diffusion, which is thought to amplify the viscosity and magnetic diffusivity in stellar radiative zones, is strongly anisotropic with the horizontal components of the transport coefficients strongly dominating over those in the vertical direction. Also taken into account is the poloidal field decay that helps to confine the width of the tachocline at the solar age. The model's properties are investigated by numerically solving the azimuthal components of the coupled momentum and magnetic induction equations in two dimensions using a finite element method.

  7. Effects of a Data-Driven District-Level Reform Model

    ERIC Educational Resources Information Center

    Slavin, Robert E.; Holmes, GwenCarol; Madden, Nancy A.; Chamberlain, Anne; Cheung, Alan

    2010-01-01

    Despite a quarter-century of reform, US schools serving students in poverty continue to lag far behind other schools. There are proven programs, but these are not widely used. This large-scale experiment evaluated a district-level reform model created by the Center for Data-Driven Reform in Education (CDDRE). The CDDRE model provided consultation…

  8. The amplitude of the deep solar convection and the origin of the solar supergranulation

    NASA Astrophysics Data System (ADS)

    Rast, Mark

    2016-10-01

    Recent observations and models have raised questions about our understanding of the dynamics of the deep solar convection. In particular, the amplitude of low wavenumber convective motions appears to be too high in both local area radiative magnetohydrodynamic and global spherical shell magnetohydrodynamic simulations. In global simulations this results in weaker than needed rotational constraints and consequent non solar-like differential rotation profiles. In deep local area simulations it yields strong horizontal flows in the photosphere on scales much larger than the observed supergranulation. We have undertaken numerical studies that suggest that the solution to this problem is closely related to the long-standing question of the origin of the solar supergranulation. Two possibilities have emerged. One suggests that small scale photospherically driven motions dominate convective transport even at depth, descending through a very nearly adiabatic interior (more nearly adiabatic than current convection models achieve). Convection of this form can meet Rossby number constraints set by global scale motions and implies that the solar supergranulation is the largest buoyantly driven scale of motion in the Sun. The other possibility is that large scale convection driven deep in the Sun dynamically couples to the near surface shear layer, perhaps as its origin. In this case supergranulation would be the largest non-coupled convective mode, or only weakly coupled, thus potentially explaining the observed excess power in the prograde direction. Recent helioseismic results lend some support to this. We examine both of these possibilities using carefully designed numerical experiments, and weigh their plausibilities in light of recent observations.

  9. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  10. Linear programming: an alternative approach for developing formulations for emergency food products.

    PubMed

    Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra

    2018-03-01

    To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing formulations for food products. The model consisted of the objective function and the decision variables, which were the formulation costs and weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and the World Health Organization specifications of the content of nutrients in the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints for energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing formulations of food products to meet a set of nutritional requirements. This method is an excellent alternative to the traditional 'trial and error' method in designing formulations. © 2017 Society of Chemical Industry.
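    The linearization of an energy-ratio constraint mentioned above can be illustrated directly: since total energy is linear in the ingredient weights, a requirement such as "protein supplies at least 12% of energy" multiplies through to a linear inequality. The ingredient data below are made up for illustration and are not the study's commodities.

```python
# Minimal sketch (made-up ingredient data): a least-cost blend with a linearized
# energy-ratio constraint.  Requiring energy from protein to be at least 12% of
# total energy, protein_kcal >= 0.12 * total_kcal, stays linear in the weights.
import numpy as np
from scipy.optimize import linprog

# per gram of each ingredient: cost, total energy (kcal), energy from protein (kcal)
cost    = np.array([0.002, 0.004, 0.010])
energy  = np.array([3.6, 5.5, 4.0])
protein = np.array([0.4, 0.8, 2.4])

A_ub = [
    -energy,                               # total energy >= 500 kcal
    0.12 * energy - protein,               # protein share of energy >= 12%
]
b_ub = [-500.0, 0.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("grams of each ingredient:", res.x, "cost:", res.fun)
```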

  11. From the Tightrope: Designing, Developing, and Delivering an Alternative Teacher Education Model

    ERIC Educational Resources Information Center

    Yancey, Patty

    2006-01-01

    In the fall of 2003 a number of factors came together to create a fertile environment for developing an alternative, pre-service teacher education model. The overarching goal of the model is to diversify a rural university's credential program(s) by developing and offering alternative paths toward teacher certification within the constraints of a…

  12. Formulating a stand-growth model for mathematical programming problems in Appalachian forests

    Treesearch

    Gary W. Miller; Jay Sullivan

    1993-01-01

    Some growth and yield simulators applicable to central hardwood forests can be formulated for use in mathematical programming models that are designed to optimize multi-stand, multi-resource management problems. Once in the required format, growth equations serve as model constraints, defining the dynamics of stand development brought about by harvesting decisions. In...

  13. Example-Based Automatic Music-Driven Conventional Dance Motion Synthesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Fan, Rukun; Geng, Weidong

    We introduce a novel method for synthesizing dance motions that follow the emotions and contents of a piece of music. Our method employs a learning-based approach to model the music to motion mapping relationship embodied in example dance motions along with those motions' accompanying background music. A key step in our method is to train a music to motion matching quality rating function through learning the music to motion mapping relationship exhibited in synchronized music and dance motion data, which were captured from professional human dance performance. To generate an optimal sequence of dance motion segments to match with a piece of music, we introduce a constraint-based dynamic programming procedure. This procedure considers both music to motion matching quality and visual smoothness of a resultant dance motion sequence. We also introduce a two-way evaluation strategy, coupled with a GPU-based implementation, through which we can execute the dynamic programming process in parallel, resulting in significant speedup. To evaluate the effectiveness of our method, we quantitatively compare the dance motions synthesized by our method with motion synthesis results by several peer methods using the motions captured from professional human dancers' performance as the gold standard. We also conducted several medium-scale user studies to explore how perceptually our dance motion synthesis method can outperform existing methods in synthesizing dance motions to match with a piece of music. These user studies produced very positive results on our music-driven dance motion synthesis experiments for several Asian dance genres, confirming the advantages of our method.
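    The dynamic programming step can be sketched as follows, with random matching and smoothness scores standing in for the learned rating function; the recursion keeps, for each music segment, the best score given the previous motion choice, then backtracks to recover a sequence. This is a generic sketch of the technique, not the paper's GPU implementation.

```python
# Minimal sketch (random scores, not the learned rating function): dynamic
# programming over candidate motion segments.  match[t][k] scores how well motion
# segment k fits music segment t, and smooth[j][k] scores the visual transition
# from segment j to segment k; the best sequence maximizes their sum.
import numpy as np

rng = np.random.default_rng(4)
n_music, n_motions = 5, 8
match = rng.random((n_music, n_motions))
smooth = rng.random((n_motions, n_motions))

best = match[0].copy()
back = np.zeros((n_music, n_motions), dtype=int)
for t in range(1, n_music):
    scores = best[:, None] + smooth + match[t][None, :]   # previous j -> current k
    back[t] = scores.argmax(axis=0)
    best = scores.max(axis=0)

seq = [int(best.argmax())]                 # backtrack from the best final segment
for t in range(n_music - 1, 0, -1):
    seq.append(int(back[t][seq[-1]]))
seq.reverse()
print("selected motion segments:", seq)
```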

  14. An inexact chance-constrained programming model for water quality management in Binhai New Area of Tianjin, China.

    PubMed

    Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R

    2011-04-15

    In this study, an inexact-chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, thus applicability of the modeling process can be highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints of water environmental capacity of pollutant. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decision of wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry

    NASA Technical Reports Server (NTRS)

    Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.

    2004-01-01

    Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight in traditional trial and error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database driven and direct evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then gives a new local optimum and/or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive with increasing dimensionality. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables is increased.
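
    The database-driven loop described above can be sketched with a made-up two-variable objective standing in for a high-fidelity simulation: a radial-basis-function approximation of the design database is optimized, the winning candidate is re-evaluated with the expensive model, and the database is enriched for the next iteration.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import minimize

    def expensive_objective(x):
        # Stand-in for a high-fidelity simulation (e.g. drag as a function of two design variables)
        return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.2) ** 2 + 0.1 * np.sin(5 * x[0])

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(20, 2))                 # initial database of evaluated designs
    y = np.array([expensive_objective(x) for x in X])

    for _ in range(5):
        # Approximation model of the database; a little smoothing keeps the fit stable
        surrogate = RBFInterpolator(X, y, smoothing=1e-6)
        # Optimize the cheap surrogate instead of the expensive simulation
        res = minimize(lambda x: float(surrogate(x[None, :])[0]),
                       x0=X[np.argmin(y)], bounds=[(-1, 1), (-1, 1)])
        # Evaluate the candidate with the high-fidelity model and enrich the database
        X = np.vstack([X, res.x])
        y = np.append(y, expensive_objective(res.x))

    print("best design:", X[np.argmin(y)], "objective:", y.min())
    ```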

  16. Modeling the Galaxy-Halo Connection: An open-source approach with Halotools

    NASA Astrophysics Data System (ADS)

    Hearin, Andrew

    2016-03-01

    Although the modern form of galaxy-halo modeling has been in place for over ten years, there exists no common code base for carrying out large-scale structure calculations. Considering, for example, the advances in CMB science made possible by Boltzmann-solvers such as CMBFast, CAMB and CLASS, there are clear precedents for how theorists working in a well-defined subfield can mutually benefit from such a code base. Motivated by these and other examples, I present Halotools: an open-source, object-oriented python package for building and testing models of the galaxy-halo connection. Halotools is community-driven, and already includes contributions from over a dozen scientists spread across numerous universities. Designed with high-speed performance in mind, the package generates mock observations of synthetic galaxy populations with sufficient speed to conduct expansive MCMC likelihood analyses over a diverse and highly customizable set of models. The package includes an automated test suite and extensive web-hosted documentation and tutorials (halotools.readthedocs.org). I conclude the talk by describing how Halotools can be used to analyze existing datasets to obtain robust and novel constraints on galaxy evolution models, and by outlining the Halotools program to prepare the field of cosmology for the arrival of Stage IV dark energy experiments.
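
    Halotools' own documentation should be consulted for its actual API; the plain-NumPy sketch below only illustrates the kind of galaxy-halo model such a package populates mocks with, here a simple halo occupation distribution with Bernoulli centrals and Poisson satellites. The halo catalogue and all parameter values are invented for illustration.

    ```python
    import numpy as np
    from scipy.special import erf

    rng = np.random.default_rng(2)

    # Hypothetical halo catalogue: halo masses in Msun/h, log-uniform between 1e11 and 1e15
    halo_mass = 10 ** rng.uniform(11, 15, size=100_000)

    # A simple halo occupation distribution (HOD), loosely in the spirit of standard
    # central/satellite parameterizations; all parameter values are illustrative only.
    logMmin, sigma, logM1, alpha = 12.0, 0.25, 13.0, 1.0
    n_cen = 0.5 * (1 + erf((np.log10(halo_mass) - logMmin) / sigma))  # mean central occupation
    n_sat = n_cen * (halo_mass / 10 ** logM1) ** alpha                # mean satellite occupation

    # Populate a mock: Bernoulli centrals, Poisson satellites
    centrals = rng.random(halo_mass.size) < n_cen
    satellites = rng.poisson(n_sat)
    print("mock galaxies:", int(centrals.sum() + satellites.sum()))
    ```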

  17. Industrial robot

    NASA Astrophysics Data System (ADS)

    Prakashan, A.; Mukunda, H. S.; Samuel, S. D.; Colaco, J. C.

    1992-11-01

    This paper addresses the design and development of a four degree of freedom industrial manipulator, with three linear axes in the positioning mechanism and one rotary axis in the orientation mechanism. The positioning mechanism joints are driven with dc servo motors fitted with incremental shaft encoders. The rotary joint of the orientation mechanism is driven by a stepping motor. The manipulator is controlled by an IBM 386 PC/AT. Microcomputer based interface cards have been developed for independent joint control. PID controllers for dc motors have been designed. Kinematic modeling, dynamic modeling, and path planning have been carried out to generate the control sequence to accomplish a given task with reference to source and destination state constraints. This project has been sponsored by the Department of Science and Technology, Government of India, New Delhi, and has been executed in collaboration with M/s Larsen & Toubro Ltd, Mysore, India.

  18. Reduced-Size Integer Linear Programming Models for String Selection Problems: Application to the Farthest String Problem.

    PubMed

    Zörnig, Peter

    2015-08-01

    We present integer programming models for some variants of the farthest string problem. The number of variables and constraints is substantially less than that of the integer linear programming models known in the literature. Moreover, the solution of the linear programming-relaxation contains only a small proportion of noninteger values, which considerably simplifies the rounding process. Numerical tests have shown excellent results, especially when a small set of long sequences is given.
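
    The reduced-size formulations of the paper are not reproduced here; the sketch below encodes the textbook integer program for the farthest string problem (binary character-choice variables plus a minimum-distance variable) on a tiny made-up instance, using SciPy's MILP interface.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    strings = ["ACGT", "AGGT", "ACCT"]              # hypothetical instance
    alphabet = sorted(set("".join(strings)))
    m, a = len(strings[0]), len(alphabet)

    # Variables: x[j, c] = 1 if position j of the output string is character c, plus one
    # integer variable d = guaranteed minimum Hamming distance to every input string.
    n = m * a + 1

    def var(j, c):
        return j * a + c

    cost = np.zeros(n)
    cost[-1] = -1.0                                  # maximize d  <=>  minimize -d

    rows, lb, ub = [], [], []
    for j in range(m):                               # exactly one character per position
        row = np.zeros(n)
        row[[var(j, c) for c in range(a)]] = 1
        rows.append(row)
        lb.append(1)
        ub.append(1)
    for s in strings:                                # matches_i + d <= m, i.e. Hamming distance >= d
        row = np.zeros(n)
        for j, ch in enumerate(s):
            row[var(j, alphabet.index(ch))] = 1
        row[-1] = 1
        rows.append(row)
        lb.append(-np.inf)
        ub.append(m)

    res = milp(cost,
               constraints=LinearConstraint(np.array(rows), lb, ub),
               integrality=np.ones(n),
               bounds=Bounds(np.zeros(n), np.r_[np.ones(n - 1), m]))
    x = res.x[:-1].round().reshape(m, a)
    print("farthest string:", "".join(alphabet[c] for c in x.argmax(axis=1)),
          "| min Hamming distance:", int(round(res.x[-1])))
    ```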

  19. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
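
    The core reformulation can be sketched with made-up numbers: a single linearized voltage constraint is tightened by a Chebyshev (Cantelli) margin built from the online estimates of the forecast-error mean and covariance. The sensitivities and statistics below are hypothetical and do not describe any real feeder.

    ```python
    import numpy as np

    # Hypothetical linearized voltage model: v = v0 + a @ xi, with xi the vector of
    # forecast errors (loads and PV). Mean and covariance are estimated online.
    v0, vmax, eps = 1.00, 1.05, 0.05
    a = np.array([0.010, 0.015, -0.008])          # sensitivities from the linearized power flow
    mu = np.array([0.2, -0.1, 0.3])               # online estimate of the forecast-error mean
    Sigma = np.diag([0.4, 0.6, 0.5]) ** 2         # online estimate of the forecast-error covariance

    v_mean = v0 + a @ mu
    v_std = np.sqrt(a @ Sigma @ a)

    # Distribution-agnostic (Cantelli/Chebyshev) bound:
    # P(v <= vmax) >= 1 - eps is guaranteed if v_mean + sqrt((1 - eps) / eps) * v_std <= vmax
    margin = np.sqrt((1 - eps) / eps) * v_std
    print(f"tightened limit check: {v_mean + margin:.4f} <= {vmax}  ->  {v_mean + margin <= vmax}")
    ```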

  20. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach. (C) 2016 Elsevier B.V. All rights reserved.
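
    A toy constraint programming model in the same spirit, assuming the python-constraint package and a four-team single round robin rather than Elitserien's actual format: each match is a variable whose value is its round, an all-different constraint per team enforces one game per team per round, and pinning the first match to round 0 is a simple symmetry-breaking step of the kind the paper exploits.

    ```python
    from itertools import combinations
    from constraint import Problem, AllDifferentConstraint

    teams = range(4)                               # toy league; Elitserien itself has 14 teams
    matches = list(combinations(teams, 2))         # single round robin: every pair meets once
    rounds = range(3)                              # 4 teams -> 3 rounds, 2 matches per round

    problem = Problem()
    problem.addVariables(matches, list(rounds))

    # Each team plays exactly once per round: its 3 matches must get distinct rounds
    for t in teams:
        problem.addConstraint(AllDifferentConstraint(),
                              [m for m in matches if t in m])

    # Simple symmetry breaking: pin the first match to round 0
    problem.addConstraint(lambda r: r == 0, [matches[0]])

    solution = problem.getSolution()
    for r in rounds:
        print("round", r, [m for m in matches if solution[m] == r])
    ```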

  1. Assessing cloud radiative effects on tropospheric photolysis rates and key oxidants during aircraft campaigns using satellite cloud observations and a global chemical transport model

    NASA Astrophysics Data System (ADS)

    Zhang, B.; Liu, H.; Crawford, J. H.; Chen, G.; Voulgarakis, A.; Fairlie, T. D.; Duncan, B. N.; Ham, S. H.; Kato, S.; Payer Sulprizio, M.; Yantosca, R.

    2017-12-01

    Clouds affect tropospheric photochemistry through modifying solar radiation that determines photolysis rates. Observational and modeling studies have indicated that photolysis rates are enhanced above and in the upper portion of cloud layers and are reduced below optically thick clouds due to their dominant backscattering effect. However, large uncertainties exist in the representation of cloud spatiotemporal (especially vertical) distributions in global models, which makes understanding of cloud radiative effects on tropospheric chemistry challenging. Our previous study using a global 3-D chemical transport model (GEOS-Chem) driven by various meteorological data sets showed that the radiative effects of clouds on photochemistry are more sensitive to the differences in the vertical distribution of clouds than to those in the magnitude of column cloud optical depths. In this work, we evaluate monthly mean cloud optical properties and distributions in the MERRA-2 reanalysis with those in C3M, a 3-D cloud data product developed at NASA Langley Research Center and merged from multiple A-Train satellite (CERES, CloudSat, CALIPSO, and MODIS) observations. We conduct tropospheric chemistry simulations for the periods of several aircraft campaigns, including ARCTAS (April, June-July, 2008), DC3 (May-June, 2012), and SEAC4RS (August-September, 2013) with GEOS-Chem driven by MERRA-2. We compare model simulations with and without constraints of cloud optical properties and distributions from C3M, and evaluate model photolysis rates (J[O1D] and J[NO2]) and key oxidants (e.g., OH and ozone) with aircraft profile measurements. We will assess whether the constraints provided by C3M improve model simulations of photolysis rates and oxidants as well as their variabilities.

  2. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  3. Interval-parameter chance-constraint programming model for end-of-life vehicles management under rigorous environmental regulations.

    PubMed

    Simic, Vladimir

    2016-06-01

    As the number of end-of-life vehicles (ELVs) is estimated to increase to 79.3 million units per year by 2020 (e.g., 40 million units were generated in 2010), there is strong motivation to effectively manage this fast-growing waste flow. Intensive work on the management of ELVs is necessary in order to more successfully tackle this important environmental challenge. This paper proposes an interval-parameter chance-constraint programming model for end-of-life vehicle management under rigorous environmental regulations. The proposed model can incorporate various kinds of uncertainty information in the modeling process. The complex relationships between different ELV management sub-systems are successfully addressed. In particular, the formulated model can help identify optimal patterns of procurement from multiple sources of ELV supply, production and inventory planning in multiple vehicle recycling factories, and allocation of sorted material flows to multiple final destinations under rigorous environmental regulations. A case study is conducted in order to demonstrate the potential and applicability of the proposed model. Various constraint-violation probability levels are examined in detail. Influences of parameter uncertainty on model solutions are thoroughly investigated. Useful solutions for the management of ELVs are obtained under different probabilities of violating the system constraints. The formulated model is able to tackle a hard ELV management problem under uncertainty. The presented model has advantages in providing a basis for determining long-term ELV management plans with desired compromises between the economic efficiency of the vehicle recycling system and system-reliability considerations. The results are helpful for supporting the generation and improvement of ELV management plans. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. A chance-constrained programming model to allocate wildfire initial attack resources for a fire season

    Treesearch

    Yu Wei; Michael Bevers; Erin Belval; Benjamin Bird

    2015-01-01

    This research developed a chance-constrained two-stage stochastic programming model to support wildfire initial attack resource acquisition and location on a planning unit for a fire season. Fire growth constraints account for the interaction between fire perimeter growth and construction to prevent overestimation of resource requirements. We used this model to examine...

  5. A New Availability-Payment Model for Pricing Performance-Based Logistics Contracts

    DTIC Science & Technology

    2014-06-17

    [Only fragments of this record's abstract survived extraction: a note that the contractor maintains a steady revenue (with profit) under the proposed affine controller model; a figure caption, "Figure 4. Affine Controller Model for Availability Contract"; a mention of a bankruptcy constraint and of Deduction(∙) as decision variables in the level-one (public-sector) contract-design problem; and the report header "UMD-CM-14-175, Acquisition Research Program Sponsored Report Series, A New 'Availability-Payment' Model for Pricing Performance-Based Logistics Contracts".]

  6. Modeling and Analysis of Power Processing Systems (MAPPS). Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Radman, S.; Carter, R. A.; Wu, C. H.; Yu, Y.; Chang, R.

    1980-01-01

    The computer programs and derivations generated in support of the modeling and design optimization program are presented. Programs for the buck regulator, boost regulator, and buck-boost regulator are described. The computer program for the design optimization calculations is presented. Constraints for the boost and buck-boost converter were derived. Derivations of state-space equations and transfer functions are presented. Computer lists for the converters are presented, and the input parameters justified.

  7. Data-driven modeling of solar-powered urban microgrids

    PubMed Central

    Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C.

    2016-01-01

    Distributed generation takes center stage in today’s rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids. PMID:26824071

  8. Data-driven modeling of solar-powered urban microgrids.

    PubMed

    Halu, Arda; Scala, Antonio; Khiyami, Abdulaziz; González, Marta C

    2016-01-01

    Distributed generation takes center stage in today's rapidly changing energy landscape. Particularly, locally matching demand and generation in the form of microgrids is becoming a promising alternative to the central distribution paradigm. Infrastructure networks have long been a major focus of complex networks research with their spatial considerations. We present a systemic study of solar-powered microgrids in the urban context, obeying real hourly consumption patterns and spatial constraints of the city. We propose a microgrid model and study its citywide implementation, identifying the self-sufficiency and temporal properties of microgrids. Using a simple optimization scheme, we find microgrid configurations that result in increased resilience under cost constraints. We characterize load-related failures solving power flows in the networks, and we show the robustness behavior of urban microgrids with respect to optimization using percolation methods. Our findings hint at the existence of an optimal balance between cost and robustness in urban microgrids.

  9. Mapping the developmental constraints on working memory span performance.

    PubMed

    Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor

    2005-07-01

    This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.

  10. Complete denture tooth arrangement technology driven by a reconfigurable rule.

    PubMed

    Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun

    2018-01-01

    The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging issue that is difficult to efficiently implement under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, including a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By using the process reorganization of different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individual tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.

  11. Using a systems orientation and foundational theory to enhance theory-driven human service program evaluations.

    PubMed

    Wasserman, Deborah L

    2010-05-01

    This paper offers a framework for using a systems orientation and "foundational theory" to enhance theory-driven evaluations and logic models. The framework guides the process of identifying and explaining operative relationships and perspectives within human service program systems. Self-Determination Theory exemplifies how a foundational theory can be used to support the framework in a wide range of program evaluations. Two examples illustrate how applications of the framework have improved the evaluators' abilities to observe and explain program effect. In both exemplars improvements involved addressing and organizing into a single logic model heretofore seemingly disparate evaluation issues regarding valuing (by whose values); the role of organizational and program context; and evaluation anxiety and utilization. Copyright 2009 Elsevier Ltd. All rights reserved.

  12. Diet models with linear goal programming: impact of achievement functions.

    PubMed

    Gerdessen, J C; de Vries, J H M

    2015-11-01

    Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that from one set of data and weights multiple solutions can be obtained. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. Choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes, and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
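
    A minimal sketch of the two achievement functions on a made-up diet problem (three foods, two nutrient goals, unweighted deviations): both variants introduce under- and over-achievement variables for each goal; MinSum minimizes their total, while MinMax minimizes an auxiliary bound on the largest deviation. The EGP function described in the abstract interpolates between these two objectives.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: 3 foods x 2 nutrients (content per serving) and nutrient targets
    N = np.array([[0.8, 0.1],     # food 1
                  [0.2, 0.6],     # food 2
                  [0.5, 0.5]])    # food 3
    target = np.array([6.0, 1.0])
    max_amount = 6.0              # crude palatability constraint: servings per food

    n_food, n_goal = N.shape

    def solve(minmax):
        # Variables: [x (foods), dneg (under-achievement), dpos (over-achievement), z]
        n = n_food + 2 * n_goal + 1
        # Goal equations: N.T @ x + dneg - dpos = target
        A_eq = np.hstack([N.T, np.eye(n_goal), -np.eye(n_goal), np.zeros((n_goal, 1))])
        c = np.zeros(n)
        A_ub = b_ub = None
        if minmax:
            c[-1] = 1.0                                   # MinMax: minimize the largest deviation z
            A_ub = np.zeros((2 * n_goal, n))              # dneg_j <= z and dpos_j <= z
            A_ub[:n_goal, n_food:n_food + n_goal] = np.eye(n_goal)
            A_ub[n_goal:, n_food + n_goal:n_food + 2 * n_goal] = np.eye(n_goal)
            A_ub[:, -1] = -1.0
            b_ub = np.zeros(2 * n_goal)
        else:
            c[n_food:n_food + 2 * n_goal] = 1.0           # MinSum: minimize the sum of deviations
        bounds = [(0, max_amount)] * n_food + [(0, None)] * (2 * n_goal + 1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=target, bounds=bounds)
        return res.x[:n_food], res.x[n_food:n_food + 2 * n_goal]

    for name, flag in [("MinSum", False), ("MinMax", True)]:
        x, dev = solve(flag)
        print(f"{name}: servings {np.round(x, 2)}, goal deviations {np.round(dev, 2)}")
    ```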

  13. Declarative Programming with Temporal Constraints, in the Language CG.

    PubMed

    Negreanu, Lorina

    2015-01-01

    Specifying and interpreting temporal constraints are key elements of knowledge representation and reasoning, with applications in temporal databases, agent programming, and ambient intelligence. We present and formally characterize the language CG, which tackles this issue. In CG, users are able to develop time-dependent programs, in a flexible and straightforward manner. Such programs can, in turn, be coupled with evolving environments, thus empowering users to control the environment's evolution. CG relies on a structure for storing temporal information, together with a dedicated query mechanism. Hence, we explore the computational complexity of our query satisfaction problem. We discuss previous implementation attempts of CG and introduce a novel prototype which relies on logic programming. Finally, we address the issue of consistency and correctness of CG program execution, using the Event-B modeling approach.

  14. The LSST operations simulator

    NASA Astrophysics Data System (ADS)

    Delgado, Francisco; Saha, Abhijit; Chandrasekharan, Srinivasan; Cook, Kem; Petry, Catherine; Ridgway, Stephen

    2014-08-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://www.lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions as well as additional scheduled and unscheduled downtime. It has a detailed model to simulate the external conditions with real weather history data from the site, a fully parameterized kinematic model for the internal conditions of the telescope, camera and dome, and serves as a prototype for an automatic scheduler for the real time survey operations with LSST. The Simulator is a critical tool that has been key since very early in the project, to help validate the design parameters of the observatory against the science requirements and the goals from specific science programs. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. Software to efficiently compare the efficacy of different survey strategies for a wide variety of science applications using such a growing set of metrics is under development. A recent restructuring of the code allows us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator is being used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities and assist with performance margin investigations of the LSST system.

  15. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

  16. Integrating ergonomics knowledge into business-driven design projects: The shaping of resource constraints in engineering consultancy.

    PubMed

    Hall-Andersen, Lene Bjerg; Neumann, Patrick; Broberg, Ole

    2016-10-17

    The integration of ergonomics knowledge into engineering projects leads to both healthier and more efficient workplaces. There is a lack of knowledge about integrating ergonomic knowledge into the design practice in engineering consultancies. This study explores how organizational resources can pose constraints for the integration of ergonomics knowledge into engineering design projects in a business-driven setting, and how ergonomists cope with these resource constraints. An exploratory case study in an engineering consultancy was conducted. A total of 27 participants were interviewed. Data were collected applying semi-structured interviews, observations, and documentary studies. Interviews were transcribed, coded, and categorized into themes. From the analysis five overall themes emerged as major constituents of resource constraints: 1) maximizing project revenue, 2) payment for ergonomics services, 3) value of ergonomic services, 4) role of the client, and 5) coping strategies to overcome resource constraints. We hypothesize that resource constraints were shaped due to sub-optimization of costs in design projects. The economical contribution of ergonomics measures was not evaluated in the entire life cycle of a designed workplace. Coping strategies included teaming up with engineering designers in the sales process or creating an alliance with ergonomists in the client organization.

  17. Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization

    PubMed Central

    Marai, G. Elisabeta

    2018-01-01

    Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process with the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550

  18. Home visiting programs for HIV-affected families: a comparison of service quality between volunteer-driven and paraprofessional models.

    PubMed

    Kidman, Rachel; Nice, Johanna; Taylor, Tory; Thurman, Tonya R

    2014-10-02

    Home visiting is a popular component of programs for HIV-affected children in sub-Saharan Africa, but its implementation varies widely. While some home visitors are lay volunteers, other programs invest in more highly trained paraprofessional staff. This paper describes a study investigating whether additional investment in paraprofessional staffing translated into higher quality service delivery in one program context. Beneficiary children and caregivers at sites in KwaZulu-Natal, South Africa were interviewed after 2 years of program enrollment and asked to report about their experiences with home visiting. Analysis focused on intervention exposure, including visit intensity, duration and the kinds of emotional, informational and tangible support provided. Few beneficiaries reported receiving home visits in program models primarily driven by lay volunteers; when visits did occur, they were shorter and more infrequent. Paraprofessional-driven programs not only provided significantly more home visits, but also provided greater interaction with the child, communication on a larger variety of topics, and more tangible support to caregivers. These results suggest that programs that invest in compensation and extensive training for home visitors are better able to serve and retain beneficiaries, and they support a move toward establishing a professional workforce of home visitors to support vulnerable children and families in South Africa.

  19. Leading Change: A Case Study of Alamo Academies--An Industry-Driven Workforce Partnership Program

    ERIC Educational Resources Information Center

    Hu, Xiaodan; Bowman, Gene

    2016-01-01

    In this study, the authors focus on the initiation and development of the Alamo Academies, aiming to illustrate an exemplary industry-driven model that addresses workforce development in local community. After a brief introduction of the context, the authors summarized major factors that contribute to the success of the collaboration model,…

  20. Saul: Towards Declarative Learning Based Programming

    PubMed Central

    Kordjamshidi, Parisa; Roth, Dan; Wu, Hao

    2015-01-01

    We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction. PMID:26635465

  1. Saul: Towards Declarative Learning Based Programming.

    PubMed

    Kordjamshidi, Parisa; Roth, Dan; Wu, Hao

    2015-07-01

    We present Saul, a new probabilistic programming language designed to address some of the shortcomings of programming languages that aim at advancing and simplifying the development of AI systems. Such languages need to interact with messy, naturally occurring data, to allow a programmer to specify what needs to be done at an appropriate level of abstraction rather than at the data level, to be developed on a solid theory that supports moving to and reasoning at this level of abstraction and, finally, to support flexible integration of these learning and inference models within an application program. Saul is an object-functional programming language written in Scala that facilitates these by (1) allowing a programmer to learn, name and manipulate named abstractions over relational data; (2) supporting seamless incorporation of trainable (probabilistic or discriminative) components into the program, and (3) providing a level of inference over trainable models to support composition and make decisions that respect domain and application constraints. Saul is developed over a declaratively defined relational data model, can use piecewise learned factor graphs with declaratively specified learning and inference objectives, and it supports inference over probabilistic models augmented with declarative knowledge-based constraints. We describe the key constructs of Saul and exemplify its use in developing applications that require relational feature engineering and structured output prediction.

  2. Attribution of trends in global vegetation greenness from 1982 to 2011

    NASA Astrophysics Data System (ADS)

    Zhu, Z.; Xu, L.; Bi, J.; Myneni, R.; Knyazikhin, Y.

    2012-12-01

    Time series of remotely sensed vegetation indices provide evidence of changes in terrestrial vegetation activity over the past decades around the world. However, it is difficult to attribute cause-and-effect to vegetation trends because variations in vegetation productivity are driven by various factors. This study first investigated changes in global vegetation productivity, and then attributed the greening trend in global natural vegetation. Growing season integrated normalized difference vegetation index (GSI NDVI) derived from the new GIMMS NDVI3g dataset (1982-2011) was analyzed. A combined time series analysis model, developed from the simple linear trend (SLT) model, the autoregressive integrated moving average (ARIMA) model and Vogelsang's t-PST model, shows that productivity of all vegetation types except deciduous broadleaf forest predominantly showed increasing trends through the 30-year period. The evolution of changes in productivity in the last decade was also investigated. The area of greening vegetation monotonically increased through the last decade, and both the browning and no-change areas monotonically decreased. To attribute the predominant increasing trend in the productivity of global natural vegetation, trends of eight climate time series datasets (three temperature, three precipitation and two radiation datasets) were analyzed. The attribution of trends in global vegetation greenness was summarized as relaxation of climatic constraints, fertilization and other unknown causes. Results show that nearly all of the productivity increase of global natural vegetation was driven by relaxation of climatic constraints and fertilization, which play equally important roles in driving global vegetation greenness. [Figure caption: Area fraction and productivity change fraction of IGBP vegetation land cover classes showing statistically significant (10% level) trends in GSI NDVI.]
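
    Two of the simplest ingredients of such a trend analysis can be sketched on a synthetic stand-in for a GSI NDVI series (the GIMMS NDVI3g data themselves are not reproduced here): an ordinary-least-squares linear trend, and an ARIMA model with a trend term to account for autocorrelated residuals.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(3)
    years = np.arange(1982, 2012)
    # Synthetic stand-in for a growing-season-integrated NDVI series with a weak greening trend
    ndvi = 0.55 + 0.002 * (years - years[0]) + rng.normal(0, 0.02, years.size)

    # Simple linear trend (SLT): ordinary least squares slope and its significance
    X = sm.add_constant(years - years[0])
    slt = sm.OLS(ndvi, X).fit()
    print(f"SLT slope: {slt.params[1]:.4f} NDVI/yr, p-value: {slt.pvalues[1]:.4f}")

    # ARIMA with a deterministic trend term, allowing for autocorrelated residuals
    arima = ARIMA(ndvi, order=(1, 0, 0), trend="t").fit()
    print(arima.summary())
    ```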

  3. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors.

    PubMed

    Reddy, Sheila M W; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M

    2013-06-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or "plate-sized" fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen's revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (-3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen's revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations.

  4. A homeostatic-driven turnover remodelling constitutive model for healing in soft tissues

    PubMed Central

    Gasser, T. Christian; Bellomo, Facundo J.

    2016-01-01

    Remodelling of soft biological tissue is characterized by interacting biochemical and biomechanical events, which change the tissue's microstructure, and, consequently, its macroscopic mechanical properties. Remodelling is a well-defined stage of the healing process, and aims at recovering or repairing the injured extracellular matrix. Like other physiological processes, remodelling is thought to be driven by homeostasis, i.e. it tends to re-establish the properties of the uninjured tissue. However, homeostasis may never be reached, such that remodelling may also appear as a continuous pathological transformation of diseased tissues during aneurysm expansion, for example. A simple constitutive model for soft biological tissues that regards remodelling as homeostatic-driven turnover is developed. Specifically, the recoverable effective tissue damage, whose rate is the sum of a mechanical damage rate and a healing rate, serves as a scalar internal thermodynamic variable. In order to integrate the biochemical and biomechanical aspects of remodelling, the healing rate is, on the one hand, driven by mechanical stimuli, but, on the other hand, subjected to simple metabolic constraints. The proposed model is formulated in accordance with continuum damage mechanics within an open-system thermodynamics framework. The numerical implementation in an in-house finite-element code is described, particularized for Ogden hyperelasticity. Numerical examples illustrate the basic constitutive characteristics of the model and demonstrate its potential in representing aspects of remodelling of soft tissues. Simulation results are verified for their plausibility, but also validated against reported experimental data. PMID:27009177

  5. A homeostatic-driven turnover remodelling constitutive model for healing in soft tissues.

    PubMed

    Comellas, Ester; Gasser, T Christian; Bellomo, Facundo J; Oller, Sergio

    2016-03-01

    Remodelling of soft biological tissue is characterized by interacting biochemical and biomechanical events, which change the tissue's microstructure, and, consequently, its macroscopic mechanical properties. Remodelling is a well-defined stage of the healing process, and aims at recovering or repairing the injured extracellular matrix. Like other physiological processes, remodelling is thought to be driven by homeostasis, i.e. it tends to re-establish the properties of the uninjured tissue. However, homeostasis may never be reached, such that remodelling may also appear as a continuous pathological transformation of diseased tissues during aneurysm expansion, for example. A simple constitutive model for soft biological tissues that regards remodelling as homeostatic-driven turnover is developed. Specifically, the recoverable effective tissue damage, whose rate is the sum of a mechanical damage rate and a healing rate, serves as a scalar internal thermodynamic variable. In order to integrate the biochemical and biomechanical aspects of remodelling, the healing rate is, on the one hand, driven by mechanical stimuli, but, on the other hand, subjected to simple metabolic constraints. The proposed model is formulated in accordance with continuum damage mechanics within an open-system thermodynamics framework. The numerical implementation in an in-house finite-element code is described, particularized for Ogden hyperelasticity. Numerical examples illustrate the basic constitutive characteristics of the model and demonstrate its potential in representing aspects of remodelling of soft tissues. Simulation results are verified for their plausibility, but also validated against reported experimental data. © 2016 The Author(s).
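
    As a heavily simplified toy (not the paper's constitutive law), the idea of a scalar effective damage variable whose rate is the sum of a mechanical damage rate and a healing rate can be written as a single ordinary differential equation and integrated with SciPy; the rates, thresholds and the crude metabolic limitation below are invented for illustration.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy illustration only: effective damage d in [0, 1] evolves as a mechanical damage
    # rate (active above a threshold stretch) minus a homeostasis-driven healing rate.
    def damage_rate(t, d, stretch=1.15, k_dmg=0.8, k_heal=0.5, stretch_homeo=1.05):
        mechanical = k_dmg * max(stretch - 1.2, 0.0)                          # damage above threshold
        healing = k_heal * d * max(1.0 - abs(stretch - stretch_homeo), 0.0)   # metabolically limited
        return mechanical - healing

    sol = solve_ivp(damage_rate, (0.0, 30.0), y0=[0.4], max_step=0.1)
    print("damage after healing period:", float(sol.y[0, -1]))
    ```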

  6. Overview of C-2U FRC Experimental Program and Plans for C-2W

    NASA Astrophysics Data System (ADS)

    Gota, H.; Binderbauer, M. W.; Tajima, T.; Putvinski, S.; Tuszewski, M.; Dettrick, S.; Korepanov, S.; Smirnov, A.; Thompson, M. C.; Yang, X.; Cappello, M.; Ivanov, A. A.; TAE Team

    2016-10-01

    Tri Alpha Energy's experimental program has been focused on a demonstration of reliable field-reversed configuration (FRC) formation and sustainment, driven by fast ions via high-power neutral-beam (NB) injection. The world's largest compact-toroid experimental devices, C-2 and C-2U, have successfully produced a well-stabilized, sustainable FRC plasma state with NB injection (input power P_NB of 10+ MW; 15 keV hydrogen) and end-on coaxial plasma guns. Remarkable improvements in confinement and stability of FRC plasmas have led to further improved fast-ion build up; thereby, an advanced beam-driven FRC state has been produced and sustained for up to 5+ ms (longer than all characteristic system time scales), only limited by hardware and electric supply constraints such as NB and plasma-gun power supplies. To further improve the FRC performance, the C-2U device is being replaced by C-2W featuring higher injected NB power, longer pulse duration as well as enhanced edge-biasing systems and substantially upgraded divertors. Main C-2U experimental results and key features of C-2W will be presented. Tri Alpha Energy, Inc.

  7. Fall Risk Assessment Tools for Elderly Living in the Community: Can We Do Better?

    PubMed

    Palumbo, Pierpaolo; Palmerini, Luca; Bandinelli, Stefania; Chiari, Lorenzo

    2015-01-01

    Falls are a common, serious threat to the health and self-confidence of the elderly. Assessment of fall risk is an important aspect of effective fall prevention programs. In order to test whether it is possible to outperform current prognostic tools for falls, we analyzed 1010 variables pertaining to mobility collected from 976 elderly subjects (InCHIANTI study). We trained and validated a data-driven model that issues probabilistic predictions about future falls. We benchmarked the model against other fall risk indicators: history of falls, gait speed, Short Physical Performance Battery (Guralnik et al. 1994), and the literature-based fall risk assessment tool FRAT-up (Cattelani et al. 2015). Parsimony in the number of variables included in a tool is often considered a proxy for ease of administration. We studied how constraints on the number of variables affect predictive accuracy. The proposed model and FRAT-up both attained the same discriminative ability; the area under the Receiver Operating Characteristic (ROC) curve (AUC) for multiple falls was 0.71. They outperformed the other risk scores, which reported AUCs for multiple falls between 0.64 and 0.65. Thus, it appears that both data-driven and literature-based approaches are better at estimating fall risk than commonly used fall risk indicators. The accuracy-parsimony analysis revealed that tools with a small number of predictors (~1-5) were suboptimal. Increasing the number of variables improved the predictive accuracy, reaching a plateau at ~20-30, which we can consider as the best trade-off between accuracy and parsimony. Obtaining the values of these ~20-30 variables does not compromise usability, since they are usually available in comprehensive geriatric assessments.
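
    The accuracy-parsimony analysis can be sketched on synthetic data (the InCHIANTI variables are not available here): a feature-selection plus logistic-regression pipeline is cross-validated for an increasing number of predictors and the resulting AUC is recorded.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in: 976 subjects, 60 mobility-related variables, 10 of them informative
    X, y = make_classification(n_samples=976, n_features=60, n_informative=10,
                               random_state=0)

    for k in [1, 3, 5, 10, 20, 30, 60]:
        model = make_pipeline(SelectKBest(f_classif, k=k),
                              LogisticRegression(max_iter=1000))
        auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{k:2d} predictors -> cross-validated AUC {auc:.3f}")
    ```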

  8. Comparing inversion techniques for constraining CO2 fluxes in the Brazilian Amazon Basin with aircraft observations

    NASA Astrophysics Data System (ADS)

    Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.

    2012-12-01

    The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry to wet and wet to dry transition seasons in November 2008 and May 2009, respectively. It resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With the data we attempt to estimate a carbon budget for the BAB, to determine if regional aircraft experiments can provide strong constraints for a budget, and to compare inversion frameworks when optimizing flux estimates. We use an LPDM (Lagrangian particle dispersion model) to integrate satellite, aircraft, and surface data with mesoscale meteorological fields to link bottom-up and top-down models and to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model, driven by meteorological fields from BRAMS, ECMWF, and WRF, is coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation and temperature) from high-temporal-resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using land-use scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. The aircraft mixing ratios are applied as a top-down constraint in Maximum Likelihood Estimation (MLE) and Bayesian inversion frameworks that solve for parameters controlling the flux. Posterior parameter estimates are used to estimate the carbon budget of the BAB. Preliminary results show that the STILT-VPRM model simulates the net emission of CO2 during both transition periods reasonably well. There is significant enhancement from biomass burning during the November 2008 profiles and some from fossil fuel combustion during the May 2009 flights. ΔCO/ΔCO2 emission ratios are used in combination with continuous observations of CO to remove the CO2 contributions from biomass burning and fossil fuel combustion from the observed CO2 measurements, resulting in better agreement between observed and modeled aircraft data. Comparing column calculations for each of the vertical profiles shows our model represents the variability in the diurnal cycle. The high-altitude CO2 values from above 3500 m are similar to the lateral boundary conditions from CarbonTracker 2010 and GEOS-Chem, indicating little influence from surface fluxes at these levels. The MLE inversion provides scaling factors for GEE and R for each of the 8 vegetation types, and a Bayesian inversion is being conducted. Our initial inversion results suggest the BAB represents a small net source of CO2 during both of the BARCA intensives.

  9. Object matching using a locally affine invariant and linear programming techniques.

    PubMed

    Li, Hongsheng; Huang, Xiaolei; He, Lei

    2013-02-01

    In this paper, we introduce a new matching method based on a novel locally affine-invariant geometric constraint and linear programming techniques. To model and solve the matching problem in a linear programming formulation, all geometric constraints should be able to be exactly or approximately reformulated into a linear form. This is a major difficulty for this kind of matching algorithm. We propose a novel locally affine-invariant constraint which can be exactly linearized and requires a lot fewer auxiliary variables than other linear programming-based methods do. The key idea behind it is that each point in the template point set can be exactly represented by an affine combination of its neighboring points, whose weights can be solved easily by least squares. Errors of reconstructing each matched point using such weights are used to penalize the disagreement of geometric relationships between the template points and the matched points. The resulting overall objective function can be solved efficiently by linear programming techniques. Our experimental results on both rigid and nonrigid object matching show the effectiveness of the proposed algorithm.
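
    The key geometric idea, reconstructing each template point as an affine (sum-to-one) combination of its nearest neighbours with least-squares weights, can be sketched on random points. The closed form below is the familiar locally-linear-embedding-style solution of that constrained least-squares problem; the linear programming matching stage of the paper is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    pts = rng.random((30, 2))                     # hypothetical template point set

    def affine_weights(i, k=4, reg=1e-6):
        """Weights reconstructing point i as an affine combination of its k nearest neighbours."""
        p = pts[i]
        d = np.linalg.norm(pts - p, axis=1)
        nbrs = np.argsort(d)[1:k + 1]             # skip the point itself
        diff = pts[nbrs] - p                      # neighbours shifted to the point
        G = diff @ diff.T                         # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)        # regularize for numerical stability
        w = np.linalg.solve(G, np.ones(k))
        return nbrs, w / w.sum()                  # enforce the sum-to-one (affine) constraint

    nbrs, w = affine_weights(0)
    recon = w @ pts[nbrs]
    print("weights:", np.round(w, 3), "reconstruction error:", np.linalg.norm(recon - pts[0]))
    ```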

  10. Marginally fast cooling synchrotron models for prompt GRBs

    NASA Astrophysics Data System (ADS)

    Beniamini, Paz; Barniol Duran, Rodolfo; Giannios, Dimitrios

    2018-05-01

    Previous studies have considered synchrotron as the emission mechanism for prompt gamma-ray bursts (GRBs). These works have shown that the electrons must cool on a time-scale comparable to the dynamic time at the source in order to satisfy spectral constraints while maintaining high radiative efficiency. We focus on conditions where synchrotron cooling is balanced by a continuous source of heating, and in which these constraints are naturally satisfied. Assuming that a majority of the electrons in the emitting region are contributing to the observed peak, we find that the energy per electron has to be E ≳ 20 GeV and that the Lorentz factor of the emitting material has to be very large, 10^3 ≲ Γ_em ≲ 10^4, well in excess of the bulk Lorentz factor of the jet inferred from GRB afterglows. A number of independent constraints then indicate that the emitters must be moving relativistically, with Γ′ ≈ 10, relative to the bulk frame of the jet and that the jet must be highly magnetized upstream of the emission region, σ_up ≳ 30. The emission radius is also strongly constrained in this model to R ≳ 10^16 cm. These values are consistent with magnetic jet models where the dissipation is driven by magnetic reconnection that takes place far away from the base of the jet.

  11. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify constraints that the models have to check. An experiment with the approach is presented through an application developed in the ArgoUML IDE.

  12. Microscopic origin and macroscopic implications of lane formation in mixtures of oppositely driven particles

    NASA Astrophysics Data System (ADS)

    Klymko, Katherine; Geissler, Phillip L.; Whitelam, Stephen

    2016-08-01

    Colloidal particles of two types, driven in opposite directions, can segregate into lanes [Vissers et al., Soft Matter 7, 2352 (2011), 10.1039/c0sm01343a]. This phenomenon can be reproduced by two-dimensional Brownian dynamics simulations of model particles [Dzubiella et al., Phys. Rev. E 65, 021402 (2002), 10.1103/PhysRevE.65.021402]. Here we use computer simulation to assess the generality of lane formation with respect to variation of particle type and dynamical protocol. We find that laning results from rectification of diffusion on the scale of a particle diameter: oppositely driven particles must, in the time taken to encounter each other in the direction of the drive, diffuse in the perpendicular direction by about one particle diameter. This geometric constraint implies that the diffusion constant of a particle, in the presence of those of the opposite type, grows approximately linearly with the Péclet number, a prediction confirmed by our numerics over a range of model parameters. Such environment-dependent diffusion is statistically similar to an effective interparticle attraction; consistent with this observation, we find that oppositely driven nonattractive colloids display features characteristic of the simplest model system possessing both interparticle attractions and persistent motion, the driven Ising lattice gas [Katz, Leibowitz, and Spohn, J. Stat. Phys. 34, 497 (1984), 10.1007/BF01018556]. These features include long-ranged correlations in the disordered regime, a critical regime characterized by a change in slope of the particle current with the Péclet number, and fluctuations that grow with system size. By analogy, we suggest that lane formation in the driven colloid system is a phase transition in the macroscopic limit, but that macroscopic phase separation would not occur in finite time upon starting from disordered initial conditions.

  13. LexValueSets: An Approach for Context-Driven Value Sets Extraction

    PubMed Central

    Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.

    2008-01-01

    The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955

  14. Structure of protoplanetary discs with magnetically driven winds

    NASA Astrophysics Data System (ADS)

    Khajenabi, Fazeleh; Shadmehri, Mohsen; Pessah, Martin E.; Martin, Rebecca G.

    2018-04-01

    We present a new set of analytical solutions to model the steady-state structure of a protoplanetary disc with a magnetically driven wind. Our model implements a parametrization of the stresses involved and the wind launching mechanism in terms of the plasma parameter at the disc midplane, as suggested by the results of recent, local magnetohydrodynamical simulations. When wind mass-loss is accounted for, we find that its rate significantly reduces the disc surface density, particularly in the inner disc region. We also find that models that include wind mass-loss lead to thinner dust layers. As an astrophysical application of our models, we address the case of HL Tau, whose disc exhibits a high accretion rate and efficient dust settling at its midplane. These two observational features are not easy to reconcile with conventional accretion disc theory, where the level of turbulence needed to explain the high accretion rate would prevent a thin dust layer. Our disc model that incorporates both mass-loss and angular momentum removal by a wind is able to account for HL Tau observational constraints concerning its high accretion rate and dust layer thinness.

  15. An Approximation Solution to Refinery Crude Oil Scheduling Problem with Demand Uncertainty Using Joint Constrained Programming

    PubMed Central

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations while taking into consideration demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints are usually strongly nonlinear and consequently still hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case that does not consider the demand correlation. PMID:24757433

  16. An approximation solution to refinery crude oil scheduling problem with demand uncertainty using joint constrained programming.

    PubMed

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations while taking into consideration demand uncertainty. In the stochastic model the demand uncertainty is modeled as random variables which follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints are usually strongly nonlinear and consequently still hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case that does not consider the demand correlation.
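
    To make the chance-constraint idea concrete, the sketch below shows the textbook reduction of a single (individual) Gaussian chance constraint to a deterministic linear constraint that an LP solver can handle. It is only an illustration with hypothetical numbers, not the relax-and-tight treatment of joint chance constraints developed in the paper.

        # Illustrative sketch (not the paper's relax-and-tight algorithm): a single
        # chance constraint "production meets random demand with probability >= 1 - eps"
        # becomes a deterministic linear constraint when demand is Gaussian.
        # All numbers below are hypothetical.
        import numpy as np
        from scipy.stats import norm
        from scipy.optimize import linprog

        mu, sigma, eps = 100.0, 15.0, 0.05        # demand ~ N(mu, sigma^2)
        quantile = norm.ppf(1.0 - eps)            # standard normal quantile z_{1-eps}

        # Two processing variables x1, x2 with product yields 0.6 and 0.8.
        # P(0.6 x1 + 0.8 x2 >= demand) >= 1 - eps  <=>  0.6 x1 + 0.8 x2 >= mu + z * sigma
        cost = np.array([3.0, 5.0])               # minimize processing cost
        A_ub = np.array([[-0.6, -0.8]])           # ">=" rewritten as "<=" for linprog
        b_ub = np.array([-(mu + quantile * sigma)])

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 200), (0, 200)])
        print(res.x, res.fun)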

  17. A MODEL OF MAGNETIC BRAKING OF SOLAR ROTATION THAT SATISFIES OBSERVATIONAL CONSTRAINTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Denissenkov, Pavel A., E-mail: pavel.denisenkov@gmail.co

    The model of magnetic braking of solar rotation considered by Charbonneau and MacGregor has been modified so that it is able to reproduce for the first time the rotational evolution of both the fastest and slowest rotators among solar-type stars in open clusters of different ages, without coming into conflict with other observational constraints, such as the time evolution of the atmospheric Li abundance in solar twins and the thinness of the solar tachocline. This new model assumes that rotation-driven turbulent diffusion, which is thought to amplify the viscosity and magnetic diffusivity in stellar radiative zones, is strongly anisotropic, with the horizontal components of the transport coefficients strongly dominating over those in the vertical direction. Also taken into account is the poloidal field decay that helps to confine the width of the tachocline at the solar age. The model's properties are investigated by numerically solving the azimuthal components of the coupled momentum and magnetic induction equations in two dimensions using a finite element method.

  18. The Data-Driven Approach to Spectroscopic Analyses

    NASA Astrophysics Data System (ADS)

    Ness, M.

    2018-01-01

    I review the data-driven approach to spectroscopy, The Cannon, a method for deriving fundamental diagnostics of galaxy formation, namely precise chemical compositions and stellar ages, across the many stellar surveys that are mapping the Milky Way. With The Cannon, the abundances and stellar parameters from the multitude of stellar surveys can be placed directly on the same scale, using stars in common between the surveys. Furthermore, the information that resides in the data can be fully extracted; this has resulted in higher-precision stellar parameters and abundances being delivered from spectroscopic data and has opened up new avenues in galactic archeology, for example in the determination of ages for red giant stars across the Galactic disk. Coupled with Gaia distances, proper motions, and derived orbit families, the stellar age and individual abundance information delivered at the precision obtained with the data-driven approach provides very strong constraints on the evolution and birthplace of stars in the Milky Way. I will review the role of data-driven spectroscopy as we enter an era in which we have both the data and the tools to build the ultimate conglomerate of galactic information, and highlight further applications of data-driven models in the coming decade.

  19. Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness

    NASA Astrophysics Data System (ADS)

    Kusuma, K. K.; Maruf, A.

    2016-02-01

    Scheduling problems for non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved using a branch and bound algorithm. Fixed delivery times are used as the main constraint, and each job has a different processing time. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
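
    A minimal sketch of one building block of such a model is given below: a single-machine total-tardiness integer program with fixed delivery (due) dates and big-M sequencing constraints, written in PuLP. The data and the exact formulation are illustrative, not the paper's.

        # Minimal single-machine total-tardiness MILP in PuLP, with fixed delivery
        # (due) dates and big-M sequencing constraints. Data are hypothetical and the
        # formulation is a generic textbook one, not the exact job-shop model above.
        import pulp

        jobs = ["J1", "J2", "J3"]
        proc = {"J1": 4, "J2": 3, "J3": 6}     # processing times
        due  = {"J1": 5, "J2": 7, "J3": 12}    # fixed delivery times
        M = sum(proc.values())                 # big-M: length of the horizon

        prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)
        start = pulp.LpVariable.dicts("start", jobs, lowBound=0)
        tard  = pulp.LpVariable.dicts("tard", jobs, lowBound=0)
        pairs = [(i, j) for i in jobs for j in jobs if i < j]
        y = pulp.LpVariable.dicts("before", pairs, cat="Binary")   # 1 if i precedes j

        prob += pulp.lpSum(tard[j] for j in jobs)                  # total tardiness
        for j in jobs:
            prob += start[j] + proc[j] - due[j] <= tard[j]         # tardiness definition
        for i, j in pairs:                                         # disjunctive sequencing
            prob += start[i] + proc[i] <= start[j] + M * (1 - y[(i, j)])
            prob += start[j] + proc[j] <= start[i] + M * y[(i, j)]

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for j in jobs:
            print(j, "start", start[j].value(), "tardiness", tard[j].value())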

  20. Software For Integer Programming

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1992-01-01

    Improved Exploratory Search Technique for Pure Integer Linear Programming Problems (IESIP) program optimizes objective function of variables subject to confining functions or constraints, using discrete optimization or integer programming. Enables rapid solution of problems up to 10 variables in size. Integer programming required for accuracy in modeling systems containing small number of components, distribution of goods, scheduling operations on machine tools, and scheduling production in general. Written in Borland's TURBO Pascal.

  1. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.

  2. Personnel scheduling using an integer programming model- an application at Avanti Blue-Nile Hotels.

    PubMed

    Kassa, Biniyam Asmare; Tizazu, Anteneh Eshetu

    2013-01-01

    In this paper, we report perhaps a first-of-its-kind application of management science in the Ethiopian hotel industry. Avanti Blue Nile Hotels, a newly established five star hotel in Bahir Dar, is the company for which we developed an integer programming model that determines an optimal weekly shift schedule for the Hotel's engineering department personnel while satisfying several constraints, including weekly rest requirements per employee, rest requirements between working shifts per employee, required number of personnel per shift, and other constraints. The model is implemented with an Excel Solver routine. The model enables the company's personnel department management to develop a fair personnel schedule as needed and to effectively utilize personnel resources while satisfying several technical, legal and economic requirements. These encouraging achievements make us optimistic about the gains other Ethiopian organizations can amass by introducing management science approaches in their management planning and decision making systems.
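
    The sketch below illustrates the general shape of such a shift-scheduling integer program (binary assignments, per-shift coverage, per-employee caps, and a simple rest rule) in PuLP; the staff, shifts, and rules are hypothetical rather than the hotel's actual requirements.

        # Minimal shift-coverage integer program in the spirit described above:
        # binary assignment of employees to shifts, required headcount per shift,
        # a weekly cap per employee, and a simple rest rule. Data are hypothetical.
        import pulp

        employees = ["E1", "E2", "E3", "E4"]
        shifts = [f"day{d}_{s}" for d in range(7) for s in ("AM", "PM")]
        required = {s: 1 for s in shifts}      # one engineer on duty per shift
        max_shifts_per_week = 5

        prob = pulp.LpProblem("weekly_shift_schedule", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("assign", (employees, shifts), cat="Binary")

        max_load = pulp.LpVariable("max_load", lowBound=0)
        prob += max_load                       # balance workload: minimize heaviest load
        for s in shifts:
            prob += pulp.lpSum(x[e][s] for e in employees) >= required[s]
        for e in employees:
            prob += pulp.lpSum(x[e][s] for s in shifts) <= max_shifts_per_week
            prob += pulp.lpSum(x[e][s] for s in shifts) <= max_load
        for d in range(7):                     # rest rule: no AM and PM on the same day
            for e in employees:
                prob += x[e][f"day{d}_AM"] + x[e][f"day{d}_PM"] <= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("status:", pulp.LpStatus[prob.status])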

  3. CIS Program Redesign Driven by IS2010 Model: A Case Study

    ERIC Educational Resources Information Center

    Surendran, Ken; Amer, Suhair; Schwieger, Dana

    2012-01-01

    The release of the IS2010 Model Curriculum has triggered review of existing Information Systems (IS) programs. It also provides an opportunity to replace low enrollment IS programs with flexible ones that focus on specific application domains. In this paper, the authors present a case study of their redesigned Computer Information Systems (CIS)…

  4. Serendipity in dark photon searches

    NASA Astrophysics Data System (ADS)

    Ilten, Philip; Soreq, Yotam; Williams, Mike; Xue, Wei

    2018-06-01

    Searches for dark photons provide serendipitous discovery potential for other types of vector particles. We develop a framework for recasting dark photon searches to obtain constraints on more general theories, which includes a data-driven method for determining hadronic decay rates. We demonstrate our approach by deriving constraints on a vector that couples to the B-L current, a leptophobic B boson that couples directly to baryon number and to leptons via B- γ kinetic mixing, and on a vector that mediates a protophobic force. Our approach can easily be generalized to any massive gauge boson with vector couplings to the Standard Model fermions, and software to perform any such recasting is provided at https://gitlab.com/philten/darkcast .

  5. Constraining the CO intensity mapping power spectrum at intermediate redshifts

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Hamsa

    2018-04-01

    We compile available constraints on the carbon monoxide (CO) 1-0 luminosity functions and abundances at redshifts 0-3. This is used to develop a data-driven halo model for the evolution of the CO galaxy abundances and clustering across intermediate redshifts. It is found that the recent constraints from the CO Power Spectrum Survey (z ~ 3; Keating et al. 2016), when combined with existing observations of local galaxies (z ~ 0; Keres, Yun & Young 2003), lead to predictions that are consistent with the results of smaller surveys at intermediate redshifts (z ~ 1-2). We provide convenient fitting forms for the evolution of the CO luminosity-halo mass relation, and estimates of the mean and uncertainties in the CO power spectrum in the context of future intensity mapping experiments.

  6. Relaxion cosmology and the price of fine-tuning

    NASA Astrophysics Data System (ADS)

    Di Chiara, Stefano; Kannike, Kristjan; Marzola, Luca; Racioppi, Antonio; Raidal, Martti; Spethmann, Christian

    2016-05-01

    The relaxion scenario presents an intriguing extension of the standard model in which the particle introduced to solve the strong CP problem, the axion, also achieves the dynamical relaxation of the Higgs boson mass term. In this work we complete this framework by proposing a scenario of inflationary cosmology that is consistent with all the observational constraints: relaxion hybrid inflation with an asymmetric waterfall. In our scheme, the vacuum energy of the inflaton drives inflation in a natural way while the relaxion slow-rolls. The constraints on the present inflationary observables are then matched through a subsequent inflationary epoch driven by the inflaton. We quantify the amount of fine-tuning of the proposed inflation scenario, concluding that the inflaton sector severely decreases the naturalness of the theory.

  7. Describing Teacher–Student Interactions: A Qualitative Assessment of Teacher Implementation of the 7th Grade keepin’ it REAL Substance Use Intervention

    PubMed Central

    Miller-Day, Michelle; Shin, Young Ju; Hecht, Michael L.; Krieger, Janice L.; Graham, John W.

    2014-01-01

    Variations in the delivery of school-based substance use prevention curricula affect students’ acquisition of the lesson content and program outcomes. Although adaptation is sometimes viewed as a lack of fidelity, it is unclear what types of variations actually occur in the classroom. This observational study investigated teacher and student behaviors during implementation of a middle school-based drug prevention curriculum in 25 schools across two Midwestern states. Trained observers coded videos of 276 lessons, reflecting a total of 31 predominantly Caucasian teachers (10 males and 21 females) in 73 different classes. Employing qualitative coding procedures, the study provides a working typology of implementation patterns based on varying levels of teacher control and student participation. These patterns are fairly consistent across lessons and across classes of students, suggesting a teacher-driven delivery model where teachers create a set of constraints within which students vary their engagement. Findings provide a descriptive basis grounded in observation of classroom implementation that can be used to test models of implementation fidelity and quality as well as impact training and other dissemination research. PMID:22739791

  8. An Analysis of the Neighborhood Impacts of a Mortgage Assistance Program: A Spatial Hedonic Model

    ERIC Educational Resources Information Center

    Di, Wenhua; Ma, Jielai; Murdoch, James C.

    2010-01-01

    Down payment or closing cost assistance is an effective program in addressing the wealth constraints of low-and moderate-income homebuyers. However, the spillover effect of such programs on the neighborhood is unknown. This paper estimates the impact of the City of Dallas Mortgage Assistance Program (MAP) on nearby home values using a hedonic…

  9. Modeling Regular Replacement for String Constraint Solving

    NASA Technical Reports Server (NTRS)

    Fu, Xiang; Li, Chung-Chih

    2010-01-01

    Bugs in user input sanitization of software systems often lead to vulnerabilities. Among them many are caused by improper use of regular replacement. This paper presents a precise modeling of various semantics of regular substitution, such as the declarative, finite, greedy, and reluctant, using finite state transducers (FST). By projecting an FST to its input/output tapes, we are able to solve atomic string constraints, which can be applied to both the forward and backward image computation in model checking and symbolic execution of text processing programs. We report several interesting discoveries, e.g., certain fragments of the general problem can be handled using less expressive deterministic FST. A compact representation of FST is implemented in SUSHI, a string constraint solver. It is applied to detecting vulnerabilities in web applications.
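
    The semantic distinction the FST models have to capture can be seen with ordinary regular expressions; the snippet below contrasts greedy and reluctant replacement using Python's re module (for illustration only; it is not the SUSHI solver).

        # Greedy vs. reluctant replacement, the kind of semantic distinction the FST
        # modeling above has to capture. Python's re module is used purely for
        # illustration, not the SUSHI constraint solver.
        import re

        text = "<b>bold</b> and <i>italic</i>"

        greedy    = re.sub(r"<.*>", "", text)    # ".*" matches as much as possible
        reluctant = re.sub(r"<.*?>", "", text)   # ".*?" matches as little as possible

        print(repr(greedy))     # ''                 : one greedy match spans the whole string
        print(repr(reluctant))  # 'bold and italic'  : each tag is removed separately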

  10. On the radial evolution of reflection-driven turbulence in the inner solar wind in preparation for Parker Solar Probe

    NASA Astrophysics Data System (ADS)

    Perez, J. C.; Chandran, B. D. G.

    2017-12-01

    In this work we present recent results from high-resolution direct numerical simulations and a phenomenological model that describes the radial evolution of reflection-driven Alfven Wave turbulence in the solar atmosphere and the inner solar wind. The simulations are performed inside a narrow magnetic flux tube that models a coronal hole extending from the solar surface through the chromosphere and into the solar corona to approximately 21 solar radii. The simulations include prescribed empirical profiles that account for the inhomogeneities in density, background flow, and the background magnetic field present in coronal holes. Alfven waves are injected into the solar corona by imposing random, time-dependent velocity and magnetic field fluctuations at the photosphere. The phenomenological model incorporates three important features observed in the simulations: dynamic alignment, weak/strong nonlinear AW-AW interactions, and that the outward-propagating AWs launched by the Sun split into two populations with different characteristic frequencies. Model and simulations are in good agreement and show that when the key physical parameters are chosen within observational constraints, reflection-driven Alfven turbulence is a plausible mechanism for the heating and acceleration of the fast solar wind. By flying a virtual Parker Solar Probe (PSP) through the simulations, we will also establish comparisons between the model and simulations with the kind of single-point measurements that PSP will provide.

  11. Generative Models in Deep Learning: Constraints for Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.

    2018-01-01

    New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data-driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge-to-disk ratio cannot fully describe the properties of the quenched population.
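
    For readers unfamiliar with the method, a minimal variational autoencoder forward pass with the reparameterization trick is sketched below in PyTorch; the layer sizes and the flattened-image input are assumptions made for illustration, not the architecture used in the paper.

        # Minimal variational autoencoder sketch in PyTorch: a generic encoder/decoder
        # with the reparameterization trick, not the architecture used in the paper.
        # Input dimensions (e.g. 64x64 images flattened to 4096) are hypothetical.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class VAE(nn.Module):
            def __init__(self, n_in=4096, n_hidden=400, n_latent=8):
                super().__init__()
                self.enc = nn.Linear(n_in, n_hidden)
                self.mu = nn.Linear(n_hidden, n_latent)
                self.logvar = nn.Linear(n_hidden, n_latent)
                self.dec1 = nn.Linear(n_latent, n_hidden)
                self.dec2 = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                h = F.relu(self.enc(x))
                mu, logvar = self.mu(h), self.logvar(h)
                z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
                recon = torch.sigmoid(self.dec2(F.relu(self.dec1(z))))
                return recon, mu, logvar

        def loss_fn(recon, x, mu, logvar):
            # reconstruction term plus KL divergence to the unit Gaussian prior
            bce = F.binary_cross_entropy(recon, x, reduction="sum")
            kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            return bce + kld

        model = VAE()
        x = torch.rand(16, 4096)       # a batch of hypothetical normalized images
        recon, mu, logvar = model(x)
        print(loss_fn(recon, x, mu, logvar).item())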

  12. Use of nonlinear programming to optimize performance response to energy density in broiler feed formulation.

    PubMed

    Guevara, V R

    2004-02-01

    A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
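
    The single-variable version of this optimization is easy to sketch: with hypothetical quadratic fits for gain and intake and hypothetical prices, the margin over feed cost can be maximized with respect to energy density using scipy, as shown below. The coefficients are illustrative, not the paper's fitted equations.

        # Sketch of the single-variable nonlinear optimization described above: weight
        # gain and feed intake are quadratic in dietary energy density, and income over
        # feed cost is maximized. All coefficients and prices here are hypothetical.
        from scipy.optimize import minimize_scalar

        def gain(e):        # kg gain per bird as a quadratic in energy density e (Mcal/kg)
            return -0.35 * e**2 + 2.4 * e - 1.9

        def intake(e):      # kg feed per bird, decreasing with energy density
            return 0.9 * e**2 - 6.6 * e + 15.0

        def feed_price(e):  # $/kg feed, rising with energy density (denser diets cost more)
            return 0.10 + 0.05 * e

        broiler_price = 1.2   # $/kg live weight

        def margin(e):
            return broiler_price * gain(e) - feed_price(e) * intake(e)

        res = minimize_scalar(lambda e: -margin(e), bounds=(2.8, 3.4), method="bounded")
        print(f"optimal energy density {res.x:.2f} Mcal/kg, margin {margin(res.x):.3f} $/bird")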

  13. US Food Security and Climate Change: Mid-Century Projections of Commodity Crop Production by the IMPACT Model

    NASA Astrophysics Data System (ADS)

    Takle, E. S.; Gustafson, D. I.; Beachy, R.; Nelson, G. C.; Mason-D'Croz, D.; Palazzo, A.

    2013-12-01

    Agreement is developing among agricultural scientists on the emerging inability of agriculture to meet growing global food demands. The lack of additional arable land and availability of freshwater have long been constraints on agriculture. Changes in trends of weather conditions that challenge physiological limits of crops, as projected by global climate models, are expected to exacerbate the global food challenge toward the middle of the 21st century. These climate- and constraint-driven crop production challenges are interconnected within a complex global economy, where diverse factors add to price volatility and food scarcity. We use the DSSAT crop modeling suite, together with mid-century projections of four AR4 global models, as input to the International Food Policy Research Institute IMPACT model to project the impact of climate change on food security through the year 2050 for internationally traded crops. IMPACT is an iterative model that responds to endogenous and exogenous drivers to dynamically solve for the world prices that ensure global supply equals global demand. The modeling methodology reconciles the limited spatial resolution of macro-level economic models that operate through equilibrium-driven relationships at a national level with detailed models of biophysical processes at high spatial resolution. The analysis presented here suggests that climate change in the first half of the 21st century does not represent a near-term threat to food security in the US due to the availability of adaptation strategies (e.g., loss of current growing regions is balanced by gain of new growing regions). However, as climate continues to trend away from 20th century norms current adaptation measures will not be sufficient to enable agriculture to meet growing food demand. Climate scenarios from higher-level carbon emissions exacerbate the food shortfall, although uncertainty in climate model projections (particularly precipitation) is a limitation to impact studies.

  14. Chantey Castings: A Hands-On Simulation to Teach Constraint Management and Demand-Driven Supply Chain Approaches

    ERIC Educational Resources Information Center

    Grandzol, Christian J.; Grandzol, John R.

    2018-01-01

    Supply chain design and constraint management are widely-adopted techniques in industry, necessitating that operations and supply chain educators teach these topics in ways that enhance student learning and retention, optimize resource utilization (especially time), and maximize student interest. The Chantey Castings Simulation provides a platform…

  15. The Betelgeuse Project: Constraints from Rotation

    NASA Astrophysics Data System (ADS)

    Diaz, Manuel; Nance, Sarafina; Sullivan, James; Wheeler, J. Craig

    2017-01-01

    In order to constrain the evolutionary state of the red supergiant Betelgeuse, we have produced a suite of models with ZAMS masses from 15 to 25 Msun in intervals of 1 Msun, including the effects of rotation, computed with the stellar evolutionary code MESA. For non-rotating models we find results that are similar to other work. It is somewhat difficult to find models that agree within 1 σ with the observed values of R, Teff and L, but modestly easy within a 3 σ uncertainty. Incorporating the nominal observed rotational velocity, ~15 km/s, yields significantly different, and challenging, constraints. This velocity constraint is only matched when the models first approach the base of the red supergiant branch (RSB), having crossed the Hertzsprung gap but not yet having ascended the RSB; most such models violate even generous error bars on R, Teff and L. Models at the tip of the RSB typically rotate at only ~0.1 km/s, independent of any reasonable choice of initial rotation. We discuss the possible uncertainties in our modeling and the observations, including the distance to Betelgeuse, the rotation velocity, and model parameters. We summarize various options to account for the rotational velocity and suggest that one possibility is that Betelgeuse merged with a companion star of about 1 Msun as it ascended the RSB, in the process producing the ring structure observed at about 7' away. A past coalescence would complicate attempts to understand the evolutionary history and future of Betelgeuse. To that end, we also present asteroseismology models with acoustic waves driven by inner convective regions that could elucidate the inner structure and evolutionary state.

  16. Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program

    NASA Astrophysics Data System (ADS)

    Daniluk, Andrzej

    2009-11-01

    Model-Driven Engineering (MDE) is the software engineering discipline which considers models as the most important element for software development, and for the maintenance and evolution of software, through model transformation. Model-Driven Architecture (MDA) is the approach for software development under the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGr program to C++0x language standards.
    New version program summary
    Program title: RHEEDGR-09
    Catalogue identifier: ADUY_v3_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUY_v3_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 21 263
    No. of bytes in distributed program, including test data, etc.: 1 266 982
    Distribution format: tar.gz
    Programming language: Code Gear C++ Builder
    Computer: Intel Core Duo-based PC
    Operating system: Windows XP, Vista, 7
    RAM: more than 1 MB
    Classification: 4.3, 7.2, 6.2, 8, 14
    Does the new version supersede the previous version?: Yes
    Nature of problem: Reflection High-Energy Electron Diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by Molecular Beam Epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Solution method: The calculations are based on the use of a dynamical diffraction theory in which the electrons are taken to be diffracted by a potential which is periodic in the dimension perpendicular to the surface.
    Reasons for new version: Responding to user feedback, the graphical version of the RHEED program has been upgraded to C++0x language standards. Also, functionality and documentation of the program have been improved.
    Summary of revisions: Model-Driven Architecture (MDA) is the approach defined by the Object Management Group (OMG) for software development under the Model-Driven Engineering framework [1]. The MDA approach shifts the focus of software development from writing code to building models. By adopting a model-centric approach, the MDA approach hopes to automate the generation of system implementation artifacts directly from the model. The following three models are the core of the MDA: (i) the Computation Independent Model (CIM), which is focused on basic requirements of the system; (ii) the Platform Independent Model (PIM), which is used by software architects and designers, and is focused on the operational capabilities of a system outside the context of a specific platform; and (iii) the Platform Specific Model (PSM), which is used by software developers and programmers, and includes details relating to the system for a specific platform. Basic requirements for the calculation of the RHEED intensity rocking curves in the one-beam condition have been described in Ref. [2]. Fig. 1 shows the PIM for the present version of the program. Fig. 2 presents the PSM for the program. The TGraph2D.bpk package has been recompiled to Graph2D0x.bpl and upgraded according to C++0x language standards. Fig. 3 shows the PSM of the Graph2D component, which is presently manifested by the Graph2D0x.bpl package. This diagram is a graphic presentation of the static view, which shows a collection of declarative model elements and their relationships. Installation instructions for the Graph2D0x package can be found in the new distribution. The program requires the user to provide the appropriate parameters for the crystal structure under investigation. These parameters are loaded from the parameters.ini file at run-time. Instructions for the preparation of the .ini files can be found in the new distribution. The program enables carrying out one-dimensional dynamical calculations for the fcc lattice with a two-atom basis and the fcc lattice with a one-atom basis, but the zeroth Fourier component of the scattering potential in the TRHEED1D::crystPotUg() function can be modified according to users' specific application requirements. A graphical user interface (GUI) for the program has been reconstructed. The program has been compiled with English/USA regional and language options.
    Unusual features: The program is distributed in the form of main projects RHEEDGr_09.cbproj and Graph2D0x.cbproj with associated files, and should be compiled using Code Gear C++ Builder 2009 compilers.
    Running time: The typical running time is machine and user-parameters dependent.
    References: [1] OMG, Model Driven Architecture Guide Version 1.0.1, 2003, http://www.omg.org/cgi-bin/doc?omg/03-06-01. [2] A. Daniluk, Comput. Phys. Comm. 166 (2005) 123.

  17. Building a laboratory foundation for interpreting spectral emission from x-ray binary and black hole accretion disks

    NASA Astrophysics Data System (ADS)

    Loisel, Guillaume

    2016-10-01

    Emission from accretion powered objects accounts for a large fraction of all photons in the universe and is a powerful diagnostic for their behavior and structure. Quantitative interpretation of spectral emission from these objects requires a spectral synthesis model for photoionized plasma, since the ionizing luminosity is so large that photon driven atomic processes dominate over collisions. This is a quandary because laboratory experiments capable of testing the spectral emission models are non-existent. The models must predict the photoionized charge state distribution, the photon emission processes, and the radiation transport influence on the observed emission. We have used a decade of research at the Z facility to achieve the first simultaneous measurements of emission and absorption from photoionized plasmas. The extraordinary spectra are reproducible to within +/-2% and the E/dE ~ 500 spectral resolution has enabled unprecedented tests of atomic structure calculations. The absorption spectra enable determination of plasma density, temperature, and charge state distribution. The emission spectra then enable tests of spectral emission models. The emission has been measured from plasmas with varying size to elucidate the radiation transport effects. This combination of measurements will provide strong constraints on models used in astrophysics. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

  18. Electric power scheduling: A distributed problem-solving approach

    NASA Technical Reports Server (NTRS)

    Mellor, Pamela A.; Dolce, James L.; Krupp, Joseph C.

    1990-01-01

    Space Station Freedom's power system, along with the spacecraft's other subsystems, needs to carefully conserve its resources and yet strive to maximize overall Station productivity. Due to Freedom's distributed design, each subsystem must work cooperatively within the Station community. There is a need for a scheduling tool which will preserve this distributed structure, allow each subsystem the latitude to satisfy its own constraints, and preserve individual value systems while maintaining Station-wide integrity. The value-driven free-market economic model is such a tool.

  19. Topology optimization of induction heating model using sequential linear programming based on move limit with adaptive relaxation

    NASA Astrophysics Data System (ADS)

    Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori

    2017-12-01

    It is very important to design electrical machinery with high efficiency from the point of view of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of freedom in terms of structure, there is a possibility of deriving a novel structure that would be quite different from the conventional structure. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding problem, in which there are many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Secondly, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
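
    A generic sequential-linear-programming loop with a move limit and a simple adaptive relaxation rule is sketched below on a toy smooth objective; it conveys the algorithmic skeleton only, and the toy function stands in for the FEM-based magnetic energy functional of the paper.

        # Generic sequential-linear-programming loop with a move limit and a simple
        # adaptive relaxation rule (enlarge the move limit after a successful step,
        # shrink it otherwise). The toy objective stands in for the FEM-based energy
        # functional; this is not the induction-heating code described above.
        import numpy as np
        from scipy.optimize import linprog

        def f(x):                       # toy smooth objective to minimize
            return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2 + x[0] * x[1]

        def grad_f(x):
            return np.array([2 * (x[0] - 1.0) + x[1], 2 * (x[1] + 0.5) + x[0]])

        x = np.zeros(2)
        move = 0.5                      # move limit: box half-width around current point
        for it in range(30):
            g = grad_f(x)
            # linearized subproblem: min g . d  subject to move limit and global bounds [-2, 2]
            bounds = [(max(-2.0, xi - move) - xi, min(2.0, xi + move) - xi) for xi in x]
            d = linprog(g, bounds=bounds).x
            if f(x + d) < f(x):         # accept step, relax the move limit
                x = x + d
                move = min(1.5 * move, 1.0)
            else:                       # reject step, tighten the move limit
                move *= 0.5
            if move < 1e-6:
                break
        print("x* =", x, "f =", f(x))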

  20. A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling

    NASA Astrophysics Data System (ADS)

    Moore, Chandler; Akiki, Georges; Balachandar, S.

    2017-11-01

    This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model-form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using more DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
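
    The hybrid pattern itself is compact to express: fit a regressor to the residual between the physics-based prediction and reference data, then superimpose the learned correction, as in the sketch below (synthetic stand-in data, not the actual PIEP model or DNS).

        # Sketch of the hybrid pattern described above: learn the residual between a
        # physics-based prediction and reference (DNS-like) data, then superimpose the
        # learned correction on the physics model. Data and the "physics model" here
        # are synthetic stand-ins, not the actual PIEP force model.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(2000, 3))              # e.g. volume fraction, Re, neighbor distance
        physics = 1.0 + 0.5 * X[:, 1]                # stand-in for the pairwise (PIEP-like) prediction
        truth = physics + 0.3 * X[:, 0] * X[:, 2]    # stand-in for DNS drag, with a missing term

        residual_model = RandomForestRegressor(n_estimators=200, random_state=0)
        residual_model.fit(X, truth - physics)       # regress the model-form error

        def hybrid_predict(X_new, physics_pred):
            return physics_pred + residual_model.predict(X_new)

        X_test = rng.uniform(size=(5, 3))
        physics_test = 1.0 + 0.5 * X_test[:, 1]
        print(hybrid_predict(X_test, physics_test))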

  1. Oil Dependence, Climate Change and Energy Security: Will Constraints on Oil Shape our Climate Future or Vice Versa?

    NASA Astrophysics Data System (ADS)

    Mignone, B. K.

    2008-12-01

    Threats to US and global energy security take several forms. First, the overwhelming dependence on oil in the transport sector leaves the US economy (and others) vulnerable to supply shocks and price volatility. Secondly, the global dependence on oil inflates prices and enhances the transfer of wealth to authoritarian regimes. Finally, the global reliance on fossil fuels more generally jeopardizes the stability of the climate system. These three threats - economic, strategic and environmental - can only be mitigated through a gradual substitution away from fossil fuels (both coal and oil) on a global scale. Such large-scale substitution could occur in response to potential resource constraints or in response to coordinated government policies in which these externalities are explicitly internalized. Here, I make use of a well-known integrated assessment model (MERGE) to examine both possibilities. When resource limits are considered alone, global fuel use tends to shift toward even more carbon-intensive resources, like oil shale or liquids derived from coal. On the other hand, when explicit carbon constraints are imposed, the fuel sector response is more complex. Generally, less stringent climate targets can be satisfied entirely through reductions in global coal consumption, while more stringent targets require simultaneous reductions in both coal and oil consumption. Taken together, these model results suggest that resource constraints alone will only exacerbate the climate problem, while a subset of policy-driven carbon constraints may yield tangible security benefits (in the form of reduced global oil consumption) in addition to the intended environmental outcome.

  2. Integrity Constraint Monitoring in Software Development: Proposed Architectures

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.

    1997-01-01

    In the development of complex software systems, designers are required to obtain from many sources and manage vast amounts of knowledge of the system being built and communicate this information to personnel with a variety of backgrounds. Knowledge concerning the properties of the system, including the structure of, relationships between, and limitations of the data objects in the system, becomes increasingly more vital as the complexity of the system and the number of knowledge sources increase. Ensuring that violations of these properties do not occur becomes steadily more challenging. One approach toward managing the enforcement of system properties, called context monitoring, uses a centralized repository of integrity constraints and a constraint satisfiability mechanism for dynamic verification of property enforcement during program execution. The focus of this paper is to describe possible software architectures that define a mechanism for dynamically checking the satisfiability of a set of constraints on a program. The next section describes the context monitoring approach in general. Section 3 gives an overview of the work currently being done toward the addition of an integrity constraint satisfiability mechanism to a high-level programming language, SequenceL, and demonstrates how this model is being examined to develop a general software architecture. Section 4 describes possible architectures for a general constraint satisfiability mechanism, as well as an alternative approach that uses embedded database queries in lieu of an external monitor. The paper concludes with a brief summary outlining the current state of the research and future work.
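
    A toy version of the context-monitoring idea, with a central registry of integrity constraints re-checked whenever program state changes, is sketched below; it is a generic illustration, not the SequenceL mechanism discussed in the paper.

        # Tiny illustration of the context-monitoring idea: integrity constraints live
        # in a central registry and are re-checked whenever program state changes.
        # This is a generic sketch, not the SequenceL mechanism discussed above.
        class ConstraintMonitor:
            def __init__(self):
                self._constraints = []          # (description, predicate) pairs

            def register(self, description, predicate):
                self._constraints.append((description, predicate))

            def check(self, state):
                violations = [d for d, p in self._constraints if not p(state)]
                if violations:
                    raise ValueError("integrity violation(s): " + "; ".join(violations))

        monitor = ConstraintMonitor()
        monitor.register("queue length is non-negative", lambda s: s["queue_len"] >= 0)
        monitor.register("active jobs never exceed capacity", lambda s: s["active"] <= s["capacity"])

        state = {"queue_len": 3, "active": 2, "capacity": 4}
        monitor.check(state)                    # passes silently
        state["active"] = 5
        try:
            monitor.check(state)                # now violates the capacity property
        except ValueError as err:
            print(err)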

  3. Exoplanet Yield Estimation for Decadal Study Concepts using EXOSIMS

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Lowrance, Patrick; Savransky, Dmitry; Garrett, Daniel

    2016-01-01

    The anticipated upcoming large mission study concepts for the direct imaging of exo-earths present an exciting opportunity for exoplanet discovery and characterization. While these telescope concepts would also be capable of conducting a broad range of astrophysical investigations, the most difficult technology challenges are driven by the requirements for imaging exo-earths. The exoplanet science yield for these mission concepts will drive design trades and mission concept comparisons.To assist in these trade studies, the Exoplanet Exploration Program Office (ExEP) is developing a yield estimation tool that emphasizes transparency and consistent comparison of various design concepts. The tool will provide a parametric estimate of science yield of various mission concepts using contrast curves from physics-based model codes and Monte Carlo simulations of design reference missions using realistic constraints, such as solar avoidance angles, the observatory orbit, propulsion limitations of star shades, the accessibility of candidate targets, local and background zodiacal light levels, and background confusion by stars and galaxies. The python tool utilizes Dmitry Savransky's EXOSIMS (Exoplanet Open-Source Imaging Mission Simulator) design reference mission simulator that is being developed for the WFIRST Preliminary Science program. ExEP is extending and validating the tool for future mission concepts under consideration for the upcoming 2020 decadal review. We present a validation plan and preliminary yield results for a point design.

  4. Characterizing and reducing equifinality by constraining a distributed catchment model with regional signatures, local observations, and process understanding

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; McGlynn, Brian; Wagener, Thorsten

    2017-07-01

    Distributed catchment models are widely used tools for predicting hydrologic behavior. While distributed models require many parameters to describe a system, they are expected to simulate behavior that is more consistent with observed processes. However, obtaining a single set of acceptable parameters can be problematic, as parameter equifinality often results in several behavioral sets that fit observations (typically streamflow). In this study, we investigate the extent to which equifinality impacts a typical distributed modeling application. We outline a hierarchical approach to reduce the number of behavioral sets based on regional, observation-driven, and expert-knowledge-based constraints. For our application, we explore how each of these constraint classes reduced the number of behavioral parameter sets and altered distributions of spatiotemporal simulations, simulating a well-studied headwater catchment, Stringer Creek, Montana, using the distributed hydrology-soil-vegetation model (DHSVM). As a demonstrative exercise, we investigated model performance across 10 000 parameter sets. Constraints on regional signatures, the hydrograph, and two internal measurements of snow water equivalent time series reduced the number of behavioral parameter sets but still left a small number with similar goodness of fit. This subset was ultimately further reduced by incorporating pattern expectations of groundwater table depth across the catchment. Our results suggest that utilizing a hierarchical approach based on regional datasets, observations, and expert knowledge to identify behavioral parameter sets can reduce equifinality and bolster more careful application and simulation of spatiotemporal processes via distributed modeling at the catchment scale.
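
    The bookkeeping behind such a hierarchical screening is simple to sketch: candidate parameter sets are filtered by successive constraint classes and the survivors are counted at each stage, as below. The metrics and thresholds are hypothetical stand-ins for the regional-signature, streamflow, snow, and expert-knowledge checks.

        # Sketch of the hierarchical screening described above: candidate parameter
        # sets are filtered by successive constraint classes, and we track how many
        # remain after each class. Scores and thresholds are hypothetical stand-ins.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sets = 10_000
        runoff_ratio_err = rng.uniform(0.0, 0.5, n_sets)     # regional signature error
        nse_streamflow   = rng.uniform(-0.5, 1.0, n_sets)    # hydrograph goodness of fit
        swe_rmse         = rng.uniform(0.0, 200.0, n_sets)   # snow water equivalent error (mm)
        wt_pattern_ok    = rng.random(n_sets) < 0.3          # expert check on water-table pattern

        behavioral = np.ones(n_sets, dtype=bool)
        for name, mask in [
            ("regional signature", runoff_ratio_err < 0.1),
            ("streamflow NSE",     nse_streamflow > 0.7),
            ("SWE observations",   swe_rmse < 50.0),
            ("expert knowledge",   wt_pattern_ok),
        ]:
            behavioral &= mask
            print(f"after {name:>18s}: {behavioral.sum():5d} parameter sets remain")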

  5. Implementing a Successful Faculty, Data Driven Model for Program Review.

    ERIC Educational Resources Information Center

    Beal, Suzanne; Davis, Shirley

    Frederick Community College (Maryland) utilizes both the Instructional Accountability Program Review (IAPR) and the Career Program Review (CPR) to assess program outcomes and determine progress in meeting goals and objectives. The IAPR is a comprehensive review procedure conducted by faculty and associate deans to evaluate all transfer, career,…

  6. Synchronic interval Gaussian mixed-integer programming for air quality management.

    PubMed

    Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong

    2015-12-15

    To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
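
    As a reminder of the dispersion kernel that optimization models of this kind build on, the standard steady-state Gaussian plume concentration (with ground reflection) is sketched below; the interval and seasonal extensions of the paper are not reproduced, and the numerical values are hypothetical.

        # Standard steady-state Gaussian plume concentration (with ground reflection),
        # included only as a reminder of the dispersion kernel that AQM optimization
        # models of this kind build on; the paper's interval/seasonal formulation is
        # not reproduced here. Dispersion coefficients below are hypothetical.
        import numpy as np

        def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
            """Concentration (g/m^3) at crosswind distance y and height z for an
            emission rate q (g/s), wind speed u (m/s), and effective stack height h (m)."""
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - h) ** 2 / (2 * sigma_z**2))
                        + np.exp(-(z + h) ** 2 / (2 * sigma_z**2)))   # reflected term
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # ground-level, centerline concentration 1 km downwind (sigma values assumed)
        print(gaussian_plume(q=80.0, u=4.0, y=0.0, z=0.0, h=60.0, sigma_y=70.0, sigma_z=32.0))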

  7. Stage 4 Cosmic Microwave Background Experiment

    NASA Astrophysics Data System (ADS)

    Carlstrom, John

    2016-03-01

    Measurements of the CMB have driven our understanding of the universe and the physics that govern its evolution from quantum fluctuations to its present state. They provide the foundation for the remarkable 6-parameter cosmological model, ΛCDM, which fits all cosmological data, although there are some tensions which may hint at new physics, or simply unaccounted systematics. Far from being the last word in cosmology, the model raises deep questions: Is Inflation correct? What is its energy scale? What is the dark matter? What is the nature of dark energy? There is still a lot to learn from the CMB measurements. We are searching for the unique B-mode polarization that would be induced on the CMB by inflationary gravitational waves. We are able to detect the impact of the neutrino background on the CMB, which can be used to provide precise constraints on the number and masses of the neutrinos. We are untangling the correlations in the CMB induced by gravitational lensing to make maps of all the mass in the universe. We are measuring the scattering of the CMB by ionized structures, the Sunyaev-Zel'dovich effects, to detect clusters of galaxies and soon to map the momentum of the universe in addition to its density. To realize the enormous potential of these CMB tools we need to greatly increase the sensitivity of CMB measurements. We can expect significant advances in the next few years as the ongoing experiments deploy of order 10,000 detectors (Stage III), but to achieve critical threshold crossing goals we need to go further. The CMB community is therefore planning CMB-S4, an ambitious next generation (Stage IV) ground-based program with order of 500,000 detectors with science goals that include detecting or ruling out large field inflationary models, determining the number and masses of the neutrinos, providing precision constraints on dark energy through its impact on structure formation, as well as searching for cracks in the ΛCDM model.

  8. Deducing protein structures using logic programming: exploiting minimum data of diverse types.

    PubMed

    Sibbald, P R

    1995-04-21

    The extent to which a protein can be modeled from constraint data depends on the amount and quality of the data. This report quantifies a relationship between the amount of data and the achievable model resolution. In an information-theoretic framework the number of bits of information per residue needed to constrain a solution was calculated. The number of bits provided by different kinds of constraints was estimated from a tetrahedral lattice where all unique molecules of 6, 9, ..., 21 atoms were enumerated. Subsets of these molecules consistent with different constraint sets were then chosen, counted, and the root-mean-square distance between them calculated. This provided the desired relations. In a discrete system the number of possible models can be severely limited with relatively few constraints. An expert system that can model a protein from data of different types was built to illustrate the principle and was tested using known proteins as examples. C-alpha resolutions of 5 A are obtainable from 5 bits of information per amino acid and, in principle, from data that could be rapidly collected using standard biophysical techniques.

  9. Statistical-QoS Guaranteed Energy Efficiency Optimization for Energy Harvesting Wireless Sensor Networks

    PubMed Central

    Cheng, Wenchi; Zhang, Hailin

    2017-01-01

    Energy harvesting, which offers a never-ending energy supply, has emerged as a prominent technology to prolong the lifetime and reduce costs for the battery-powered wireless sensor networks. However, how to improve the energy efficiency while guaranteeing the quality of service (QoS) for energy harvesting based wireless sensor networks is still an open problem. In this paper, we develop statistical delay-bounded QoS-driven power control policies to maximize the effective energy efficiency (EEE), which is defined as the spectrum efficiency under given specified QoS constraints per unit harvested energy, for energy harvesting based wireless sensor networks. For the battery-infinite wireless sensor networks, our developed QoS-driven power control policy converges to the Energy harvesting Water Filling (E-WF) scheme and the Energy harvesting Channel Inversion (E-CI) scheme under the very loose and stringent QoS constraints, respectively. For the battery-finite wireless sensor networks, our developed QoS-driven power control policy becomes the Truncated energy harvesting Water Filling (T-WF) scheme and the Truncated energy harvesting Channel Inversion (T-CI) scheme under the very loose and stringent QoS constraints, respectively. Furthermore, we evaluate the outage probabilities to theoretically analyze the performance of our developed QoS-driven power control policies. The obtained numerical results validate our analysis and show that our developed optimal power control policies can optimize the EEE over energy harvesting based wireless sensor networks. PMID:28832509

  10. Statistical-QoS Guaranteed Energy Efficiency Optimization for Energy Harvesting Wireless Sensor Networks.

    PubMed

    Gao, Ya; Cheng, Wenchi; Zhang, Hailin

    2017-08-23

    Energy harvesting, which offers a never-ending energy supply, has emerged as a prominent technology to prolong the lifetime and reduce costs for the battery-powered wireless sensor networks. However, how to improve the energy efficiency while guaranteeing the quality of service (QoS) for energy harvesting based wireless sensor networks is still an open problem. In this paper, we develop statistical delay-bounded QoS-driven power control policies to maximize the effective energy efficiency (EEE), which is defined as the spectrum efficiency under given specified QoS constraints per unit harvested energy, for energy harvesting based wireless sensor networks. For the battery-infinite wireless sensor networks, our developed QoS-driven power control policy converges to the Energy harvesting Water Filling (E-WF) scheme and the Energy harvesting Channel Inversion (E-CI) scheme under the very loose and stringent QoS constraints, respectively. For the battery-finite wireless sensor networks, our developed QoS-driven power control policy becomes the Truncated energy harvesting Water Filling (T-WF) scheme and the Truncated energy harvesting Channel Inversion (T-CI) scheme under the very loose and stringent QoS constraints, respectively. Furthermore, we evaluate the outage probabilities to theoretically analyze the performance of our developed QoS-driven power control policies. The obtained numerical results validate our analysis and show that our developed optimal power control policies can optimize the EEE over energy harvesting based wireless sensor networks.
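
    The classic water-filling allocation that the policy reduces to under loose QoS constraints can be computed by a short bisection on the water level, as sketched below; this is a generic textbook implementation, not the E-WF or T-WF policies themselves.

        # Classic water-filling power allocation across channel states, the limiting
        # form that the QoS-driven policy above reduces to under very loose QoS
        # constraints. This is a generic textbook implementation (bisection on the
        # water level), not the E-WF/T-WF policies from the paper.
        import numpy as np

        def water_filling(gains, total_power, tol=1e-9):
            """Allocate total_power over channels with gains g_i to maximize
            sum log(1 + g_i * p_i); p_i = max(0, mu - 1/g_i) with water level mu."""
            lo, hi = 0.0, total_power + 1.0 / np.min(gains)
            while hi - lo > tol:
                mu = 0.5 * (lo + hi)
                p = np.maximum(0.0, mu - 1.0 / gains)
                if p.sum() > total_power:
                    hi = mu
                else:
                    lo = mu
            return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

        gains = np.array([0.2, 0.9, 1.5, 3.0])     # hypothetical channel gains
        p = water_filling(gains, total_power=2.0)
        print(p, p.sum())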

  11. Atomic Physics of Shocked Plasma in Winds of Massive Stars

    NASA Technical Reports Server (NTRS)

    Leutenegger, Maurice A.; Cohen, David H.; Owocki, Stanley P.

    2012-01-01

    High resolution diffraction grating spectra of X-ray emission from massive stars obtained with Chandra and XMM-Newton have revolutionized our understanding of their powerful, radiation-driven winds. Emission line shapes and line ratios provide diagnostics on a number of key wind parameters. Modeling of resolved emission line velocity profiles allows us to derive independent constraints on stellar mass-loss rates, leading to downward revisions of a factor of a few from previous measurements. Line ratios in He-like ions strongly constrain the spatial distribution of X-ray emitting plasma, confirming the expectations of radiation hydrodynamic simulations that X-ray emission begins moderately close to the stellar surface and extends throughout the wind. Some outstanding questions remain, including the possibility of large optical depths in resonance lines, which is hinted at by differences in line shapes of resonance and intercombination lines from the same ion. Resonance scattering leads to nontrivial radiative transfer effects, and modeling it allows us to place constraints on shock size, density, and velocity structure.

  12. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    PubMed

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.

  13. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weights. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weights. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. In addition, the SVDLP approach is tested and compared with MultiPlan on three clinical cases of varying complexities. In general, the plans generated by the SVDLP achieve steeper dose gradients, better conformity and less damage to normal tissues. In conclusion, the SVDLP approach effectively improves the quality of the treatment plan due to the use of the complete beam search space. This challenging optimization problem with the complete beam search space is effectively handled by the proposed SVD acceleration.
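
    As a rough sketch of the linear program described in the two records above (not the authors' implementation), the following toy example imposes a hard lower bound on target dose, penalizes upper-bound violations on normal tissue through slack variables, and caps the l1 norm of the non-negative beam weights; the SVD compression and beam-reduction steps are omitted, and the influence matrices, dose levels and budget are all invented.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(1)
        n_beams, n_target, n_oar = 30, 12, 20
        D_t = rng.uniform(0.0, 1.0, (n_target, n_beams))   # toy dose-influence matrix, target voxels
        D_o = rng.uniform(0.0, 0.4, (n_oar, n_beams))      # toy dose-influence matrix, normal tissue
        d_min, d_max, tau = 60.0, 20.0, 400.0              # prescription, tolerance and l1 budget (illustrative)

        # Decision vector: [beam weights w | overdose slacks s for the normal-tissue voxels]
        c = np.concatenate([np.zeros(n_beams), np.ones(n_oar)])      # minimize total soft-constraint violation
        A_ub = np.block([
            [-D_t,                    np.zeros((n_target, n_oar))],  # hard:   D_t w >= d_min
            [ D_o,                   -np.eye(n_oar)             ],   # soft:   D_o w - s <= d_max
            [ np.ones((1, n_beams)),  np.zeros((1, n_oar))      ],   # sparsity: sum(w) = ||w||_1 <= tau
        ])
        b_ub = np.concatenate([-d_min * np.ones(n_target), d_max * np.ones(n_oar), [tau]])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n_beams + n_oar), method="highs")
        w = res.x[:n_beams]
        print("beams with weight > 1e-6:", int((w > 1e-6).sum()), " objective:", round(res.fun, 3))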

  14. Automated design optimization of supersonic airplane wing structures under dynamic constraints

    NASA Technical Reports Server (NTRS)

    Fox, R. L.; Miura, H.; Rao, S. S.

    1972-01-01

    The problems of the preliminary and first level detail design of supersonic aircraft wings are stated as mathematical programs and solved using automated optimum design techniques. The problem is approached in two phases: the first is a simplified equivalent plate model in which the envelope, planform and structural parameters are varied to produce a design, the second is a finite element model with fixed configuration in which the material distribution is varied. Constraints include flutter, aeroelastically computed stresses and deflections, natural frequency and a variety of geometric limitations.

  15. Flexure in the Corinth rift: reconciling marine terraces, rivers, offshore data and fault modeling

    NASA Astrophysics Data System (ADS)

    de Gelder, G.; Fernández-Blanco, D.; Jara-Muñoz, J.; Melnick, D.; Duclaux, G.; Bell, R. E.; Lacassin, R.; Armijo, R.

    2016-12-01

    The Corinth rift (Greece) is an exceptional area to study the large-scale mechanics of a young rift system, due to its extremely high extension rates and fault slip rates. Late Pleistocene activity of large normal faults has created a mostly asymmetric E-W trending graben, mainly driven by N-dipping faults that shape the southern margin of the Corinth Gulf. Flexural footwall uplift of these faults is evidenced by Late Pleistocene coastal fan deltas that are presently up to 1700 m in elevation, a drainage reversal of some major river systems, and flights of marine terraces that have been uplifted along the southern margin of the Gulf. To improve constraints on this footwall uplift, we analysed the extensive terrace sequence between Xylokastro and Corinth - uplifted by the Xylokastro Fault - using 2m-resolution digital surface models developed from Pleiades satellite imagery (acquired through the Isis and Tosca programs of the French CNES). We refined and improved the spatial uplift pattern and age correlation of these terraces through a detailed analysis of the shoreline angles using the graphical interface TerraceM, and 2D numerical modeling of terrace formation. We combine the detailed record of flexure provided by this analysis with a morphometric analysis of the major river systems along the southern shore, obtaining constraints on footwall uplift over longer time scales and larger spatial scales. Flexural subsidence of the hanging wall is evidenced by offshore seismic sections, for which we depth-converted a multi-channel seismic section north of the Xylokastro Fault. We use the full profile of the fault geometry and its associated deformation pattern as constraints to reproduce the long-term flexural wavelength and uplift/subsidence ratio through fault modeling. Using PyLith, an open-source finite element code for quasi-static viscoelastic simulations, we find that a steeply dipping planar fault extending to the brittle-ductile transition provides the best fit to the observed deformation pattern on- and offshore. The combined results of this study allow us to compare flexural normal faulting on different scales, as recorded in different elements of the Corinth rift, and to put forward a comprehensive discussion of the deformation mechanisms and the mechanical behavior of this crustal-scale feature.

  16. PATRAN-STAGS translator (PATSTAGS)

    NASA Technical Reports Server (NTRS)

    Otte, Neil

    1990-01-01

    A computer program used to translate PATRAN finite element model data into Structural Analysis of General Shells (STAGS) input data is presented. The program supports translation of nodal, nodal constraint, element, force, and pressure data. The subroutine UPRESS, required for reading live pressure data into STAGS, is also presented.

  17. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.

  18. Data-driven robust approximate optimal tracking control for unknown general nonlinear systems using adaptive dynamic programming method.

    PubMed

    Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong

    2011-12-01

    In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.

  19. Application of Monte Carlo techniques to optimization of high-energy beam transport in a stochastic environment

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.; Dieudonne, J. E.; Filippas, T. A.

    1971-01-01

    An algorithm employing a modified sequential random perturbation, or creeping random search, was applied to the problem of optimizing the parameters of a high-energy beam transport system. The stochastic solution of the mathematical model for first-order magnetic-field expansion allows the inclusion of state-variable constraints, and the inclusion of parameter constraints allowed by the method of algorithm application eliminates the possibility of infeasible solutions. The mathematical model and the algorithm were programmed for a real-time simulation facility; thus, two important features are provided to the beam designer: (1) a strong degree of man-machine communication (even to the extent of bypassing the algorithm and applying analog-matching techniques), and (2) extensive graphics for displaying information concerning both algorithm operation and transport-system behavior. Chromatic aberration was also included in the mathematical model and in the optimization process. Results presented show this method as yielding better solutions (in terms of resolutions) to the particular problem than those of a standard analog program as well as demonstrating flexibility, in terms of elements, constraints, and chromatic aberration, allowed by user interaction with both the algorithm and the stochastic model. Example of slit usage and a limited comparison of predicted results and actual results obtained with a 600 MeV cyclotron are given.
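
    A minimal sketch, under assumed placeholder settings, of a creeping random (sequential random perturbation) search in which parameter constraints are enforced by clipping, so no infeasible candidate is ever evaluated. The quadratic objective stands in for a beam-transport figure of merit and is not the model used in the paper.

        import numpy as np

        def creeping_random_search(objective, x0, lower, upper, step=0.1, iters=2000, seed=0):
            """Sequential random perturbation: perturb the current point and keep only improvements.
            Clipping to [lower, upper] keeps every candidate inside the parameter constraints."""
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            fx = objective(x)
            for _ in range(iters):
                cand = np.clip(x + rng.normal(0.0, step, size=x.shape), lower, upper)
                fc = objective(cand)
                if fc < fx:                       # accept only improving moves ("creeping")
                    x, fx = cand, fc
            return x, fx

        # Placeholder figure of merit (e.g., spot size at the target) -- purely illustrative.
        objective = lambda p: (p[0] - 1.2) ** 2 + 3.0 * (p[1] + 0.4) ** 2 + 0.5 * abs(p[2])
        x_best, f_best = creeping_random_search(objective, x0=[0.0, 0.0, 0.0],
                                                lower=[-2, -2, -2], upper=[2, 2, 2])
        print(np.round(x_best, 3), round(f_best, 5))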

  20. It's All About the Data: Workflow Systems and Weather

    NASA Astrophysics Data System (ADS)

    Plale, B.

    2009-05-01

    Digital data is fueling new advances in the computational sciences, particularly geospatial research as environmental sensing grows more practical through reduced technology costs, broader network coverage, and better instruments. e-Science research (i.e., cyberinfrastructure research) has responded to data intensive computing with tools, systems, and frameworks that support computationally oriented activities such as modeling, analysis, and data mining. Workflow systems support execution of sequences of tasks on behalf of a scientist. These systems, such as Taverna, Apache ODE, and Kepler, when built as part of a larger cyberinfrastructure framework, give the scientist tools to construct task graphs of execution sequences, often through a visual interface for connecting task boxes together with arcs representing control flow or data flow. Unlike business processing workflows, scientific workflows expose a high degree of detail and control during configuration and execution. Data-driven science imposes unique needs on workflow frameworks. Our research is focused on two issues. The first is the support for workflow-driven analysis over all kinds of data sets, including real time streaming data and locally owned and hosted data. The second is the essential role metadata/provenance collection plays in data driven science, for discovery, determining quality, for science reproducibility, and for long-term preservation. The research has been conducted over the last 6 years in the context of cyberinfrastructure for mesoscale weather research carried out as part of the Linked Environments for Atmospheric Discovery (LEAD) project. LEAD has pioneered new approaches for integrating complex weather data, assimilation, modeling, mining, and cyberinfrastructure systems. Workflow systems have the potential to generate huge volumes of data. Without some form of automated metadata capture, either metadata description becomes largely a manual task that is difficult if not impossible under high-volume conditions, or the searchability and manageability of the resulting data products is disappointingly low. The provenance of a data product is a record of its lineage, or trace of the execution history that resulted in the product. The provenance of a forecast model result, e.g., captures information about the executable version of the model, configuration parameters, input data products, execution environment, and owner. Provenance enables data to be properly attributed and captures critical parameters about the model run so the quality of the result can be ascertained. Proper provenance is essential to providing reproducible scientific computing results. Workflow languages used in science discovery are complete programming languages, and in theory can support any logic expressible by a programming language. The execution environments supporting the workflow engines, on the other hand, are subject to constraints on physical resources, and hence in practice the workflow task graphs used in science utilize relatively few of the cataloged workflow patterns. It is important to note that these workflows are executed on demand, and are executed once. Into this context is introduced the need for science discovery that is responsive to real time information. 
If we can use simple programming models and abstractions to make scientific discovery involving real-time data accessible to specialists who share and utilize data across scientific domains, we bring science one step closer to solving the largest of human problems.

  1. Force Model for Control of Tendon Driven Hands

    NASA Technical Reports Server (NTRS)

    Pena, Edward; Thompson, David E.

    1997-01-01

    Knowing, via a model, the tendon forces generated for a given task such as grasping, an artificial hand can be controlled. A two-dimensional force model for the index finger was developed. This system is assumed to be in static equilibrium; therefore, the equations of equilibrium were applied at each joint. Constraint equations describing the tendon branch connectivity were used. Gaussian elimination was used to solve for the unknowns of the linear system. Results from initial work on estimating tendon forces in post-operative hands during active motion therapy are discussed. The results are important for understanding the effects of hand position on tendon tension, elastic effects on tendon tension, and the overall functional anatomy of the hand.
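
    The computational core described here amounts to assembling the joint-equilibrium and tendon-connectivity equations into a square linear system and solving it by Gaussian elimination. A schematic stand-in, with an invented coefficient matrix in place of the anatomical model:

        import numpy as np

        # Rows: moment-balance equations at each joint plus a tendon-branch connectivity constraint.
        # Columns: unknown tendon/joint forces.  All values are illustrative placeholders.
        A = np.array([
            [1.0,  0.8, 0.0, -1.0],   # moment balance, DIP joint
            [0.6,  1.0, 0.4,  0.0],   # moment balance, PIP joint
            [0.0,  0.5, 1.0,  0.2],   # moment balance, MCP joint
            [1.0, -1.0, 1.0,  0.0],   # connectivity of the tendon branches
        ])
        b = np.array([2.5, 4.0, 3.2, 0.0])   # applied fingertip load terms (illustrative)

        forces = np.linalg.solve(A, b)        # Gaussian elimination on the square system
        print(np.round(forces, 3))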

  2. A depth-first search algorithm to compute elementary flux modes by linear programming.

    PubMed

    Quek, Lake-Ee; Nielsen, Lars K

    2014-07-30

    The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints.
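
    A compact sketch of the idea on an invented toy network: a linear-programming feasibility test decides whether any steady-state flux survives a given set of excluded reactions, and a depth-first search branches on excluding or keeping each reaction, pruning entire subtrees whenever the LP reports infeasibility. The elementarity (minimality) bookkeeping of the actual algorithm is omitted here.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix (metabolites x reactions); all reactions treated as irreversible.
        S = np.array([
            [1, -1,  0, -1,  0],
            [0,  1, -1,  0,  0],
            [0,  0,  1,  1, -1],
        ])
        n_rxn = S.shape[1]

        def feasible(excluded, anchor):
            """LP test: does a steady-state flux (S v = 0, v >= 0) exist that avoids the
            excluded reactions and carries unit flux through the anchor reaction?"""
            bounds = [(0.0, 0.0) if j in excluded else (0.0, None) for j in range(n_rxn)]
            bounds[anchor] = (1.0, 1.0)
            res = linprog(np.zeros(n_rxn), A_eq=S, b_eq=np.zeros(S.shape[0]),
                          bounds=bounds, method="highs")
            return res.status == 0

        def dfs(excluded, j, anchor, found):
            if not feasible(excluded, anchor):
                return                                      # prune: nothing survives this exclusion set
            if j == n_rxn:
                found.append(sorted(set(range(n_rxn)) - excluded))
                return
            if j != anchor:
                dfs(excluded | {j}, j + 1, anchor, found)   # branch 1: force reaction j to zero
            dfs(excluded, j + 1, anchor, found)             # branch 2: leave reaction j free

        supports = []
        dfs(set(), 0, anchor=0, found=supports)
        print(supports)   # allowed reaction sets of steady-state modes through reaction 0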

  3. Scenario and modelling uncertainty in global mean temperature change derived from emission driven Global Climate Models

    NASA Astrophysics Data System (ADS)

    Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D.

    2012-09-01

    We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from emission driven rather than concentration driven perturbed parameter ensembles of a Global Climate Model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission rather than concentration driven simulations (with 10-90 percentile ranges of 1.7 K for the aggressive mitigation scenario up to 3.9 K for the high end business as usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 degrees (RCP8.5) and even under aggressive mitigation (RCP2.6) temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission driven experiments, they do not change existing expectations (based on previous concentration driven experiments) on the timescale over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. Both in the case of SRES A1B and the Representative Concentration Pathways (RCPs), the concentration pathways used to drive GCM ensembles lie towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration driven projections to under-sample strong feedback responses. Our ensemble of emission driven simulations spans the global temperature response of other multi-model frameworks except at the low end, where combinations of low climate sensitivity and low carbon cycle feedbacks lead to responses outside our ensemble range. The ensemble simulates a number of high end responses which lie above the CMIP5 carbon cycle range. These high end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority of identifying real world climate sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present day observables and future changes, while the large spread of projected future changes highlights the ongoing need for such work.

  4. Overview of ASC Capability Computing System Governance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott W.

    This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.

  5. Combining Model-driven and Schema-based Program Synthesis

    NASA Technical Reports Server (NTRS)

    Denney, Ewen; Whittle, John

    2004-01-01

    We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.

  6. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  7. Analysis of differences in exercise recognition by constraints on physical activity of hospitalized cancer patients based on their medical history.

    PubMed

    Choi, Mi-Ri; Jeon, Sang-Wan; Yi, Eun-Surk

    2018-04-01

    The purpose of this study is to analyze the differences among hospitalized cancer patients in their perception of exercise and physical activity constraints based on their medical history. The study used a questionnaire survey as the measurement tool for 194 cancer patients (male or female, aged 20 or older) living in the Seoul metropolitan area (Seoul, Gyeonggi, Incheon). The collected data were analyzed using frequency analysis, exploratory factor analysis, reliability analysis, t-test, and one-way analysis of variance with the statistical program SPSS 18.0. The following results were obtained. First, there was no statistically significant difference between cancer stage and exercise recognition/physical activity constraint. Second, there was a significant difference between cancer stage and sociocultural constraint/facility constraint/program constraint. Third, there was a significant difference between cancer operation history and physical/socio-cultural/facility/program constraint. Fourth, there was a significant difference between cancer operation history and negative perception/facility/program constraint. Fifth, there was a significant difference between ancillary cancer treatment method and negative perception/facility/program constraint. Sixth, there was a significant difference between hospitalization period and positive perception/negative perception/physical constraint/cognitive constraint. In conclusion, this study will provide the information necessary to create a patient-centered healthcare service system by analyzing the exercise recognition of hospitalized cancer patients based on their medical history and by investigating the constraint factors that prevent patients from actually making efforts to exercise.

  8. Bayesian Inversion of 2D Models from Airborne Transient EM Data

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Key, K.; Ray, A.

    2016-12-01

    The inherent non-uniqueness in most geophysical inverse problems leads to an infinite number of Earth models that fit observed data to within an adequate tolerance. To resolve this ambiguity, traditional inversion methods based on optimization techniques such as the Gauss-Newton and conjugate gradient methods rely on an additional regularization constraint on the properties that an acceptable model can possess, such as having minimal roughness. While allowing such an inversion scheme to converge on a solution, regularization makes it difficult to estimate the uncertainty associated with the model parameters. This is because regularization biases the inversion process toward certain models that satisfy the regularization constraint and away from others that don't, even when both may suitably fit the data. By contrast, a Bayesian inversion framework aims to produce not a single `most acceptable' model but an estimate of the posterior likelihood of the model parameters, given the observed data. In this work, we develop a 2D Bayesian framework for the inversion of transient electromagnetic (TEM) data. Our method relies on a reversible-jump Markov Chain Monte Carlo (RJ-MCMC) Bayesian inverse method with parallel tempering. Previous gradient-based inversion work in this area used a spatially constrained scheme wherein individual (1D) soundings were inverted together and non-uniqueness was tackled by using lateral and vertical smoothness constraints. By contrast, our work uses a 2D model space of Voronoi cells whose parameterization (including number of cells) is fully data-driven. To make the problem work practically, we approximate the forward solution for each TEM sounding using a local 1D approximation where the model is obtained from the 2D model by retrieving a vertical profile through the Voronoi cells. The implicit parsimony of the Bayesian inversion process leads to the simplest models that adequately explain the data, obviating the need for explicit smoothness constraints. In addition, credible intervals in model space are directly obtained, resolving some of the uncertainty introduced by regularization. An example application shows how the method can be used to quantify the uncertainty in airborne EM soundings for imaging subglacial brine channels and groundwater systems.

  9. Optimizing Constrained Single Period Problem under Random Fuzzy Demand

    NASA Astrophysics Data System (ADS)

    Taleizadeh, Ata Allah; Shavandi, Hassan; Riazi, Afshin

    2008-09-01

    In this paper, we consider the multi-product multi-constraint newsboy problem with random fuzzy demands and total discount. The demand for the products is often stochastic in the real world, but the estimation of the parameters of the distribution function may be done in a fuzzy manner. So an appropriate option for modeling the demand of the products is to use random fuzzy variables. The objective function of the proposed model is to maximize the expected profit of the newsboy. We consider constraints such as warehouse space, restrictions on the order quantity of products, and a restriction on budget. We also consider the batch size for product orders. We then introduce a random fuzzy multi-product multi-constraint newsboy problem (RFM-PM-CNP) and transform it into a multi-objective mixed integer nonlinear programming model. Furthermore, a hybrid intelligent algorithm based on genetic algorithm, Pareto and TOPSIS is presented for the developed model. Finally, an illustrative example is presented to show the performance of the developed model and algorithm.

  10. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu

    Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.

  11. Community Microgrid Scheduling Considering Network Operational Constraints and Building Thermal Dynamics

    DOE PAGES

    Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...

    2017-10-10

    Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model and significant saving in electricity cost could be achieved with network operational constraints satisfied.
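
    A linear-programming relaxation sketch of the building-thermal-dynamics part of such a scheduling model: a first-order indoor-temperature model, a comfort band around the set point, and a time-varying electricity price. The thermal coefficients, prices and limits are assumed illustration values, and the network (D-OPF) constraints and integer commitment decisions of the full MICP are left out.

        import numpy as np
        import cvxpy as cp

        T = 24                                                         # hourly horizon
        price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, T))     # illustrative $/kWh profile
        t_out = 30.0 + 4.0 * np.sin(np.linspace(0, 2 * np.pi, T))      # outdoor temperature, deg C
        a, b, c = 0.90, -0.30, 0.10                                    # assumed first-order building parameters
        t_set, band = 23.0, 1.5                                        # comfort set point and half-width

        p = cp.Variable(T, nonneg=True)      # HVAC electric power, kW (continuous relaxation)
        t_in = cp.Variable(T + 1)            # indoor temperature trajectory

        constraints = [t_in[0] == 24.0]
        for k in range(T):
            # Discrete-time thermal dynamics: cooling power pulls the indoor temperature down.
            constraints += [t_in[k + 1] == a * t_in[k] + b * p[k] + c * t_out[k]]
            constraints += [t_in[k + 1] >= t_set - band, t_in[k + 1] <= t_set + band]
        constraints += [p <= 8.0]            # HVAC rating, kW

        cost = cp.sum(cp.multiply(price, p))
        problem = cp.Problem(cp.Minimize(cost), constraints)
        problem.solve()
        print("status:", problem.status, " energy cost:", round(problem.value, 2))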

  12. Value Driven Information Processing and Fusion

    DTIC Science & Technology

    2016-03-01

    The objective of the project is to develop a general framework for value-driven decentralized information processing, including: optimal data reduction in a network setting for decentralized inference with a quantization constraint; and interactive fusion that allows queries and ... A consensus approach allows a decentralized scheme to achieve the optimal error exponent of the centralized counterpart, a conclusion that is significant ...

  13. Pyrame 3, an online framework for Calice SiW-Ecal

    NASA Astrophysics Data System (ADS)

    Magniette, F.; Irles, A.

    2018-03-01

    Pyrame 3 is the new version of the Pyrame framework [1], with an emphasis on online data treatment and the scripting of complex tasks. A new mechanism has been implemented to allow any module to treat and publish data in real time. Those data are made available to any requesting module. A circular buffer mechanism makes it possible to break the real-time constraint and to serve slower programs in a generic subsampling way. On the other side, a programming facility called event-loop has been provided in the C/C++ language to ease the development of monitoring programs. On the SiW-Ecal prototype, the acquisition chain launches a set of online decoders that make available raw data plus some basic reconstruction data (true coordinates, true time, data quality tags, etc.). With the event-loop, it is now very easy to implement new online monitoring programs. In addition, the scripting mechanism has been enhanced to give the scripts complete control of the detector. This way, we are able to script and monitor complex behaviours like position or energy scanning, calibrations or data-driven reconfigurations.

  14. Modeling mass loss from B(e) stars

    NASA Technical Reports Server (NTRS)

    Cassinelli, J. P.; Schulte-Ladbeck, R. E.; Abbott, M.; Poe, C. H.

    1989-01-01

    It was suggested by Zickgraf et al. (1986) that the outer atmospheres of some B(e) stars have a two-component structure: a fast, radiation-driven wind from the pole, and a dense, slow outflow from the equator. Poe et al. (1989) developed this theory to explain the momentum problem associated with WR stars. This paper uses the multiforce wind theory of Poe et al. to model the B(e) outflow phenomenon. Two general questions are investigated: (1) whether B(e) stars can be rotating near critical speed, and if so, (2) what constraints can be placed on the parameters that determine the two-component flow structure.

  15. The mineralogic evolution of the Martian surface through time: Implications from chemical reaction path modeling studies

    NASA Technical Reports Server (NTRS)

    Plumlee, G. S.; Ridley, W. I.; Debraal, J. D.; Reed, M. H.

    1993-01-01

    Chemical reaction path calculations were used to model the minerals that might have formed at or near the Martian surface as a result of volcano or meteorite impact driven hydrothermal systems; weathering at the Martian surface during an early warm, wet climate; and near-zero or sub-zero C brine-regolith reactions in the current cold climate. Although the chemical reaction path calculations carried out do not define the exact mineralogical evolution of the Martian surface over time, they do place valuable geochemical constraints on the types of minerals that formed from an aqueous phase under various surficial and geochemically complex conditions.

  16. Model-Driven Energy Intelligence

    DTIC Science & Technology

    2015-03-01

    A building information model (BIM) for operations ... estimate of the potential impact on energy performance at Fort Jackson. Subject terms: Building Information Modeling (BIM), Energy, ECMs, monitoring.

  17. Novel Characterization of Capsule X-Ray Drive at the National Ignition Facility [Using ViewFactor Experiments to Measure Hohlraum X-Radiation Drive from the Capsule Point-of-View in Ignition Experiments on the National Ignition Facility

    DOE PAGES

    MacLaren, S. A.; Schneider, M. B.; Widmann, K.; ...

    2014-03-13

    Here, indirect drive experiments at the National Ignition Facility are designed to achieve fusion by imploding a fuel capsule with x rays from a laser-driven hohlraum. Previous experiments have been unable to determine whether a deficit in measured ablator implosion velocity relative to simulations is due to inadequate models of the hohlraum or ablator physics. ViewFactor experiments allow for the first time a direct measure of the x-ray drive from the capsule point of view. The experiments show a 15%–25% deficit relative to simulations and thus explain nearly all of the disagreement with the velocity data. In addition, the data from this open geometry provide much greater constraints on a predictive model of laser-driven hohlraum performance than the nominal ignition target.

  18. The ABC of stereotypes about groups: Agency/socioeconomic success, conservative-progressive beliefs, and communion.

    PubMed

    Koch, Alex; Imhoff, Roland; Dotsch, Ron; Unkelbach, Christian; Alves, Hans

    2016-05-01

    Previous research argued that stereotypes differ primarily on the 2 dimensions of warmth/communion and competence/agency. We identify an empirical gap in support for this notion. The theoretical model constrains stereotypes a priori to these 2 dimensions; without this constraint, participants might spontaneously employ other relevant dimensions. We fill this gap by complementing the existing theory-driven approaches with a data-driven approach that allows an estimation of the spontaneously employed dimensions of stereotyping. Seven studies (total N = 4,451) show that people organize social groups primarily based on their agency/socioeconomic success (A), and as a second dimension, based on their conservative-progressive beliefs (B). Communion (C) is not found as a dimension by its own, but rather as an emergent quality in the two-dimensional space of A and B, resulting in a 2D ABC model of stereotype content about social groups. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. A DATA-DRIVEN MODEL FOR SPECTRA: FINDING DOUBLE REDSHIFTS IN THE SLOAN DIGITAL SKY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsalmantza, P.; Hogg, David W., E-mail: vivitsal@mpia.de

    2012-07-10

    We present a data-driven method, heteroscedastic matrix factorization (a kind of probabilistic factor analysis), for modeling or performing dimensionality reduction on observed spectra or other high-dimensional data with known but non-uniform observational uncertainties. The method uses an iterative inverse-variance-weighted least-squares minimization procedure to generate a best set of basis functions. The method is similar to principal components analysis (PCA), but with the substantial advantage that it uses measurement uncertainties in a responsible way and accounts naturally for poorly measured and missing data; it models the variance in the noise-deconvolved data space. A regularization can be applied, in the form of a smoothness prior (inspired by Gaussian processes) or a non-negative constraint, without making the method prohibitively slow. Because the method optimizes a justified scalar (related to the likelihood), the basis provides a better fit to the data in a probabilistic sense than any PCA basis. We test the method on Sloan Digital Sky Survey (SDSS) spectra, concentrating on spectra known to contain two redshift components: these are spectra of gravitational lens candidates and massive black hole binaries. We apply a hypothesis test to compare one-redshift and two-redshift models for these spectra, utilizing the data-driven model trained on a random subset of all SDSS spectra. This test confirms 129 of the 131 lens candidates in our sample and all of the known binary candidates, and turns up very few false positives.
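
    A small numpy sketch of the iterative inverse-variance-weighted least-squares idea on synthetic data: the coefficients and the basis are refit alternately, each time weighting residuals by the known per-measurement inverse variances so that poorly measured values contribute little. The rank, data and iteration count are arbitrary, and the smoothness or non-negativity regularization mentioned above is not included.

        import numpy as np

        rng = np.random.default_rng(2)
        n_spec, n_pix, rank = 200, 50, 3

        # Synthetic "spectra": a low-rank signal plus noise whose variance differs per measurement.
        true_A = rng.normal(size=(n_spec, rank))
        true_G = rng.normal(size=(rank, n_pix))
        sigma = rng.uniform(0.05, 1.0, size=(n_spec, n_pix))       # known, non-uniform uncertainties
        X = true_A @ true_G + sigma * rng.normal(size=(n_spec, n_pix))
        w = 1.0 / sigma**2                                          # inverse-variance weights

        A = rng.normal(size=(n_spec, rank))                         # random initial coefficients
        G = rng.normal(size=(rank, n_pix))                          # random initial basis

        for _ in range(30):
            # Refit each spectrum's coefficients by weighted least squares (basis fixed).
            for i in range(n_spec):
                W = np.diag(w[i])
                A[i] = np.linalg.solve(G @ W @ G.T, G @ W @ X[i])
            # Refit each basis pixel by weighted least squares (coefficients fixed).
            for j in range(n_pix):
                W = np.diag(w[:, j])
                G[:, j] = np.linalg.solve(A.T @ W @ A, A.T @ W @ X[:, j])

        chi2 = np.sum(w * (X - A @ G) ** 2)
        print("weighted chi^2 per datum:", round(chi2 / (n_spec * n_pix), 3))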

  20. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
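
    A toy illustration (unrelated to the aircraft model itself) of the robust-linear-programming step: with non-negative design variables and interval (box) uncertainty on the constraint coefficients, the robust counterpart is obtained by replacing each coefficient with its worst-case value, so the problem remains an ordinary LP. All numbers are invented.

        import numpy as np
        from scipy.optimize import linprog

        # Nominal constraint coefficients and their interval half-widths (illustrative values).
        A_nom = np.array([[2.0, 1.0],
                          [1.0, 3.0]])
        A_dev = np.array([[0.3, 0.1],
                          [0.2, 0.4]])
        b = np.array([10.0, 15.0])
        c = np.array([-1.0, -1.2])          # maximize x1 + 1.2*x2 (linprog minimizes)
        bounds = [(0, None), (0, None)]

        nominal = linprog(c, A_ub=A_nom, b_ub=b, bounds=bounds, method="highs")
        # For x >= 0, the worst case of (A_nom + delta) x <= b over |delta| <= A_dev is
        # attained at A_nom + A_dev, so the robust counterpart is just another LP.
        robust = linprog(c, A_ub=A_nom + A_dev, b_ub=b, bounds=bounds, method="highs")
        print("nominal objective:", round(-nominal.fun, 3), " robust objective:", round(-robust.fun, 3))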

  1. Evolutionary branching under multi-dimensional evolutionary constraints.

    PubMed

    Ito, Hiroshi; Sasaki, Akira

    2016-10-21

    The fitness of an existing phenotype and of a potential mutant should generally depend on the frequencies of other existing phenotypes. Adaptive evolution driven by such frequency-dependent fitness functions can be analyzed effectively using adaptive dynamics theory, assuming rare mutation and asexual reproduction. When possible mutations are restricted to certain directions due to developmental, physiological, or physical constraints, the resulting adaptive evolution may be restricted to subspaces (constraint surfaces) with fewer dimensionalities than the original trait spaces. To analyze such dynamics along constraint surfaces efficiently, we develop a Lagrange multiplier method in the framework of adaptive dynamics theory. On constraint surfaces of arbitrary dimensionalities described with equality constraints, our method efficiently finds local evolutionarily stable strategies, convergence stable points, and evolutionary branching points. We also derive the conditions for the existence of evolutionary branching points on constraint surfaces when the shapes of the surfaces can be chosen freely. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. The Relationship of Magnetotail Flow Bursts and Ground Onset Signatures

    NASA Technical Reports Server (NTRS)

    Kepko, Larry; Spanswick, Emma; Angelopoulos, Vassilis; Donovan, Eric

    2010-01-01

    It has been known for decades that auroral substorm onset occurs on (or at least near) the most equatorward auroral arc, which is thought to map to the near geosynchronous region. The lack of auroral signatures poleward of this arc prior to onset has been a major criticism of flow-burst driven models of substorm onset. The combined THEMIS 5-spacecraft in-situ and ground array measurements provide an unprecedented opportunity to examine the causal relationship between midtail plasma flows, aurora, and ground magnetic signatures. I first present an event from 2008 using multi-spectral all sky imager data from Gillam and in-situ data from THEMIS. The multi-spectral data indicate an equatorward moving auroral form prior to substorm onset. When this form reaches the most equatorward arc, the arc brightens and an auroral substorm begins. The THEMIS data show fast Earthward flows prior to onset as well. I discuss further the association of flow bursts and Pi2 pulsations, in the context of the directly-driven Pi2 model. This model directly links flows and Pi2 pulsations, providing an important constraint on substorm onset theories.

  3. Modeling new coal projects: supercritical or subcritical?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrino, A.J.; Jones, R.B.

    Decisions made on new build coal-fired plants are driven by several factors: emissions, fuel logistics and electric transmission access all provide constraints. The crucial economic decision whether to build supercritical or subcritical units often depends on assumptions concerning the reliability/availability of each technology, the cost of non-fuel operations including maintenance, the generation efficiencies and the potential for emissions credits at some future value. Modeling the influence of these key factors requires analysis and documentation to assure the assets actually meet the projected financial performance. This article addresses some of the issues related to the trade-offs that have the potential to be driven by the supercritical/subcritical decision. Solomon Associates has been collecting cost, generation and reliability data on coal-fired power generation assets for approximately 10 years using a strict methodology and taxonomy to categorize and compare actual plant operations data. This database provides validated information not only on performance, but also on alternative performance scenarios, which can provide useful insights in the pro forma financial analysis and models of new plants. 1 ref., 1 fig., 3 tabs.

  4. The Future of Family Engagement in Residential Care Settings

    ERIC Educational Resources Information Center

    Affronti, Melissa L.; Levison-Johnson, Jody

    2009-01-01

    Residential programs for children and youth are increasingly implementing engagement strategies to promote family-centered and family-driven models of care (Leichtman, 2008). The practice of engagement is a fairly new area of research, especially in residential care. Driven by their goal to increase the use of state-of-the-art family engagement…

  5. Closing the Loop: How We Better Serve Our Students through a Comprehensive Assessment Process

    ERIC Educational Resources Information Center

    Arcario, Paul; Eynon, Bret; Klages, Marisa; Polnariev, Bernard A.

    2013-01-01

    Outcomes assessment is often driven by demands for accountability. LaGuardia Community College's outcomes assessment model has advanced student learning, shaped academic program development, and created an impressive culture of faculty-driven assessment. Our inquiry-based approach uses ePortfolios for collection of student work and demonstrates…

  6. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
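
    A dynamic-programming sketch of the constrained MAP recursion: a Viterbi-style table indexed by (time, state, segments used so far) gives the best state sequence with at most k segments after backtracking. The toy HMM parameters are arbitrary, and the posterior-probability and path-sampling recursions from the article are not reproduced here.

        import numpy as np

        def map_path_k_segments(log_pi, log_A, log_B, obs, k_max):
            """MAP state path with at most k_max segments (maximal runs of a repeated state).
            delta[t, j, s] = best log-probability of a path ending in state j at time t with s segments."""
            T, n = len(obs), len(log_pi)
            delta = np.full((T, n, k_max + 1), -np.inf)
            back = np.zeros((T, n, k_max + 1, 2), dtype=int)   # (previous state, previous segment count)
            delta[0, :, 1] = log_pi + log_B[:, obs[0]]
            for t in range(1, T):
                for j in range(n):
                    for s in range(1, k_max + 1):
                        for i in range(n):
                            s_prev = s if i == j else s - 1     # switching state opens a new segment
                            if s_prev < 1:
                                continue
                            cand = delta[t - 1, i, s_prev] + log_A[i, j] + log_B[j, obs[t]]
                            if cand > delta[t, j, s]:
                                delta[t, j, s] = cand
                                back[t, j, s] = (i, s_prev)
            j, s = np.unravel_index(np.argmax(delta[-1]), delta[-1].shape)
            path = [int(j)]
            for t in range(T - 1, 0, -1):
                j, s = back[t, j, s]
                path.append(int(j))
            return path[::-1], float(delta[-1].max())

        # Toy 2-state, 2-symbol HMM with arbitrary parameters.
        log_pi = np.log([0.6, 0.4])
        log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
        log_B = np.log([[0.8, 0.2], [0.3, 0.7]])
        obs = [0, 0, 1, 1, 1, 0, 1, 1]
        for k in (1, 2, 3):
            path, score = map_path_k_segments(log_pi, log_A, log_B, obs, k)
            print(f"k={k}: path={path}  log-prob={score:.3f}")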

  7. Theory-Based Stakeholder Evaluation

    ERIC Educational Resources Information Center

    Hansen, Morten Balle; Vedung, Evert

    2010-01-01

    This article introduces a new approach to program theory evaluation called theory-based stakeholder evaluation or the TSE model for short. Most theory-based approaches are program theory driven and some are stakeholder oriented as well. Practically all of the latter fuse the program perceptions of the various stakeholder groups into one unitary…

  8. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, the solution method is constructed by introducing the mean-absolute deviation and performing the equivalent transformations.

  9. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  10. Data to Decisions: Creating a Culture of Model-Driven Drug Discovery.

    PubMed

    Brown, Frank K; Kopti, Farida; Chang, Charlie Zhenyu; Johnson, Scott A; Glick, Meir; Waller, Chris L

    2017-09-01

    Merck & Co., Inc., Kenilworth, NJ, USA, is undergoing a transformation in the way that it prosecutes R&D programs. Through the adoption of a "model-driven" culture, enhanced R&D productivity is anticipated, both in the form of decreased attrition at each stage of the process and by providing a rational framework for understanding and learning from the data generated along the way. This new approach focuses on the concept of a "Design Cycle" that makes use of all the data possible, internally and externally, to drive decision-making. These data can take the form of bioactivity, 3D structures, genomics, pathway, PK/PD, safety data, etc. Synthesis of high-quality data into models utilizing both well-established and cutting-edge methods has been shown to yield high confidence predictions to prioritize decision-making and efficiently reposition resources within R&D. The goal is to design an adaptive research operating plan that uses both modeled data and experiments, rather than just testing, to drive project decision-making. To support this emerging culture, an ambitious information management (IT) program has been initiated to implement a harmonized platform to facilitate the construction of cross-domain workflows to enable data-driven decision-making and the construction and validation of predictive models. These goals are achieved through depositing model-ready data, agile persona-driven access to data, a unified cross-domain predictive model lifecycle management platform, and support for flexible scientist-developed workflows that simplify data manipulation and consume model services. The end-to-end nature of the platform, in turn, not only supports but also drives the culture change by enabling scientists to apply predictive sciences throughout their work and over the lifetime of a project. This shift in mindset for both scientists and IT was driven by an early impactful demonstration of the potential benefits of the platform, in which expert-level early discovery predictive models were made available from familiar desktop tools, such as ChemDraw. This was built using a workflow-driven service-oriented architecture (SOA) on top of the rigorous registration of all underlying model entities.

  11. An attribute-driven statistics generator for use in a G.I.S. environment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Ritter, P. R.; Kaugars, A.

    1984-01-01

    When performing research using digital geographic information it is often useful to produce quantitative characterizations of the data, usually within some constraints. In the research environment the different combinations of required data and constraints can often become quite complex. This paper describes a technique that gives the researcher a powerful and flexible way to set up many possible combinations of data and constraints without having to perform numerous intermediate steps or create temporary data bands. This method provides an efficient way to produce descriptive statistics in such situations.

  12. solveME: fast and reliable solution of nonlinear ME models.

    PubMed

    Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O

    2016-09-22

    Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45 % faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the wide-spread adoption of ME models for researchers in these fields.

  13. Using a Time-Driven Activity-Based Costing Model To Determine the Actual Cost of Services Provided by a Transgenic Core.

    PubMed

    Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J

    2018-03-01

    Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
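
    The two parameters of a time-driven ABC model are the unit cost of labor capacity and the minutes each activity consumes; service costs and labor utilization then follow by simple arithmetic. A minimal sketch with hypothetical activity times and labor rates (only the 8400 min/wk practical-capacity figure is taken from the abstract):

    ```python
    # Time-driven activity-based costing: cost = minutes used x cost per minute of labor.
    cost_per_minute = 1.20            # hypothetical fully loaded labor cost ($/min)

    # Hypothetical activity times (minutes) per unit of each core service.
    services = {
        "DNA microinjection":     {"minutes": 240, "units_per_week": 3},
        "ES cell microinjection": {"minutes": 300, "units_per_week": 2},
        "embryo transfer":        {"minutes": 90,  "units_per_week": 10},
        "in vitro fertilization": {"minutes": 180, "units_per_week": 4},
    }

    practical_capacity = 8400         # practical labor capacity (min/week), as in the abstract

    used = 0
    for name, s in services.items():
        unit_cost = s["minutes"] * cost_per_minute
        used += s["minutes"] * s["units_per_week"]
        print(f"{name}: ${unit_cost:.2f} per unit")

    print(f"labor used: {used} min/week of {practical_capacity} available "
          f"({100 * used / practical_capacity:.0f}% utilization)")
    ```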

  14. Reconstructing cerebrovascular networks under local physiological constraints by integer programming

    DOE PAGES

    Rempfler, Markus; Schneider, Matthias; Ielacqua, Giovanna D.; ...

    2015-04-23

    We introduce a probabilistic approach to vessel network extraction that enforces physiological constraints on the vessel structure. The method accounts for both image evidence and geometric relationships between vessels by solving an integer program, which is shown to yield the maximum a posteriori (MAP) estimate of the probabilistic model. Starting from an over-connected network, it prunes vessel stumps and spurious connections by evaluating the local geometry and the global connectivity of the graph. We utilize a high-resolution micro computed tomography (µCT) dataset of a cerebrovascular corrosion cast to obtain a reference network and learn the prior distributions of our probabilistic model. Finally, we perform experiments on micro magnetic resonance angiography (µMRA) images of mouse brains and discuss properties of the networks obtained under different tracking and pruning approaches.

  15. Space-Frequency Correlations in Multistatic Acoustic Reverberation Due to a Wind-Driven Sea Surface: Theoretical Results at Low Frequency

    DTIC Science & Technology

    1999-11-26

    basic goal of the analysis. In other respects, however, the two approaches differ. Harper and Labianca began by modeling the input stochastic processes... contribution. To facilitate the analysis, however, he placed the receivers at a common depth and was, thus, unable to examine the vertical aspects of... [equation not recoverable from the source scan] 4-6.5 Bragg-Only Constraint: for v < 1 ...

  16. Hybrid reflecting objectives for functional multiphoton microscopy in turbid media

    PubMed Central

    Vučinić, Dejan; Bartol, Thomas M.; Sejnowski, Terrence J.

    2010-01-01

    Most multiphoton imaging of biological specimens is performed using microscope objectives optimized for high image quality under wide-field illumination. We present a class of objectives designed de novo without regard for these traditional constraints, driven exclusively by the needs of fast multiphoton imaging in turbid media: the delivery of femtosecond pulses without dispersion and the efficient collection of fluorescence. We model the performance of one such design optimized for a typical brain-imaging setup and show that it can greatly outperform objectives commonly used for this task. PMID:16880851

  17. Translational Neuroscience as a Tool for Intervention Development in the Context of High-Adversity Families

    ERIC Educational Resources Information Center

    Rutherford, Helena J. V.; Mayes, Linda C.; Fisher, Philip A.

    2016-01-01

    The use of theory-driven models to develop and evaluate family-based intervention programs has a long history in psychology. Some of the first evidence-based parenting programs to address child problem behavior, developed in the 1970s, were grounded in causal models derived from longitudinal developmental research. The same translational…

  18. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
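
    The abstract is truncated here, but chance constraints of this kind typically require that the probability of keeping the effective reproduction number below 1 exceeds a chosen confidence level. A scenario-based Monte Carlo sketch of such a constraint, with a hypothetical parameterization rather than the paper's epidemic model:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    R0 = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=5000)  # uncertain basic reproduction number
    alpha = 0.05                                                  # allow a 5% chance that R_eff >= 1

    def chance_satisfied(coverage, efficacy=0.9):
        # Simple reduction: R_eff = R0 * (1 - efficacy * coverage).
        r_eff = R0 * (1.0 - efficacy * coverage)
        return np.mean(r_eff < 1.0) >= 1.0 - alpha

    # Minimum-coverage (proxy for minimum-cost) policy via bisection on the coverage fraction.
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if chance_satisfied(mid) else (mid, hi)
    print(f"minimal coverage meeting the chance constraint: {hi:.3f}")
    ```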

  19. Development of a decision support tool for seasonal water supply management incorporating system uncertainties and operational constraints

    NASA Astrophysics Data System (ADS)

    Wang, H.; Asefa, T.

    2017-12-01

    A real-time decision support tool (DST) for a water supply system would consider system uncertainties, e.g., uncertain streamflow and demand, as well as operational constraints and infrastructure outage (e.g., pump station shutdown, an offline reservoir due to maintenance). Such a DST is often used by water managers for resource allocation and delivery for customers. Although most seasonal DSTs used by water managers recognize those system uncertainties and operational constraints, most use only historical information or assume a deterministic outlook for the water supply system. This study presents a seasonal DST that incorporates rainfall/streamflow uncertainties, seasonal demand outlook and system operational constraints. Large scale climate information is captured through a rainfall simulator driven by a Bayesian non-homogeneous Markov Chain Monte Carlo model that allows non-stationary transition probabilities contingent on the Nino 3.4 index. An ad-hoc seasonal demand forecasting model considers weather conditions explicitly and socio-economic factors implicitly. Latin Hypercube sampling is employed to effectively sample probability density functions of flow and demand. Seasonal system operation is modelled as a mixed-integer optimization problem that aims at minimizing operational costs. It embeds the flexibility of modifying operational rules at different components, e.g., surface water treatment plants, desalination facilities, and groundwater pumping stations. The proposed framework is illustrated at a wholesale water supplier in the Southeastern United States, Tampa Bay Water. The use of the tool is demonstrated in providing operational guidance in a typical drawdown and refill cycle of a regional reservoir. The DST provided: 1) a probabilistic outlook of reservoir storage and the chance of a successful refill by the end of the rainy season; 2) operational expectations for large infrastructure (e.g., high service pumps and booster stations) throughout the season. Other potential uses of such a DST are also discussed.
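
    As a small sketch of how flow and demand uncertainty can be sampled efficiently, the snippet below draws Latin Hypercube samples with scipy and maps them through assumed marginal distributions; the distributions and units are hypothetical, and each sampled pair would drive one run of the seasonal operations model:

    ```python
    import numpy as np
    from scipy.stats import qmc, gamma, norm

    n = 200
    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n)                      # stratified uniforms in [0, 1)^2

    # Map the uniform LHS points through inverse CDFs of assumed marginals:
    # seasonal inflow (gamma) and seasonal demand (normal), units hypothetical.
    inflow = gamma(a=3.0, scale=50.0).ppf(u[:, 0])     # e.g., million gallons
    demand = norm(loc=120.0, scale=15.0).ppf(u[:, 1])  # e.g., million gallons

    # Each (inflow, demand) pair would drive one run of the seasonal operations model.
    shortfall_prob = np.mean(inflow < demand)
    print(f"fraction of sampled scenarios with supply shortfall: {shortfall_prob:.2f}")
    ```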

  20. Evidence of market-driven size-selective fishing and the mediating effects of biological and institutional factors

    PubMed Central

    Reddy, Sheila M. W.; Wentz, Allison; Aburto-Oropeza, Octavio; Maxey, Martin; Nagavarapu, Sriniketh; Leslie, Heather M.

    2014-01-01

    Market demand is often ignored or assumed to lead uniformly to the decline of resources. Yet little is known about how market demand influences natural resources in particular contexts, or the mediating effects of biological or institutional factors. Here, we investigate this problem by examining the Pacific red snapper (Lutjanus peru) fishery around La Paz, Mexico, where medium or “plate-sized” fish are sold to restaurants at a premium price. If higher demand for plate-sized fish increases the relative abundance of the smallest (recruit size class) and largest (most fecund) fish, this may be a market mechanism to increase stocks and fishermen’s revenues. We tested this hypothesis by estimating the effect of prices on the distribution of catch across size classes using daily records of prices and catch. We linked predictions from this economic choice model to a staged-based model of the fishery to estimate the effects on the stock and revenues from harvest. We found that the supply of plate-sized fish increased by 6%, while the supply of large fish decreased by 4% as a result of a 13% price premium for plate-sized fish. This market-driven size selection increased revenues (14%) but decreased total fish biomass (−3%). However, when market-driven size selection was combined with limited institutional constraints, both fish biomass (28%) and fishermen’s revenue (22%) increased. These results show that the direction and magnitude of the effects of market demand on biological populations and human behavior can depend on both biological attributes and institutional constraints. Fisheries management may capitalize on these conditional effects by implementing size-based regulations when economic and institutional incentives will enhance compliance, as in the case we describe here, or by creating compliance enhancing conditions for existing regulations. PMID:23865225

  1. Strong Lensing Mass Reconstruction: from Frontier Fields to the Typical Lensing Clusters of Future Surveys

    NASA Astrophysics Data System (ADS)

    Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Bayliss, Matthew B.; Wuyts, Eva; Dahle, Håkon; Johnson, Traci L.; Florian, Michael K.; Dunham, Samuel; Murray, Katherine; Whitaker, Kate; Li, Nan

    Driven by the unprecedented wealth of high quality data that is accumulating for the Frontier Fields, they are becoming some of the best-studied strong lensing clusters to date, and probably for the next few years to come. As will be discussed intensively in this focus meeting, the FF are proving transformative for many fields: from studies of the high redshift Universe, to the assembly and structure of the clusters themselves. The FF data and the extensive collaborative effort around this program will also allow us to examine and improve upon current lens modeling techniques. Strong lensing is a powerful tool for mass reconstruction of the cores of galaxy clusters of all scales, providing an estimate of the total (dark and seen) projected mass density distribution out to ~0.5 Mpc. Though SL mass may be biased by contribution from structures along the line of sight, its strength is that it is relatively insensitive to assumptions on cluster baryon astrophysics and dynamical state. Like the Frontier Fields clusters, the most "famous" strong lensing clusters are at the high mass end; they lens dozens of background sources into multiple images, providing ample lensing constraints. In this talk, I will focus on how we can leverage what we learn from modeling the FF clusters in strong lensing studies of the hundreds of clusters that will be discovered in upcoming surveys. In typical clusters, unlike the Frontier Fields, the Bullet Cluster and A1689, we observe only one to a handful of background sources, and have limited lensing constraints. I will describe the limitations that such a configuration imposes on strong lens modeling, highlight measurements that are robust to the richness of lensing evidence, and address the sources of uncertainty and what sort of information can help reduce those uncertainties. This category of lensing clusters is most relevant to the wide cluster surveys of the future.

  2. Strong Lensing Mass Reconstruction: from Frontier Fields to the Typical Lensing Clusters of Future Surveys

    NASA Astrophysics Data System (ADS)

    Sharon, Keren

    2015-08-01

    Driven by the unprecedented wealth of high quality data that is accumulating for the Frontier Fields, they are becoming some of the best-studied strong lensing clusters to date, and probably for the next few years to come. As will be discussed intensively in this focus meeting, the FF are proving transformative for many fields: from studies of the high redshift Universe, to the assembly and structure of the clusters themselves. The FF data and the extensive collaborative effort around this program will also allow us to examine and improve upon current lens modeling techniques. Strong lensing is a powerful tool for mass reconstruction of the cores of galaxy clusters of all scales, providing an estimate of the total (dark and seen) projected mass density distribution out to ~0.5 Mpc. Though SL mass may be biased by contribution from structures along the line of sight, its strength is that it is relatively insensitive to assumptions on cluster baryon astrophysics and dynamical state. Like the Frontier Fields clusters, the most "famous" strong lensing clusters are at the high mass end; they lens dozens of background sources into multiple images, providing ample lensing constraints. In this talk, I will focus on how we can leverage what we learn from modeling the FF clusters in strong lensing studies of the hundreds of clusters that will be discovered in upcoming surveys. In typical clusters, unlike the Frontier Fields, the Bullet Cluster and A1689, we observe only one to a handful of background sources, and have limited lensing constraints. I will describe the limitations that such a configuration imposes on strong lens modeling, highlight measurements that are robust to the richness of lensing evidence, and address the sources of uncertainty and what sort of information can help reduce those uncertainties. This category of lensing clusters is most relevant to the wide cluster surveys of the future.

  3. Modelling the formation of working memory with networks of integrate-and-fire neurons connected by plastic synapses.

    PubMed

    Del Giudice, Paolo; Fusi, Stefano; Mattia, Maurizio

    2003-01-01

    In this paper we review a series of works concerning models of spiking neurons interacting via spike-driven, plastic, Hebbian synapses, meant to implement stimulus driven, unsupervised formation of working memory (WM) states. Starting from a summary of the experimental evidence emerging from delayed matching to sample (DMS) experiments, we briefly review the attractor picture proposed to underlie WM states. We then describe a general framework for a theoretical approach to learning with synapses subject to realistic constraints and outline some general requirements to be met by a mechanism of Hebbian synaptic structuring. We argue that a stochastic selection of the synapses to be updated allows for optimal memory storage, even if the number of stable synaptic states is reduced to the extreme (bistable synapses). A description follows of models of spike-driven synapses that implement the stochastic selection by exploiting the high irregularity in the pre- and post-synaptic activity. Reasons are listed why dynamic learning, that is, the process by which the synaptic structure develops under the sole guidance of neural activities, driven in turn by stimuli, is hard to accomplish. We provide a 'feasibility proof' of dynamic formation of WM states by showing how an initially unstructured network autonomously develops a synaptic structure supporting simultaneously stable spontaneous and WM states; in this context the beneficial role of short-term depression (STD) is illustrated. After summarizing heuristic indications emerging from the study performed, we conclude by briefly discussing open problems and critical issues still to be clarified.
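
    A toy sketch of the central idea, that stochastic selection of which bistable synapses to update preserves memory storage even with only two stable synaptic states; the transition probabilities and network sizes are illustrative, and this is not the published spike-driven model:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pre, n_post = 200, 200
    w = rng.integers(0, 2, size=(n_post, n_pre))   # bistable synapses: 0 (depressed) or 1 (potentiated)

    q_up, q_down = 0.05, 0.05                      # small transition probabilities = stochastic selection

    def present_pattern(w, pre, post):
        """Hebbian-like stochastic update driven by a binary pre/post activity pattern."""
        coincident = np.outer(post, pre).astype(bool)        # both sides active
        anti = np.outer(post, 1 - pre).astype(bool)          # post active, pre silent
        w = w.copy()
        w[coincident & (rng.random(w.shape) < q_up)] = 1     # potentiate a random subset
        w[anti & (rng.random(w.shape) < q_down)] = 0         # depress a random subset
        return w

    pattern_pre = (rng.random(n_pre) < 0.2).astype(int)
    pattern_post = (rng.random(n_post) < 0.2).astype(int)
    for _ in range(50):
        w = present_pattern(w, pattern_pre, pattern_post)

    # Selectivity: potentiation is concentrated on the coincidently active pairs.
    sel = w[np.ix_(pattern_post == 1, pattern_pre == 1)].mean() - w.mean()
    print(f"selectivity of stored pattern: {sel:.2f}")
    ```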

  4. USACM Thematic Workshop On Uncertainty Quantification And Data-Driven Modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, James R.

    The USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling was held on March 23-24, 2017, in Austin, TX. The organizers of the technical program were James R. Stewart of Sandia National Laboratories and Krishna Garikipati of University of Michigan. The administrative organizer was Ruth Hengst, who serves as Program Coordinator for the USACM. The organization of this workshop was coordinated through the USACM Technical Thrust Area on Uncertainty Quantification and Probabilistic Analysis. The workshop website (http://uqpm2017.usacm.org) includes the presentation agenda as well as links to several of the presentation slides (permission to access the presentations was granted by each of those speakers, respectively). Herein, this final report contains the complete workshop program that includes the presentation agenda, the presentation abstracts, and the list of posters.

  5. Information-Driven Blind Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks

    DTIC Science & Technology

    2015-08-24

    ... network, keeping constraints such as transmission rate, transmission delay, and Signal-to-Interference and Noise Ratio (SINR) under consideration. ... distances. It is advantageous to accomplish such transmission using sensors in a multi-hop relay form, keeping constraints such as transmission rate

  6. jFuzz: A Concolic Whitebox Fuzzer for Java

    NASA Technical Reports Server (NTRS)

    Jayaraman, Karthick; Harvison, David; Ganesh, Vijay; Kiezun, Adam

    2009-01-01

    We present jFuzz, an automatic testing tool for Java programs. jFuzz is a concolic whitebox fuzzer, built on the NASA Java PathFinder, an explicit-state Java model checker and a framework for developing reliability and analysis tools for Java. Starting from a seed input, jFuzz automatically and systematically generates inputs that exercise new program paths. jFuzz uses a combination of concrete and symbolic execution, and constraint solving. Time spent on solving constraints can be significant. We implemented several well-known optimizations and name-independent caching, which aggressively normalizes the constraints to reduce the number of calls to the constraint solver. We present preliminary results due to the optimizations, and demonstrate the effectiveness of jFuzz in creating good test inputs. jFuzz is intended to be a research testbed for investigating new testing and analysis techniques based on concrete and symbolic execution. The source code of jFuzz is available as part of the NASA Java PathFinder.
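
    The name-independent caching idea can be illustrated by canonically renaming variables before using a constraint set as a cache key, so that structurally identical queries hit the same cached result; a simplified Python sketch (jFuzz itself is implemented in Java, and its normalization is more aggressive):

    ```python
    import re

    _solver_calls = 0
    _cache = {}

    def normalize(constraints):
        """Rename variables to canonical names in order of first appearance."""
        mapping = {}
        def rename(match):
            name = match.group(0)
            mapping.setdefault(name, f"v{len(mapping)}")
            return mapping[name]
        return tuple(re.sub(r"\b[a-zA-Z_]\w*\b(?!\s*\()", rename, c) for c in sorted(constraints))

    def solve(constraints):
        """Pretend constraint solver with a name-independent cache in front of it."""
        global _solver_calls
        key = normalize(constraints)
        if key not in _cache:
            _solver_calls += 1                 # only here would the real solver be invoked
            _cache[key] = "SAT"                # placeholder result
        return _cache[key]

    solve(["x > 0", "x < y"])
    solve(["a > 0", "a < b"])                  # structurally identical -> cache hit
    print("solver calls:", _solver_calls)      # 1
    ```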

  7. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE PAGES

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...

    2017-12-20

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
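
    A minimal sketch of the pyomo.dae workflow described above, assuming Pyomo is installed; actually solving the discretized model additionally requires an NLP solver such as Ipopt:

    ```python
    from pyomo.environ import (ConcreteModel, Var, Constraint, Objective,
                               TransformationFactory, minimize)
    from pyomo.dae import ContinuousSet, DerivativeVar

    m = ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 1))          # independent variable (time)
    m.x = Var(m.t)                              # state
    m.dxdt = DerivativeVar(m.x, wrt=m.t)        # its derivative

    m.x[0].fix(1.0)                             # initial condition x(0) = 1

    # Differential equation x'(t) = -x(t), written directly as a constraint.
    def _ode(m, t):
        return m.dxdt[t] == -m.x[t]
    m.ode = Constraint(m.t, rule=_ode)

    m.obj = Objective(expr=m.x[1], sense=minimize)   # dummy objective for a square problem

    # Automatic transformation to a finite-dimensional algebraic problem.
    TransformationFactory('dae.finite_difference').apply_to(m, wrt=m.t, nfe=20, scheme='BACKWARD')
    # SolverFactory('ipopt').solve(m)  # then solve with an off-the-shelf NLP solver
    ```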

  8. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

  9. Spatial distribution of radionuclides in 3D models of SN 1987A and Cas A

    NASA Astrophysics Data System (ADS)

    Janka, Hans-Thomas; Gabler, Michael; Wongwathanarat, Annop

    2017-02-01

    Fostered by the possibilities of multi-dimensional computational modeling, in particular the advent of three-dimensional (3D) simulations, our understanding of the neutrino-driven explosion mechanism of core-collapse supernovae (SNe) has experienced remarkable progress over the past decade. The first self-consistent, first-principles models have shown successful explosions in 3D, and even failed cases may be cured by moderate changes of the microphysics inside the neutron star (NS), better grid resolution, or more detailed progenitor conditions at the onset of core collapse, in particular large-scale perturbations in the convective Si and O burning shells. 3D simulations have also succeeded in following neutrino-driven explosions continuously from the initiation of the blast wave, through the shock breakout from the progenitor surface, into the radioactively powered evolution of the SN, and towards the free expansion phase of the emerging remnant. Here we present results from such simulations, which form the basis for direct comparisons with observations of SNe and SN remnants in order to derive constraints on the still disputed explosion mechanism. It is shown that predictions based on hydrodynamic instabilities and mixing processes associated with neutrino-driven explosions yield good agreement with measured NS kicks, light-curve properties of SN 1987A and asymmetries of iron and 44Ti distributions observed in SN 1987A and Cassiopeia A.

  10. Data collection and analysis strategies for phMRI.

    PubMed

    Mandeville, Joseph B; Liu, Christina H; Vanduffel, Wim; Marota, John J A; Jenkins, Bruce G

    2014-09-01

    Although functional MRI traditionally has been applied mainly to study changes in task-induced brain function, evolving acquisition methodologies and improved knowledge of signal mechanisms have increased the utility of this method for studying responses to pharmacological stimuli, a technique often dubbed "phMRI". The proliferation of higher magnetic field strengths and the use of exogenous contrast agent have boosted detection power, a critical factor for successful phMRI due to the restricted ability to average multiple stimuli within subjects. Receptor-based models of neurovascular coupling, including explicit pharmacological models incorporating receptor densities and affinities and data-driven models that incorporate weak biophysical constraints, have demonstrated compelling descriptions of phMRI signal induced by dopaminergic stimuli. This report describes phMRI acquisition and analysis methodologies, with an emphasis on data-driven analyses. As an example application, statistically efficient data-driven regressors were used to describe the biphasic response to the mu-opioid agonist remifentanil, and antagonism using dopaminergic and GABAergic ligands revealed modulation of the mesolimbic pathway. Results illustrate the power of phMRI as well as our incomplete understanding of mechanisms underlying the signal. Future directions are discussed for phMRI acquisitions in human studies, for evolving analysis methodologies, and for interpretative studies using the new generation of simultaneous PET/MRI scanners. This article is part of the Special Issue Section entitled 'Neuroimaging in Neuropharmacology'. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Constrained spacecraft reorientation using mixed integer convex programming

    NASA Astrophysics Data System (ADS)

    Tam, Margaret; Glenn Lightsey, E.

    2016-10-01

    A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion stays unity norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tuneable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.
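
    The inclusion and exclusion cones reduce to dot-product conditions on the sensor boresight expressed in the inertial frame; a small numpy sketch of checking a candidate orientation against such constraints (the vectors, half-angles, and function names are hypothetical, and the full MICP steering law is not reproduced here):

    ```python
    import numpy as np

    def rotate(q, v):
        """Rotate vector v by unit quaternion q = [x, y, z, w] (body -> inertial)."""
        x, y, z, w = q
        u = np.array([x, y, z])
        return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

    def admissible(q, boresight_body, sun_inertial, target_inertial,
                   exclusion_half_angle=np.radians(30), inclusion_half_angle=np.radians(60)):
        b = rotate(q, boresight_body)
        # Exclusion cone: keep the Sun OUT of the sensor field of view.
        sun_ok = np.dot(b, sun_inertial) <= np.cos(exclusion_half_angle)
        # Inclusion cone: keep the target IN the field of view of the sensor.
        tgt_ok = np.dot(b, target_inertial) >= np.cos(inclusion_half_angle)
        return sun_ok and tgt_ok

    q = np.array([0.0, 0.0, np.sin(np.pi / 8), np.cos(np.pi / 8)])   # 45 deg yaw
    print(admissible(q, boresight_body=np.array([1.0, 0.0, 0.0]),
                     sun_inertial=np.array([0.0, -1.0, 0.0]),
                     target_inertial=np.array([1.0, 1.0, 0.0]) / np.sqrt(2)))
    ```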

  12. The application of domain-driven design in NMS

    NASA Astrophysics Data System (ADS)

    Zhang, Jinsong; Chen, Yan; Qin, Shengjun

    2011-12-01

    In the traditional data-model-driven design approach, the system analysis and design phases are often separated, so requirement information cannot be expressed explicitly. This approach also tends to lead developers toward process-oriented programming, leaving the code between modules or between layers disordered, which makes system scalability hard to achieve. The paper proposes a software hierarchy based on a rich domain model, following domain-driven design, named FHRDM; the Webwork + Spring + Hibernate (WSH) framework is then adopted. Domain-driven design aims to construct a domain model that meets not only the needs of the problem domain in which the software operates but also the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficult requirement elicitation, high development costs, and long development cycles, can be addressed successfully.
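
    For illustration, the contrast with a data-model-driven design can be sketched as a rich domain entity that carries its own behavior and invariants; this is a generic Python example with hypothetical names, not code from the NMS system:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Voyage:
        """Rich domain entity: behavior and invariants live with the data."""
        vessel: str
        waypoints: list = field(default_factory=list)
        departed: bool = False

        def add_waypoint(self, port: str) -> None:
            if self.departed:
                raise ValueError("cannot reroute a voyage that has already departed")
            self.waypoints.append(port)

        def depart(self) -> None:
            if not self.waypoints:
                raise ValueError("a voyage needs at least one waypoint before departure")
            self.departed = True

    # The domain rule ("no rerouting after departure") is enforced by the model itself,
    # not scattered across service/controller code as in a data-model-driven design.
    v = Voyage(vessel="MV Example")
    v.add_waypoint("Dalian")
    v.depart()
    ```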

  13. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  14. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  15. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  16. Modeling Laser-Driven Laboratory Astrophysics Experiments Using the CRASH Code

    NASA Astrophysics Data System (ADS)

    Grosskopf, Michael; Keiter, P.; Kuranz, C. C.; Malamud, G.; Trantham, M.; Drake, R.

    2013-06-01

    Laser-driven, laboratory astrophysics experiments can provide important insight into the physical processes relevant to astrophysical systems. The radiation hydrodynamics code developed by the Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan has been used to model experimental designs for high-energy-density laboratory astrophysics campaigns on OMEGA and other high-energy laser facilities. This code is an Eulerian, block-adaptive AMR hydrodynamics code with implicit multigroup radiation transport and electron heat conduction. The CRASH model has been used on many applications including: radiative shocks, Kelvin-Helmholtz and Rayleigh-Taylor experiments on the OMEGA laser; as well as laser-driven ablative plumes in experiments by the Astrophysical Collisionless Shocks Experiments with Lasers (ACSEL) collaboration. We report a series of results with the CRASH code in support of design work for upcoming high-energy-density physics experiments, as well as comparison between existing experimental data and simulation results. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DEFC52- 08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.

  17. Cournot games with network effects for electric power markets

    NASA Astrophysics Data System (ADS)

    Spezia, Carl John

    The electric utility industry is moving from regulated monopolies with protected service areas to an open market with many wholesale suppliers competing for consumer load. This market is typically modeled by a Cournot game oligopoly where suppliers compete by selecting profit maximizing quantities. The classical Cournot model can produce multiple solutions when the problem includes typical power system constraints. This work presents a mathematical programming formulation of oligopoly that produces unique solutions when constraints limit the supplier outputs. The formulation casts the game as a supply maximization problem with power system physical limits and supplier incremental profit functions as constraints. The formulation gives Cournot solutions identical to other commonly used algorithms when suppliers operate within the constraints. Numerical examples demonstrate the feasibility of the theory. The results show that the maximization formulation will give system operators more transmission capacity when compared to the actions of suppliers in a classical constrained Cournot game. The results also show that the profitability of suppliers in constrained networks depends on their location relative to the consumers' load concentration.
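
    As a toy illustration of how capacity constraints enter the classical game, the snippet below runs sequential best-response iteration for a constrained Cournot oligopoly with linear demand; this is not the paper's supply-maximization formulation, and all numbers are hypothetical:

    ```python
    import numpy as np

    a, b = 100.0, 1.0                          # inverse demand: price = a - b * total quantity
    costs = np.array([10.0, 10.0, 20.0])       # marginal costs of three suppliers
    caps = np.array([25.0, 15.0, 40.0])        # transmission/capacity limits

    q = np.zeros(3)
    for _ in range(500):                       # sequential best-response iteration
        for i in range(3):
            others = q.sum() - q[i]
            # Unconstrained Cournot best response, then clipped to the capacity limit.
            best = (a - costs[i] - b * others) / (2.0 * b)
            q[i] = np.clip(best, 0.0, caps[i])

    price = a - b * q.sum()
    print("quantities:", np.round(q, 2), "price:", round(price, 2))
    ```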

  18. A Program of Continuing Research on Representing, Manipulating, and Reasoning about Physical Objects

    DTIC Science & Technology

    1991-09-30

    ... graphics with the goal of automatically converting complex graphics models into forms more appropriate for radiosity computation. 2.4 Least Constraint ...

  19. Resource Manual for Teacher Training Programs in Economics.

    ERIC Educational Resources Information Center

    Saunders, Phillip, Ed.; And Others

    This resource manual uses a general systems model for educational planning, instruction, and evaluation to describe a college introductory economics course. The goal of the manual is to help beginning or experienced instructors teach more effectively. The model components include needs, goals, objectives, constraints, planning and strategy,…

  20. Constraints in Genetic Programming

    NASA Technical Reports Server (NTRS)

    Janikow, Cezary Z.

    1996-01-01

    Genetic programming refers to a class of genetic algorithms utilizing generic representation in the form of program trees. For a particular application, one needs to provide the set of functions, whose compositions determine the space of program structures being evolved, and the set of terminals, which determine the space of specific instances of those programs. The algorithm searches the space for the best program for a given problem, applying evolutionary mechanisms borrowed from nature. Genetic algorithms have shown great capabilities in approximately solving optimization problems which could not be approximated or solved with other methods. Genetic programming extends their capabilities to deal with a broader variety of problems. However, it also extends the size of the search space, which often becomes too large to be effectively searched even by evolutionary methods. Therefore, our objective is to utilize problem constraints, if such can be identified, to restrict this space. In this publication, we propose a generic constraint specification language, powerful enough for a broad class of problem constraints. This language has two elements -- one reduces only the number of program instances, the other reduces both the space of program structures as well as their instances. With this language, we define the minimal set of complete constraints, and a set of operators guaranteeing offspring validity from valid parents. We also show that these operators are not less efficient than the standard genetic programming operators if one preprocesses the constraints - the necessary mechanisms are identified.
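
    The flavor of such constraints can be illustrated by generating only program trees that satisfy simple structural rules about which symbols may appear beneath which, so crossover and mutation never need to repair invalid offspring; the constraint table below is hypothetical and much simpler than the proposed specification language:

    ```python
    import random

    FUNCTIONS = {"add": 2, "mul": 2, "neg": 1}
    TERMINALS = ["x", "1"]
    # Structural constraint: which children each symbol may take.
    ALLOWED_CHILDREN = {
        "root": {"add", "mul", "neg", "x", "1"},
        "add":  {"mul", "neg", "x", "1"},        # e.g., forbid nested additions
        "mul":  {"add", "x", "1"},
        "neg":  {"x", "1"},
    }

    def grow(parent="root", depth=3, rng=random):
        """Generate a random program tree that respects ALLOWED_CHILDREN by construction."""
        choices = [s for s in ALLOWED_CHILDREN[parent]
                   if depth > 0 or s in TERMINALS]
        symbol = rng.choice(sorted(choices))
        if symbol in TERMINALS:
            return symbol
        return [symbol] + [grow(symbol, depth - 1, rng) for _ in range(FUNCTIONS[symbol])]

    random.seed(3)
    print(grow())     # every sampled tree is valid, so no offspring repair is needed
    ```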

  1. Modeling and control of plasma rotation for NSTX using neoclassical toroidal viscosity and neutral beam injection

    NASA Astrophysics Data System (ADS)

    Goumiri, I. R.; Rowley, C. W.; Sabbagh, S. A.; Gates, D. A.; Gerhardt, S. P.; Boyer, M. D.; Andre, R.; Kolemen, E.; Taira, K.

    2016-03-01

    A model-based feedback system is presented to control plasma rotation in a magnetically confined toroidal fusion device, to maintain plasma stability for long-pulse operation. This research uses experimental measurements from the National Spherical Torus Experiment (NSTX) and is aimed at controlling plasma rotation using two different types of actuation: momentum from injected neutral beams and neoclassical toroidal viscosity generated by three-dimensional applied magnetic fields. Based on the data-driven model obtained, a feedback controller is designed, and predictive simulations using the TRANSP plasma transport code show that the controller is able to attain desired plasma rotation profiles given practical constraints on the actuators and the available measurements of rotation.
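
    As a generic illustration of the model-based feedback idea (not the actual NSTX controller design), one can identify a discrete-time linear model from input/state data by least squares and then compute a feedback gain, here via a discrete-time LQR design with scipy; the dynamics and dimensions are hypothetical:

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_are

    rng = np.random.default_rng(0)

    # "True" rotation dynamics used only to generate data (2 states, 2 actuators,
    # e.g., beam torque and 3-D field torque); all values are hypothetical.
    A_true = np.array([[0.95, 0.05], [0.00, 0.90]])
    B_true = np.array([[0.10, 0.00], [0.00, 0.08]])

    # Collect input/state data and fit x_{k+1} = A x_k + B u_k by least squares.
    X, U = [rng.normal(size=2)], []
    for _ in range(300):
        u = rng.normal(size=2)
        U.append(u)
        X.append(A_true @ X[-1] + B_true @ u + 0.01 * rng.normal(size=2))
    X = np.array(X); U = np.array(U)
    Z = np.hstack([X[:-1], U])                       # regressors [x_k, u_k]
    theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
    A_id, B_id = theta[:2].T, theta[2:].T

    # LQR gain for the identified model: u = -K (x - x_ref).
    Q, R = np.eye(2), 0.1 * np.eye(2)
    P = solve_discrete_are(A_id, B_id, Q, R)
    K = np.linalg.solve(R + B_id.T @ P @ B_id, B_id.T @ P @ A_id)
    print("identified A:\n", np.round(A_id, 3), "\nfeedback gain K:\n", np.round(K, 3))
    ```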

  2. Modeling and control of plasma rotation for NSTX using neoclassical toroidal viscosity and neutral beam injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goumiri, I. R.; Rowley, C. W.; Sabbagh, S. A.

    2016-02-19

    A model-based feedback system is presented to control plasma rotation in a magnetically confined toroidal fusion device, to maintain plasma stability for long-pulse operation. This research uses experimental measurements from the National Spherical Torus Experiment (NSTX) and is aimed at controlling plasma rotation using two different types of actuation: momentum from injected neutral beams and neoclassical toroidal viscosity generated by three-dimensional applied magnetic fields. Based on the data-driven model obtained, a feedback controller is designed, and predictive simulations using the TRANSP plasma transport code show that the controller is able to attain desired plasma rotation profiles given practical constraints on the actuators and the available measurements of rotation.

  3. Solar electric geocentric transfer with attitude constraints: Analysis

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Malchow, H. L.; Edelbaum, T. N.

    1975-01-01

    A time-optimal or nearly time-optimal trajectory program was developed for solar electric geocentric transfer with or without attitude constraints and with an optional initial high thrust stage. The method of averaging reduces computation time. A nonsingular set of orbital elements is used. The constraints, which are those of one of the SERT-C designs, introduce complexities into the analysis, and the solution yields possible discontinuous changes in thrust direction. The power degradation due to Van Allen radiation is modeled analytically. A wide range of solar cell characteristics is assumed. Effects such as oblateness and shadowing are included. The analysis and the results of many example runs are included.

  4. A Gauge Invariant Description for the General Conic Constrained Particle from the FJBW Iteration Algorithm

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-06-01

    We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at classical level but becomes involved and interesting concerning its quantum counterpart with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. By applying the symplectic FJBW iteration program, we proceed on to show how a gauge invariant version for the model can be achieved from the originally second-class system. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization following the Barcelos-Wotzasek iteration program to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW they reduce to two. By using the symplectic potential obtained in the last step of the FJBW iteration process, we construct a gauge invariant model exhibiting explicitly its BRST symmetry. We obtain the quantum BRST charge and write the Green functions generator for the gauge invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, clearly showing that this last one constitutes a particular case of a broader class of theories.
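
    For concreteness, the class of systems being discussed is a particle constrained to a general second-degree curve; one common starting point (our notation, not necessarily the authors') is a Lagrangian with a multiplier enforcing the conic constraint, which contains the circular and elliptical paths mentioned above as special cases:

    ```latex
    \begin{aligned}
      \Phi(x,y) &\equiv a\,x^{2} + b\,x y + c\,y^{2} + d\,x + e\,y + f \approx 0, \\
      L &= \tfrac{m}{2}\left(\dot{x}^{2} + \dot{y}^{2}\right) + \lambda\,\Phi(x,y).
    \end{aligned}
    ```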

  5. A depth-first search algorithm to compute elementary flux modes by linear programming

    PubMed Central

    2014-01-01

    Background The decomposition of complex metabolic networks into elementary flux modes (EFMs) provides a useful framework for exploring reaction interactions systematically. Generating a complete set of EFMs for large-scale models, however, is near impossible. Even for moderately-sized models (<400 reactions), existing approaches based on the Double Description method must iterate through a large number of combinatorial candidates, thus imposing an immense processor and memory demand. Results Based on an alternative elementarity test, we developed a depth-first search algorithm using linear programming (LP) to enumerate EFMs in an exhaustive fashion. Constraints can be introduced to directly generate a subset of EFMs satisfying the set of constraints. The depth-first search algorithm has a constant memory overhead. Using flux constraints, a large LP problem can be massively divided and parallelized into independent sub-jobs for deployment into computing clusters. Since the sub-jobs do not overlap, the approach scales to utilize all available computing nodes with minimal coordination overhead or memory limitations. Conclusions The speed of the algorithm was comparable to efmtool, a mainstream Double Description method, when enumerating all EFMs; the attrition power gained from performing flux feasibility tests offsets the increased computational demand of running an LP solver. Unlike the Double Description method, the algorithm enables accelerated enumeration of all EFMs satisfying a set of constraints. PMID:25074068
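
    The flux feasibility test that underpins such an approach can be sketched on a toy stoichiometric matrix: fix selected reactions to be active or inactive and ask whether any non-negative steady-state flux vector remains; this illustrates the LP subproblem only, not the full depth-first enumeration:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix S (metabolites x reactions), irreversible reactions.
    S = np.array([[ 1, -1,  0,  0],
                  [ 0,  1, -1, -1]])
    n = S.shape[1]

    def feasible(active=(), inactive=()):
        """Is there a steady-state flux (S v = 0, v >= 0) with the given reactions on/off?"""
        bounds = [(0.0, 1000.0)] * n
        for j in active:
            bounds[j] = (1.0, 1000.0)       # force a minimum flux through reaction j
        for j in inactive:
            bounds[j] = (0.0, 0.0)          # switch reaction j off
        res = linprog(c=np.zeros(n), A_eq=S, b_eq=np.zeros(S.shape[0]),
                      bounds=bounds, method="highs")
        return res.status == 0

    print(feasible(active=[0, 2]))                 # route through reaction 2: feasible
    print(feasible(active=[0], inactive=[2, 3]))   # no outlet for the second metabolite: infeasible
    ```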

  6. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics has been developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations, and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.

  7. Using Participatory Action Research To Evaluate Programs Serving People with Severe Disabilities: Reflections from the Field.

    ERIC Educational Resources Information Center

    Stevens, Karen A.; Folchman, Ruth

    1998-01-01

    This article discusses challenges in using participatory action research (PAR) in the evaluation of programs that provide services and supports to people with severe disabilities. Challenges include the need for modification of the model, time constraints, issues around power and position, and inclusion of individuals with severe disabilities.…

  8. A Model for Determining School District Cash Flow Needs.

    ERIC Educational Resources Information Center

    Dembowski, Frederick L.

    This paper discusses a model to optimize cash management in school districts. A brief discussion of the cash flow pattern of school districts is followed by an analysis of the constraints faced by the school districts in their investment planning process. A linear programming model used to optimize net interest earnings on investments is developed…

  9. Accretion dynamics in pre-main sequence binaries

    NASA Astrophysics Data System (ADS)

    Tofflemire, B.; Mathieu, R.; Herczeg, G.; Ardila, D.; Akeson, R.; Ciardi, D.; Johns-Krull, C.

    Binary stars are a common outcome of star formation. Orbital resonances, especially in short-period systems, are capable of reshaping the distribution and flows of circumstellar material. Simulations of the binary-disk interaction predict a dynamically cleared gap around the central binary, accompanied by periodic "pulsed" accretion events that are driven by orbital motion. To place observational constraints on the binary-disk interaction, we have conducted a long-term monitoring program tracing the time-variable accretion behavior of 9 short-period binaries. In this proceeding we present two results from our campaign: 1) the detection of periodic pulsed accretion events in DQ Tau and TWA 3A, and 2) evidence that the TWA 3A primary is the dominant accretor in the system.

  10. Photosynthetic Control of Atmospheric Carbonyl Sulfide during the Growing Season

    NASA Technical Reports Server (NTRS)

    Campbell, J. Elliott; Carmichael, Gregory R.; Chai, T.; Mena-Carrasco, M.; Tang, Y.; Blake, D. R.; Blake, N. J.; Vay, Stephanie A.; Collatz, G. James; Baker, I.

    2008-01-01

    Climate models incorporate photosynthesis-climate feedbacks, yet we lack robust tools for large-scale assessments of these processes. Recent work suggests that carbonyl sulfide (COS), a trace gas consumed by plants, could provide a valuable constraint on photosynthesis. Here we analyze airborne observations of COS and carbon dioxide concentrations during the growing season over North America with a three-dimensional atmospheric transport model. We successfully modeled the persistent vertical drawdown of atmospheric COS using the quantitative relation between COS and photosynthesis that has been measured in plant chamber experiments. Furthermore, this drawdown is driven by plant uptake rather than other continental and oceanic fluxes in the model. These results provide quantitative evidence that COS gradients in the continental growing season may have broad use as a measurement-based photosynthesis tracer.

  11. Nutrient profiling can help identify foods of good nutritional quality for their price: a validation study with linear programming.

    PubMed

    Maillot, Matthieu; Ferguson, Elaine L; Drewnowski, Adam; Darmon, Nicole

    2008-06-01

    Nutrient profiling ranks foods based on their nutrient content; such profiles may help identify foods of good nutritional quality for their price. This hypothesis was tested using diet modeling with linear programming. Analyses were undertaken using food intake data from the nationally representative French INCA (enquête Individuelle et Nationale sur les Consommations Alimentaires) survey and its associated food composition and price database. For each food, a nutrient profile score was defined as the ratio between the previously published nutrient density score (NDS) and the limited nutrient score (LIM); a nutritional quality for price indicator was developed and calculated from the relationship between its NDS:LIM and energy cost (in euro/100 kcal). We developed linear programming models to design diets that fulfilled increasing levels of nutritional constraints at a minimal cost. The median NDS:LIM values of foods selected in modeled diets increased as the levels of nutritional constraints increased (P = 0.005). In addition, the proportion of foods with a good nutritional quality for price indicator was higher (P < 0.0001) among foods selected (81%) than among foods not selected (39%) in modeled diets. This agreement between the linear programming and the nutrient profiling approaches indicates that nutrient profiling can help identify foods of good nutritional quality for their price. Linear programming is a useful tool for testing nutrient profiling systems and validating the concept of nutrient profiling.
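
    The underlying linear program is the classic least-cost diet model: minimize total cost subject to lower and upper bounds on nutrients (the study's models include many additional constraints); a toy sketch with hypothetical foods, prices, and nutrient values:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Columns: hypothetical foods; rows: energy (kcal) and protein (g) per 100 g.
    nutrients = np.array([[ 350,  80, 150, 550],    # energy
                          [  12,   2,   8,  20]])   # protein
    price = np.array([0.30, 0.20, 0.45, 0.90])      # euro per 100 g

    needs_min = np.array([2000, 50])                # daily requirements
    energy_max = 2600                               # upper limit on energy

    # minimize price @ x  s.t.  nutrients @ x >= needs_min,  energy @ x <= energy_max,  x >= 0
    A_ub = np.vstack([-nutrients, nutrients[0:1]])
    b_ub = np.concatenate([-needs_min, [energy_max]])
    res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
    print("portions (x100 g):", np.round(res.x, 2), " cost:", round(res.fun, 2), "euro")
    ```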

  12. A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia.

    PubMed

    Chang, Sarah R; Nandor, Mark J; Li, Lu; Kobetic, Rudi; Foglyano, Kevin M; Schnellenberger, John R; Audu, Musa L; Pinault, Gilles; Quinn, Roger D; Triolo, Ronald J

    2017-05-30

    Functional neuromuscular stimulation, lower limb orthosis, powered lower limb exoskeleton, and hybrid neuroprosthesis (HNP) technologies can restore stepping in individuals with paraplegia due to spinal cord injury (SCI). However, a self-contained, muscle-driven, controllable exoskeleton based on an implanted neural stimulator has not previously been demonstrated for restoring walking; such an approach could potentially allow use outside the laboratory and be viable for long-term use or clinical testing. In this work, we designed and evaluated an untethered muscle-driven controllable exoskeleton to restore stepping in three individuals with paralysis from SCI. The self-contained HNP combined neural stimulation to activate the paralyzed muscles and generate joint torques for limb movements with a controllable lower limb exoskeleton to stabilize and support the user. An onboard controller processed exoskeleton sensor signals, determined appropriate exoskeletal constraints and stimulation commands for a finite state machine (FSM), and transmitted data over Bluetooth to an off-board computer for real-time monitoring and data recording. The FSM coordinated stimulation and exoskeletal constraints to enable functions, selected with a wireless finger switch user interface, for standing up, standing, stepping, or sitting down. In the stepping function, the FSM used a sensor-based gait event detector to determine transitions between gait phases of double stance, early swing, late swing, and weight acceptance. The HNP restored stepping in three individuals with motor complete paralysis due to SCI. The controller appropriately coordinated stimulation and exoskeletal constraints using the sensor-based FSM for subjects with different stimulation systems. The average ranges of motion at the hip and knee joints during walking were 8.5°-20.8° and 14.0°-43.6°, respectively. Walking speeds varied from 0.03 to 0.06 m/s, and cadences from 10 to 20 steps/min. A self-contained muscle-driven exoskeleton was a feasible intervention to restore stepping in individuals with paraplegia due to SCI. The untethered hybrid system was capable of adjusting to different individuals' needs to appropriately coordinate exoskeletal constraints with muscle activation using a sensor-driven FSM for stepping. Further improvements for out-of-the-laboratory use should include implantation of plantar flexor muscles to improve walking speed and power assist as needed at the hips and knees to maintain walking as muscles fatigue.
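
    A schematic of a sensor-driven finite state machine of this kind, with the four gait phases named in the abstract; the transition conditions and thresholds are simplified and hypothetical, not the published controller:

    ```python
    # Gait phases and simplified sensor-based transition rules for one stride.
    TRANSITIONS = {
        # state                condition (from sensors)                    meaning
        "double_stance":     lambda s: s["left_load"] < 0.1,             # left foot unloads
        "early_swing":       lambda s: s["left_hip_angle"] > 20.0,       # hip flexion reached
        "late_swing":        lambda s: s["left_load"] > 0.3,             # heel strike detected
        "weight_acceptance": lambda s: s["right_load"] < 0.9,            # load shifts forward
    }
    NEXT = {"double_stance": "early_swing", "early_swing": "late_swing",
            "late_swing": "weight_acceptance", "weight_acceptance": "double_stance"}

    def step_fsm(state, sensors):
        """Advance the FSM; each state would also set stimulation and joint constraints."""
        if TRANSITIONS[state](sensors):
            state = NEXT[state]
            # e.g., command_stimulation(state); set_exoskeleton_constraints(state)
        return state

    state = "double_stance"
    for sensors in [{"left_load": 0.05, "left_hip_angle": 5,  "right_load": 1.0},
                    {"left_load": 0.0,  "left_hip_angle": 25, "right_load": 1.0},
                    {"left_load": 0.4,  "left_hip_angle": 30, "right_load": 1.0}]:
        state = step_fsm(state, sensors)
        print(state)
    ```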

  13. Using New Remotely-sensed Biomass To Estimate CO2 Fluxes Over Siberia

    NASA Astrophysics Data System (ADS)

    Lafont, S.; Kergoat, L.; Dedieu, G.; Le Toan, T.

    Two European programs recently focused on Siberia. The first, Eurosiberian Carbonflux, was a feasibility study for an observation system of regional CO2 fluxes. The second, SIBERIA, was a major effort to develop and validate a biomass map of Siberia using radar data from satellites (J-ERS, ERS). Here, we extend the simulation of NPP performed for the first program by using the biomass data from the second program. The TURC model used here is a global NPP model based on light use efficiency, in which photosynthetic assimilation is driven by a satellite vegetation index and autotrophic respiration is driven by biomass. In this study, we present a zoom on the Siberian region. The TURC model was run at a fine resolution (a few kilometers) with a daily time step. We discuss the impact of the new biomass dataset on the estimation of Net Primary Productivity (NPP) and CO2 fluxes.
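    A toy sketch of a light-use-efficiency NPP calculation with biomass-driven autotrophic respiration, in the spirit of the TURC model described above. The parameter values and forcing are illustrative placeholders, not the TURC parameterization or the SIBERIA biomass data.

```python
import numpy as np

def daily_npp(apar, biomass, temp_c, epsilon=1.1, r0=0.002, q10=2.0):
    """Toy light-use-efficiency NPP (gC/m2/day).

    apar    : absorbed photosynthetically active radiation (MJ/m2/day),
              typically derived from a satellite vegetation index
    biomass : living biomass (gC/m2), e.g. from a radar biomass map
    temp_c  : daily mean air temperature (deg C)
    epsilon : light-use efficiency (gC per MJ APAR)  -- illustrative value
    r0, q10 : base maintenance-respiration rate and its temperature sensitivity
    """
    gpp = epsilon * apar
    ra = r0 * biomass * q10 ** ((temp_c - 20.0) / 10.0)   # autotrophic respiration
    return gpp - ra

# One grid cell over a year of synthetic daily forcing
days = np.arange(365)
season = np.clip(np.sin(np.pi * (days - 80) / 200), 0, None)   # crude growing season
apar = 8.0 * season
temp = -20.0 + 35.0 * season
annual_npp = daily_npp(apar, biomass=4500.0, temp_c=temp).sum()
print(f"annual NPP ~ {annual_npp:.0f} gC/m2/yr")
```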

  14. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization problem with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique can largely reduce the unexpected increase of the objective function value that is inevitably caused by constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure a stable iteration convergence. The new algorithm is tested on four cases (head-and-neck, prostate, lung and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. To some extent, it is a more efficient optimization technique for choosing constraints than the dose sorting method. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
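    A much-simplified sketch of the iterative scheme described above: solve a nonnegativity-constrained quadratic fluence problem, then repeatedly add hard dose caps for organ-at-risk voxels that violate a dose-volume constraint and re-solve until the constraint holds. For simplicity the violating voxel to cap is chosen by dose sorting rather than the paper's geometric distance sorting, and the influence matrices are random placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy influence matrices (dose per unit beamlet fluence); purely illustrative.
n_beamlets = 20
D_target = rng.uniform(0.5, 1.5, size=(30, n_beamlets))   # target voxels
D_oar = rng.uniform(0.0, 1.0, size=(40, n_beamlets))      # organ-at-risk voxels
d_presc, d_max, max_over = 60.0, 30.0, 8   # DV constraint: at most 8 OAR voxels above 30 Gy

def solve(capped_rows):
    """Quadratic fluence optimization with hard dose caps on selected OAR voxels."""
    obj = lambda x: np.sum((D_target @ x - d_presc) ** 2)
    cons = []
    if capped_rows:
        A = D_oar[capped_rows]
        cons.append({"type": "ineq", "fun": lambda x, A=A: d_max - A @ x})
    x0 = np.full(n_beamlets, d_presc / D_target.sum(axis=1).mean())
    return minimize(obj, x0, bounds=[(0.0, None)] * n_beamlets,
                    constraints=cons, method="SLSQP").x

capped_rows = []
for it in range(25):
    x = solve(capped_rows)
    oar_dose = D_oar @ x
    if (oar_dose > d_max + 1e-6).sum() <= max_over:
        break
    # Cap the currently worst uncapped voxel (dose sorting stands in here for
    # the geometric distance sorting proposed in the paper).
    uncapped = [i for i in np.flatnonzero(oar_dose > d_max + 1e-6) if i not in capped_rows]
    capped_rows.append(int(uncapped[int(np.argmax(oar_dose[uncapped]))]))

print(f"iterations: {it + 1}, capped voxels: {len(capped_rows)}, "
      f"OAR voxels above {d_max} Gy: {int((D_oar @ x > d_max).sum())}")
```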

  15. How Evolution May Work Through Curiosity-Driven Developmental Process.

    PubMed

    Oudeyer, Pierre-Yves; Smith, Linda B

    2016-04-01

    Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. Copyright © 2016 Cognitive Science Society, Inc.
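    A toy sketch of the curiosity-driven selection principle described above: activities are sampled probabilistically in proportion to recent learning progress (the drop in prediction error), so the learner concentrates on what is currently learnable and moves away from both mastered and unlearnable activities. The activities and error dynamics are synthetic placeholders, not the robotic setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Three "activities" with different learnability: errors decay at different rates.
true_decay = {"reach_object": 0.97, "vocalize": 0.999, "unlearnable_noise": 1.0}
errors = {k: [1.0] for k in true_decay}

def learning_progress(err_history, window=10):
    """Recent drop in prediction error; used as the intrinsic reward signal."""
    h = err_history[-window:]
    return max(h[0] - h[-1], 0.0)

choices = []
for t in range(500):
    # Softmax over learning progress -> probabilistic activity selection.
    lp = np.array([learning_progress(errors[k]) for k in errors])
    p = np.exp(lp / 0.02)
    p /= p.sum()
    k = rng.choice(list(errors), p=p)
    choices.append(k)
    # Practising an activity reduces its prediction error (synthetic dynamics).
    errors[k].append(errors[k][-1] * true_decay[k] + rng.normal(0, 0.002))

for k in errors:
    print(f"{k:18s} practised {choices.count(k):3d} times, final error {errors[k][-1]:.3f}")
```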

  16. Energy efficiency drives the global seasonal distribution of birds.

    PubMed

    Somveille, Marius; Rodrigues, Ana S L; Manica, Andrea

    2018-06-01

    The uneven distribution of biodiversity on Earth is one of the most general and puzzling patterns in ecology. Many hypotheses have been proposed to explain it, based on evolutionary processes or on constraints related to geography and energy. However, previous studies investigating these hypotheses have been largely descriptive due to the logistical difficulties of conducting controlled experiments on such large geographical scales. Here, we use bird migration-the seasonal redistribution of approximately 15% of bird species across the world-as a natural experiment for testing the species-energy relationship, the hypothesis that animal diversity is driven by energetic constraints. We develop a mechanistic model of bird distributions across the world, and across seasons, based on simple ecological and energetic principles. Using this model, we show that bird species distributions optimize the balance between energy acquisition and energy expenditure while taking into account competition with other species. These findings support, and provide a mechanistic explanation for, the species-energy relationship. The findings also provide a general explanation of migration as a mechanism that allows birds to optimize their energy budget in the face of seasonality and competition. Finally, our mechanistic model provides a tool for predicting how ecosystems will respond to global anthropogenic change.

  17. The Need and Challenges for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.

    2013-01-01

    The presentation describes the challenges facing the turbine engine control system. These challenges are primarily driven by a dependence on commercial electronics and an increasingly severe environment on board the turbine engine. The need for distributed control is driven by the need to overcome these system constraints and develop a new growth path for control technology and, as a result, improved turbine engine performance.

  18. Developing a habitat-driven approach to CWWT design

    USGS Publications Warehouse

    Sartoris, James J.; Thullen, Joan S.

    1998-01-01

    A habitat-driven approach to CWWT design is defined as designing the constructed wetland to maximize habitat values for a given site within the constraints of meeting specified treatment criteria. This is in contrast to the more typical approach of designing the CWWT to maximize treatment efficiency, and then, perhaps, adding wildlife habitat features. The habitat-driven approach is advocated for two reasons: (1) because good wetland habitat is critically lacking, and (2) because it is hypothesized that well-designed habitat will result in good, sustainable wastewater treatment.

  19. Curiosity driven reinforcement learning for motion planning on humanoids

    PubMed Central

    Frank, Mikhail; Leitner, Jürgen; Stollenga, Marijn; Förster, Alexander; Schmidhuber, Jürgen

    2014-01-01

    Most previous work on artificial curiosity (AC) and intrinsic motivation focuses on basic concepts and theory. Experimental results are generally limited to toy scenarios, such as navigation in a simulated maze, or control of a simple mechanical system with one or two degrees of freedom. To study AC in a more realistic setting, we embody a curious agent in the complex iCub humanoid robot. Our novel reinforcement learning (RL) framework consists of a state-of-the-art, low-level, reactive control layer, which controls the iCub while respecting constraints, and a high-level curious agent, which explores the iCub's state-action space through information gain maximization, learning a world model from experience, controlling the actual iCub hardware in real-time. To the best of our knowledge, this is the first ever embodied, curious agent for real-time motion planning on a humanoid. We demonstrate that it can learn compact Markov models to represent large regions of the iCub's configuration space, and that the iCub explores intelligently, showing interest in its physical constraints as well as in objects it finds in its environment. PMID:24432001

  20. Reconciling pairs of concurrently used clinical practice guidelines using Constraint Logic Programming.

    PubMed

    Wilk, Szymon; Michalowski, Martin; Michalowski, Wojtek; Hing, Marisela Mainegra; Farion, Ken

    2011-01-01

    This paper describes a new methodological approach to reconciling adverse and contradictory activities (called points of contention) occurring when a patient is managed according to two or more concurrently used clinical practice guidelines (CPGs). The need to address these inconsistencies occurs when a patient with more than one disease, each of which is a comorbid condition, has to be managed according to different treatment regimens. We propose an automatic procedure that constructs a mathematical guideline model using the Constraint Logic Programming (CLP) methodology, uses this model to identify and mitigate encountered points of contention, and revises the considered CPGs accordingly. The proposed procedure is used as an alerting mechanism and coupled with a guideline execution engine warns the physician about potential problems with the concurrent application of two or more guidelines. We illustrate the operation of our procedure in a clinical scenario describing simultaneous use of CPGs for duodenal ulcer and transient ischemic attack.
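    The paper itself uses Constraint Logic Programming; the following is only a much simpler, plain-Python constraint-satisfaction sketch of the same idea: enumerate combinations of actions prescribed by two concurrently applied guidelines and keep those that avoid predefined points of contention. The guideline actions and contention pairs are invented for illustration, not taken from the duodenal ulcer or transient ischemic attack CPGs.

```python
from itertools import product

# Invented treatment decisions from two concurrently applied guidelines.
guidelines = {
    ("ulcer", "antiplatelet"):     ["stop_aspirin", "continue_aspirin"],
    ("ulcer", "acid_suppression"): ["ppi", "h2_blocker"],
    ("tia",   "antiplatelet"):     ["aspirin", "clopidogrel"],
    ("tia",   "prevention"):       ["statin"],
}

# Points of contention: pairs of actions that are adverse or contradictory together.
contentions = {frozenset({"stop_aspirin", "aspirin"}),
               frozenset({"continue_aspirin", "clopidogrel"})}

def consistent(actions):
    return not any(frozenset({a, b}) in contentions
                   for a in actions for b in actions if a != b)

names, domains = list(guidelines), list(guidelines.values())
plans = [dict(zip(names, combo)) for combo in product(*domains) if consistent(combo)]
for plan in plans:
    print(plan)   # each plan is free of the listed points of contention
```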

  1. Academic program models for undergraduate biomedical engineering.

    PubMed

    Krishnan, Shankar M

    2014-01-01

    There is a proliferation of medical devices across the globe for the diagnosis and therapy of diseases. Biomedical engineering (BME) plays a significant role in healthcare and advancing medical technologies thus creating a substantial demand for biomedical engineers at undergraduate and graduate levels. There has been a surge in undergraduate programs due to increasing demands from the biomedical industries to cover many of their segments from bench to bedside. With the requirement of multidisciplinary training within allottable duration, it is indeed a challenge to design a comprehensive standardized undergraduate BME program to suit the needs of educators across the globe. This paper's objective is to describe three major models of undergraduate BME programs and their curricular requirements, with relevant recommendations to be applicable in institutions of higher education located in varied resource settings. Model 1 is based on programs to be offered in large research-intensive universities with multiple focus areas. The focus areas depend on the institution's research expertise and training mission. Model 2 has basic segments similar to those of Model 1, but the focus areas are limited due to resource constraints. In this model, co-op/internship in hospitals or medical companies is included which prepares the graduates for the work place. In Model 3, students are trained to earn an Associate Degree in the initial two years and they are trained for two more years to be BME's or BME Technologists. This model is well suited for the resource-poor countries. All three models must be designed to meet applicable accreditation requirements. The challenges in designing undergraduate BME programs include manpower, facility and funding resource requirements and time constraints. Each academic institution has to carefully analyze its short term and long term requirements. In conclusion, three models for BME programs are described based on large universities, colleges, and community colleges. Model 1 is suitable for research-intensive universities. Models 2 and 3 can be successfully implemented in higher education institutions with low and limited resources with appropriate guidance and support from international organizations. The models will continually evolve mainly to meet the industry needs.

  2. Devil is in the details: Using logic models to investigate program process.

    PubMed

    Peyton, David J; Scicchitano, Michael

    2017-12-01

    Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool to communicate complex social programs, theory based logic models are an effective visual communication. However, after initial development, theory based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory driven logic model and developing detailed logic models that describe key activities to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Chemical modeling constraints on Martian surface mineralogies formed in an early, warm, wet climate, and speculations on the occurrence of phosphate minerals in the Martian regolith

    NASA Technical Reports Server (NTRS)

    Plumlee, Geoffrey S.; Ridley, W. Ian; Debraal, Jeffrey D.

    1992-01-01

    This is one in a series of reports summarizing our chemical modeling studies of water-rock-gas interactions at the martian surface through time. The purpose of these studies is to place constraints on possible mineralogies formed at the martian surface and to model the geochemical implications of martian surficial processes proposed by previous researchers. Plumlee and Ridley summarize geochemical processes that may have occurred as a result of inferred volcano- and impact-driven hydrothermal activity on Mars. DeBraal et al. model the geochemical aspects of water-rock interactions and water evaporation near 0 C, as a prelude to future calculations that will model sub-0 C brine-rock-clathrate interactions under the current martian climate. In this report, we discuss reaction path calculations that model chemical processes that may have occurred at the martian surface in a postulated early, warm, wet climate. We assume a temperature of 25 C in all our calculations. Processes we model here include (1) the reaction of rainwater under various ambient CO2 and O2 pressures with basaltic rocks at the martian surface, (2) the formation of acid rain by volcanic gases such as HCl and SO2, (3) the reactions of acid rain with basaltic surficial materials, and (4) evaporation of waters resulting from rainwater-basalt interactions.

  4. Exact simulation of integrate-and-fire models with exponential currents.

    PubMed

    Brette, Romain

    2007-10-01

    Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
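    A sketch of the polynomial-root idea for one simple case: a current-based integrate-and-fire neuron with a single exponential synaptic current, assuming the membrane time constant is an integer multiple of the synaptic one so that the threshold condition becomes a polynomial in z = exp(-t/tau_m). This is a simplified illustration, not the general algorithm of the note.

```python
import numpy as np

def next_spike_time(v0, i0, theta=1.0, tau_m=20.0, tau_s=5.0):
    """Exact next threshold crossing of a current-based integrate-and-fire neuron
    driven by one exponential synaptic current i(t) = i0 * exp(-t/tau_s).

    Membrane: tau_m dV/dt = -V + i(t), whose solution is
        V(t) = (v0 - a) * exp(-t/tau_m) + a * exp(-t/tau_s),  a = i0*tau_s/(tau_s - tau_m).
    With tau_m = k*tau_s (integer k >= 2) and z = exp(-t/tau_m), exp(-t/tau_s) = z**k,
    so V = theta becomes the polynomial a*z**k + (v0 - a)*z - theta = 0.
    Returns the earliest positive crossing time, or None if no spike occurs.
    """
    k = round(tau_m / tau_s)
    assert k >= 2 and abs(tau_m - k * tau_s) < 1e-12, \
        "tau_m must be an integer multiple (>= 2) of tau_s"
    a = i0 * tau_s / (tau_s - tau_m)
    coeffs = np.zeros(k + 1)
    coeffs[0] = a            # z**k term
    coeffs[k - 1] = v0 - a   # z term
    coeffs[k] = -theta       # constant term
    roots = np.roots(coeffs)
    # Keep real roots in (0, 1): z = exp(-t/tau_m) with t > 0.
    z = np.array([r.real for r in roots if abs(r.imag) < 1e-9 and 0.0 < r.real < 1.0])
    if z.size == 0:
        return None
    return float(-tau_m * np.log(z.max()))   # largest z corresponds to the earliest time

print(next_spike_time(v0=0.0, i0=8.0))   # suprathreshold input -> spike time in ms
print(next_spike_time(v0=0.0, i0=3.0))   # subthreshold input -> None
```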

  5. High-resolution seismic constraints on flow dynamics in the oceanic asthenosphere.

    PubMed

    Lin, Pei-Ying Patty; Gaherty, James B; Jin, Ge; Collins, John A; Lizarralde, Daniel; Evans, Rob L; Hirth, Greg

    2016-07-28

    Convective flow in the mantle and the motions of tectonic plates produce deformation of Earth's interior, and the rock fabric produced by this deformation can be discerned using the anisotropy of the seismic wave speed. This deformation is commonly inferred close to lithospheric boundaries beneath the ocean in the uppermost mantle, including near seafloor-spreading centres as new plates are formed via corner flow, and within a weak asthenosphere that lubricates large-scale plate-driven flow and accommodates smaller scale convection. Seismic models of oceanic upper mantle differ as to the relative importance of these deformation processes: seafloor spreading fabric is very strong just beneath the crust-mantle boundary (the Mohorovičić discontinuity, or Moho) at relatively local scales, but at the global and ocean-basin scales, oceanic lithosphere typically appears weakly anisotropic when compared to the asthenosphere. Here we use Rayleigh waves, recorded across an ocean-bottom seismograph array in the central Pacific Ocean (the NoMelt Experiment), to provide unique localized constraints on seismic anisotropy within the oceanic lithosphere-asthenosphere system in the middle of a plate. We find that azimuthal anisotropy is strongest within the high-seismic-velocity lid, with the fast direction coincident with seafloor spreading. A minimum in the magnitude of azimuthal anisotropy occurs within the middle of the seismic low-velocity zone, and then increases with depth below the weakest portion of the asthenosphere. At no depth does the fast direction correlate with the apparent plate motion. Our results suggest that the highest strain deformation in the shallow oceanic mantle occurs during corner flow at the ridge axis, and via pressure-driven or buoyancy-driven flow within the asthenosphere. Shear associated with motion of the plate over the underlying asthenosphere, if present, is weak compared to these other processes.

  6. Software For Computing Reliability Of Other Software

    NASA Technical Reports Server (NTRS)

    Nikora, Allen; Antczak, Thomas M.; Lyu, Michael

    1995-01-01

    Computer Aided Software Reliability Estimation (CASRE) computer program developed for use in measuring reliability of other software. Easier for non-specialists in reliability to use than many other currently available programs developed for same purpose. CASRE incorporates mathematical modeling capabilities of public-domain Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) computer program and runs in Windows software environment. Provides menu-driven command interface; enabling and disabling of menu options guides user through (1) selection of set of failure data, (2) execution of mathematical model, and (3) analysis of results from model. Written in C language.

  7. Biophysical modeling of the temporal niche: from first principles to the evolution of activity patterns.

    PubMed

    Levy, Ofir; Dayan, Tamar; Kronfeld-Schor, Noga; Porter, Warren P

    2012-06-01

    Most mammals can be characterized as nocturnal or diurnal. However infrequently, species may overcome evolutionary constraints and alter their activity patterns. We modeled the fundamental temporal niche of a diurnal desert rodent, the golden spiny mouse, Acomys russatus. This species can shift into nocturnal activity in the absence of its congener, the common spiny mouse, Acomys cahirinus, suggesting that it was competitively driven into diurnality and that this shift in a small desert rodent may involve physiological costs. Therefore, we compared metabolic costs of diurnal versus nocturnal activity using a biophysical model to evaluate the preferred temporal niche of this species. The model predicted that energy expenditure during foraging is almost always lower during the day except during midday in summer at the less sheltered microhabitat. We also found that a shift in summer to foraging in less sheltered microhabitats in response to predation pressure and food availability involves a significant physiological cost moderated by midday reduction in activity. Thus, adaptation to diurnality may reflect the "ghost of competition past"; climate-driven diurnality is an alternative but less likely hypothesis. While climate is considered to play a major role in the physiology and evolution of mammals, this is the first study to model its potential to affect the evolution of activity patterns of mammals.

  8. Observed-Score Equating as a Test Assembly Problem.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Luecht, Richard M.

    1998-01-01

    Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)

  9. Food choices to meet nutrient recommendations for the adult Brazilian population based on the linear programming approach.

    PubMed

    Dos Santos, Quenia; Sichieri, Rosely; Darmon, Nicole; Maillot, Matthieu; Verly-Junior, Eliseu

    2018-06-01

    To identify optimal food choices that meet nutritional recommendations to reduce prevalence of inadequate nutrient intakes. Linear programming was used to obtain an optimized diet with sixty-eight foods with the least difference from the observed population mean dietary intake while meeting a set of nutritional goals that included reduction in the prevalence of inadequate nutrient intakes to ≤20 %. Brazil. Participants (men and women, n 25 324) aged 20 years or more from the first National Dietary Survey (NDS) 2008-2009. A feasible solution to the model was not found when all constraints were imposed; infeasible nutrients were Ca, vitamins D and E, Mg, Zn, fibre, linolenic acid, monounsaturated fat and Na. A feasible solution was obtained after relaxing the nutritional constraints for these limiting nutrients by including a deviation variable in the model. Estimated prevalence of nutrient inadequacy was reduced by 60-70 % for most nutrients, and mean saturated and trans-fat decreased in the optimized diet meeting the model constraints. The optimized diet was characterized by increases especially in fruits (+92 g), beans (+64 g), vegetables (+43 g), milk (+12 g), fish and seafood (+15 g) and whole cereals (+14 g), and reductions of sugar-sweetened beverages (-90 g), rice (-63 g), snacks (-14 g), red meat (-13 g) and processed meat (-9·7 g). Linear programming is a unique tool to identify which changes in the current diet can increase nutrient intake and place the population at lower risk of nutrient inadequacy. Reaching nutritional adequacy for all nutrients would require major dietary changes in the Brazilian diet.
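    A toy sketch of the deviation-variable trick described above: an otherwise infeasible nutrient constraint is relaxed by adding a penalized slack to the linear program, so a solution close to the observed diet is still obtained while the shortfall is made explicit. The foods, nutrient values and targets are invented, not NDS data.

```python
import numpy as np
from scipy.optimize import linprog

foods = ["rice", "beans", "fish", "fruit"]
obs   = np.array([2.0, 1.0, 0.2, 0.5])   # observed intake, 100 g portions/day
fibre = np.array([0.4, 8.0, 0.0, 2.4])   # g per portion
vit_d = np.array([0.0, 0.0, 5.0, 0.0])   # micrograms per portion
fibre_min, vit_d_min, penalty = 25.0, 30.0, 100.0
n = len(foods)

# Decision vector: [x (intakes), u (|x - obs| helpers), s (vitamin D shortfall)].
c = np.concatenate([np.zeros(n), np.ones(n), [penalty]])
A_ub, b_ub = [], []
for i in range(n):                      # linearize u_i >= |x_i - obs_i|
    row = np.zeros(2 * n + 1)
    row[i], row[n + i] = 1.0, -1.0      # x_i - u_i <= obs_i
    A_ub.append(row); b_ub.append(obs[i])
    row = np.zeros(2 * n + 1)
    row[i], row[n + i] = -1.0, -1.0     # -x_i - u_i <= -obs_i
    A_ub.append(row); b_ub.append(-obs[i])
A_ub.append(np.concatenate([-fibre, np.zeros(n), [0.0]]));  b_ub.append(-fibre_min)
A_ub.append(np.concatenate([-vit_d, np.zeros(n), [-1.0]])); b_ub.append(-vit_d_min)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 4)] * n + [(0, None)] * (n + 1), method="highs")
x, shortfall = res.x[:n], res.x[-1]
print({f: round(v, 2) for f, v in zip(foods, x)},
      "| vitamin D shortfall:", round(shortfall, 1))
```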

  10. Introducing health gains in location-allocation models: A stochastic model for planning the delivery of long-term care

    NASA Astrophysics Data System (ADS)

    Cardoso, T.; Oliveira, M. D.; Barbosa-Póvoa, A.; Nickel, S.

    2015-05-01

    Although the maximization of health is a key objective in health care systems, location-allocation literature has not yet considered this dimension. This study proposes a multi-objective stochastic mathematical programming approach to support the planning of a multi-service network of long-term care (LTC), both in terms of services location and capacity planning. This approach is based on a mixed integer linear programming model with two objectives - the maximization of expected health gains and the minimization of expected costs - with satisficing levels in several dimensions of equity - namely, equity of access, equity of utilization, socioeconomic equity and geographical equity - being imposed as constraints. The augmented ε-constraint method is used to explore the trade-off between these conflicting objectives, with uncertainty in the demand and delivery of care being accounted for. The model is applied to analyze the (re)organization of the LTC network currently operating in the Great Lisbon region in Portugal for the 2014-2016 period. Results show that extending the network of LTC is a cost-effective investment.
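    A toy illustration of the ε-constraint idea used above (the study applies the augmented variant to a mixed integer model): maximize expected health gains while the expected cost is held below a bound ε that is swept across a range, tracing the trade-off frontier between the two objectives. The service data are invented and the equity constraints are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Candidate long-term-care services (columns): expected health gain and cost
# per unit of capacity, plus a capacity cap per service. All numbers invented.
gain = np.array([4.0, 2.5, 6.0, 1.5])      # QALY-like gain per unit capacity
cost = np.array([10.0, 4.0, 18.0, 2.0])    # cost per unit capacity
cap  = np.array([30.0, 50.0, 20.0, 80.0])  # maximum capacity per service

def max_gain_given_budget(eps):
    # linprog minimizes, so minimize -gain subject to cost @ x <= eps.
    res = linprog(-gain, A_ub=cost[None, :], b_ub=[eps],
                  bounds=list(zip(np.zeros(4), cap)), method="highs")
    return -res.fun, res.x

# Sweep the cost bound (the epsilon of the epsilon-constraint method).
for eps in np.linspace(100, 900, 5):
    g, x = max_gain_given_budget(eps)
    print(f"budget <= {eps:6.0f}: expected gain {g:7.1f}, capacities {np.round(x, 1)}")
```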

  11. Symmetry breaking and un-breaking in microhydrodynamical systems: Swimming, pumping and bio-ballistics

    NASA Astrophysics Data System (ADS)

    Roper, Marcus Leigh

    This thesis describes the numerical and asymptotic analysis of symmetry breaking phenomena in three fluid dynamical systems. The first part concerns modeling of a micrometer sized swimming device, comprising a filament composed of superparamagnetic micron-sized beads and driven by an applied magnetic field. The swimming mechanics are deciphered in order to show how actuation by a spatially-homogeneous but temporally-varying torque leads to propagation of a bending wave along the filament and thence to propulsion. Absence of swimming unless the lateral symmetry of the filament is broken by tethering one end to a high drag body is explained. The model is used to determine whether, and to what extent, the micro-swimmer behaves like a flagellated eukaryotic cell. The second part concerns modeling of locomotion using a reversible stroke. Although forbidden at low Reynolds numbers, such symmetric gaits are favored by some microscopic planktonic swimmers. We analyze the constraints upon generation of propulsive force by such swimmers using a numerical model for a flapped limb. Effective locomotion is shown to be possible at arbitrarily low rates of energy expenditure, escaping a formerly postulated time-symmetry constraint, if the limb is shaped in order to exploit slow inertial-streaming eddies. Finally we consider the evolution of explosively launched ascomycete spores toward perfect projectile shapes---bodies that are designed to experience minimum drag in flight---using the variance of spore shapes between species in order to quantify the stiffness of the drag minimization constraint. A surprising observation about the persistent fore-aft symmetry of perfect projectiles, even up to Reynolds numbers great enough that the flow around the projectile is highly asymmetric, points both toward a model for spore ontogeny and to a novel linear approximation for moderate Reynolds flows.

  12. Model driven development of clinical information systems using openEHR.

    PubMed

    Atalag, Koray; Yang, Hong Yul; Tempero, Ewan; Warren, Jim

    2011-01-01

    openEHR and the recent international standard (ISO 13606) defined a model driven software development methodology for health information systems. However there is little evidence in the literature describing implementation; especially for desktop clinical applications. This paper presents an implementation pathway using .Net/C# technology for Microsoft Windows desktop platforms. An endoscopy reporting application driven by openEHR Archetypes and Templates has been developed. A set of novel GUI directives has been defined and presented which guides the automatic graphical user interface generator to render widgets properly. We also reveal the development steps and important design decisions; from modelling to the final software product. This might provide guidance for other developers and form evidence required for the adoption of these standards for vendors and national programs alike.

  13. A predictive framework for evaluating models of semantic organization in free recall

    PubMed Central

    Morton, Neal W; Polyn, Sean M.

    2016-01-01

    Research in free recall has demonstrated that semantic associations reliably influence the organization of search through episodic memory. However, the specific structure of these associations and the mechanisms by which they influence memory search remain unclear. We introduce a likelihood-based model-comparison technique, which embeds a model of semantic structure within the context maintenance and retrieval (CMR) model of human memory search. Within this framework, model variants are evaluated in terms of their ability to predict the specific sequence in which items are recalled. We compare three models of semantic structure, latent semantic analysis (LSA), global vectors (GloVe), and word association spaces (WAS), and find that models using WAS have the greatest predictive power. Furthermore, we find evidence that semantic and temporal organization is driven by distinct item and context cues, rather than a single context cue. This finding provides important constraint for theories of memory search. PMID:28331243

  14. Fuzzy bi-objective preventive health care network design.

    PubMed

    Davari, Soheil; Kilic, Kemal; Ertek, Gurdal

    2015-09-01

    Preventive health care is unlike health care for acute ailments, as people are less alert to their unknown medical problems. In order to motivate the public and to attain desired participation levels for preventive programs, the attractiveness of the health care facility is a major concern. Health economics literature indicates that attractiveness of a facility is significantly influenced by proximity of the clients to it. Hence attractiveness is generally modelled as a function of distance. However, abundant empirical evidence suggests that other qualitative factors such as perceived quality, attractions nearby, amenities, etc. also influence attractiveness. Therefore, a realistic measure should incorporate the vagueness of the concept of attractiveness into the model. Public policy makers should also maintain equity among various neighborhoods, which is considered as a second objective. Finally, even though the general tendency in the literature is to focus on health benefits, cost effectiveness is still a factor that should be considered. In this paper, a fuzzy bi-objective model with budget constraints is developed. Later, by modelling the attractiveness by means of fuzzy triangular numbers and treating the budget constraint as a soft constraint, a modified (and more realistic) version of the model is introduced. Two solution methodologies, namely fuzzy goal programming and fuzzy chance constrained optimization, are proposed. Both the original and the modified models are solved within the framework of a case study in Istanbul, Turkey. In the case study, the Microsoft Bing Map is utilized in order to determine more accurate distance measures among the nodes.

  15. Sybil--efficient constraint-based modelling in R.

    PubMed

    Gelius-Dietrich, Gabriel; Desouki, Abdelmoneim Amer; Fritzemeier, Claus Jonathan; Lercher, Martin J

    2013-11-13

    Constraint-based analyses of metabolic networks are widely used to simulate the properties of genome-scale metabolic networks. Publicly available implementations tend to be slow, impeding large scale analyses such as the genome-wide computation of pairwise gene knock-outs, or the automated search for model improvements. Furthermore, available implementations cannot easily be extended or adapted by users. Here, we present sybil, an open source software library for constraint-based analyses in R; R is a free, platform-independent environment for statistical computing and graphics that is widely used in bioinformatics. Among other functions, sybil currently provides efficient methods for flux-balance analysis (FBA), MOMA, and ROOM that are about ten times faster than previous implementations when calculating the effect of whole-genome single gene deletions in silico on a complete E. coli metabolic model. Due to the object-oriented architecture of sybil, users can easily build analysis pipelines in R or even implement their own constraint-based algorithms. Based on its highly efficient communication with different mathematical optimisation programs, sybil facilitates the exploration of high-dimensional optimisation problems on small time scales. Sybil and all its dependencies are open source. Sybil and its documentation are available for download from the comprehensive R archive network (CRAN).
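    sybil itself is an R package; as a language-neutral illustration of the underlying linear program in flux-balance analysis (maximize an objective flux subject to steady-state mass balance Sv = 0 and flux bounds), here is a toy example in Python on an invented four-reaction network, together with a naive single-reaction knock-out screen. It is not an E. coli model and does not use the sybil API.

```python
import numpy as np
from scipy.optimize import linprog

# Tiny toy network.  Rows: metabolites A, B;  columns: reactions
#   R1: uptake (-> A),  R2: A -> B,  R3: B -> biomass (objective),  R4: B -> export
S = np.array([[ 1, -1,  0,  0],    # A
              [ 0,  1, -1, -1]])   # B
lb = np.array([0.0, 0.0, 0.0, 0.0])
ub = np.array([10.0, 100.0, 100.0, 100.0])   # uptake limited to 10 units
c  = np.array([0.0, 0.0, 1.0, 0.0])          # maximize biomass flux R3

res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]),
              bounds=list(zip(lb, ub)), method="highs")
print("optimal biomass flux:", -res.fun, "fluxes:", res.x)

# In-silico single "gene" (reaction) deletion: force one flux to zero and re-solve,
# analogous to the genome-wide knock-out screens mentioned above.
for knocked in range(S.shape[1]):
    b = [(0.0, 0.0) if j == knocked else (lb[j], ub[j]) for j in range(S.shape[1])]
    r = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=b, method="highs")
    print(f"knock out R{knocked + 1}: biomass = {(-r.fun) + 0.0:.1f}")
```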

  16. Regional Issue Identification and Assessment (RIIA). Volume III. Institutional barriers to developing power generation facilities in the Pacific Northwest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, F. A.; Sawyer, C. H.; Maxwell, J. H.

    1979-10-01

    The Regional Assessments Division in the US Department of Energy (DOE) has undertaken a program to assess the probable consequences of various national energy policies in regions of the United States and to evaluate the constraints on national energy policy imposed by conditions in these regions. The program is referred to as the Regional Issues Identification and Assessment (RIIA) Program. Currently the RIIA Program is evaluating the Trendlong Mid-Mid scenario, a pattern of energy development for 1985 and 1990 derived from the Project Independence Evaluation System (PIES) model. This scenario assumes a medium annual growth rate in both the national demand for and national supply of energy. It has been disaggregated to specify the generating capacity to be supplied by each energy source in each state. Pacific Northwest Laboratory (PNL) has the responsibility for evaluating the scenario for the Federal Region 10, consisting of Alaska, Idaho, Oregon, and Washington. PNL is identifying impacts and constraints associated with realizing the scenario in a variety of categories, including air and water quality impacts, health and safety effects, and socioeconomic impacts. This report summarizes the analysis of one such category: institutional constraints - defined to include legal, organizational, and political barriers to the achievement of the scenario in the Northwest.

  17. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. The two algorithms have different advantages and disadvantages when applied to the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, number of iterations and program simplicity in finding the optimal solution.
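    A minimal PSO sketch on a toy timetabling-style objective (particle positions are rounded to integer trip counts when evaluated), intended only to illustrate the velocity/position update loop; the demand figures, penalty weights and PSO parameters are invented and this is not the MIPBTP model of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy objective: choose an integer number of trips per time slot to cover an
# invented passenger demand profile while penalizing the total number of trips.
demand = np.array([4, 9, 15, 12, 6, 3])      # passengers x10 per slot
capacity_per_trip, cost_weight = 40, 1.0

def objective(trips):
    trips = np.maximum(np.rint(trips), 0)    # integer, non-negative trip counts
    unmet = np.maximum(demand * 10 - trips * capacity_per_trip, 0).sum()
    return unmet + cost_weight * trips.sum()

n_particles, n_dims, iters = 30, len(demand), 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and attraction weights

x = rng.uniform(0, 10, (n_particles, n_dims))    # positions (trips per slot)
v = rng.uniform(-1, 1, (n_particles, n_dims))    # velocities
pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_dims))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 20)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best trips per slot:", np.rint(gbest).astype(int), "objective:", objective(gbest))
```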

  18. Dark matter interpretations of ATLAS searches for the electroweak production of supersymmetric particles in √s = 8 TeV proton-proton collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aaboud, M.; Aad, G.; Abbott, B.

    2016-09-01

    A selection of searches by the ATLAS experiment at the LHC for the electroweak production of SUSY particles are used to study their impact on the constraints on dark matter candidates. The searches use 20 fb⁻¹ of proton-proton collision data at √s = 8 TeV. A likelihood-driven scan of a five-dimensional effective model focusing on the gaugino-higgsino and Higgs sector of the phenomenological minimal supersymmetric Standard Model is performed. This scan uses data from direct dark matter detection experiments, the relic dark matter density and precision flavour physics results. Further constraints from the ATLAS Higgs mass measurement and SUSY searches at LEP are also applied. A subset of models selected from this scan are used to assess the impact of the selected ATLAS searches in this five-dimensional parameter space. These ATLAS searches substantially impact those models for which the mass m(χ̃₁⁰) of the lightest neutralino is less than 65 GeV, excluding 86% of such models. The searches have limited impact on models with larger m(χ̃₁⁰) due to either heavy electroweakinos or compressed mass spectra where the mass splittings between the produced particles and the lightest supersymmetric particle is small.

  19. Dark matter interpretations of ATLAS searches for the electroweak production of supersymmetric particles in √s = 8 TeV proton-proton collisions

    DOE PAGES

    Aaboud, M.; Aad, G.; Abbott, B.; ...

    2016-09-30

    A selection of searches by the ATLAS experiment at the LHC for the electroweak production of SUSY particles are used to study their impact on the constraints on dark matter candidates. The searches use 20 fb⁻¹ of proton-proton collision data at √s = 8 TeV. A likelihood-driven scan of a five-dimensional effective model focusing on the gaugino-higgsino and Higgs sector of the phenomenological minimal supersymmetric Standard Model is performed. This scan uses data from direct dark matter detection experiments, the relic dark matter density and precision flavour physics results. Further constraints from the ATLAS Higgs mass measurement and SUSY searches at LEP are also applied. A subset of models selected from this scan are used to assess the impact of the selected ATLAS searches in this five-dimensional parameter space. These ATLAS searches substantially impact those models for which the mass m(χ̃₁⁰) of the lightest neutralino is less than 65 GeV, excluding 86% of such models. The searches have limited impact on models with larger m(χ̃₁⁰) due to either heavy electroweakinos or compressed mass spectra where the mass splittings between the produced particles and the lightest supersymmetric particle is small.

  20. Fisher information framework for time series modeling

    NASA Astrophysics Data System (ADS)

    Venkatesan, R. C.; Plastino, A.

    2017-08-01

    A robust prediction model invoking the Takens embedding theorem, whose working hypothesis is obtained via an inference procedure based on the minimum Fisher information principle, is presented. The coefficients of the ansatz, central to the working hypothesis, satisfy a time independent Schrödinger-like equation in a vector setting. The inference of (i) the probability density function of the coefficients of the working hypothesis and (ii) the establishment of a constraint-driven pseudo-inverse condition for the modeling phase of the prediction scheme is made, for the case of normal distributions, with the aid of the quantum mechanical virial theorem. The well-known reciprocity relations and the associated Legendre transform structure for the Fisher information measure (FIM, hereafter)-based model in a vector setting (with least square constraints) are self-consistently derived. These relations are demonstrated to yield an intriguing form of the FIM for the modeling phase, which defines the working hypothesis solely in terms of the observed data. Prediction cases are presented for time series obtained from: (i) the Mackey-Glass delay-differential equation, (ii) one ECG signal from the MIT-Beth Israel Deaconess Hospital (MIT-BIH) cardiac arrhythmia database, and (iii) one ECG signal from the Creighton University ventricular tachyarrhythmia database. The ECG samples were obtained from the Physionet online repository. These examples demonstrate the efficiency of the prediction model. Numerical examples for exemplary cases are provided.

  1. Explosively driven air blast in a conical shock tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Joel B., E-mail: joel.b.stewart2.civ@mail.mil; Pecora, Collin, E-mail: collin.r.pecora.civ@mail.mil

    2015-03-15

    Explosively driven shock tubes present challenges in terms of safety concerns and expensive upkeep of test facilities but provide more realistic approximations to the air blast resulting from free-field detonations than those provided by gas-driven shock tubes. Likewise, the geometry of conical shock tubes can naturally approximate a sector cut from a spherically symmetric blast, leading to a better agreement with the blast profiles of free-field detonations when compared to those provided by shock tubes employing constant cross sections. The work presented in this article documents the design, fabrication, and testing of an explosively driven conical shock tube whose goal was to closely replicate the blast profile seen from a larger, free-field detonation. By constraining the blast through a finite area, large blasts (which can add significant damage and safety constraints) can be simulated using smaller explosive charges. The experimental data presented herein show that a close approximation to the free-field air blast profile due to a 1.5 lb charge of C4 at 76 in. can be achieved by using a 0.032 lb charge in a 76-in.-long conical shock tube (which translates to an amplification factor of nearly 50). Modeling and simulation tools were used extensively in designing this shock tube to minimize expensive fabrication costs.

  2. Centralized Planning for Multiple Exploratory Robots

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Rabideau, Gregg; Chien, Steve; Barrett, Anthony

    2005-01-01

    A computer program automatically generates plans for a group of robotic vehicles (rovers) engaged in geological exploration of terrain. The program rapidly generates multiple command sequences that can be executed simultaneously by the rovers. Starting from a set of high-level goals, the program creates a sequence of commands for each rover while respecting hardware constraints and limitations on resources of each rover and of hardware (e.g., a radio communication terminal) shared by all the rovers. First, a separate model of each rover is loaded into a centralized planning subprogram. The centralized planning software uses the models of the rovers plus an iterative repair algorithm to resolve conflicts posed by demands for resources and by constraints associated with all the rovers and the shared hardware. During repair, heuristics are used to make planning decisions that will result in solutions that will be better and will be found faster than would otherwise be possible. In particular, techniques from prior solutions of the multiple-traveling-salesmen problem are used as heuristics to generate plans in which the paths taken by the rovers to assigned scientific targets are shorter than they would otherwise be.

  3. A population-based model for priority setting across the care continuum and across modalities

    PubMed Central

    Segal, Leonie; Mortimer, Duncan

    2006-01-01

    Background The Health-sector Wide (HsW) priority setting model is designed to shift the focus of priority setting away from 'program budgets' – that are typically defined by modality or disease-stage – and towards well-defined target populations with a particular disease/health problem. Methods The key features of the HsW model are i) a disease/health problem framework, ii) a sequential approach to covering the entire health sector, iii) comprehensiveness of scope in identifying intervention options and iv) the use of objective evidence. The HsW model redefines the unit of analysis over which priorities are set to include all mutually exclusive and complementary interventions for the prevention and treatment of each disease/health problem under consideration. The HsW model is therefore incompatible with the fragmented approach to priority setting across multiple program budgets that currently characterises allocation in many health systems. The HsW model employs standard cost-utility analyses and decision-rules with the aim of maximising QALYs contingent upon the global budget constraint for the set of diseases/health problems under consideration. It is recognised that the objective function may include non-health arguments that would imply a departure from simple QALY maximisation and that political constraints frequently limit degrees of freedom. In addressing these broader considerations, the HsW model can be modified to maximise value-weighted QALYs contingent upon the global budget constraint and any political constraints bearing upon allocation decisions. Results The HsW model has been applied in several contexts, recently to osteoarthritis, which has demonstrated both its practical application and its capacity to derive clear evidence-based policy recommendations. Conclusion Comparisons with other approaches to priority setting, such as Programme Budgeting and Marginal Analysis (PBMA) and modality-based cost-effectiveness comparisons, as typified by Australia's Pharmaceutical Benefits Advisory Committee process for the listing of pharmaceuticals for government funding, demonstrate the value added by the HsW model notably in its greater likelihood of contributing to allocative efficiency. PMID:16566841
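    As a toy sketch of the decision rule described above (maximize QALYs subject to a global budget constraint, choosing among mutually exclusive intervention options per disease/health problem), the snippet below does a small exhaustive search. The intervention names, costs and QALY gains are invented; a real application would use cost-utility evidence and, for large option sets, integer programming rather than enumeration.

```python
from itertools import product

# For each disease/health problem, mutually exclusive intervention options as
# (name, cost in $m, expected QALYs gained). All numbers invented for illustration.
options = {
    "osteoarthritis": [("usual care", 0, 0), ("program A", 12, 900), ("program B", 20, 1300)],
    "depression":     [("usual care", 0, 0), ("stepped care", 8, 700), ("intensive", 18, 1000)],
    "diabetes":       [("usual care", 0, 0), ("screening+", 10, 650), ("full package", 25, 1400)],
}
budget = 40   # global budget constraint ($m)

best = None
for combo in product(*options.values()):
    cost = sum(c for _, c, _ in combo)
    qalys = sum(q for _, _, q in combo)
    if cost <= budget and (best is None or qalys > best[0]):
        best = (qalys, cost, {d: name for d, (name, _, _) in zip(options, combo)})

print(f"QALYs {best[0]}, cost {best[1]}m:", best[2])
```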

  4. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.

  5. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on the potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under the constraints, including the displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and theoretical basis for designing green ships.

  6. A Novel Biobjective Risk-Based Model for Stochastic Air Traffic Network Flow Optimization Problem.

    PubMed

    Cai, Kaiquan; Jia, Yaoguang; Zhu, Yanbo; Xiao, Mingming

    2015-01-01

    Network-wide air traffic flow management (ATFM) is an effective way to alleviate demand-capacity imbalances globally and thereafter reduce airspace congestion and flight delays. The conventional ATFM models assume the capacities of airports or airspace sectors are all predetermined. However, the capacity uncertainties due to the dynamics of convective weather may make the deterministic ATFM measures impractical. This paper investigates the stochastic air traffic network flow optimization (SATNFO) problem, which is formulated as a weighted biobjective 0-1 integer programming model. In order to evaluate the effect of capacity uncertainties on ATFM, the operational risk is modeled via probabilistic risk assessment and introduced as an extra objective in the SATNFO problem. Computation experiments using real-world air traffic network data associated with simulated weather data show that the presented model has far fewer constraints than a stochastic model with nonanticipative constraints, which means our proposed model reduces the computational complexity.

  7. Multi-objective optimal design of sandwich panels using a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
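    As a sketch of the core ranking step of NSGA-II used in the study, the function below performs fast non-dominated sorting of a set of candidate designs under two minimized objectives. The sample points stand in for (panel mass, transmitted sound level) pairs and are invented; the actual acoustic model and constraints of the paper are not represented.

```python
import numpy as np

def non_dominated_sort(F):
    """Rank solutions into Pareto fronts (all objectives minimized).

    F : (n_points, n_objectives) array. Returns a list of index arrays,
    with fronts[0] being the non-dominated set.
    """
    n = len(F)
    dominates = lambda a, b: np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
    dominated_by = [set() for _ in range(n)]   # points that i dominates
    counts = np.zeros(n, dtype=int)            # number of points dominating i
    for p in range(n):
        for q in range(n):
            if dominates(p, q):
                dominated_by[p].add(q)
            elif dominates(q, p):
                counts[p] += 1
    fronts, current = [], np.flatnonzero(counts == 0)
    while current.size:
        fronts.append(current)
        for p in current:
            for q in dominated_by[p]:
                counts[q] -= 1
        counts[current] = -1                   # mark as already assigned
        current = np.flatnonzero(counts == 0)
    return fronts

# Toy bi-objective data: panel mass (kg) vs. transmitted sound level (dB), both minimized.
F = np.array([[5.0, 40.0], [6.0, 35.0], [7.0, 34.0], [5.5, 45.0], [8.0, 33.0]])
for i, front in enumerate(non_dominated_sort(F)):
    print(f"front {i}: {front.tolist()}")
```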

  8. (abstract) A Test of the Theoretical Models of Bipolar Outflows: The Bipolar Outflow in Mon R2

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul; Patel, Nimesh

    1993-01-01

    We report some results of a study of the massive bipolar outflow in the central region of the relatively nearby giant molecular cloud Monoceros R2. We make a quantitative comparison of our results with the Shu et al. outflow model which incorporates a radially directed wind sweeping up the ambient material into a shell. We find that this simple model naturally explains the shape of this thin shell. Although Shu's model in its simplest form predicts with reasonable parameters too much mass at very small polar angles, as previously pointed out by Masson and Chernin, it provides a reasonably good fit to the mass distribution at larger polar angles. It is possible that this discrepancy is due to inhomogeneities of the ambient molecular gas which are not considered by the model. We also discuss the constraints imposed by these results on recent jet-driven outflow models.

  9. Benchmarking Defmod, an open source FEM code for modeling episodic fault rupture

    NASA Astrophysics Data System (ADS)

    Meng, Chunfang

    2017-03-01

    We present Defmod, an open source (linear) finite element code that enables us to efficiently model the crustal deformation due to (quasi-)static and dynamic loadings, poroelastic flow, viscoelastic flow and frictional fault slip. Ali (2015) provides the original code, introducing an implicit solver for the (quasi-)static problem and an explicit solver for the dynamic problem. The fault constraint is implemented via a Lagrange Multiplier. Meng (2015) combines these two solvers into a hybrid solver that uses failure criteria and friction laws to adaptively switch between the (quasi-)static state and the dynamic state. The code is capable of modeling episodic fault rupture driven by quasi-static loadings, e.g. due to reservoir fluid withdrawal or injection. Here, we focus on benchmarking the Defmod results against some established results.

  10. Identification of potential compensatory muscle strategies in a breast cancer survivor population: A combined computational and experimental approach.

    PubMed

    Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R

    2016-12-01

    Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer population survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer population survivors through adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer population survivors can be reflected through including a capability constraint for pectoralis major. Further refinement of the co-activation constraint for survivors could improve its generalizability across this population and activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information that can help in the development of optimal treatment programs for breast cancer population survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. A Model-Driven Approach to Teaching Concurrency

    ERIC Educational Resources Information Center

    Carro, Manuel; Herranz, Angel; Marino, Julio

    2013-01-01

    We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference with other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so…

  12. Public reporting and pay-for-performance: safety-net hospital executives' concerns and policy suggestions.

    PubMed

    Goldman, L Elizabeth; Henderson, Stuart; Dohan, Daniel P; Talavera, Jason A; Dudley, R Adams

    2007-01-01

    Safety-net hospitals (SNHs) may gain little financial benefit from the rapidly spreading adoption of public reporting and pay-for-performance, but may feel compelled to participate (and bear the costs of data collection) to meet public expectations of transparency and accountability. To better understand the concerns that SNH administrators have regarding public reporting and pay-for-performance, we interviewed 37 executives at randomly selected California SNHs. The main concerns noted by SNH executives were that human and financial resource constraints made it difficult for SNHs to accurately measure their performance. Additionally, some executives felt that market-driven public reporting and pay-for-performance may focus on clinical areas and incentive structures that may not be high-priority clinical areas for SNHs. Executives at SNHs suggested several policy responses to these concerns-such as offering training programs for SNH data collectors-that could be relatively inexpensive and might improve the cost-benefit ratio of public reporting and pay-for-performance programs.

  13. Modeling global macroclimatic constraints on ectotherm energy budgets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grant, B.W.; Porter, W.P.

    1992-12-31

    The authors describe a mechanistic individual-based model of how global macroclimatic constraints affect the energy budgets of ectothermic animals. The model uses macroclimatic and biophysical characters of the habitat and organism and tenets of heat transfer theory to calculate hourly temperature availabilities over a year. Data on the temperature dependence of activity rate, metabolism, food consumption and food processing capacity are used to estimate the net rate of resource assimilation which is then integrated over time. They present a new test of this model in which they show that the predicted energy budget sizes for 11 populations of the lizard Sceloporus undulatus are in close agreement with observed results from previous field studies. This demonstrates that model tests are feasible and the results are reasonable. Further, since the model represents an upper bound to the size of the energy budget, observed residual deviations form explicit predictions about the effects of environmental constraints on the bioenergetics of the study lizards within each site that may be tested by future field and laboratory studies. Three major new improvements to the modeling are discussed. First, they present a means to estimate microclimate thermal heterogeneity more realistically and include its effects on field rates of individual activity and food consumption. Second, they describe an improved model of digestive function involving batch processing of consumed food. Third, they show how optimality methods (specifically the methods of stochastic dynamic programming) may be included to model the fitness consequences of energy allocation decisions subject to food consumption and processing constraints which are predicted from the microclimate and physiological modeling.

  14. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony

    1990-01-01

    The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
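
    For readers unfamiliar with marked-graph execution, the following is a minimal sketch (not the ATAMM simulator) of firing a decision-free marked graph on a single computing resource; the node latencies, graph topology and initial marking are invented, and the firing times it prints illustrate the periodic steady state that the performance bounds above refer to.

    ```python
    # Minimal marked-graph firing sketch: a node fires when every incoming edge
    # holds a token, consuming one token per input and producing one per output.
    from collections import defaultdict

    edges = {            # (src, dst) edge -> initial token count (invented example)
        ("A", "B"): 0,
        ("B", "C"): 0,
        ("C", "A"): 1,   # feedback edge carries one initial token, enabling node A
    }
    latency = {"A": 2, "B": 3, "C": 1}   # hypothetical compute-node latencies

    def enabled(node, tokens):
        ins = [e for e in tokens if e[1] == node]
        return ins and all(tokens[e] > 0 for e in ins)

    def simulate(tokens, steps=10):
        t, fire_times = 0, defaultdict(list)
        for _ in range(steps):
            ready = [n for n in latency if enabled(n, tokens)]
            if not ready:
                break
            node = ready[0]                 # decision-free graph: firing order is immaterial
            for e in list(tokens):          # consume one token from each input edge
                if e[1] == node:
                    tokens[e] -= 1
            t += latency[node]              # single shared resource: time advances by latency
            for e in list(tokens):          # produce one token on each output edge
                if e[0] == node:
                    tokens[e] += 1
            fire_times[node].append(t)
        return dict(fire_times)

    print(simulate(dict(edges)))            # periodic firings; period = total cycle latency
    ```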

  15. Strategies for concurrent processing of complex algorithms in data driven architectures

    NASA Technical Reports Server (NTRS)

    Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.

    1990-01-01

    Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.

  16. Approaches to highly parameterized inversion: A guide to using PEST for model-parameter and predictive-uncertainty analysis

    USGS Publications Warehouse

    Doherty, John E.; Hunt, Randall J.; Tonkin, Matthew J.

    2010-01-01

    Analysis of the uncertainty associated with parameters used by a numerical model, and with predictions that depend on those parameters, is fundamental to the use of modeling in support of decisionmaking. Unfortunately, predictive uncertainty analysis with regard to models can be very computationally demanding, due in part to complex constraints on parameters that arise from expert knowledge of system properties on the one hand (knowledge constraints) and from the necessity for the model parameters to assume values that allow the model to reproduce historical system behavior on the other hand (calibration constraints). Enforcement of knowledge and calibration constraints on parameters used by a model does not eliminate the uncertainty in those parameters. In fact, in many cases, enforcement of calibration constraints simply reduces the uncertainties associated with a number of broad-scale combinations of model parameters that collectively describe spatially averaged system properties. The uncertainties associated with other combinations of parameters, especially those that pertain to small-scale parameter heterogeneity, may not be reduced through the calibration process. To the extent that a prediction depends on system-property detail, its postcalibration variability may be reduced very little, if at all, by applying calibration constraints; knowledge constraints remain the only limits on the variability of predictions that depend on such detail. Regrettably, in many common modeling applications, these constraints are weak. Though the PEST software suite was initially developed as a tool for model calibration, recent developments have focused on the evaluation of model-parameter and predictive uncertainty. As a complement to functionality that it provides for highly parameterized inversion (calibration) by means of formal mathematical regularization techniques, the PEST suite provides utilities for linear and nonlinear error-variance and uncertainty analysis in these highly parameterized modeling contexts. Availability of these utilities is particularly important because, in many cases, a significant proportion of the uncertainty associated with model parameters-and the predictions that depend on them-arises from differences between the complex properties of the real world and the simplified representation of those properties that is expressed by the calibrated model. This report is intended to guide intermediate to advanced modelers in the use of capabilities available with the PEST suite of programs for evaluating model predictive error and uncertainty. A brief theoretical background is presented on sources of parameter and predictive uncertainty and on the means for evaluating this uncertainty. Applications of PEST tools are then discussed for overdetermined and underdetermined problems, both linear and nonlinear. PEST tools for calculating contributions to model predictive uncertainty, as well as optimization of data acquisition for reducing parameter and predictive uncertainty, are presented. The appendixes list the relevant PEST variables, files, and utilities required for the analyses described in the document.

  17. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints consists of equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.
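
    VMACO itself is FORTRAN 77 and its source is not reproduced here; the following is an illustrative Python analogue of the same problem statement (least value of a function of n variables subject to equality and inequality constraints), solved with SciPy's SLSQP. The objective and constraints are invented.

    ```python
    # Illustrative analogue of the VMACO problem statement (not the VMACO code itself).
    import numpy as np
    from scipy.optimize import minimize

    def f(x):                      # objective: least value sought
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # equality constraint first
        {"type": "ineq", "fun": lambda x: x[1] - 0.5},          # inequality: x2 >= 0.5
    ]

    result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
    print(result.x, result.fun)    # optimal point and objective value
    ```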

  18. Companies Targeting Low-Cost "Netbooks" Directly at Education

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2008-01-01

    Computer companies are rolling out lower-priced laptops designed for education, claiming that the new "netbooks" are better tuned than past models to the needs of young learners--and to the constraints of school budgets. The new models may help revive confidence in 1-to-1 laptop programs, which some school districts have backed away from in recent…

  19. Operations Research techniques in the management of large-scale reforestation programs

    Treesearch

    Joseph Buongiorno; D.E. Teeguarden

    1978-01-01

    A reforestation planning system for the Douglas-fir region of the Western United States is described. Part of the system is a simulation model to predict plantation growth and to determine economic thinning regimes and rotation ages as a function of site characteristics, initial density, reforestation costs, and management constraints. A second model estimates the...

  20. A programing system for research and applications in structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Rogers, J. L., Jr.

    1981-01-01

    The flexibility necessary for such diverse utilizations is achieved by combining, in a modular manner, a state-of-the-art optimization program, a production level structural analysis program, and user supplied and problem dependent interface programs. Standard utility capabilities in modern computer operating systems are used to integrate these programs. This approach results in flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. Features shown in numerical examples include: variability of structural layout and overall shape geometry, static strength and stiffness constraints, local buckling failure, and vibration constraints.

  1. [3-D finite element modeling of internal fixation of mandibular mental fracture and the design of boundary constraints].

    PubMed

    Luo, Xiaohui; Wang, Hang; Fan, Yubo

    2007-04-01

    This study aimed to develop a 3-D finite element (3-D FE) model of the mental fractured mandible and design the boundary constraints. The CT images from a healthy volunteer were used as the original information and put into the ANSYS program to build a 3-D FE model. The model of the miniplate and screw which were used for the internal fixation was established by Pro/E. The boundary constraints of different muscle loadings were used to simulate the 3 functional conditions of the mandible. A 3-D FE model of mental fractured mandible under the miniplate-screw internal fixation system was constructed. And by the boundary constraints, the 3 biting conditions were simulated and the model could serve as a foundation on which to analyze the biomechanical behavior of the fractured mandible.

  2. Solar Sail Models and Test Measurements Correspondence for Validation Requirements Definition

    NASA Technical Reports Server (NTRS)

    Ewing, Anthony; Adams, Charles

    2004-01-01

    Solar sails are being developed as a mission-enabling technology in support of future NASA science missions. Current efforts have advanced solar sail technology sufficient to justify a flight validation program. A primary objective of this activity is to test and validate solar sail models that are currently under development so that they may be used with confidence in future science mission development (e.g., scalable to larger sails). Both system and model validation requirements must be defined early in the program to guide design cycles and to ensure that relevant and sufficient test data will be obtained to conduct model validation to the level required. A process of model identification, model input/output documentation, model sensitivity analyses, and test measurement correspondence is required so that decisions can be made to satisfy validation requirements within program constraints.

  3. Downscaling of RCM outputs for representative catchments in the Mediterranean region, for the 1951-2100 time-frame

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Marrocu, Marino; Pusceddu, Gabriella; Langousis, Andreas; Mascaro, Giuseppe; Caroletti, Giulio

    2013-04-01

    Within the activities of the EU FP7 CLIMB project (www.climb-fp7.eu), we developed downscaling procedures to reliably assess climate forcing at hydrologically relevant scales, and applied them to six representative hydrological basins located in the Mediterranean region: Riu Mannu and Noce in Italy, Chiba in Tunisia, Kocaeli in Turkey, Thau in France, and Gaza in Palestine. As a first step towards this aim, we used daily precipitation and temperature data from the gridded E-OBS project (www.ecad.eu/dailydata), as reference fields, to rank 14 Regional Climate Model (RCM) outputs from the ENSEMBLES project (http://ensembles-eu.metoffice.com). The four best performing model outputs were selected, with the additional constraint of maintaining 2 outputs obtained from running different RCMs driven by the same GCM, and 2 runs from the same RCM driven by different GCMs. For these four RCM-GCM model combinations, a set of downscaling techniques were developed and applied, for the period 1951-2100, to variables used in hydrological modeling (i.e. precipitation; mean, maximum and minimum daily temperatures; direct solar radiation, relative humidity, magnitude and direction of surface winds). The quality of the final products is discussed, together with the results obtained after applying a bias reduction procedure to daily temperature and precipitation fields.

  4. Model-driven discovery of underground metabolic functions in Escherichia coli.

    PubMed

    Guzmán, Gabriela I; Utrilla, José; Nurk, Sergey; Brunk, Elizabeth; Monk, Jonathan M; Ebrahim, Ali; Palsson, Bernhard O; Feist, Adam M

    2015-01-20

    Enzyme promiscuity toward substrates has been discussed in evolutionary terms as providing the flexibility to adapt to novel environments. In the present work, we describe an approach toward exploring such enzyme promiscuity in the space of a metabolic network. This approach leverages genome-scale models, which have been widely used for predicting growth phenotypes in various environments or following a genetic perturbation; however, these predictions occasionally fail. Failed predictions of gene essentiality offer an opportunity for targeting biological discovery, suggesting the presence of unknown underground pathways stemming from enzymatic cross-reactivity. We demonstrate a workflow that couples constraint-based modeling and bioinformatic tools with KO strain analysis and adaptive laboratory evolution for the purpose of predicting promiscuity at the genome scale. Three cases of genes that are incorrectly predicted as essential in Escherichia coli--aspC, argD, and gltA--are examined, and isozyme functions are uncovered for each to a different extent. Seven isozyme functions based on genetic and transcriptional evidence are suggested between the genes aspC and tyrB, argD and astC, gabT and puuE, and gltA and prpC. This study demonstrates how a targeted model-driven approach to discovery can systematically fill knowledge gaps, characterize underground metabolism, and elucidate regulatory mechanisms of adaptation in response to gene KO perturbations.
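
    The workflow above relies on genome-scale constraint-based (flux-balance) models; the toy sketch below shows the underlying idea on an invented three-reaction network, using SciPy's linear programming routine rather than a dedicated COBRA toolbox: growth is maximized subject to steady-state mass balance, and a knockout is simulated by pinning one reaction's flux to zero, after which an isozyme-like parallel reaction rescues growth.

    ```python
    # Toy flux-balance analysis sketch (not the E. coli genome-scale model).
    import numpy as np
    from scipy.optimize import linprog

    # Columns: uptake, rxn_A (the "essential" prediction), rxn_B (isozyme), biomass
    S = np.array([
        [1, -1, -1,  0],   # metabolite M1: produced by uptake, consumed by A or B
        [0,  1,  1, -1],   # metabolite M2: produced by A or B, consumed by biomass
    ])
    c = np.array([0, 0, 0, -1.0])          # linprog minimizes, so negate biomass flux

    def optimal_growth(knockout=None):
        bounds = [(0, 10), (0, 10), (0, 10), (0, 10)]
        if knockout is not None:
            bounds[knockout] = (0, 0)      # knockout: the reaction can carry no flux
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        return -res.fun

    print("wild type        :", optimal_growth())
    print("knockout of rxn_A:", optimal_growth(knockout=1))   # isozyme rxn_B rescues growth
    ```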

  5. Groundwater flow and its effect on salt dissolution in Gypsum Canyon watershed, Paradox Basin, southeast Utah, USA

    NASA Astrophysics Data System (ADS)

    Reitman, Nadine G.; Ge, Shemin; Mueller, Karl

    2014-09-01

    Groundwater flow is an important control on subsurface evaporite (salt) dissolution. Salt dissolution can drive faulting and associated subsidence on the land surface and increase salinity in groundwater. This study aims to understand the groundwater flow system of Gypsum Canyon watershed in the Paradox Basin, Utah, USA, and whether or not groundwater-driven dissolution affects surface deformation. The work characterizes the groundwater flow and solute transport systems of the watershed using a three-dimensional (3D) finite element flow and transport model, SUTRA. Spring samples were analyzed for stable isotopes of water and total dissolved solids. Spring water and hydraulic conductivity data provide constraints for model parameters. Model results indicate that regional groundwater flow is to the northwest towards the Colorado River, and shallow flow systems are influenced by topography. The low permeability obtained from laboratory tests is inconsistent with field observed discharges, supporting the notion that fracture permeability plays a significant role in controlling groundwater flow. Model output implies that groundwater-driven dissolution is small on average, and cannot account for volume changes in the evaporite deposits that could cause surface deformation, but it is speculated that dissolution may be highly localized and/or weaken evaporite deposits, and could lead to surface deformation over time.

  6. Constraint-based modeling in microbial food biotechnology

    PubMed Central

    Rau, Martin H.

    2018-01-01

    Genome-scale metabolic network reconstruction offers a means to leverage the value of the exponentially growing genomics data and integrate it with other biological knowledge in a structured format. Constraint-based modeling (CBM) enables both the qualitative and quantitative analyses of the reconstructed networks. The rapid advancements in these areas can benefit both the industrial production of microbial food cultures and their application in food processing. CBM provides several avenues for improving our mechanistic understanding of physiology and genotype–phenotype relationships. This is essential for the rational improvement of industrial strains, which can further be facilitated through various model-guided strain design approaches. CBM of microbial communities offers a valuable tool for the rational design of defined food cultures, where it can catalyze hypothesis generation and provide unintuitive rationales for the development of enhanced community phenotypes and, consequently, novel or improved food products. In the industrial-scale production of microorganisms for food cultures, CBM may enable a knowledge-driven bioprocess optimization by rationally identifying strategies for growth and stability improvement. Through these applications, we believe that CBM can become a powerful tool for guiding the areas of strain development, culture development and process optimization in the production of food cultures. Nevertheless, in order to make the correct choice of the modeling framework for a particular application and to interpret model predictions in a biologically meaningful manner, one should be aware of the current limitations of CBM. PMID:29588387

  7. Simulating water markets with transaction costs

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Binions, Olga; Harou, Julien J.

    2014-06-01

    This paper presents an optimization model to simulate short-term pair-wise spot-market trading of surface water abstraction licenses (water rights). The approach uses a node-arc multicommodity formulation that tracks individual supplier-receiver transactions in a water resource network. This enables accounting for transaction costs between individual buyer-seller pairs and abstractor-specific rules and behaviors using constraints. Trades are driven by economic demand curves that represent each abstractor's time-varying water demand. The purpose of the proposed model is to assess potential hydrologic and economic outcomes of water markets and aid policy makers in designing water market regulations. The model is applied to the Great Ouse River basin in Eastern England. The model assesses the potential weekly water trades and abstractions that could occur in a normal and a dry year. Four sectors (public water supply, energy, agriculture, and industrial) are included in the 94 active licensed water diversions. Each license's unique environmental restrictions are represented and weekly economic water demand curves are estimated. Rules encoded as constraints represent current water management realities and plausible stakeholder-informed water market behaviors. Results show buyers favor sellers who can supply large volumes to minimize transactions. The energy plant cooling and agricultural licenses, often restricted from obtaining water at times when it generates benefits, benefit most from trades. Assumptions and model limitations are discussed. This article was corrected on 13 JUN 2014. See the end of the full text for details.
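
    The paper's model is a node-arc multicommodity network program; the much-reduced sketch below keeps only the pair-wise structure it describes: traded volumes between each seller-buyer pair are chosen to maximize benefit net of pair-specific transaction costs, subject to supply and demand limits. All prices, costs and volumes are invented.

    ```python
    # Simplified pair-wise trading sketch (not the Great Ouse model).
    import numpy as np
    from scipy.optimize import linprog

    value  = np.array([12.0, 9.0])          # buyers' marginal value of water (per unit)
    cost   = np.array([3.0, 5.0])           # sellers' marginal cost of forgone use
    tcost  = np.array([[1.0, 4.0],          # transaction cost for each seller-buyer pair
                       [2.0, 1.5]])
    supply = np.array([40.0, 25.0])         # volume each seller can offer
    demand = np.array([30.0, 30.0])         # volume each buyer wants at most

    # Net benefit per unit traded from seller i to buyer j (flattened row-major).
    net = (value[None, :] - cost[:, None] - tcost).ravel()

    A_ub = np.zeros((4, 4))
    A_ub[0, 0:2] = 1; A_ub[1, 2:4] = 1       # seller supply rows
    A_ub[2, [0, 2]] = 1; A_ub[3, [1, 3]] = 1 # buyer demand rows
    b_ub = np.concatenate([supply, demand])

    res = linprog(-net, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
    print(res.x.reshape(2, 2), -res.fun)     # traded volumes and total net benefit
    ```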

  8. Models of Sector Flows Under Local, Regional and Airport Weather Constraints

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    2017-01-01

    Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas or FCAs) in the system, and it allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in airspace has been hampered by many factors that include challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools providing assistance would be particularly helpful in effective use of CTOPs. Such DSTs would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is a lack of sufficient historical data that captures diverse situations involving combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors in order to increase the amount of data used for creating the model by an order of magnitude. Secondly, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather constrained flow reduction model and then composing these into a single integrated model. A nominal demand model is a flow model (gdem) in the presence of clear local weather. This defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. A weather constrained flow reduction model (fwx-red) is a model of reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that of a single model, the amount of data needed is reduced. Finally, a composite model that combines these two can be represented as fwx-red(gdem(e), l) where e represents non-local constraints and l represents local weather. The approaches studied for developing these models are divided into three categories: (1) point estimation models, (2) empirical models, and (3) theoretical models. Errors in predictions of these different types of models have been estimated. In situations when there is abundant data, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when there is some data available. The biggest benefit of theoretical models is their general applicability in a wider range of situations once their degree of accuracy has been established.
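
    A hedged sketch of the decomposition described above, with invented functional forms and coefficients: a nominal-demand model gdem for clear local weather, a local-weather flow-reduction model fwx_red, and their composition fwx_red(gdem(e), l).

    ```python
    # Illustrative composition only; the functional forms and numbers are made up.

    def gdem(nonlocal_constraints):
        """Nominal sector flow given non-local constraints e (neighbouring weather,
        airport constraints, re-route-inducing weather elsewhere)."""
        base_flow = 18.0                              # hypothetical clear-weather throughput
        penalty = sum(nonlocal_constraints.values())  # crude additive impact of constraints
        return max(base_flow - penalty, 0.0)

    def fwx_red(nominal_flow, local_weather_coverage):
        """Reduction of the nominal flow as a function of local weather severity,
        expressed here as the fraction of the sector covered by convective weather."""
        return nominal_flow * (1.0 - 0.8 * local_weather_coverage)

    def composite_flow(e, l):
        """Composite model fwx_red(gdem(e), l) from the decomposition in the text."""
        return fwx_red(gdem(e), l)

    e = {"neighbour_weather": 2.0, "airport_rate_cut": 1.0, "reroute_pressure": 0.5}
    print(composite_flow(e, l=0.25))    # flow with 25% of the sector weather-covered
    ```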

  9. Models of Sector Aircraft Counts in the Presence of Local, Regional and Airport Constraints

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak

    2017-01-01

    Recently, the ATM community has made important progress in collaborative trajectory management through the introduction of a new FAA traffic management initiative called a Collaborative Trajectory Options Program (CTOP). FAA can use CTOPs to manage air traffic under multiple constraints (manifested as flow constrained areas or FCAs) in the system, and it allows flight operators to indicate their preferences for routing and delay options. CTOPs also permit better management of the overall trajectory of flights by considering both routing and departure delay options simultaneously. However, adoption of CTOPs in airspace has been hampered by many factors that include challenges in how to identify constrained areas and how to set rates for the FCAs. Decision support tools providing assistance would be particularly helpful in effective use of CTOPs. Such DSTs would need models of demand and capacity in the presence of multiple constraints. This study examines different approaches to using historical data to create and validate models of maximum flows in sectors and other airspace regions in the presence of multiple constraints. A challenge in creating an empirical model of flows under multiple constraints is a lack of sufficient historical data that captures diverse situations involving combinations of multiple constraints, especially those with severe weather. The approach taken here to deal with this is two-fold. First, we create a generalized sector model encompassing multiple sectors rather than individual sectors in order to increase the amount of data used for creating the model by an order of magnitude. Secondly, we decompose the problem so that the amount of data needed is reduced. This involves creating a baseline demand model plus a separate weather constrained flow reduction model and then composing these into a single integrated model. A nominal demand model is a flow model (gdem) in the presence of clear local weather. This defines the flow as a function of weather constraints in neighboring regions, airport constraints and weather in locations that can cause re-routes to the location of interest. A weather constrained flow reduction model (fwx-red) is a model of reduction in baseline counts as a function of local weather. Because the number of independent variables associated with each of the two decomposed models is smaller than that of a single model, the amount of data needed is reduced. Finally, a composite model that combines these two can be represented as fwx-red(gdem(e), l) where e represents non-local constraints and l represents local weather. The approaches studied for developing these models are divided into three categories: (1) point estimation models, (2) empirical models, and (3) theoretical models. Errors in predictions of these different types of models have been estimated. In situations when there is abundant data, point estimation models tend to be very accurate. In contrast, empirical models do better than theoretical models when there is some data available. The biggest benefit of theoretical models is their general applicability in a wider range of situations once their degree of accuracy has been established.

  10. Complexity in modeling of residual stresses and strains during polymerization of bone cement: effects of conversion, constraint, heat transfer, and viscoelastic property changes.

    PubMed

    Gilbert, Jeremy L

    2006-12-15

    Aseptic loosening of cemented joint prostheses remains a significant concern in orthopedic biomaterials. One possible contributor to cement loosening is the development of porosity, residual stresses, and local fracture of the cement that may arise from the in-situ polymerization of the cement. In-situ polymerization of acrylic bone cement is a complex set of interacting processes that involve polymerization reactions, heat generation and transfer, full or partial mechanical constraint, evolution of conversion- and temperature-dependent viscoelastic material properties, and thermal and conversion-driven changes in the density of the cement. Interactions between heat transfer and polymerization can lead to polymerization fronts moving through the material. Density changes during polymerization can, in the presence of mechanical constraint, lead to the development of locally high residual strain energy and residual stresses. This study models the interactions during bone cement polymerization and determines how residual stresses develop in cement and incorporates temperature and conversion-dependent viscoelastic behavior. The results show that the presence of polymerization fronts in bone cement result in locally high residual strain energies. A novel heredity integral approach is presented to track residual stresses incorporating conversion and temperature dependent material property changes. Finally, the relative contribution of thermal- and conversion-dependent strains to residual stresses is evaluated and it is found that the conversion-based strains are the major contributor to the overall behavior. This framework provides the basis for understanding the complex development of residual stresses and can be used as the basis for developing more complex models of cement behavior.
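
    A general hereditary-integral form consistent with this description (a sketch of the standard viscoelastic formulation, not necessarily the paper's exact expression) writes the stress as a convolution of the mechanical strain history, i.e. total strain minus thermal and polymerization-shrinkage strains, with a relaxation modulus that depends on the degree of conversion and temperature at the time the strain increment was applied:

    ```latex
    % Hedged general form; \alpha is degree of conversion, T is temperature.
    \sigma(t) = \int_{0}^{t} E\bigl(\alpha(\tau),\, T(\tau),\, t-\tau\bigr)\,
                \frac{d}{d\tau}\Bigl[\varepsilon_{\mathrm{total}}(\tau)
                - \varepsilon_{\mathrm{th}}(\tau)
                - \varepsilon_{\mathrm{shr}}(\tau)\Bigr]\, d\tau
    ```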

  11. A fuzzy goal programming model for biodiesel production

    NASA Astrophysics Data System (ADS)

    Lutero, D. S.; Pangue, EMU; Tubay, J. M.; Lubag, S. P.

    2016-02-01

    A fuzzy goal programming (FGP) model for biodiesel production in the Philippines was formulated with Coconut (Cocos nucifera) and Jatropha (Jatropha curcas) as sources of biodiesel. Objectives were maximization of feedstock production and overall revenue, and minimization of energy used in production and working capital for farming, subject to biodiesel and non-biodiesel requirements and availability of land, labor, water and machine time. All these objectives and constraints were assumed to be fuzzy. The model was tested for different sets of weights. Results for all sets of weights showed the same optimal allocation. Coconut alone can satisfy the biodiesel requirement of 2% by volume.
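
    A minimal max-min (Zimmermann-style) fuzzy goal programming sketch in the spirit of the abstract, not the paper's actual model: linear membership functions are attached to two fuzzy goals and the smallest satisfaction level lambda is maximized as an ordinary LP. Crops, coefficients and goal levels are invented.

    ```python
    # Max-min fuzzy goal programming sketch with invented data.
    import numpy as np
    from scipy.optimize import linprog

    # Decision variables: x = [hectares_crop1, hectares_crop2, lambda]
    yield_per_ha   = np.array([2.0, 1.5])     # feedstock (t/ha)
    capital_per_ha = np.array([30.0, 20.0])   # working capital (k$/ha)
    land_limit = 100.0

    # Fuzzy goal 1: feedstock "about at least" 150 t (fully satisfied at 150, zero at 100):
    #   lambda <= (yield.x - 100) / 50
    # Fuzzy goal 2: capital "about at most" 2500 k$ (fully satisfied at 2500, zero at 3000):
    #   lambda <= (3000 - capital.x) / 500
    A_ub = np.array([
        [-yield_per_ha[0] / 50.0,  -yield_per_ha[1] / 50.0,  1.0],
        [ capital_per_ha[0] / 500.0, capital_per_ha[1] / 500.0, 1.0],
        [ 1.0, 1.0, 0.0],                                   # land constraint
    ])
    b_ub = np.array([-100.0 / 50.0, 3000.0 / 500.0, land_limit])

    c = np.array([0.0, 0.0, -1.0])            # maximize lambda (linprog minimizes)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None), (0, 1)])
    print(res.x[:2], "lambda =", res.x[2])
    ```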

  12. A Tool for Model-Based Generation of Scenario-driven Electric Power Load Profiles

    NASA Technical Reports Server (NTRS)

    Rozek, Matthew L.; Donahue, Kenneth M.; Ingham, Michel D.; Kaderka, Justin D.

    2015-01-01

    Power consumption during all phases of spacecraft flight is of great interest to the aerospace community. As a result, significant analysis effort is exerted to understand the rates of electrical energy generation and consumption under many operational scenarios of the system. Previously, no standard tool existed for creating and maintaining a power equipment list (PEL) of spacecraft components that consume power, and no standard tool existed for generating power load profiles based on this PEL information during mission design phases. This paper presents the Scenario Power Load Analysis Tool (SPLAT) as a model-based systems engineering tool aiming to solve those problems. SPLAT is a plugin for MagicDraw (No Magic, Inc.) that aids in creating and maintaining a PEL, and also generates a power and temporal variable constraint set, in Maple language syntax, based on specified operational scenarios. The constraint set can be solved in Maple to show electric load profiles (i.e. power consumption from loads over time). SPLAT creates these load profiles from three modeled inputs: 1) a list of system components and their respective power modes, 2) a decomposition hierarchy of the system into these components, and 3) the specification of at least one scenario, which consists of temporal constraints on component power modes. In order to demonstrate how this information is represented in a system model, a notional example of a spacecraft planetary flyby is introduced. This example is also used to explain the overall functionality of SPLAT, and how this is used to generate electric power load profiles. Lastly, a cursory review of the usage of SPLAT on the Cold Atom Laboratory project is presented to show how the tool was used in an actual space hardware design application.

  13. Probing the Magnetic Field Structure in Sgr A* on Black Hole Horizon Scales with Polarized Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Gold, Roman; McKinney, Jonathan C.; Johnson, Michael D.; Doeleman, Sheperd S.

    2017-03-01

    Magnetic fields are believed to drive accretion and relativistic jets in black hole accretion systems, but the magnetic field structure that controls these phenomena remains uncertain. We perform general relativistic (GR) polarized radiative transfer of time-dependent three-dimensional GR magnetohydrodynamical simulations to model thermal synchrotron emission from the Galactic Center source Sagittarius A* (Sgr A*). We compare our results to new polarimetry measurements by the Event Horizon Telescope (EHT) and show how polarization in the visibility (Fourier) domain distinguishes and constrains accretion flow models with different magnetic field structures. These include models with small-scale fields in disks driven by the magnetorotational instability as well as models with large-scale ordered fields in magnetically arrested disks. We also consider different electron temperature and jet mass-loading prescriptions that control the brightness of the disk, funnel-wall jet, and Blandford-Znajek-driven funnel jet. Our comparisons between the simulations and observations favor models with ordered magnetic fields near the black hole event horizon in Sgr A*, though both disk- and jet-dominated emission can satisfactorily explain most of the current EHT data. We also discuss how the black hole shadow can be filled-in by jet emission or mimicked by the absence of funnel jet emission. We show that stronger model constraints should be possible with upcoming circular polarization and higher frequency (349 GHz) measurements.

  14. Incorporating deliverable monitor unit constraints into spot intensity optimization in intensity modulated proton therapy treatment planning

    PubMed Central

    Cao, Wenhua; Lim, Gino; Li, Xiaoqiang; Li, Yupeng; Zhu, X. Ronald; Zhang, Xiaodong

    2014-01-01

    The purpose of this study is to investigate the feasibility and impact of incorporating deliverable monitor unit (MU) constraints into spot intensity optimization in intensity modulated proton therapy (IMPT) treatment planning. The current treatment planning system (TPS) for IMPT disregards deliverable MU constraints in the spot intensity optimization (SIO) routine. It performs a post-processing procedure on an optimized plan to enforce deliverable MU values that are required by the spot scanning proton delivery system. This procedure can create a significant dose distribution deviation between the optimized and post-processed deliverable plans, especially when small spot spacings are used. In this study, we introduce a two-stage linear programming (LP) approach to optimize spot intensities and constrain deliverable MU values simultaneously, i.e., a deliverable spot intensity optimization (DSIO) model. Thus, the post-processing procedure is eliminated and the associated optimized plan deterioration can be avoided. Four prostate cancer cases at our institution were selected for study and two parallel opposed beam angles were planned for all cases. A quadratic programming (QP) based model without MU constraints, i.e., a conventional spot intensity optimization (CSIO) model, was also implemented to emulate the commercial TPS. Plans optimized by both the DSIO and CSIO models were evaluated for five different settings of spot spacing from 3 mm to 7 mm. For all spot spacings, the DSIO-optimized plans yielded better uniformity for the target dose coverage and critical structure sparing than did the CSIO-optimized plans. With reduced spot spacings, more significant improvements in target dose uniformity and critical structure sparing were observed in the DSIO- than in the CSIO-optimized plans. Additionally, better sparing of the rectum and bladder was achieved when reduced spacings were used for the DSIO-optimized plans. The proposed DSIO approach ensures the deliverability of optimized IMPT plans that take into account MU constraints. This eliminates the post-processing procedure required by the TPS as well as the resultant deteriorating effect on ultimate dose distributions. This approach therefore allows IMPT plans to adopt all possible spot spacings optimally. Moreover, dosimetric benefits can be achieved using smaller spot spacings. PMID:23835656
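
    The deliverability requirement at the heart of the study, each spot either off or carrying at least a minimum MU, can be illustrated with a small mixed-integer program; note this uses binary indicator variables rather than the paper's two-stage LP formulation, and the dose matrix and prescription are invented.

    ```python
    # Toy illustration of a deliverable-MU restriction on spot weights (PuLP MILP).
    import pulp

    D = [[0.6, 0.1, 0.3],     # dose to voxel v per unit weight of spot s (invented)
         [0.2, 0.7, 0.4],
         [0.1, 0.2, 0.8]]
    target = [2.0, 2.0, 2.0]  # prescribed dose per voxel
    MU_MIN, MU_MAX = 0.5, 10.0

    prob = pulp.LpProblem("dsio_toy", pulp.LpMinimize)
    w = [pulp.LpVariable(f"w{s}", 0, MU_MAX) for s in range(3)]       # spot weights (MU)
    on = [pulp.LpVariable(f"on{s}", cat="Binary") for s in range(3)]  # spot used or not
    over = [pulp.LpVariable(f"over{v}", 0) for v in range(3)]         # dose deviations
    under = [pulp.LpVariable(f"under{v}", 0) for v in range(3)]

    prob += pulp.lpSum(over) + pulp.lpSum(under)                      # minimize total deviation
    for v in range(3):
        dose_v = pulp.lpSum(D[v][s] * w[s] for s in range(3))
        prob += dose_v - target[v] <= over[v]
        prob += target[v] - dose_v <= under[v]
    for s in range(3):
        prob += w[s] >= MU_MIN * on[s]     # if the spot is on, it must deliver >= MU_MIN
        prob += w[s] <= MU_MAX * on[s]     # if the spot is off, its weight is zero

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([pulp.value(x) for x in w])
    ```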

  15. A new implementation of the programming system for structural synthesis (PROSSS-2)

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.

    1984-01-01

    This new implementation of the PROgramming System for Structural Synthesis (PROSSS-2) combines a general-purpose finite element computer program for structural analysis, a state-of-the-art optimization program, and several user-supplied, problem-dependent computer programs. The results are flexibility of the optimization procedure organization and versatility in the formulation of constraints and design variables. The analysis-optimization process results in a minimized objective function, typically the mass. The analysis and optimization programs are executed repeatedly by looping through the system until the process is stopped by a user-defined termination criterion. However, some of the analysis, such as model definition, needs to be performed only once, and the results are saved for future use. The user must write some small, simple FORTRAN programs to interface between the analysis and optimization programs. One of these programs, the front processor, converts the design variables output from the optimizer into a suitable format for input into the analyzer. Another, the end processor, retrieves the behavior variables and, optionally, their gradients from the analysis program and evaluates the objective function and constraints and optionally their gradients. These quantities are output in a format suitable for input into the optimizer. These user-supplied programs are problem-dependent because they depend primarily upon which finite elements are being used in the model. PROSSS-2 differs from the original PROSSS in that the optimizer and front and end processors have been integrated into the finite element computer program. This was done to reduce the complexity and increase portability of the system, and to take advantage of the data handling features found in the finite element program.

  16. Geologic Mapping of the NW Rim of Hellas Basin, Mars

    NASA Astrophysics Data System (ADS)

    Crown, D. A.; Bleamaster, L. F.; Mest, S. C.; Mustard, J. F.

    2009-03-01

    Geologic mapping of the NW rim of Hellas basin is providing new constraints on the magnitudes, extents, and history of volatile-driven processes as well as a geologic context for mineralogic identifications.

  17. Computer studies of baroclinic flow. [Atmospheric General Circulation Experiment

    NASA Technical Reports Server (NTRS)

    Gall, R.

    1985-01-01

    Programs necessary for computing the transition curve on the regime diagram for the atmospheric general circulation experiment (AGCE) were completed and used to determine the regime diagram for the rotating annulus and some axisymmetric flows for one possible AGCE configuration. The effect of geometrical constraints on the size of eddies developing from a basic state is being examined. In AGCE, the geometric constraint should be the width of the shear zone or the baroclinic zone. Linear and nonlinear models are to be used to examine both barotropic and baroclinic flows. The results should help explain the scale selection mechanism of baroclinic eddies in the atmosphere, in experimental models such as AGCE, and in the multiple vortex phenomenon in tornadoes.

  18. International Space Station Electric Power System Performance Code-SPACE

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey; McKissock, David; Fincannon, James; Green, Robert; Kerslake, Thomas; Delleur, Ann; Follo, Jeffrey; Trudell, Jeffrey; Hoffman, David J.; Jannette, Anthony

    2005-01-01

    The System Power Analysis for Capability Evaluation (SPACE) software analyzes and predicts the minute-by-minute state of the International Space Station (ISS) electrical power system (EPS) for upcoming missions as well as EPS power generation capacity as a function of ISS configuration and orbital conditions. In order to complete the Certification of Flight Readiness (CoFR) process, in which the mission is certified for flight, each ISS system must thoroughly assess every proposed mission to verify that the system will support the planned mission operations; SPACE is the sole tool used to conduct these assessments for the power system capability. SPACE is an integrated power system model that incorporates a variety of modules tied together with integration routines and graphical output. The modules include orbit mechanics, solar array pointing/shadowing/thermal and electrical, battery performance, and power management and distribution performance. These modules are tightly integrated within a flexible architecture featuring data-file-driven configurations, source- or load-driven operation, and event scripting. SPACE also predicts the amount of power available for a given system configuration, spacecraft orientation, solar-array-pointing conditions, orbit, and the like. In the source-driven mode, the model must assure that energy balance is achieved, meaning that energy removed from the batteries must be restored (or balanced) each and every orbit. This entails an optimization scheme to ensure that energy balance is maintained without violating any other constraints.
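
    A minimal per-orbit energy-balance check in the spirit of the source-driven mode described above (not the SPACE code): the solar-array surplus during insolation, degraded by a charge efficiency, must at least restore what the loads drew from the batteries during eclipse. All numbers are invented.

    ```python
    # Illustrative energy-balance check only; durations, powers and efficiency are made up.

    def orbit_energy_balance(array_power_w, load_power_w,
                             insolation_min=59.0, eclipse_min=33.0,
                             charge_efficiency=0.9):
        """Return the net battery energy change (W*h) over one orbit; >= 0 means balanced."""
        eclipse_drain = load_power_w * eclipse_min / 60.0
        surplus_in_sun = max(array_power_w - load_power_w, 0.0) * insolation_min / 60.0
        return surplus_in_sun * charge_efficiency - eclipse_drain

    print(orbit_energy_balance(array_power_w=12000, load_power_w=6000))   # balanced (> 0)
    print(orbit_energy_balance(array_power_w=12000, load_power_w=9000))   # not balanced (< 0)
    ```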

  19. The Interior and Orbital Evolution of Charon as Preserved in Its Geologic Record

    NASA Technical Reports Server (NTRS)

    Rhoden, Alyssa Rose; Henning, Wade; Hurford, Terry A.; Hamilton, Douglas P.

    2014-01-01

    Pluto and its largest satellite, Charon, currently orbit in a mutually synchronous state; both bodies continuously show the same face to one another. This orbital configuration is a natural end-state for bodies that have undergone tidal dissipation. In order to achieve this state, both bodies would have experienced tidal heating and stress, with the extent of tidal activity controlled by the orbital evolution of Pluto and Charon and by the interior structure and rheology of each body. As the secondary, Charon would have experienced a larger tidal response than Pluto, which may have manifested as observable tectonism. Unfortunately, there are few constraints on the interiors of Pluto and Charon. In addition, the pathway by which Charon came to occupy its present orbital state is uncertain. If Charon's orbit experienced a high-eccentricity phase, as suggested by some orbital evolution models, tidal effects would have likely been more significant. Therefore, we determine the conditions under which Charon could have experienced tidally-driven geologic activity and the extent to which upcoming New Horizons spacecraft observations could be used to constrain Charon's internal structure and orbital evolution. Using plausible interior structure models that include an ocean layer, we find that tidally-driven tensile fractures would likely have formed on Charon if its eccentricity were on the order of 0.01, especially if Charon were orbiting closer to Pluto than at present. Such fractures could display a variety of azimuths near the equator and near the poles, with the range of azimuths in a given region dependent on longitude; east-west-trending fractures should dominate at mid-latitudes. The fracture patterns we predict indicate that Charon's surface geology could provide constraints on the thickness and viscosity of Charon's ice shell at the time of fracture formation.

  20. Capacitated set-covering model considering the distance objective and dependency of alternative facilities

    NASA Astrophysics Data System (ADS)

    Wayan Suletra, I.; Priyandari, Yusuf; Jauhari, Wakhid A.

    2018-03-01

    We propose a new model of facility location to solve a kind of problem that belongs to the class of set-covering problems using an integer programming formulation. Our model contains a single objective function, but it represents two goals. The first is to minimize the number of facilities, and the other is to minimize the total distance of customers to facilities. The first goal is a mandatory goal, and the second is an improvement goal that is very useful when alternate optimum solutions for the first goal exist. We use a big number as a weight on the first goal to force the solution algorithm to give first priority to the first goal. Besides considering capacity constraints, our model accommodates either-or constraints representing facility dependency. The either-or constraints prevent the solution algorithm from selecting two or more facilities from the same set of facilities with mutually exclusive properties. A real location selection problem, locating a set of wastewater treatment facilities (IPAL) in Surakarta city, Indonesia, illustrates the implementation of our model. A numerical example is given using the data of that real problem.
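
    A hedged sketch of the formulation described above, using PuLP: a big-weight term makes facility count the first priority, total assignment distance breaks ties, capacity constraints limit each opened facility, and one either-or constraint keeps two mutually exclusive candidate sites from both opening. Distances, demands and capacities are invented, not the Surakarta data.

    ```python
    # Capacitated set-covering sketch with a big-M-weighted single objective (PuLP).
    import pulp

    customers, sites = range(4), range(3)
    dist = [[2, 5, 9], [4, 3, 7], [8, 6, 2], [5, 4, 3]]   # distance customer -> site
    demand, capacity = [10, 20, 15, 5], [30, 30, 30]
    M = 10_000                                            # big weight: facility count dominates

    y = [pulp.LpVariable(f"open_{j}", cat="Binary") for j in sites]
    x = [[pulp.LpVariable(f"assign_{i}_{j}", cat="Binary") for j in sites] for i in customers]

    prob = pulp.LpProblem("capacitated_set_covering", pulp.LpMinimize)
    prob += M * pulp.lpSum(y) + pulp.lpSum(dist[i][j] * x[i][j] for i in customers for j in sites)

    for i in customers:                                   # every customer covered exactly once
        prob += pulp.lpSum(x[i][j] for j in sites) == 1
    for j in sites:                                       # capacity of each opened facility
        prob += pulp.lpSum(demand[i] * x[i][j] for i in customers) <= capacity[j] * y[j]
    prob += y[0] + y[1] <= 1                              # either-or: sites 0 and 1 exclude each other

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([pulp.value(v) for v in y])
    ```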

  1. An engineering approach to automatic programming

    NASA Technical Reports Server (NTRS)

    Rubin, Stuart H.

    1990-01-01

    An exploratory study of the automatic generation and optimization of symbolic programs was undertaken using DECOM, a prototypical requirement specification model implemented in pure LISP. It was concluded, on the basis of this study, that symbolic processing languages such as LISP can support a style of programming based upon formal transformation and dependent upon the expression of constraints in an object-oriented environment. Such languages can represent all aspects of the software generation process (including heuristic algorithms for effecting parallel search) as dynamic processes since data and program are represented in a uniform format.

  2. Application of CFE/POST2 for Simulation of Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Tartabini, Paul V.; Toniolo, Matthew D.; Roithmayr, Carlos M.; Karlgaard, Christopher D.; Samareh, Jamshid A.

    2009-01-01

    The constraint force equation (CFE) methodology provides a framework for modeling constraint forces and moments acting at joints that connect multiple vehicles. With implementation in Program to Optimize Simulated Trajectories II (POST 2), the CFE provides a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. In this paper, the CFE/POST2 methodology is applied to the Shuttle-SRB separation problem as a test and validation case. The CFE/POST2 results are compared with STS-1 flight test data.

  3. Optimized production planning model for a multi-plant cultivation system under uncertainty

    NASA Astrophysics Data System (ADS)

    Ke, Shunkui; Guo, Doudou; Niu, Qingliang; Huang, Danfeng

    2015-02-01

    An inexact multi-constraint programming model under uncertainty was developed by incorporating a production plan algorithm into the crop production optimization framework under the multi-plant collaborative cultivation system. In the production plan, orders from the customers are assigned to a suitable plant under the constraints of plant capabilities and uncertainty parameters to maximize profit and achieve customer satisfaction. The developed model and solution method were applied to a case study of a multi-plant collaborative cultivation system to verify its applicability. As determined in the case analysis involving different orders from customers, the period of plant production planning and the interval between orders can significantly affect system benefits. Through the analysis of uncertain parameters, reliable and practical decisions can be generated using the suggested model of a multi-plant collaborative cultivation system.

  4. Varying execution discipline to increase performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, P.L.; Maccabe, A.B.

    1993-12-22

    This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.

  5. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    PubMed

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict syngas yield production and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Using Parent and Teacher Voices in the Creation of a Western-Based Early Childhood English-Language Program in China

    ERIC Educational Resources Information Center

    Shimpi, Priya M.; Paik, Jae H.; Wanerman, Todd; Johnson, Rebecca; Li, Hui; Duh, Shinchieh

    2015-01-01

    The current English-language research and educational program was driven by an initiative to create a more interactive, theme-based bilingual language education model for preschools in Chengdu, China. During a 2-week teacher education program centered at the Experimental Kindergarten of the Chinese Academy of Sciences in Chengdu, China, a team of…

  7. Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course

    ERIC Educational Resources Information Center

    McGowan, Ian S.

    2016-01-01

    Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…

  8. Community College Dual Enrollment Faculty Orientation: A Utilization-Focused Approach

    ERIC Educational Resources Information Center

    Charlier, Hara D.; Duggan, Molly H.

    2010-01-01

    The current climate of accountability demands that institutions engage in data-driven program evaluation. In order to promote quality dual enrollment (DE) programs, institutions must support the adjunct faculty teaching college courses in high schools. This study uses Patton's utilization-focused model (1997) to conduct a formative evaluation of a…

  9. Evaluating Performance Measurement Systems in Nonprofit Agencies: The Program Accountability Quality Scale (PAQS).

    ERIC Educational Resources Information Center

    Poole, Dennis L.; Nelson, Joan; Carnahan, Sharon; Chepenik, Nancy G.; Tubiak, Christine

    2000-01-01

    Developed and field tested the Performance Accountability Quality Scale (PAQS) on 191 program performance measurement systems developed by nonprofit agencies in central Florida. Preliminary findings indicate that the PAQS provides a structure for obtaining expert opinions based on a theory-driven model about the quality of proposed measurement…

  10. IESIP - AN IMPROVED EXPLORATORY SEARCH TECHNIQUE FOR PURE INTEGER LINEAR PROGRAMMING PROBLEMS

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1994-01-01

    IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.
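
    IESIP itself is Turbo Pascal; the Python sketch below only illustrates the idea it describes: start from a rounded continuous solution and apply Hooke-and-Jeeves-style unit exploratory moves over the integer lattice, accepting only feasible improving points. The small integer program used here is invented, not from the program's documentation.

    ```python
    # Rounding plus unit-neighborhood exploratory search for a toy integer program:
    # maximize 2x + 3y subject to x + y <= 5 and 2y <= 7, x and y nonnegative integers.

    def feasible(x, y):
        return x >= 0 and y >= 0 and x + y <= 5 and 2 * y <= 7

    def objective(x, y):
        return 2 * x + 3 * y

    def exploratory_search(start):
        """Unit exploratory moves restricted to feasible integer points."""
        best = start
        improved = True
        while improved:
            improved = False
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                cand = (best[0] + dx, best[1] + dy)
                if feasible(*cand) and objective(*cand) > objective(*best):
                    best, improved = cand, True
        return best

    # The continuous (simplex) optimum of this toy problem is (1.5, 3.5); rounding
    # down gives the feasible integer starting point required by the method.
    best = exploratory_search((1, 3))
    print(best, objective(*best))     # reaches (2, 3) with objective 13
    ```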

  11. Scenario and modelling uncertainty in global mean temperature change derived from emission-driven global climate models

    NASA Astrophysics Data System (ADS)

    Booth, B. B. B.; Bernie, D.; McNeall, D.; Hawkins, E.; Caesar, J.; Boulton, C.; Friedlingstein, P.; Sexton, D. M. H.

    2013-04-01

    We compare future changes in global mean temperature in response to different future scenarios which, for the first time, arise from an emission-driven rather than a concentration-driven perturbed parameter ensemble of a global climate model (GCM). These new GCM simulations sample uncertainties in atmospheric feedbacks, land carbon cycle, ocean physics and aerosol sulphur cycle processes. We find broader ranges of projected temperature responses arising when considering emission- rather than concentration-driven simulations (with 10-90th percentile ranges of 1.7 K for the aggressive mitigation scenario, up to 3.9 K for the high-end, business as usual scenario). A small minority of simulations resulting from combinations of strong atmospheric feedbacks and carbon cycle responses show temperature increases in excess of 9 K (RCP8.5) and, even under aggressive mitigation (RCP2.6), temperatures in excess of 4 K. While the simulations point to much larger temperature ranges for emission-driven experiments, they do not change existing expectations (based on previous concentration-driven experiments) on the timescales over which different sources of uncertainty are important. The new simulations sample a range of future atmospheric concentrations for each emission scenario. In the case of both SRES A1B and the Representative Concentration Pathways (RCPs), the concentration scenario used to drive GCM ensembles lies towards the lower end of our simulated distribution. This design decision (a legacy of previous assessments) is likely to lead concentration-driven experiments to under-sample strong feedback responses in future projections. Our ensemble of emission-driven simulations spans the global temperature response of the CMIP5 emission-driven simulations, except at the low end. Combinations of low climate sensitivity and low carbon cycle feedbacks cause a number of CMIP5 responses to lie below our ensemble range. The ensemble simulates a number of high-end responses which lie above the CMIP5 carbon cycle range. These high-end simulations can be linked to sampling a number of stronger carbon cycle feedbacks and to sampling climate sensitivities above 4.5 K. This latter aspect highlights the priority in identifying real-world climate-sensitivity constraints which, if achieved, would lead to reductions in the upper bound of projected global mean temperature change. The ensembles of simulations presented here provide a framework to explore relationships between present-day observables and future changes, while the large spread of future-projected changes highlights the ongoing need for such work.

  12. Simulating water markets with transaction costs

    PubMed Central

    Erfani, Tohid; Binions, Olga; Harou, Julien J

    2014-01-01

    This paper presents an optimization model to simulate short-term pair-wise spot-market trading of surface water abstraction licenses (water rights). The approach uses a node-arc multicommodity formulation that tracks individual supplier-receiver transactions in a water resource network. This enables accounting for transaction costs between individual buyer-seller pairs and abstractor-specific rules and behaviors using constraints. Trades are driven by economic demand curves that represent each abstractor's time-varying water demand. The purpose of the proposed model is to assess potential hydrologic and economic outcomes of water markets and aid policy makers in designing water market regulations. The model is applied to the Great Ouse River basin in Eastern England. The model assesses the potential weekly water trades and abstractions that could occur in a normal and a dry year. Four sectors (public water supply, energy, agriculture, and industrial) are included in the 94 active licensed water diversions. Each license's unique environmental restrictions are represented and weekly economic water demand curves are estimated. Rules encoded as constraints represent current water management realities and plausible stakeholder-informed water market behaviors. Results show buyers favor sellers who can supply large volumes to minimize transactions. The energy plant cooling and agricultural licenses, often restricted from obtaining water at times when it generates benefits, benefit most from trades. Assumptions and model limitations are discussed. Key Points: transaction-tracking hydro-economic optimization models simulate water markets; the proposed model formulation incorporates transaction costs and trading behavior; water markets benefit users with the most restricted water access. PMID:25598558
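
    The short sketch below illustrates the core economic logic in Python, in a form much simpler than the paper's node-arc multicommodity formulation: trade volumes between seller-buyer pairs are chosen to maximize net benefit (buyer value minus seller forgone value minus pair-specific transaction costs) subject to supply and demand limits. All names, volumes, values and costs are illustrative.

```python
# A minimal sketch (much simpler than the paper's node-arc multicommodity
# formulation) of pair-wise trading with pair-specific transaction costs:
# choose trade volumes q[s, b] to maximize buyers' benefit minus sellers'
# forgone benefit minus transaction costs, subject to supply/demand limits.
import numpy as np
from scipy.optimize import linprog

sellers = {"S1": 10.0, "S2": 6.0}                 # tradable surplus (Ml/week), illustrative
buyers = {"B1": (8.0, 50.0), "B2": (5.0, 35.0)}   # (max demand, marginal value per Ml)
seller_value = {"S1": 10.0, "S2": 20.0}           # sellers' own marginal value of water
tcost = {("S1", "B1"): 4.0, ("S1", "B2"): 6.0,    # transaction cost per Ml per pair
         ("S2", "B1"): 8.0, ("S2", "B2"): 3.0}

pairs = list(tcost)
# Net benefit per unit traded on each pair (negated because linprog minimizes).
c = [-(buyers[b][1] - seller_value[s] - tcost[s, b]) for s, b in pairs]

A_ub, b_ub = [], []
for s, cap in sellers.items():                    # each seller cannot exceed its surplus
    A_ub.append([1.0 if ps == s else 0.0 for ps, _ in pairs]); b_ub.append(cap)
for b, (dem, _) in buyers.items():                # each buyer cannot exceed its demand
    A_ub.append([1.0 if pb == b else 0.0 for _, pb in pairs]); b_ub.append(dem)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * len(pairs), method="highs")
for (s, b), q in zip(pairs, res.x):
    if q > 1e-6:
        print(f"{s} -> {b}: {q:.1f} Ml")
```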

  13. Simulating water markets with transaction costs.

    PubMed

    Erfani, Tohid; Binions, Olga; Harou, Julien J

    2014-06-01

    This paper presents an optimization model to simulate short-term pair-wise spot-market trading of surface water abstraction licenses (water rights). The approach uses a node-arc multicommodity formulation that tracks individual supplier-receiver transactions in a water resource network. This enables accounting for transaction costs between individual buyer-seller pairs and abstractor-specific rules and behaviors using constraints. Trades are driven by economic demand curves that represent each abstractor's time-varying water demand. The purpose of the proposed model is to assess potential hydrologic and economic outcomes of water markets and aid policy makers in designing water market regulations. The model is applied to the Great Ouse River basin in Eastern England. The model assesses the potential weekly water trades and abstractions that could occur in a normal and a dry year. Four sectors (public water supply, energy, agriculture, and industrial) are included in the 94 active licensed water diversions. Each license's unique environmental restrictions are represented and weekly economic water demand curves are estimated. Rules encoded as constraints represent current water management realities and plausible stakeholder-informed water market behaviors. Results show buyers favor sellers who can supply large volumes to minimize transactions. The energy plant cooling and agricultural licenses, often restricted from obtaining water at times when it generates benefits, benefit most from trades. Assumptions and model limitations are discussed. Transaction-tracking hydro-economic optimization models simulate water markets; the proposed model formulation incorporates transaction costs and trading behavior; water markets benefit users with the most restricted water access.

  14. Just-in-time Database-Driven Web Applications

    PubMed Central

    2003-01-01

    "Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that comprised 80% of the hits were results reporting (27%), graduate medical education (26%), research (20%), and bed availability (8%). The mean number of hits per application was 3888 (SD = 5598; range, 14-19879). A model is described for just-in-time database-driven Web application development and an example given with a popular HTML editor and database program. PMID:14517109

  15. Large eddy simulations of time-dependent and buoyancy-driven channel flows

    NASA Technical Reports Server (NTRS)

    Cabot, William H.

    1993-01-01

    The primary goal of this work has been to assess the performance of the dynamic SGS model in the large eddy simulation (LES) of channel flows in a variety of situations, viz., in temporal development of channel flow turned by a transverse pressure gradient and especially in buoyancy-driven turbulent flows such as Rayleigh-Benard and internally heated channel convection. For buoyancy-driven flows, there are additional buoyant terms that are possible in the base models, and one objective has been to determine if the dynamic SGS model results are sensitive to such terms. The ultimate goal is to determine the minimal base model needed in the dynamic SGS model to provide accurate results in flows with more complicated physical features. In addition, a program of direct numerical simulation (DNS) of fully compressible channel convection has been undertaken to determine stratification and compressibility effects. These simulations are intended to provide a comparative base for performing the LES of compressible (or highly stratified, pseudo-compressible) convection at high Reynolds number in the future.

  16. Ontology development for provenance tracing in National Climate Assessment of the US Global Change Research Program

    NASA Astrophysics Data System (ADS)

    Fu, Linyun; Ma, Xiaogang; Zheng, Jin; Goldstein, Justin; Duggan, Brian; West, Patrick; Aulenbach, Steve; Tilmes, Curt; Fox, Peter

    2014-05-01

    This poster will show how we used a case-driven iterative methodology to develop an ontology to represent the content structure and the associated provenance information in a National Climate Assessment (NCA) report of the US Global Change Research Program (USGCRP). We applied the W3C PROV-O ontology to implement a formal representation of provenance. We argue that the use case-driven, iterative development process and the application of a formal provenance ontology help efficiently incorporate domain knowledge from earth and environmental scientists in a well-structured model interoperable in the context of the Web of Data.
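
    As a small illustration of the PROV-O pattern described above, the sketch below records the provenance of a report figure with rdflib. The identifiers are illustrative placeholders, not terms from the actual NCA ontology.

```python
# A minimal sketch (illustrative identifiers, not the NCA ontology itself) of
# expressing report provenance with W3C PROV-O terms using rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/nca#")        # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

figure = EX["figure-2-1"]            # a figure in the assessment report
dataset = EX["temperature-dataset"]  # the dataset it was derived from
activity = EX["figure-generation"]   # the processing step
author = EX["jane-analyst"]          # hypothetical responsible agent

g.add((figure, RDF.type, PROV.Entity))
g.add((dataset, RDF.type, PROV.Entity))
g.add((activity, RDF.type, PROV.Activity))
g.add((author, RDF.type, PROV.Agent))
g.add((figure, PROV.wasDerivedFrom, dataset))
g.add((figure, PROV.wasGeneratedBy, activity))
g.add((figure, PROV.wasAttributedTo, author))
g.add((activity, PROV.used, dataset))

print(g.serialize(format="turtle"))
```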

  17. From Data to Improved Decisions: Operations Research in Healthcare Delivery.

    PubMed

    Capan, Muge; Khojandi, Anahita; Denton, Brian T; Williams, Kimberly D; Ayer, Turgay; Chhatwal, Jagpreet; Kurt, Murat; Lobo, Jennifer Mason; Roberts, Mark S; Zaric, Greg; Zhang, Shengfan; Schwartz, J Sanford

    2017-11-01

    The Operations Research Interest Group (ORIG) within the Society of Medical Decision Making (SMDM) is a multidisciplinary interest group of professionals that specializes in taking an analytical approach to medical decision making and healthcare delivery. ORIG is interested in leveraging mathematical methods associated with the field of Operations Research (OR) to obtain data-driven solutions to complex healthcare problems and encourage collaborations across disciplines. This paper introduces OR for the non-expert and draws attention to opportunities where OR can be utilized to facilitate solutions to healthcare problems. Decision making is the process of choosing between possible solutions to a problem with respect to certain metrics. OR concepts can help systematically improve decision making through efficient modeling techniques while accounting for relevant constraints. Depending on the problem, methods that are part of OR (e.g., linear programming, Markov Decision Processes) or methods that are derived from related fields (e.g., regression from statistics) can be incorporated into the solution approach. This paper highlights the characteristics of different OR methods that have been applied to healthcare decision making and provides examples of emerging research opportunities. We illustrate OR applications in healthcare using previous studies, including diagnosis and treatment of diseases, organ transplants, and patient flow decisions. Further, we provide a selection of emerging areas for utilizing OR. There is a timely need to inform practitioners and policy makers of the benefits of using OR techniques in solving healthcare problems. OR methods can support the development of sustainable long-term solutions across disease management, service delivery, and health policies by optimizing the performance of system elements and analyzing their interaction while considering relevant constraints.

  18. Empirical Performance Model-Driven Data Layout Optimization and Library Call Selection for Tensor Contraction Expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Qingda; Gao, Xiaoyang; Krishnamoorthy, Sriram

    Empirical optimizers like ATLAS have been very effective in optimizing computational kernels in libraries. The best choice of parameters such as tile size and degree of loop unrolling is determined by executing different versions of the computation. In contrast, optimizing compilers use a model-driven approach to program transformation. While the model-driven approach of optimizing compilers is generally orders of magnitude faster than ATLAS-like library generators, its effectiveness can be limited by the accuracy of the performance models used. In this paper, we describe an approach where a class of computations is modeled in terms of constituent operations that are empirically measured, thereby allowing modeling of the overall execution time. The performance model with empirically determined cost components is used to perform data layout optimization together with the selection of library calls and layout transformations in the context of the Tensor Contraction Engine, a compiler for a high-level domain-specific language for expressing computational models in quantum chemistry. The effectiveness of the approach is demonstrated through experimental measurements on representative computations from quantum chemistry.
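
    The toy Python sketch below mirrors the empirical idea, not the Tensor Contraction Engine itself: candidate implementations of the same tensor contraction are timed, and the measured costs could then feed a layout/library-call selection model. Array sizes and candidate names are illustrative.

```python
# A toy sketch (not the Tensor Contraction Engine) of the empirical idea:
# time candidate implementations/layouts of the same tensor contraction and
# keep the measured costs for use in a layout/library-call selection model.
import time
import numpy as np

def measure(fn, *args, repeats=5):
    """Return the best-of-N wall-clock time of fn(*args)."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

A = np.random.rand(128, 128, 64)
B = np.random.rand(64, 128)

candidates = {
    "einsum":      lambda A, B: np.einsum("ijk,kl->ijl", A, B),
    "reshape+dot": lambda A, B: (A.reshape(-1, 64) @ B).reshape(128, 128, 128),
    "tensordot":   lambda A, B: np.tensordot(A, B, axes=([2], [0])),
}

costs = {name: measure(fn, A, B) for name, fn in candidates.items()}
for name, t in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} {t * 1e3:7.2f} ms")
```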

  19. A Discussion of Issues in Integrity Constraint Monitoring

    NASA Technical Reports Server (NTRS)

    Fernandez, Francisco G.; Gates, Ann Q.; Cooke, Daniel E.

    1998-01-01

    In the development of large-scale software systems, analysts, designers, and programmers identify properties of data objects in the system. The ability to check those assertions during runtime is desirable as a means of verifying the integrity of the program. Typically, programmers ensure the satisfaction of such properties through the use of some form of manually embedded assertion check. The disadvantage to this approach is that these assertions become entangled within the program code. The goal of the research is to develop an integrity constraint monitoring mechanism whereby software system properties (called integrity constraints) held in a repository are automatically inserted into the program by the mechanism to check for incorrect program behaviors. Such a mechanism would overcome many of the deficiencies of manually embedded assertion checks. This paper gives an overview of the preliminary work performed toward this goal. The manual instrumentation of constraint checking on a series of test programs is discussed. This review is then used as the basis for a discussion of issues to be considered in developing an automated integrity constraint monitor.
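
    As a rough illustration of keeping constraints out of the program code (not the paper's mechanism), the Python sketch below stores integrity constraints in a central repository and attaches them to functions with a decorator at call time. Function and constraint names are illustrative.

```python
# A minimal sketch (not the paper's mechanism) of separating integrity
# constraints from program code: constraints live in a central repository and
# a decorator instruments functions with the checks at call time.
import functools

# Repository of integrity constraints, keyed by the function they guard.
CONSTRAINTS = {
    "withdraw": [
        (lambda balance, amount: amount > 0, "amount must be positive"),
        (lambda balance, amount: amount <= balance, "cannot overdraw the account"),
    ],
}

def monitored(func):
    """Instrument func with the integrity constraints registered for it."""
    checks = CONSTRAINTS.get(func.__name__, [])
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for predicate, message in checks:
            if not predicate(*args, **kwargs):
                raise AssertionError(f"integrity constraint violated: {message}")
        return func(*args, **kwargs)
    return wrapper

@monitored
def withdraw(balance, amount):
    return balance - amount

print(withdraw(100, 30))   # passes both constraints -> 70
print(withdraw(100, 500))  # raises AssertionError: cannot overdraw the account
```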

  20. Constraints on Wave Drag Parameterization Schemes for Simulating the Quasi-Biennial Oscillation. Part II: Combined Effects of Gravity Waves and Equatorial Planetary Waves.

    NASA Astrophysics Data System (ADS)

    Campbell, Lucy J.; Shepherd, Theodore G.

    2005-12-01

    This study examines the effect of combining equatorial planetary wave drag and gravity wave drag in a one-dimensional zonal mean model of the quasi-biennial oscillation (QBO). Several different combinations of planetary wave and gravity wave drag schemes are considered in the investigations, with the aim being to assess which aspects of the different schemes affect the nature of the modeled QBO. Results show that it is possible to generate a realistic-looking QBO with various combinations of drag from the two types of waves, but there are some constraints on the wave input spectra and amplitudes. For example, if the phase speeds of the gravity waves in the input spectrum are large relative to those of the equatorial planetary waves, critical level absorption of the equatorial planetary waves may occur. The resulting mean-wind oscillation, in that case, is driven almost exclusively by the gravity wave drag, with only a small contribution from the planetary waves at low levels. With an appropriate choice of wave input parameters, it is possible to obtain a QBO with a realistic period and to which both types of waves contribute. This is the regime in which the terrestrial QBO appears to reside. There may also be constraints on the initial strength of the wind shear, and these are similar to the constraints that apply when gravity wave drag is used without any planetary wave drag. In recent years, it has been observed that, in order to simulate the QBO accurately, general circulation models require parameterized gravity wave drag, in addition to the drag from resolved planetary-scale waves, and that even if the planetary wave amplitudes are incorrect, the gravity wave drag can be adjusted to compensate. This study provides a basis for knowing that such a compensation is possible.

  1. Radio emission from embryonic superluminous supernova remnants

    NASA Astrophysics Data System (ADS)

    Omand, Conor M. B.; Kashiyama, Kazumi; Murase, Kohta

    2018-02-01

    It has been widely argued that Type-I superluminous supernovae (SLSNe-I) are driven by powerful central engines with a long-lasting energy injection after the core-collapse of massive progenitors. One of the popular hypotheses is that the hidden engines are fast-rotating pulsars with a magnetic field of B ~ 10^13-10^15 G. Murase, Kashiyama & Mészáros proposed that quasi-steady radio/submm emission from non-thermal electron-positron pairs in nascent pulsar wind nebulae can be used as a relevant counterpart of such pulsar-driven supernovae (SNe). In this work, focusing on the nascent SLSN-I remnants, we examine constraints that can be placed by radio emission. We show that the Atacama Large Millimeter/submillimetre Array can detect the radio nebula from SNe at D_L ~ 1 Gpc in a few years after the explosion, while the Jansky Very Large Array can also detect the counterpart in a few decades. The proposed radio follow-up observation could solve the parameter degeneracy in the pulsar-driven SN model for optical/UV light curves, and could also give us clues to young neutron star scenarios for SLSNe-I and fast radio bursts.

  2. Transportation impacts of the Chicago River closure to prevent an asian carp infestation.

    DOT National Transportation Integrated Search

    2012-07-01

    This project develops a simple linear programming model of the Upper Midwest region's rail transportation network to test whether a closure of the Chicago River to freight traffic would impact the capacity constraint of the rail system. The result ...

  3. Reconciling Pairs of Concurrently Used Clinical Practice Guidelines Using Constraint Logic Programming

    PubMed Central

    Wilk, Szymon; Michalowski, Martin; Michalowski, Wojtek; Hing, Marisela Mainegra; Farion, Ken

    2011-01-01

    This paper describes a new methodological approach to reconciling adverse and contradictory activities (called points of contention) occurring when a patient is managed according to two or more concurrently used clinical practice guidelines (CPGs). The need to address these inconsistencies occurs when a patient with more than one disease, each of which is a comorbid condition, has to be managed according to different treatment regimens. We propose an automatic procedure that constructs a mathematical guideline model using the Constraint Logic Programming (CLP) methodology, uses this model to identify and mitigate encountered points of contention, and revises the considered CPGs accordingly. The proposed procedure is used as an alerting mechanism and coupled with a guideline execution engine warns the physician about potential problems with the concurrent application of two or more guidelines. We illustrate the operation of our procedure in a clinical scenario describing simultaneous use of CPGs for duodenal ulcer and transient ischemic attack. PMID:22195153
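
    The Python sketch below conveys the underlying constraint-satisfaction idea in a much simplified form (the paper uses Constraint Logic Programming rather than plain enumeration): each guideline's admissible actions become a variable domain, known adverse interactions become constraints, and a solver searches for a consistent joint plan. The drug names and the interaction listed are illustrative only.

```python
# A much-simplified sketch of the idea (the paper uses Constraint Logic
# Programming): encode each guideline's admissible actions as variable domains,
# add interaction constraints, and check whether a consistent joint plan exists.
# Drug names and interactions here are illustrative only.
from itertools import product

# Admissible treatments recommended by each guideline for the same patient.
domains = {
    "duodenal_ulcer": ["PPI", "PPI+antibiotics"],
    "transient_ischemic_attack": ["aspirin", "clopidogrel"],
}

# Pairs of actions considered a point of contention (illustrative).
contentions = {("PPI", "clopidogrel")}

def consistent(plan):
    actions = set(plan.values())
    return not any({a, b} <= actions for a, b in contentions)

solutions = [dict(zip(domains, combo))
             for combo in product(*domains.values())
             if consistent(dict(zip(domains, combo)))]

if solutions:
    print("consistent joint plans:", solutions)
else:
    print("no consistent plan: guidelines must be revised or mitigated")
```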

  4. Water resources planning and management : A stochastic dual dynamic programming approach

    NASA Astrophysics Data System (ADS)

    Goor, Q.; Pinte, D.; Tilmant, A.

    2008-12-01

    Allocating water between different users and uses, including the environment, is one of the most challenging tasks facing water resources managers and has always been at the heart of Integrated Water Resources Management (IWRM). As water scarcity is expected to increase over time, allocation decisions among the different uses will have to be found taking into account the complex interactions between water and the economy. Hydro-economic optimization models can capture those interactions while prescribing efficient allocation policies. Many hydro-economic models found in the literature are formulated as large-scale non-linear optimization problems (NLP), seeking to maximize net benefits from the system operation while meeting operational and/or institutional constraints, and describing the main hydrological processes. However, those models rarely incorporate the uncertainty inherent to the availability of water, essentially because of the computational difficulties associated with stochastic formulations. The purpose of this presentation is to describe a stochastic programming model that can identify economically efficient allocation policies in large-scale multipurpose multireservoir systems. The model is based on stochastic dual dynamic programming (SDDP), an extension of traditional SDP that is not affected by the curse of dimensionality. SDDP identifies efficient allocation policies while considering the hydrologic uncertainty. The objective function includes the net benefits from the hydropower and irrigation sectors, as well as penalties for not meeting operational and/or institutional constraints. To be able to implement the efficient decomposition scheme that removes the computational burden, the one-stage SDDP problem has to be a linear program. Recent developments improve the representation of the non-linear and mildly non-convex hydropower function through a convex hull approximation of the true hydropower function. This model is illustrated on a cascade of 14 reservoirs in the Nile River basin.

  5. Leptogenesis scenarios for natural SUSY with mixed axion-higgsino dark matter

    NASA Astrophysics Data System (ADS)

    Bae, Kyu Jung; Baer, Howard; Serce, Hasan; Zhang, Yi-Fan

    2016-01-01

    Supersymmetric models with radiatively-driven electroweak naturalness require light higgsinos of mass ~ 100-300 GeV. Naturalness in the QCD sector is invoked via the Peccei-Quinn (PQ) axion leading to mixed axion-higgsino dark matter. The SUSY DFSZ axion model provides a solution to the SUSY μ problem and the Little Hierarchy μ ≪ m_3/2 may emerge as a consequence of a mismatch between PQ and hidden sector mass scales. The traditional gravitino problem is now augmented by the axino and saxion problems, since these latter particles can also contribute to overproduction of WIMPs or dark radiation, or violation of BBN constraints. We compute regions of the T_R vs. m_3/2 plane allowed by BBN, dark matter and dark radiation constraints for various PQ scale choices f_a. These regions are compared to the values needed for thermal leptogenesis, non-thermal leptogenesis, oscillating sneutrino leptogenesis and Affleck-Dine leptogenesis. The latter three are allowed in wide regions of parameter space for PQ scale f_a ~ 10^10-10^12 GeV, which is also favored by naturalness: f_a ~ √(μ M_P/λ_μ) ~ 10^10-10^12 GeV. These f_a values correspond to axion masses somewhat above the projected ADMX search regions.

  6. The Nursing Leadership Institute program evaluation: a critique

    PubMed Central

    Havaei, Farinaz; MacPhee, Maura

    2015-01-01

    A theory-driven program evaluation was conducted for a nursing leadership program, as a collaborative project between university faculty, the nurses’ union, the provincial Ministry of Health, and its chief nursing officers. A collaborative logic model process was used to engage stakeholders, and mixed methods approaches were used to answer evaluation questions. Despite demonstrated, successful outcomes, the leadership program was not supported with continued funding. This paper examines what happened during the evaluation process: What factors failed to sustain this program? PMID:29355180

  7. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.

  8. Gravity-driven groundwater flow and slope failure potential: 1. Elastic effective-stress model

    USGS Publications Warehouse

    Iverson, Richard M.; Reid, Mark E.

    1992-01-01

    Hilly or mountainous topography influences gravity-driven groundwater flow and the consequent distribution of effective stress in shallow subsurface environments. Effective stress, in turn, influences the potential for slope failure. To evaluate these influences, we formulate a two-dimensional, steady state, poroelastic model. The governing equations incorporate groundwater effects as body forces, and they demonstrate that spatially uniform pore pressure changes do not influence effective stresses. We implement the model using two finite element codes. As an illustrative case, we calculate the groundwater flow field, total body force field, and effective stress field in a straight, homogeneous hillslope. The total body force and effective stress fields show that groundwater flow can influence shear stresses as well as effective normal stresses. In most parts of the hillslope, groundwater flow significantly increases the Coulomb failure potential Φ, which we define as the ratio of maximum shear stress to mean effective normal stress. Groundwater flow also shifts the locus of greatest failure potential toward the slope toe. However, the effects of groundwater flow on failure potential are less pronounced than might be anticipated on the basis of a simpler, one-dimensional, limit equilibrium analysis. This is a consequence of continuity, compatibility, and boundary constraints on the two-dimensional flow and stress fields, and it points to important differences between our elastic continuum model and limit equilibrium models commonly used to assess slope stability.
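
    A minimal sketch of the failure-potential measure defined in the abstract is given below, assuming a two-dimensional effective stress state with compression positive; all stress and pore-pressure values are illustrative.

```python
# A minimal sketch of the failure-potential measure defined in the abstract:
# Phi = (maximum shear stress) / (mean effective normal stress), evaluated here
# for a 2-D effective stress state (compression positive; values illustrative).
import numpy as np

def coulomb_failure_potential(sxx, szz, sxz, pore_pressure):
    """Return Phi for a plane (x, z) stress state with pore pressure p."""
    sxx_eff, szz_eff = sxx - pore_pressure, szz - pore_pressure  # effective stresses
    mean_eff_normal = 0.5 * (sxx_eff + szz_eff)
    max_shear = np.sqrt((0.5 * (sxx_eff - szz_eff)) ** 2 + sxz ** 2)
    return max_shear / mean_eff_normal

# Dry versus saturated example (stresses in kPa, illustrative numbers):
# raising the pore pressure raises Phi and hence the failure potential.
print(coulomb_failure_potential(60.0, 120.0, 20.0, pore_pressure=0.0))
print(coulomb_failure_potential(60.0, 120.0, 20.0, pore_pressure=30.0))
```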

  9. Introductory Geophysics at Colorado College: A Research-Driven Course

    NASA Astrophysics Data System (ADS)

    Bank, C.

    2003-12-01

    Doing research during an undergraduate course provides stimulus for students and instructor. Students learn to appreciate the scientific method and get hands-on experience, while the instructor remains thrilled about teaching her/his discipline. The introductory geophysics course taught at Colorado College is made up of four units (gravity, seismic, resistivity, and magnetic) using available geophysical equipment. Within each unit students learn the physical background of the method, and then tackle a small research project selected by the instructor. Students pose a research question (or formulate a hypothesis), collect near-surface data in the field, process it using personal computers, and analyse it by creating computer models and running simple inversions. Computer work is done using the programming language Matlab, with several pre-coded scripts to make the programming experience more comfortable. Students then interpret the data and answer the question posed at the beginning. The unit ends with students writing a summary report, creating a poster, or presenting their findings orally. First evaluations of the course show that students appreciate the emphasis on field work and applications to real problems, as well as developing and testing their own hypotheses. The main challenge for the instructor is to find feasible projects, given the time constraints of a course and availability of field sites with new questions to answer. My presentation will feature a few projects done by students during the course and will discuss the experience students and I have had with this approach.

  10. Using genetic algorithm to solve a new multi-period stochastic optimization model

    NASA Astrophysics Data System (ADS)

    Zhang, Xin-Li; Zhang, Ke-Cun

    2009-09-01

    This paper presents a new asset allocation model based on the CVaR risk measure and transaction costs. Institutional investors manage their strategic asset mix over time to achieve favorable returns subject to various uncertainties, policy and legal constraints, and other requirements. One may use a multi-period portfolio optimization model in order to determine an optimal asset mix. Recently, an alternative stochastic programming model with simulated paths was proposed by Hibiki [N. Hibiki, A hybrid simulation/tree multi-period stochastic programming model for optimal asset allocation, in: H. Takahashi, (Ed.) The Japanese Association of Financial Econometrics and Engineering, JAFFE Journal (2001) 89-119 (in Japanese); N. Hibiki, A hybrid simulation/tree stochastic optimization model for dynamic asset allocation, in: B. Scherer (Ed.), Asset and Liability Management Tools: A Handbook for Best Practice, Risk Books, 2003, pp. 269-294], which was called a hybrid model. However, transaction costs were not considered in that paper. In this paper, we improve Hibiki's model in the following aspects: (1) the risk measure CVaR is introduced to control the wealth loss risk while maximizing the expected utility; (2) typical market imperfections such as short sale constraints and proportional transaction costs are considered simultaneously; (3) applying a genetic algorithm to solve the resulting model is discussed in detail. Numerical results show the suitability and feasibility of our methodology.
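
    For readers unfamiliar with the risk measure used in the model, the short sketch below shows a standard sample-based estimate of CVaR (the expected loss in the worst tail beyond the Value-at-Risk); the loss scenarios are randomly generated for illustration and are unrelated to the paper's data.

```python
# A minimal sketch of the CVaR risk measure used in the model: given simulated
# portfolio losses, CVaR_alpha is the expected loss in the worst (1 - alpha)
# tail beyond the Value-at-Risk. Scenario data below are randomly generated.
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Return (VaR, CVaR) of a sample of losses at confidence level alpha."""
    losses = np.asarray(losses)
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return var, tail.mean()

rng = np.random.default_rng(0)
losses = rng.normal(loc=0.0, scale=1.0, size=10_000)   # simulated loss scenarios
var, cvar = var_cvar(losses, alpha=0.95)
print(f"VaR(95%)  = {var:.3f}")
print(f"CVaR(95%) = {cvar:.3f}")   # CVaR >= VaR by construction
```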

  11. A dynamic spar numerical model for passive shape change

    NASA Astrophysics Data System (ADS)

    Calogero, J. P.; Frecker, M. I.; Hasnain, Z.; Hubbard, J. E., Jr.

    2016-10-01

    A three-dimensional constraint-driven dynamic rigid-link numerical model of a flapping wing structure with compliant joints (CJs) called the dynamic spar numerical model is introduced and implemented. CJs are modeled as spherical joints with distributed mass and spring-dampers with coupled nonlinear spring and damping coefficients, which model compliant mechanisms spatially distributed in the structure while greatly reducing computation time compared to a finite element model. The constraints are established, followed by the formulation of a state model used in conjunction with a forward time integrator, an experiment to verify a rigid-link assumption and determine a flapping angle function, and finally several example runs. Modeling the CJs as coupled bi-linear springs shows the wing is able to flex more during upstroke than downstroke. Coupling the spring stiffnesses allows an angular deformation about one axis to induce an angular deformation about another axis, where the magnitude is proportional to the coupling term. Modeling both the leading edge and diagonal spars shows that the diagonal spar changes the kinematics of the leading edge spar versus only considering the leading edge spar, causing much larger axial rotations in the leading edge spar. The kinematics are very sensitive to CJ location, where moving the CJ toward the wing root causes a stronger response, and adding multiple CJs on the leading edge spar with a CJ on the diagonal spar allows the wing to deform with larger magnitude in all directions. This model lays a framework for a tool which can be used to understand flapping wing flight.

  12. Glass Property Models and Constraints for Estimating the Glass to be Produced at Hanford by Implementing Current Advanced Glass Formulation Efforts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vienna, John D.; Kim, Dong-Sang; Skorski, Daniel C.

    2013-07-01

    Recent glass formulation and melter testing data have suggested that significant increases in waste loading in HLW and LAW glasses are possible over current system planning estimates. The data (although limited in some cases) were evaluated to determine a set of constraints and models that could be used to estimate the maximum loading of specific waste compositions in glass. It is recommended that these models and constraints be used to estimate the likely HLW and LAW glass volumes that would result if the current glass formulation studies are successfully completed. It is recognized that some of the models are preliminary in nature and will change in the coming years. In addition, the models do not currently address the prediction uncertainties that would be needed before they could be used in plant operations. The models and constraints are only meant to give an indication of rough glass volumes and are not intended to be used in plant operation or waste form qualification activities. A current research program is in place to develop the data, models, and uncertainty descriptions for that purpose. A fundamental tenet underlying the research reported in this document is to try to be less conservative than previous studies when developing constraints for estimating the glass to be produced by implementing current advanced glass formulation efforts. The less conservative approach documented herein should allow for the estimate of glass masses that may be realized if the current efforts in advanced glass formulations are completed over the coming years and are as successful as early indications suggest they may be. Because of this approach there is an unquantifiable uncertainty in the ultimate glass volume projections due to model prediction uncertainties that has to be considered along with other system uncertainties such as waste compositions and amounts to be immobilized, split factors between LAW and HLW, etc.

  13. Advancing the climate data driven crop-modeling studies in the dry areas of Northern Syria and Lebanon: an important first step for assessing impact of future climate.

    PubMed

    Dixit, Prakash N; Telleria, Roberto

    2015-04-01

    Inter-annual and seasonal variability in climatic parameters, most importantly rainfall, have the potential to cause climate-induced risk in long-term crop production. Short-term field studies do not capture the full nature of such risk and the extent to which modifications to crop, soil and water management recommendations may be made to mitigate the extent of such risk. Crop modeling studies driven by long-term daily weather data can predict the impact of climate-induced risk on crop growth and yield; however, the availability of long-term daily weather data can present serious constraints to the use of crop models. To tackle this constraint, two weather generators, namely LARS-WG and MarkSim, were evaluated in order to assess their capabilities of reproducing frequency distributions, means, variances, dry spell and wet chains of observed daily precipitation, maximum and minimum temperature, and solar radiation for the eight locations across cropping areas of Northern Syria and Lebanon. Further, the application of generated long-term daily weather data, with both weather generators, in simulating barley growth and yield was also evaluated. We found that overall LARS-WG performed better than MarkSim in generating daily weather parameters and in 50 years of continuous simulation of barley growth and yield. Our findings suggest that LARS-WG does not necessarily require long-term (e.g., >30 years) observed weather data for calibration, as generated results proved to be satisfactory with >10 years of observed data except in areas at higher altitude. Evaluating these weather generators and the ability of generated weather data to perform long-term simulation of crop growth and yield is an important first step to assess the impact of future climate on yields, and to identify promising technologies to make agricultural systems more resilient in the given region. Copyright © 2015 Elsevier B.V. All rights reserved.
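
    The toy Python sketch below shows the usual core of a daily rainfall generator of the kind discussed above; it is not LARS-WG or MarkSim, and the transition probabilities and gamma parameters are illustrative.

```python
# A toy sketch (not LARS-WG or MarkSim) of the usual core of a daily weather
# generator: a first-order two-state Markov chain for rainfall occurrence with
# gamma-distributed wet-day amounts. Parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(42)

p_wet_given_dry = 0.20                 # P(wet today | dry yesterday)
p_wet_given_wet = 0.55                 # P(wet today | wet yesterday)
gamma_shape, gamma_scale = 0.8, 7.0    # wet-day rainfall amounts (mm)

def generate_rainfall(n_days, start_wet=False):
    rain, wet = np.zeros(n_days), start_wet
    for d in range(n_days):
        p_wet = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p_wet
        if wet:
            rain[d] = rng.gamma(gamma_shape, gamma_scale)
    return rain

series = generate_rainfall(365 * 50)   # 50 synthetic years of daily rainfall
print(f"mean annual rainfall: {series.sum() / 50:.0f} mm")
print(f"wet-day frequency:    {np.mean(series > 0):.2f}")
```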

  14. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    NASA Astrophysics Data System (ADS)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.

  15. Country specific predictions of the cost-effectiveness of malaria vaccine RTS,S/AS01 in endemic Africa.

    PubMed

    Galactionova, Katya; Tediosi, Fabrizio; Camponovo, Flavia; Smith, Thomas A; Gething, Peter W; Penny, Melissa A

    2017-01-03

    RTS,S/AS01 is a safe and moderately efficacious vaccine considered for implementation in endemic Africa. Model predictions of impact and cost-effectiveness of this new intervention could aid in country adoption decisions. The impact of RTS,S was assessed in 43 countries using an ensemble of models of Plasmodium falciparum epidemiology. Informed by the 32-month follow-up data from the phase 3 trial, vaccine effectiveness was evaluated at country levels of malaria parasite prevalence, coverage of control interventions and immunization. Benefits and costs of the program incremental to routine malaria control were evaluated for a four-dose schedule: first dose administered at six months, second and third before 9 months, and fourth dose at 27 months of age. Sensitivity analyses around vaccine properties, transmission, and economic inputs were conducted. If implemented in all 43 countries the vaccine has the potential to avert 123 (117;129) million malaria episodes over the first 10 years. Burden averted averages 18,413 (range of country median estimates 156-40,054) DALYs per 100,000 fully vaccinated children with much variation across settings primarily driven by differences in transmission intensity. At a price of $5 per dose, program costs average $39.8 per fully vaccinated child with a median cost-effectiveness ratio of $188 (range $78-$22,448) per DALY averted; the ratio is lower by one third - $136 (range $116-$220) - in settings where parasite prevalence in children aged 2-10 years is at or above 10%. RTS,S/AS01 has the potential to substantially reduce malaria burden in children across Africa. Conditional on assumptions on price, coverage, and vaccine properties, adding RTS,S to routine malaria control interventions would be highly cost-effective. Implementation decisions will need to further consider feasibility of scaling up existing control programs, and operational constraints in reaching children at risk with the schedule. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.

  16. Validation of buoyancy driven spectral tensor model using HATS data

    NASA Astrophysics Data System (ADS)

    Chougule, A.; Mann, J.; Kelly, M.; Larsen, G. C.

    2016-09-01

    We present a homogeneous spectral tensor model for wind velocity and temperature fluctuations, driven by mean vertical shear and mean temperature gradient. Results from the model, including one-dimensional velocity and temperature spectra and the associated co-spectra, are shown in this paper. The model also reproduces two-point statistics, such as coherence and phases, via cross-spectra between two points separated in space. Model results are compared with observations from the Horizontal Array Turbulence Study (HATS) field program (Horst et al. 2004). The spectral velocity tensor in the model is described via five parameters: the dissipation rate (ɛ), the length scale of energy-containing eddies (L), a turbulence anisotropy parameter (Γ), the gradient Richardson number (Ri) representing the atmospheric stability, and the rate of destruction of temperature variance (η_θ).

  17. Efficient pairwise RNA structure prediction using probabilistic alignment constraints in Dynalign

    PubMed Central

    2007-01-01

    Background Joint alignment and secondary structure prediction of two RNA sequences can significantly improve the accuracy of the structural predictions. Methods addressing this problem, however, are forced to employ constraints that reduce computation by restricting the alignments and/or structures (i.e. folds) that are permissible. In this paper, a new methodology is presented for the purpose of establishing alignment constraints based on nucleotide alignment and insertion posterior probabilities. Using a hidden Markov model, posterior probabilities of alignment and insertion are computed for all possible pairings of nucleotide positions from the two sequences. These alignment and insertion posterior probabilities are additively combined to obtain probabilities of co-incidence for nucleotide position pairs. A suitable alignment constraint is obtained by thresholding the co-incidence probabilities. The constraint is integrated with Dynalign, a free energy minimization algorithm for joint alignment and secondary structure prediction. The resulting method is benchmarked against the previous version of Dynalign and against other programs for pairwise RNA structure prediction. Results The proposed technique eliminates manual parameter selection in Dynalign and provides significant computational time savings in comparison to prior constraints in Dynalign while simultaneously providing a small improvement in the structural prediction accuracy. Savings are also realized in memory. In experiments over a 5S RNA dataset with average sequence length of approximately 120 nucleotides, the method reduces computation by a factor of 2. The method performs favorably in comparison to other programs for pairwise RNA structure prediction: yielding better accuracy, on average, and requiring significantly lesser computational resources. Conclusion Probabilistic analysis can be utilized in order to automate the determination of alignment constraints for pairwise RNA structure prediction methods in a principled fashion. These constraints can reduce the computational and memory requirements of these methods while maintaining or improving their accuracy of structural prediction. This extends the practical reach of these methods to longer length sequences. The revised Dynalign code is freely available for download. PMID:17445273
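
    The numpy sketch below is a toy illustration of the constraint-construction step described above: alignment and insertion posteriors are additively combined into co-incidence probabilities and thresholded to decide which position pairs remain permissible. The posterior matrices here are random stand-ins for values a real implementation would compute with forward-backward over the pairwise alignment HMM.

```python
# A toy sketch of the constraint construction described above: co-incidence
# probabilities are formed by additively combining alignment and insertion
# posteriors for each pair of nucleotide positions, then thresholded to decide
# which (i, j) pairs the joint folding algorithm is allowed to align.
import numpy as np

rng = np.random.default_rng(1)
len1, len2 = 8, 9

# Stand-ins for the HMM posteriors (a real implementation computes these with
# forward-backward over the pairwise alignment HMM).
p_align = rng.random((len1, len2)) * 0.6
p_insert1 = rng.random((len1, len2)) * 0.2    # position i inserted relative to j
p_insert2 = rng.random((len1, len2)) * 0.2    # position j inserted relative to i

co_incidence = p_align + p_insert1 + p_insert2

threshold = 0.7
allowed = co_incidence >= threshold           # boolean constraint mask
pairs = np.argwhere(allowed)

print(f"{pairs.shape[0]} of {len1 * len2} position pairs remain permissible")
```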

  18. Relating constrained motion to force through Newton's second law

    NASA Astrophysics Data System (ADS)

    Roithmayr, Carlos M.

    When a mechanical system is subject to constraints its motion is in some way restricted. In accordance with Newton's second law, motion is a direct result of forces acting on a system; hence, constraint is inextricably linked to force. The presence of a constraint implies the application of particular forces needed to compel motion in accordance with the constraint; absence of a constraint implies the absence of such forces. The objective of this thesis is to formulate a comprehensive, consistent, and concise method for identifying a set of forces needed to constrain the behavior of a mechanical system modeled as a set of particles and rigid bodies. The goal is accomplished in large part by expressing constraint equations in vector form rather than entirely in terms of scalars. The method developed here can be applied whenever constraints can be described at the acceleration level by a set of independent equations that are linear in acceleration. Hence, the range of applicability extends to servo-constraints or program constraints described at the velocity level with relationships that are nonlinear in velocity. All configuration constraints, and an important class of classical motion constraints, can be expressed at the velocity level by using equations that are linear in velocity; therefore, the associated constraint equations are linear in acceleration when written at the acceleration level. Two new approaches are presented for deriving equations governing motion of a system subject to constraints expressed at the velocity level with equations that are nonlinear in velocity. By using partial accelerations instead of the partial velocities normally employed with Kane's method, it is possible to form dynamical equations that either do or do not contain evidence of the constraint forces, depending on the analyst's interests.
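
    The numpy sketch below is not the thesis's vector formulation, but it shows the standard linear-algebra route the abstract alludes to: when constraints are written at the acceleration level as A(q, v, t) a = b, the equations M a = F + Aᵀλ and A a = b are solved together, and the constraint force Aᵀλ falls out of the solution. The example is a particle swinging on a rigid massless rod; all numbers are illustrative.

```python
# A minimal sketch (not the thesis's formulation) of constrained dynamics with
# constraints linear in acceleration: solve  M a = F + A^T lam  and  A a = b
# simultaneously, so the constraint force A^T lam is identified along with a.
# Example: a particle of mass m on a rigid massless rod of length L (|r| = L).
import numpy as np

m, L, g = 1.0, 2.0, 9.81
r = np.array([L * np.sin(0.3), -L * np.cos(0.3)])   # current position
v = np.array([1.0, 1.0 * np.tan(0.3)])              # velocity (tangential: r.v = 0)

M = m * np.eye(2)                      # mass matrix
F = np.array([0.0, -m * g])            # applied force (gravity)
A = r.reshape(1, 2)                    # constraint r.a = -v.v (from d^2/dt^2 |r|^2 = 0)
b = np.array([-v @ v])

# Assemble and solve the block system [[M, -A^T], [A, 0]] [a; lam] = [F; b].
K = np.block([[M, -A.T],
              [A, np.zeros((1, 1))]])
rhs = np.concatenate([F, b])
sol = np.linalg.solve(K, rhs)
a, lam = sol[:2], sol[2]

print("acceleration          :", a)
print("constraint (rod) force:", (A.T * lam).ravel())   # force along r
```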

  19. Program manual for ASTOP, an Arbitrary space trajectory optimization program

    NASA Technical Reports Server (NTRS)

    Horsewood, J. L.

    1974-01-01

    The ASTOP program (an Arbitrary Space Trajectory Optimization Program) designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end conditions, and performance index options. The basic approach is conducive to future expansion of features such as the incorporation of new constraints and the addition of new end conditions.

  20. Development of a New Data Tool for Computing Launch and Landing Availability with Respect to Surface Weather

    NASA Technical Reports Server (NTRS)

    Burns, K. Lee; Altino, Karen

    2008-01-01

    The Marshall Space Flight Center Natural Environments Branch has a long history of expertise in the modeling and computation of statistical launch availabilities with respect to weather conditions. Their existing data analysis product, the Atmospheric Parametric Risk Assessment (APRA) tool, computes launch availability given an input set of vehicle hardware and/or operational weather constraints by calculating the climatological probability of exceeding the specified constraint limits. APRA has been used extensively to provide the Space Shuttle program the ability to estimate impacts that various proposed design modifications would have on overall launch availability. The model accounts for both seasonal and diurnal variability at a single geographic location and provides output probabilities for a single arbitrary launch attempt. Recently, the Shuttle program has shown interest in having additional capabilities added to the APRA model, including analysis of humidity parameters, inclusion of landing site weather to produce landing availability, and concurrent analysis of multiple sites, to assist in operational landing site selection. In addition, the Constellation program has also expressed interest in the APRA tool, and has requested several additional capabilities to address some Constellation-specific issues, both in the specification and verification of design requirements and in the development of operations concepts. The combined scope of the requested capability enhancements suggests an evolution of the model beyond a simple revision process. Development has begun for a new data analysis tool that will satisfy the requests of both programs. This new tool, Probabilities of Atmospheric Conditions and Environmental Risk (PACER), will provide greater flexibility and significantly enhanced functionality compared to the currently existing tool.

  1. The reverse evolution from multicellularity to unicellularity during carcinogenesis.

    PubMed

    Chen, Han; Lin, Fangqin; Xing, Ke; He, Xionglei

    2015-03-09

    Theoretical reasoning suggests that cancer may result from a knockdown of the genetic constraints that evolved for the maintenance of metazoan multicellularity. By characterizing the whole-life history of a xenograft tumour, here we show that metastasis is driven by positive selection for general loss-of-function mutations on multicellularity-related genes. Expression analyses reveal mainly downregulation of multicellularity-related genes and an evolving expression profile towards that of embryonic stem cells, the cell type resembling unicellular life in its capacity of unlimited clonal proliferation. Also, the emergence of metazoan multicellularity ~600 Myr ago is accompanied by an elevated birth rate of cancer genes, and there are more loss-of-function tumour suppressors than activated oncogenes in a typical tumour. These data collectively suggest that cancer represents a loss-of-function-driven reverse evolution back to the unicellular 'ground state'. This cancer evolution model may account for inter-/intratumoural genetic heterogeneity, could explain distant-organ metastases and hold implications for cancer therapy.

  2. Contraction driven flow in the extended vein networks of Physarum polycephalum

    NASA Astrophysics Data System (ADS)

    Alim, Karen; Amselem, Gabriel; Peaudecerf, Francois; Pringle, Anne; Brenner, Michael P.

    2011-11-01

    The true slime mold Physarum polycephalum is a basal organism that forms an extended network of veins to forage for food. P. polycephalum is renowned for its adaptive changes of vein structure and morphology in response to food sources. These rearrangements presumably occur to establish an efficient transport and mixing of resources throughout the networks, thus presenting a prototype for designing transport networks under the constraints of laminar flow. The physical flows of cytoplasmic fluid enclosed by the veins exhibit an oscillatory flow termed ``shuttle streaming.'' The flow exceeds by far the volume required for growth at the margins, suggesting that the additional energy cost of generating the flow is spent on efficient and/or targeted redistribution of resources. We show that the viscous shuttle flow is driven by the radial contractions of the veins that accompany the streaming. We present a model for the fluid flow and resource dispersion arising due to radial contractions. The transport and mixing properties of the flow are discussed.

  3. Expanding the generation and use of economic and financial data to improve HIV program planning and efficiency: a global perspective.

    PubMed

    Holmes, Charles B; Atun, Rifat; Avila, Carlos; Blandford, John M

    2011-08-01

    Cost information is needed at multiple levels of health care systems to inform the public health response to HIV. To date, most attention has been paid to identifying the cost drivers of providing antiretroviral treatment, and these data have driven interventions that have been successful in reducing drug and human resource costs. The need for further cost information, especially for less well-studied areas such as HIV prevention, is particularly acute given global budget constraints and ongoing efforts to extract the greatest possible value from money spent on the response. Cost information can be collected from multiple perspectives and levels of the health care system (site, program, and national levels), and it is critical to choose the appropriate methodology in order to generate the appropriate information for decision-making. Organizations such as United States President's Emergency Plan for AIDS Relief, the Global Fund to Fight AIDS, Tuberculosis, and Malaria, and other organizations are working together to bridge the divide between the fields of economics and HIV program implementation by accelerating the collection of cost data and building further local demand and capacity for their use.

  4. Aircraft Turbofan Engine Health Estimation Using Constrained Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
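
    The sketch below illustrates the general idea in Python; the paper itself derives the constrained estimator analytically, whereas this toy version simply runs a standard Kalman measurement update and then projects the estimate onto the state inequality constraints by solving a small quadratic program weighted by the inverse state covariance. All matrices and bounds are illustrative, not the 16-state engine model.

```python
# A minimal sketch of the general idea (the paper derives the estimator
# analytically): run a standard Kalman measurement update, then project the
# estimate onto the state inequality constraints by solving a small quadratic
# program weighted by the inverse of the state covariance. Numbers illustrative.
import numpy as np
from scipy.optimize import minimize

def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def project_onto_constraints(x_hat, P, lower, upper):
    """argmin (x - x_hat)^T P^{-1} (x - x_hat)  s.t.  lower <= x <= upper."""
    W = np.linalg.inv(P)
    obj = lambda x: (x - x_hat) @ W @ (x - x_hat)
    res = minimize(obj, np.clip(x_hat, lower, upper),
                   bounds=list(zip(lower, upper)), method="SLSQP")
    return res.x

# Two health parameters known to lie in [0.9, 1.0] (e.g. efficiency scalars).
x, P = np.array([0.97, 0.99]), np.diag([0.04, 0.02])
H, R = np.eye(2), np.diag([0.05, 0.05])
z = np.array([1.20, 0.91])                    # noisy measurement pulls x above 1.0

x_unc, P_upd = kalman_update(x, P, z, H, R)
x_con = project_onto_constraints(x_unc, P_upd, lower=[0.9, 0.9], upper=[1.0, 1.0])
print("unconstrained estimate:", x_unc)
print("constrained estimate  :", x_con)
```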

  5. Optimal assignment of workers to supporting services in a hospital

    NASA Astrophysics Data System (ADS)

    Sawik, Bartosz; Mikulik, Jerzy

    2008-01-01

    Supporting services play an important role in health care institutions such as hospitals. This paper presents an application of an operations research model for optimal allocation of workers among supporting services in a public hospital. The services include logistics, inventory management, financial management, operations management, medical analysis, etc. The optimality criterion of the problem is to minimize operations costs of supporting services subject to some specific constraints. The constraints represent specific conditions for resource allocation in a hospital. The overall problem is formulated as an integer program known in the literature as the assignment problem, where the decision variables represent the assignment of people to various jobs. The results of some computational experiments modeled on real data from a selected Polish hospital are reported.
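
    A toy Python instance of the assignment model described above is sketched below. It uses the Hungarian algorithm from scipy rather than a general integer programming solver (the LP relaxation of the assignment problem is integral, so the optimum is attained at integer assignments); the cost matrix and service names are illustrative, not the hospital's data.

```python
# A toy sketch of the assignment model described above: a cost matrix of
# assigning each worker to each supporting service, solved with the Hungarian
# algorithm. Costs are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

services = ["logistics", "inventory", "finance", "operations", "medical analysis"]
# cost[i, j]: weekly cost of assigning worker i to service j (illustrative units)
cost = np.array([
    [9, 11, 14, 11, 7],
    [6, 15, 13, 13, 10],
    [12, 13, 6, 8, 8],
    [11, 9, 10, 12, 9],
    [7, 12, 14, 10, 14],
])

rows, cols = linear_sum_assignment(cost)
for worker, service in zip(rows, cols):
    print(f"worker {worker} -> {services[service]} (cost {cost[worker, service]})")
print("total cost:", cost[rows, cols].sum())
```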

  6. Modeling of tool path for the CNC sheet cutting machines

    NASA Astrophysics Data System (ADS)

    Petunin, Aleksandr A.

    2015-11-01

    In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. A classification of cutting techniques is offered. We also propose a new classification of tool path problems. The tasks of cost minimization and time minimization for the standard cutting technique (Continuous Cutting Problem, CCP) and for one of the non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as a discrete optimization problem (a generalized travelling salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution of the GTSP we propose to use the mathematical model of Prof. Chentsov, based on the concept of a megalopolis and dynamic programming.
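
    To make the GTSP structure concrete, the sketch below brute-forces a tiny instance in Python (the paper itself relies on Chentsov's dynamic-programming model, not enumeration): each contour, or "megalopolis", offers several candidate pierce points, and the task is to choose one point per contour and a visiting order minimizing idle travel from and back to the tool's home position. Coordinates are illustrative.

```python
# A brute-force sketch of a tiny generalized TSP instance behind tool-path
# planning (the paper itself uses Prof. Chentsov's dynamic-programming model):
# choose one pierce point per contour ("megalopolis") and a visiting order that
# minimizes idle travel from and back to the tool's home position.
from itertools import permutations, product
import math

home = (0.0, 0.0)
contours = [                                   # candidate pierce points per contour
    [(2.0, 1.0), (2.0, 3.0)],
    [(5.0, 0.5), (6.0, 2.0), (5.5, 3.0)],
    [(1.0, 5.0), (3.0, 6.0)],
]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points):
    path = [home, *points, home]
    return sum(dist(p, q) for p, q in zip(path, path[1:]))

best = None
for order in permutations(range(len(contours))):           # visiting order of contours
    for choice in product(*(contours[i] for i in order)):  # one pierce point each
        length = tour_length(choice)
        if best is None or length < best[0]:
            best = (length, order, choice)

length, order, choice = best
print(f"best idle-travel length: {length:.2f}")
print("contour order:", order, "pierce points:", choice)
```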

  7. Logic integer programming models for signaling networks.

    PubMed

    Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert

    2009-05-01

    We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of Propositional Logic and Integer Programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research, relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems the logic models reduce to a polynomial-time solvable satisfiability algorithm. Additionally, a more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
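
    The sketch below illustrates the kind of logic-to-integer-programming encoding such approaches rely on, using the standard 0-1 linearization of Boolean AND and OR; it is a generic illustration verified by enumeration, not the paper's specific signaling-network models.

```python
# A minimal sketch of the logic-to-integer-programming encoding the approach
# relies on: the Boolean relations y = x1 AND x2 and y = x1 OR x2 become 0-1
# linear inequalities. The enumeration below verifies that the inequalities
# admit exactly the truth-table-consistent 0/1 assignments.
from itertools import product

def and_constraints(x1, x2, y):
    return y <= x1 and y <= x2 and y >= x1 + x2 - 1

def or_constraints(x1, x2, y):
    return y >= x1 and y >= x2 and y <= x1 + x2

for name, constraints, truth in [
    ("AND", and_constraints, lambda a, b: a and b),
    ("OR",  or_constraints,  lambda a, b: a or b),
]:
    ok = all(constraints(x1, x2, y) == (y == int(truth(x1, x2)))
             for x1, x2, y in product((0, 1), repeat=3))
    print(f"{name} linearization matches its truth table: {ok}")
```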

  8. It Takes a Village: Network Effects on Rural Education in Afghanistan. PRGS Dissertation

    ERIC Educational Resources Information Center

    Hoover, Matthew Amos

    2014-01-01

    Often, development organizations confront a tradeoff between program priorities and operational constraints. These constraints may be financial, capacity, or logistical; regardless, the tradeoff often requires sacrificing portions of a program. This work is concerned with figuring out how, when constrained, an organization or program manager can…

  9. Representing climate change on public service television: A case study.

    PubMed

    Debrett, Mary

    2017-05-01

    Publicly funded broadcasters with a track record in science programming would appear ideally placed to represent climate change to the lay public. Free from the constraints of vested interests and the economic imperative, public service providers are better equipped to represent the scientific, social and economic aspects of climate change than commercial media, where ownership conglomeration, corporate lobbyists and online competition have driven increasingly tabloid coverage with an emphasis on controversy. This prime-time snapshot of the Australian Broadcasting Corporation's main television channel explores how the structural/rhetorical conventions of three established public service genres - a science programme, a documentary and a live public affairs talk show - impact on the representation of anthropogenic climate change. The study findings note implications for public trust, and discuss possibilities for innovation in the interests of better public understanding of climate change.

  10. The Primary Prevention of PTSD in Firefighters: Preliminary Results of an RCT with 12-Month Follow-Up.

    PubMed

    Skeffington, Petra M; Rees, Clare S; Mazzucchelli, Trevor G; Kane, Robert T

    2016-01-01

    To develop and evaluate an evidence-based and theory-driven program for the primary prevention of Post-traumatic Stress Disorder (PTSD). A pre-intervention/post-intervention/follow-up control group design with clustered random allocation of participants to groups was used. The "control" group received "Training as Usual" (TAU). Participants were 45 career recruits within the recruit school at the Department of Fire and Emergency Services (DFES) in Western Australia. The intervention group received a four-hour resilience training intervention (Mental Agility and Psychological Strength training) as part of their recruit training school curriculum. Data were collected at baseline and at 6 and 12 months post-intervention. We found no evidence that the intervention was effective in the primary prevention of mental health issues, nor did we find any significant impact of MAPS training on social support or coping strategies. A significant difference across conditions in trauma knowledge is indicative of some impact of the MAPS program. While the key hypotheses were not supported, this study is the first randomised controlled trial investigating the primary prevention of PTSD. Practical barriers around the implementation of this program, including constraints within the recruit school, may inform the design and implementation of similar programs in the future. Australian New Zealand Clinical Trials Registry (ANZCTR) ACTRN12615001362583.

  11. ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
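
    The following is a rough, hedged sketch of the kind of Monte Carlo availability analysis ACARA performs (it is not the APL2 implementation): a single component with Weibull-distributed failures and a fixed repair delay is simulated over a mission, and availability is estimated by averaging many runs. All parameter values are invented.

```python
# Rough sketch of ACARA-style analysis (not the APL2 implementation): Monte Carlo
# simulation of one component with Weibull failures and a fixed repair delay,
# estimating availability over a mission. All parameter values are made up.
import random

MISSION_H = 10 * 8760.0        # 10-year mission, in hours
SCALE, SHAPE = 20000.0, 1.5    # assumed Weibull failure parameters (scale, shape)
REPAIR_H = 500.0               # assumed logistics + repair delay per failure

def simulate_one(rng):
    t, downtime = 0.0, 0.0
    while t < MISSION_H:
        t += rng.weibullvariate(SCALE, SHAPE)   # time to next failure
        if t >= MISSION_H:
            break
        down = min(REPAIR_H, MISSION_H - t)     # repair, truncated at mission end
        downtime += down
        t += down
    return 1.0 - downtime / MISSION_H

rng = random.Random(42)
runs = [simulate_one(rng) for _ in range(2000)]
print(f"mean availability ~ {sum(runs) / len(runs):.4f}")
```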

  12. Safety analysis of discrete event systems using a simplified Petri net controller.

    PubMed

    Zareiee, Meysam; Dideban, Abbas; Asghar Orouji, Ali

    2014-01-01

    This paper deals with the problem of forbidden states in discrete event systems based on Petri net models. A method is presented to prevent the system from entering these states by constructing a small number of generalized mutual exclusion constraints. This goal is achieved by solving three types of Integer Linear Programming problems, designed to verify the constraints: some of them relate to preserving the authorized states and the others to avoiding the forbidden states. The obtained constraints can be enforced on the system using a small number of control places. Moreover, the number of arcs related to these places is small, and the controller obtained after connecting them is maximally permissive. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
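
    As a simplified, hedged sketch of the underlying idea (not the paper's three ILP formulations), the snippet below searches for a single generalized mutual exclusion constraint w·m ≤ k with small integer weights that admits every authorized marking and excludes every forbidden marking of a hypothetical three-place net.

```python
# Simplified sketch of the idea (not the paper's three ILP formulations): find a
# single generalized mutual exclusion constraint  w . m <= k  with small integer
# weights that admits every authorized marking and excludes every forbidden one.
from itertools import product

# Hypothetical markings of a 3-place Petri net.
authorized = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0)]
forbidden = [(1, 1, 1), (0, 2, 1)]

def find_gmec(max_weight=3, max_k=5):
    for w in product(range(max_weight + 1), repeat=3):
        for k in range(max_k + 1):
            ok_auth = all(sum(wi * mi for wi, mi in zip(w, m)) <= k for m in authorized)
            ok_forb = all(sum(wi * mi for wi, mi in zip(w, m)) > k for m in forbidden)
            if ok_auth and ok_forb:
                return w, k
    return None

print(find_gmec())   # a weight vector and bound separating the two marking sets
```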

  13. Space Shuttle processing - A case study in artificial intelligence

    NASA Technical Reports Server (NTRS)

    Mollikarimi, Cindy; Gargan, Robert; Zweben, Monte

    1991-01-01

    A scheduling system incorporating AI is described and applied to the automated processing of the Space Shuttle. The unique problem of addressing the temporal, resource, and orbiter-configuration requirements of shuttle processing is described with comparisons to traditional project management for manufacturing processes. The present scheduling system is developed to handle the late inputs and complex programs that characterize shuttle processing by incorporating fixed preemptive scheduling, constraint-based simulated annealing, and the characteristics of an 'anytime' algorithm. The Space-Shuttle processing environment is modeled with 500 activities broken down into 4000 subtasks and with 1600 temporal constraints, 8000 resource constraints, and 3900 state requirements. The algorithm is shown to scale to very large problems and maintain anytime characteristics suggesting that an automated scheduling process is achievable and potentially cost-effective.

  14. Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints

    NASA Astrophysics Data System (ADS)

    Prince, E. R.; Carr, R. W.; Cobb, R. G.

    This paper details improvements made to the authors' most recent work to find fuel optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial guess methodologies are developed for the low-fidelity model nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated given a fixed final time, and then solutions are also calculated for fixed final times before and after the original fixed final time, allowing mission planners to choose the lowest-cost solution in the resulting range which satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel optimal guidance for NMC injections while also adhering to multiple lighting constraints.
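
    As a hedged sketch of one of the initial-guess ingredients mentioned above, the snippet below implements Clohessy-Wiltshire (CW) targeting under the standard Hill-Clohessy-Wiltshire closed-form solution (x radial, y along-track, z cross-track): it computes the impulsive first burn that carries the inspector from its current relative state to a desired relative position in a fixed time. The numbers are illustrative, not taken from the paper.

```python
# Hedged sketch of CW targeting: compute the impulsive first burn that transfers
# the inspector from relative state (r0, v0) to position rf in time T, using the
# standard Hill-Clohessy-Wiltshire closed-form state transition sub-matrices.
import numpy as np

def cw_first_burn(r0, v0, rf, T, n):
    s, c = np.sin(n * T), np.cos(n * T)
    phi_rr = np.array([[4 - 3 * c, 0, 0],
                       [6 * (s - n * T), 1, 0],
                       [0, 0, c]])
    phi_rv = np.array([[s / n, 2 * (1 - c) / n, 0],
                       [2 * (c - 1) / n, (4 * s - 3 * n * T) / n, 0],
                       [0, 0, s / n]])
    v0_req = np.linalg.solve(phi_rv, rf - phi_rr @ r0)  # required post-burn velocity
    return v0_req - v0                                  # impulsive delta-v

n = 7.2922e-5                            # ~GEO mean motion, rad/s (illustrative)
r0 = np.array([0.0, -10_000.0, 0.0])     # m, trailing the RSO by 10 km
v0 = np.zeros(3)
rf = np.array([0.0, -1_000.0, 0.0])      # desired relative position after T
T = 6 * 3600.0                           # 6-hour transfer
print("delta-v [m/s]:", cw_first_burn(r0, v0, rf, T, n))
```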

  15. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials.

    PubMed

    Probst, Yasmine; Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-07-28

    Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, were being addressed. Automated modeling tools can streamline the modeling process for dietary intervention trials ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used.
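
    The following is an illustrative, hedged sketch of the kind of constraint optimization the DMT automates (it is not the DMT implementation, and all per-serving composition values are invented): food-group servings are kept as close as possible to a prescribed pattern while macronutrient targets are enforced as constraints.

```python
# Illustrative sketch only (not the DMT implementation): adjust food-group servings
# to stay close to a prescribed pattern while meeting macronutrient targets, using
# nonlinear constraint optimization. Composition values per serving are made up.
import numpy as np
from scipy.optimize import minimize

groups = ["grains", "vegetables", "fruit", "dairy", "meat"]
target_serves = np.array([6.0, 5.0, 2.0, 2.5, 2.0])
# grams per serving of [protein, fat, carbohydrate] (hypothetical values)
comp = np.array([
    [3.0, 1.0, 15.0],
    [2.0, 0.2, 5.0],
    [0.5, 0.2, 12.0],
    [8.0, 4.0, 12.0],
    [25.0, 10.0, 0.0],
])
macro_target = np.array([95.0, 40.0, 180.0])  # protein, fat, carbohydrate (g/day)

res = minimize(
    lambda x: np.sum((x - target_serves) ** 2),       # stay near the prescription
    x0=target_serves,
    method="SLSQP",
    bounds=[(0.0, 12.0)] * len(groups),
    constraints=[{"type": "eq", "fun": lambda x: comp.T @ x - macro_target}],
)
print(dict(zip(groups, np.round(res.x, 2))), "macros:", np.round(comp.T @ res.x, 1))
```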

  16. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials

    PubMed Central

    Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-01-01

    Background Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. Objective This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Methods Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. Results The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, were being addressed. Conclusions Automated modeling tools can streamline the modeling process for dietary intervention trials ensuring consistency of the background diets, although appropriate constraints must be used in their development to achieve desired results. The DMT was found to be a valid automated tool producing similar results to tools with less automation. The results of this study suggest interchangeability of the modeling approaches used, although implementation should reflect the requirements of the dietary intervention trial in which it is used. PMID:27471104

  17. On financial markets trading

    NASA Astrophysics Data System (ADS)

    Matassini, Lorenzo; Franci, Fabio

    2001-01-01

    Starting from the observation of the real trading activity, we propose a model of a stockmarket simulating all the typical phases taking place in a stock exchange. We show that there is no need of several classes of agents once one has introduced realistic constraints in order to confine money, time, gain and loss within an appropriate range. The main ingredients are local and global coupling, randomness, Zipf distribution of resources and price formation when inserting an order. The simulation starts with the initial public offer and comprises the broadcasting of news/advertisements and the building of the book, where all the selling and buying orders are stored. The model is able to reproduce fat tails and clustered volatility, the two most significant characteristics of a real stockmarket, being driven by very intuitive parameters.

  18. Search for new phenomena in final states with large jet multiplicities and missing transverse momentum using $$\\sqrt {s}$$ = 7 TeV pp collisions with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

    2011-11-21

    Results are presented of a search for any particle(s) decaying to six or more jets in association with missing transverse momentum. The search is performed using 1.34 fb^-1 of $$\\sqrt {s}$$ = 7 TeV proton-proton collisions recorded by the ATLAS detector during 2011. Data-driven techniques are used to determine the backgrounds in kinematic regions that require at least six, seven or eight jets, well beyond the multiplicities required in previous analyses. No evidence is found for physics beyond the Standard Model. The results are interpreted in the context of a supersymmetry model (MSUGRA/CMSSM) where they extend previous constraints.

  19. Using Technology to Promote Active and Social Learning Experiences in Health Professions Education

    ERIC Educational Resources Information Center

    Ruckert, Elizabeth; McDonald, Paige L.; Birkmeier, Marissa; Walker, Bryan; Cotton, Linda; Lyons, Laurie B.; Straker, Howard O.; Plack, Margaret M.

    2014-01-01

    Time and space constraints, large class sizes, competition for clinical internships, and geographic separation between classroom and clinical rotations for student interaction with peers and faculty pose challenges for health professions educational programs. This article presents a model for effectively incorporating technology to overcome these…

  20. Can Interdistrict Choice Boost Student Achievement? The Case of Connecticut's Interdistrict Magnet School Program

    ERIC Educational Resources Information Center

    Bifulco, Robert; Cobb, Casey D.; Bell, Courtney

    2009-01-01

    Connecticut's interdistrict magnet schools offer a model of choice-based desegregation that appears to satisfy current legal constraints. This study presents evidence that interdistrict magnet schools have provided students from Connecticut's central cities access to less racially and economically isolated educational environments and estimates…

  1. Why don't zebras have machine guns? Adaptation, selection, and constraints in evolutionary theory.

    PubMed

    Shanahan, Timothy

    2008-03-01

    In an influential paper, Stephen Jay Gould and Richard Lewontin (1979) contrasted selection-driven adaptation with phylogenetic, architectural, and developmental constraints as distinct causes of phenotypic evolution. In subsequent publications Gould (e.g., 1997a,b, 2002) has elaborated this distinction into one between a narrow "Darwinian Fundamentalist" emphasis on "external functionalist" processes, and a more inclusive "pluralist" emphasis on "internal structuralist" principles. Although theoretical integration of functionalist and structuralist explanations is the ultimate aim, natural selection and internal constraints are treated as distinct causes of evolutionary change. This distinction is now routinely taken for granted in the literature in evolutionary biology. I argue that this distinction is problematic because the effects attributed to non-selective constraints are more parsimoniously explained as the ordinary effects of selection itself. Although it may still be a useful shorthand to speak of phylogenetic, architectural, and developmental constraints on phenotypic evolution, it is important to understand that such "constraints" do not constitute an alternative set of causes of evolutionary change. The result of this analysis is a clearer understanding of the relationship between adaptation, selection and constraints as explanatory concepts in evolutionary theory.

  2. Partitioning the Outburst Energy of a Low Eddington Accretion Rate AGN at the Center of an Elliptical Galaxy: The Recent 12 Myr History of the Supermassive Black Hole in M87

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forman, W.; Jones, C.; Kraft, R.

    M87, the active galaxy at the center of the Virgo cluster, is ideal for studying the interaction of a supermassive black hole (SMBH) with a hot, gas-rich environment. A deep Chandra observation of M87 exhibits an approximately circular shock front (13 kpc radius, in projection) driven by the expansion of the central cavity (filled by the SMBH with relativistic radio-emitting plasma) with projected radius ∼1.9 kpc. We combine constraints from X-ray and radio observations of M87 with a shock model to derive the properties of the outburst that created the 13 kpc shock. Principal constraints for the model are (1) the measured Mach number (M ∼ 1.2), (2) the radius of the 13 kpc shock, and (3) the observed size of the central cavity/bubble (the radio-bright cocoon) that serves as the piston to drive the shock. We find that an outburst of ∼5 × 10^57 erg that began about 12 Myr ago and lasted ∼2 Myr matches all the constraints. In this model, ∼22% of the energy is carried by the shock as it expands. The remaining ∼80% of the outburst energy is available to heat the core gas. More than half the total outburst energy initially goes into the enthalpy of the central bubble, the radio cocoon. As the buoyant bubble rises, much of its energy is transferred to the ambient thermal gas. For an outburst repetition rate of about 12 Myr (the age of the outburst), 80% of the outburst energy is sufficient to balance the radiative cooling.

  3. Partitioning the Outburst Energy of a Low Eddington Accretion Rate AGN at the Center of an Elliptical Galaxy: The Recent 12 Myr History of the Supermassive Black Hole in M87

    NASA Astrophysics Data System (ADS)

    Forman, W.; Churazov, E.; Jones, C.; Heinz, S.; Kraft, R.; Vikhlinin, A.

    2017-08-01

    M87, the active galaxy at the center of the Virgo cluster, is ideal for studying the interaction of a supermassive black hole (SMBH) with a hot, gas-rich environment. A deep Chandra observation of M87 exhibits an approximately circular shock front (13 kpc radius, in projection) driven by the expansion of the central cavity (filled by the SMBH with relativistic radio-emitting plasma) with projected radius ˜1.9 kpc. We combine constraints from X-ray and radio observations of M87 with a shock model to derive the properties of the outburst that created the 13 kpc shock. Principal constraints for the model are (1) the measured Mach number (M ˜ 1.2), (2) the radius of the 13 kpc shock, and (3) the observed size of the central cavity/bubble (the radio-bright cocoon) that serves as the piston to drive the shock. We find that an outburst of ˜5 × 10^57 erg that began about 12 Myr ago and lasted ˜2 Myr matches all the constraints. In this model, ˜22% of the energy is carried by the shock as it expands. The remaining ˜80% of the outburst energy is available to heat the core gas. More than half the total outburst energy initially goes into the enthalpy of the central bubble, the radio cocoon. As the buoyant bubble rises, much of its energy is transferred to the ambient thermal gas. For an outburst repetition rate of about 12 Myr (the age of the outburst), 80% of the outburst energy is sufficient to balance the radiative cooling.

  4. Structural tailoring of advanced turboprops

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Hopkins, Dale A.

    1988-01-01

    The Structural Tailoring of Advanced Turboprops (STAT) computer program was developed to perform numerical optimization on highly swept propfan blades. The optimization procedure seeks to minimize an objective function defined as either (1) the direct operating cost of the full-scale blade or (2) the aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analysis system includes an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution forced response life prediction capability. STAT includes all relevant propfan design constraints.

  5. Rocket ascent G-limited moment-balanced optimization program (RAGMOP)

    NASA Technical Reports Server (NTRS)

    Lyons, J. T.; Woltosz, W. S.; Abercrombie, G. E.; Gottlieb, R. G.

    1972-01-01

    This document describes the RAGMOP (Rocket Ascent G-limited Moment-balanced Optimization Program) computer program for parametric ascent trajectory optimization. RAGMOP computes optimum polynomial-form attitude control histories, launch azimuth, engine burn-time, and gross liftoff weight for space shuttle type vehicles using a search-accelerated, gradient projection parameter optimization technique. The trajectory model available in RAGMOP includes a rotating oblate earth model, the option of input wind tables, discrete and/or continuous throttling for the purposes of limiting the thrust acceleration and/or the maximum dynamic pressure, limitation of the structural load indicators (the product of dynamic pressure with angle-of-attack and sideslip angle), and a wide selection of intermediate and terminal equality constraints.

  6. Thermochemical Constraints of the Old Faithful Model for Radiation-Driven Cryovolcanism on Enceladus

    NASA Astrophysics Data System (ADS)

    Cooper, Paul; Franzel, C. J.; Cooper, J. F.

    2010-10-01

    We have used a combination of thermochemical data, plume composition, and the estimated surface power flux to constrain the Old Faithful model for radiation-driven cryovolcanism on Enceladus (1). This model proposes episodic cryovolcanic activity brought about by the chemical reaction between reductants that are primordially present within Enceladus's ice, and oxidants produced by energetic particles impacting the icy surface. Assuming no limit on accumulation of oxidants in the ice crust in the billions of years since formation and subsequent magnetospheric irradiation of Enceladus, this new work extends (1) by examining limits on activity from reductant abundances. Our calculations show that an almost negligible amount of methane or ammonia, compared with the mass of Enceladus, would potentially be needed to account for the surface power flux of the gas plume over 10 million years of activity, consistent with geologic models for episodic overturn of the ice crust and heat flow (2). Limiting the permanently ejected fluid mass during this time by the volume of the topographical depression in the south polar terrain (SPT) of Enceladus, we have constrained the number ratio of reductant-to-water. Results are in support of our model. In addition, using the measured abundances of CO2 and N2 (products of CH4 and NH3 oxidation) in the plume, we have further constrained the amounts of CH4 and NH3 that could be present and these are also in line with our predictions. These calculations fully support the Old Faithful model (1). 1) Cooper, J. F., Cooper, P. D., Sittler, E. C., Sturner, S. J., Rymer, A. M., "Old Faithful Model for Radiolytic Gas-Driven Cryovolcanism at Enceladus", Planet. Space Sci., 57, 1607-1620, 2009. 2) O'Neill, C., Nimmo, F., "The Role of Episodic Overturn in Generating the Surface Geology and Heat Flow on Enceladus", Nature Geosci., 3, 88-91, 2010.
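
    A back-of-envelope calculation illustrates the "almost negligible reductant mass" argument. The numbers below are assumed round values chosen purely for illustration (an assumed ~5 GW sustained plume/SPT power, the standard ~890 kJ/mol enthalpy of methane oxidation, and Enceladus's mass of ~1.08 × 10^20 kg); they are not taken from the paper.

```python
# Back-of-envelope check of the "almost negligible reductant mass" claim, using
# assumed round numbers (not values from the paper): an assumed ~5 GW plume/SPT
# power sustained for 10 Myr, fully supplied by CH4 oxidation (~890 kJ per mole).
POWER_W = 5e9                      # assumed sustained power, W
DURATION_S = 1e7 * 3.156e7         # 10 Myr in seconds
ENTHALPY_J_PER_MOL = 8.9e5         # CH4 oxidation, ~890 kJ/mol
CH4_MOLAR_MASS_KG = 0.016
ENCELADUS_MASS_KG = 1.08e20

energy = POWER_W * DURATION_S
ch4_mass = energy / ENTHALPY_J_PER_MOL * CH4_MOLAR_MASS_KG
print(f"energy over 10 Myr: {energy:.2e} J")
print(f"CH4 required: {ch4_mass:.2e} kg "
      f"({ch4_mass / ENCELADUS_MASS_KG:.1e} of Enceladus's mass)")
```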

  7. Open data models for smart health interconnected applications: the example of openEHR.

    PubMed

    Demski, Hans; Garde, Sebastian; Hildebrand, Claudia

    2016-10-22

    Smart Health is known as a concept that enhances networking, intelligent data processing and combining patient data with other parameters. Open data models can play an important role in creating a framework for providing interoperable data services that support the development of innovative Smart Health applications profiting from data fusion and sharing. This article describes a model-driven engineering approach based on standardized clinical information models and explores its application for the development of interoperable electronic health record systems. The following possible model-driven procedures were considered: provision of data schemes for data exchange, automated generation of artefacts for application development and native platforms that directly execute the models. The applicability of the approach in practice was examined using the openEHR framework as an example. A comprehensive infrastructure for model-driven engineering of electronic health records is presented using the example of the openEHR framework. It is shown that data schema definitions to be used in common practice software development processes can be derived from domain models. The capabilities for automatic creation of implementation artefacts (e.g., data entry forms) are demonstrated. Complementary programming libraries and frameworks that foster the use of open data models are introduced. Several compatible health data platforms are listed. They provide standard based interfaces for interconnecting with further applications. Open data models help build a framework for interoperable data services that support the development of innovative Smart Health applications. Related tools for model-driven application development foster semantic interoperability and interconnected innovative applications.

  8. Mission Driven Scene Understanding: Candidate Model Training and Validation

    DTIC Science & Technology

    2016-09-01

    ...driven scene understanding. One of the candidate engines that we are evaluating is a convolutional neural network (CNN) program (Theano-AlexNet) installed on a Windows 10 notebook computer. To the best of our knowledge, an implementation of the open-source, Python-based AlexNet CNN on a Windows notebook computer has not been previously reported. In this report, we present progress toward the proof-of-principle testing...

  9. RADC Multi-Dimensional Signal-Processing Research Program.

    DTIC Science & Technology

    1980-09-30

    Table-of-contents fragments list problem formulation, methods of accelerating convergence, application to image deblurring, extensions, and convergence of iterative signal restoration. Abstract excerpt: noise-driven linear filters permit development of the joint probability density function, or likelihood function, for the image; the image is modeled as the output of a spatial linear filter driven by white noise (see Fig. 1) whose probability density function is assumed known.

  10. Kalman Filtering with Inequality Constraints for Turbofan Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Dan; Simon, Donald L.

    2003-01-01

    Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results. The use of the algorithm is demonstrated on a linearized simulation of a turbofan engine to estimate health parameters. The turbofan engine model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
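
    The following hedged sketch illustrates the hard-constraint idea on made-up data (it is not the paper's derivation or its turbofan model): a standard Kalman measurement update is followed by projecting the estimate onto the feasible set {x : Dx ≤ d} through a small covariance-weighted quadratic program.

```python
# Sketch of the hard-constraint idea (not the paper's turbofan model): perform a
# standard Kalman measurement update, then project the estimate onto the feasible
# set {x : D x <= d} by solving a small covariance-weighted QP. Data are made up.
import numpy as np
from scipy.optimize import minimize

def kalman_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def project(x_hat, P, D, d):
    """argmin (x - x_hat)' P^-1 (x - x_hat)  subject to  D x <= d."""
    W = np.linalg.inv(P)
    res = minimize(lambda x: (x - x_hat) @ W @ (x - x_hat),
                   x0=x_hat, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda x: d - D @ x}])
    return res.x

# Two health parameters known (physically) not to exceed 1.0.
x, P = np.array([0.9, 0.8]), np.eye(2) * 0.05
H, R = np.eye(2), np.eye(2) * 0.1
z = np.array([1.5, 0.7])                      # noisy measurement violating the bound
D, d = np.eye(2), np.array([1.0, 1.0])        # constraint x_i <= 1.0

x_u, P_u = kalman_update(x, P, z, H, R)
print("unconstrained:", np.round(x_u, 3),
      "constrained:", np.round(project(x_u, P_u, D, d), 3))
```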

  11. Lazy evaluation of FP programs: A data-flow approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Y.H.; Gaudiot, J.L.

    1988-12-31

    This paper presents a lazy evaluation system for the list-based functional language, Backus' FP, in a data-driven environment. A superset language of FP, called DFP (Demand-driven FP), is introduced. Eager FP programs are transformed into lazy DFP programs which contain the notion of demands. The data-driven execution of DFP programs has the same effect as lazy evaluation. Lazy DFP programs have the property of always evaluating a sufficient and necessary result. An infinite sequence generator is used to demonstrate the eager-to-lazy program transformation and the execution of the lazy programs.
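
    A modern analogue of the demand-driven infinite sequence generator can be written with Python generators (this only illustrates the lazy-evaluation effect, not DFP itself): the producer computes an element only when the consumer demands it.

```python
# Modern analogue of the paper's infinite sequence generator (Python generators
# rather than DFP): the producer only computes elements when the consumer
# actually demands them, which is the effect lazy evaluation provides.
from itertools import islice

def naturals(start=0):
    n = start
    while True:          # conceptually infinite; elements exist only on demand
        yield n
        n += 1

def squares():
    for n in naturals():
        yield n * n

print(list(islice(squares(), 10)))   # only the first 10 squares are ever computed
```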

  12. Multi-objective trajectory optimization for the space exploration vehicle

    NASA Astrophysics Data System (ADS)

    Qin, Xiaoli; Xiao, Zhen

    2016-07-01

    The research determines a temperature-constrained optimal trajectory for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.

  13. Model-based metabolism design: constraints for kinetic and stoichiometric models

    PubMed Central

    Stalidzans, Egils; Seiman, Andrus; Peebo, Karl; Komasilovs, Vitalijs; Pentjuss, Agris

    2018-01-01

    The implementation of model-based designs in metabolic engineering and synthetic biology may fail. One of the reasons for this failure is that only a part of the real-world complexity is included in models. Still, some knowledge can be simplified and taken into account in the form of optimization constraints to improve the feasibility of model-based designs of metabolic pathways in organisms. Some constraints (mass balance, energy balance, and steady-state assumption) serve as a basis for many modelling approaches. There are others (total enzyme activity constraint and homeostatic constraint) proposed decades ago, but which are frequently ignored in design development. Several new approaches of cellular analysis have made possible the application of constraints like cell size, surface, and resource balance. Constraints for kinetic and stoichiometric models are grouped according to their applicability preconditions in (1) general constraints, (2) organism-level constraints, and (3) experiment-level constraints. General constraints are universal and are applicable for any system. Organism-level constraints are applicable for biological systems and usually are organism-specific, but these constraints can be applied without information about experimental conditions. To apply experimental-level constraints, peculiarities of the organism and the experimental set-up have to be taken into account to calculate the values of constraints. The limitations of applicability of particular constraints for kinetic and stoichiometric models are addressed. PMID:29472367
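
    As a hedged illustration of the foundational constraints named above (steady-state mass balance S·v = 0 plus flux bounds), the snippet below poses a three-reaction toy network as a linear program; the network and all bounds are invented for the example.

```python
# Tiny illustration of the foundational constraints named above (steady-state mass
# balance S v = 0 plus flux bounds), posed as a linear program with SciPy. The
# three-reaction toy network and all bounds are invented for the example.
import numpy as np
from scipy.optimize import linprog

# Reactions: R1 uptake -> A, R2: A -> B, R3: B -> biomass (objective).
# Rows are metabolites A and B; columns are reactions R1..R3.
S = np.array([
    [1.0, -1.0, 0.0],   # A
    [0.0, 1.0, -1.0],   # B
])
bounds = [(0, 10.0), (0, 8.0), (0, None)]    # uptake limit, enzyme capacity, free

res = linprog(c=[0, 0, -1.0],                # maximize v3 (minimize -v3)
              A_eq=S, b_eq=np.zeros(2),
              bounds=bounds, method="highs")
print("optimal fluxes:", res.x)              # limited by the 8.0 capacity on R2
```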

  14. Comprehension-Driven Program Analysis (CPA) for Malware Detection in Android Phones

    DTIC Science & Technology

    2015-07-01

    Comprehension-Driven Program Analysis (CPA) for Malware Detection in Android Phones. Iowa State University, July 2015, final technical report (report documentation page fragments omitted). Abstract excerpt: ...machine analysis system to detect novel, sophisticated Android malware. (c) An innovative library summarization technique and its incorporation in...

  15. Probing the Magnetic Field Structure in Sgr A* on Black Hole Horizon Scales with Polarized Radiative Transfer Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gold, Roman; McKinney, Jonathan C.; Johnson, Michael D.

    Magnetic fields are believed to drive accretion and relativistic jets in black hole accretion systems, but the magnetic field structure that controls these phenomena remains uncertain. We perform general relativistic (GR) polarized radiative transfer of time-dependent three-dimensional GR magnetohydrodynamical simulations to model thermal synchrotron emission from the Galactic Center source Sagittarius A* (Sgr A*). We compare our results to new polarimetry measurements by the Event Horizon Telescope (EHT) and show how polarization in the visibility (Fourier) domain distinguishes and constrains accretion flow models with different magnetic field structures. These include models with small-scale fields in disks driven by the magnetorotational instability as well as models with large-scale ordered fields in magnetically arrested disks. We also consider different electron temperature and jet mass-loading prescriptions that control the brightness of the disk, funnel-wall jet, and Blandford–Znajek-driven funnel jet. Our comparisons between the simulations and observations favor models with ordered magnetic fields near the black hole event horizon in Sgr A*, though both disk- and jet-dominated emission can satisfactorily explain most of the current EHT data. We also discuss how the black hole shadow can be filled-in by jet emission or mimicked by the absence of funnel jet emission. We show that stronger model constraints should be possible with upcoming circular polarization and higher frequency (349 GHz) measurements.

  16. A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour

    NASA Technical Reports Server (NTRS)

    Leyland, Jane Anne

    1996-01-01

    Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it was superior to the conventional controllers which were considered.

  17. An integrated GIS-based interval-probabilistic programming model for land-use planning management under uncertainty--a case study at Suzhou, China.

    PubMed

    Lu, Shasha; Zhou, Min; Guan, Xingliang; Tao, Lizao

    2015-03-01

    A large number of mathematical models have been developed for supporting optimization of land-use allocation; however, few of them simultaneously consider land suitability (e.g., physical features and spatial information) and various uncertainties existing in many factors (e.g., land availabilities, land demands, land-use patterns, and ecological requirements). This paper incorporates geographic information system (GIS) technology into interval-probabilistic programming (IPP) for land-use planning management (IPP-LUPM). GIS is utilized to assemble data for the aggregated land-use alternatives, and IPP is developed for tackling uncertainties presented as discrete intervals and probability distributions. Based on GIS, the suitability maps of different land users are provided by the outcomes of land suitability assessment and spatial analysis. The maximum area of every type of land use obtained from the suitability maps, as well as various objectives/constraints (i.e., land supply, land demand of socioeconomic development, future development strategies, and environmental capacity), is used as input data for the optimization of land-use areas with the IPP-LUPM model. The proposed model not only considers the outcomes of land suitability evaluation (i.e., topography, ground conditions, hydrology, and spatial location) but also involves economic factors, food security, and eco-environmental constraints, which can effectively reflect various interrelations among different aspects in a land-use planning management system. The case study results at Suzhou, China, demonstrate that the model can help to examine the reliability of satisfying (or risk of violating) system constraints under uncertainty. Moreover, it may identify the quantitative relationship between land suitability and system benefits. Allocating land primarily to highly suitable areas reduces potential conflicts with the environmental system but also leads to a lower economic benefit, whereas a strong push to develop less suitable land areas brings a higher economic benefit along with higher risks of violating environmental and ecological constraints. The land manager should make decisions through trade-offs between economic objectives and environmental/ecological objectives.

  18. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  19. Delaying Mobility Disability in People With Parkinson Disease Using a Sensorimotor Agility Exercise Program

    PubMed Central

    King, Laurie A; Horak, Fay B

    2009-01-01

    This article introduces a new framework for therapists to develop an exercise program to delay mobility disability in people with Parkinson disease (PD). Mobility, or the ability to efficiently navigate and function in a variety of environments, requires balance, agility, and flexibility, all of which are affected by PD. This article summarizes recent research identifying how constraints on mobility specific to PD, such as rigidity, bradykinesia, freezing, poor sensory integration, inflexible program selection, and impaired cognitive processing, limit mobility in people with PD. Based on these constraints, a conceptual framework for exercises to maintain and improve mobility is presented. An example of a constraint-focused agility exercise program, incorporating movement principles from tai chi, kayaking, boxing, lunges, agility training, and Pilates exercises, is presented. This new constraint-focused agility exercise program is based on a strong scientific framework and includes progressive levels of sensorimotor, resistance, and coordination challenges that can be customized for each patient while maintaining fidelity. Principles for improving mobility presented here can be incorporated into an ongoing or long-term exercise program for people with PD. PMID:19228832

  20. Delaying mobility disability in people with Parkinson disease using a sensorimotor agility exercise program.

    PubMed

    King, Laurie A; Horak, Fay B

    2009-04-01

    This article introduces a new framework for therapists to develop an exercise program to delay mobility disability in people with Parkinson disease (PD). Mobility, or the ability to efficiently navigate and function in a variety of environments, requires balance, agility, and flexibility, all of which are affected by PD. This article summarizes recent research identifying how constraints on mobility specific to PD, such as rigidity, bradykinesia, freezing, poor sensory integration, inflexible program selection, and impaired cognitive processing, limit mobility in people with PD. Based on these constraints, a conceptual framework for exercises to maintain and improve mobility is presented. An example of a constraint-focused agility exercise program, incorporating movement principles from tai chi, kayaking, boxing, lunges, agility training, and Pilates exercises, is presented. This new constraint-focused agility exercise program is based on a strong scientific framework and includes progressive levels of sensorimotor, resistance, and coordination challenges that can be customized for each patient while maintaining fidelity. Principles for improving mobility presented here can be incorporated into an ongoing or long-term exercise program for people with PD.

  1. Hybrid supply chain model for material requirement planning under financial constraints: A case study

    NASA Astrophysics Data System (ADS)

    Curci, Vita; Dassisti, Michele; Josefa, Mula Bru; Manuel, Díaz Madroñero

    2014-10-01

    Supply chain models (SCMs) are potentially capable of integrating different aspects in supporting decision making for enterprise management tasks. The aim of the paper is to propose a hybrid mathematical programming model for the optimization of production requirements resource planning. The preliminary model was conceived bottom-up from the analysis of a real industrial case and oriented to maximizing cash flow. Despite the intense computational effort required to converge to a solution, the optimisation produced good results for the objective function.

  2. Using Program Theory-Driven Evaluation Science to Crack the Da Vinci Code

    ERIC Educational Resources Information Center

    Donaldson, Stewart I.

    2005-01-01

    Program theory-driven evaluation science uses substantive knowledge, as opposed to method proclivities, to guide program evaluations. It aspires to update, clarify, simplify, and make more accessible the evolving theory of evaluation practice commonly referred to as theory-driven or theory-based evaluation. The evaluator in this chapter provides a…

  3. An interval chance-constrained fuzzy modeling approach for supporting land-use planning and eco-environment planning at a watershed level.

    PubMed

    Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang

    2017-12-15

    An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at a watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which couples an interval mathematical model, a chance-constrained programming model and a fuzzy linear programming model and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of the Wujiang watershed, Guizhou Province, China. The results indicate that under highly suitable land conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in the Wujiang watershed will be [5015, 5648] hm^2, [7841, 7965] hm^2, [1980, 2056] hm^2, [914, 1423] hm^2, [70, 90] hm^2, [50, 70] hm^2 and [3.2, 4.3] hm^2, and the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation problems under various complicated conditions that include uncertainties, risks, economic objectives and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Computational challenges in modeling gene regulatory events.

    PubMed

    Pataskar, Abhijeet; Tiwari, Vijay K

    2016-10-19

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating "omics" data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology.

  5. All Hands on Deck: A Comprehensive, Results-Driven Counseling Model

    ERIC Educational Resources Information Center

    Salina, Charles; Girtz, Suzann; Eppinga, Joanie; Martinez, David; Kilian, Diana Blumer; Lozano, Elizabeth; Martinez, Adrian P.; Crowe, Dustin; De La Barrera, Maria; Mendez, Maribel Madrigal; Shines, Terry

    2014-01-01

    A graduation rate of 49% alarmed Sunnyside High School in 2009. With graduation rates in the bottom 5% statewide, Sunnyside was awarded a federally funded School Improvement Grant. The "turnaround" principal and the school counselors aligned goals with the ASCA National Model through the program All Hands On Deck (AHOD), based on…

  6. MICRO-U 70.1: Training Model of an Instructional Institution, Users Manual.

    ERIC Educational Resources Information Center

    Springer, Colby H.

    MICRO-U is a student demand driven deterministic model. Student enrollment, by degree program, is used to develop an Instructional Work Load Matrix. Linear equations using Weekly Student Contact Hours (WSCH), Full Time Equivalent (FTE) students, FTE faculty, and number of disciplines determine library, central administration, and physical plant…

  7. A Model for Mapping Linkages between Health and Education Agencies To Improve School Health.

    ERIC Educational Resources Information Center

    St. Leger, Lawrence; Nutbeam, Don

    2000-01-01

    Reviews the evolution of efforts to develop effective, sustainable school health programs, arguing that efforts were significantly driven by public health priorities and have not adequately accounted for educational perspectives. A model illustrating linkages between different school-based inputs and strategies and long-term health and educational…

  8. A heuristic constraint programmed planner for deep space exploration problems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao; Xu, Rui; Cui, Pingyuan

    2017-10-01

    In recent years, the increasing number of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In planning, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods, one for variables and one for values. On this basis a Graphplan-like constraint-programmed planner is proposed. In the planner we convert the traditional constraint satisfaction problem (CSP) into a time-tagged form with different levels. Inspired by the most-constrained-first principle in CSPs, the variable heuristic is based on the number of unassigned variables in a constraint and the value heuristic on the completion degree of the support set. Simulation experiments show that the proposed planner is effective and its performance is competitive with other kinds of planners.
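
    The most-constrained-first principle that the planner's heuristics build on can be illustrated with a generic backtracking CSP solver (a hedged sketch, not the planner itself): at each step the search branches on the variable with the fewest remaining consistent values.

```python
# Sketch of the most-constrained-first principle the planner's heuristics build on
# (not the planner itself): backtracking search that always branches on the
# variable with the fewest remaining consistent values (MRV).
def solve(domains, constraints, assignment=None):
    assignment = assignment or {}
    unassigned = [v for v in domains if v not in assignment]
    if not unassigned:
        return assignment

    def remaining(v):
        # Values of v still consistent with every constraint on the partial assignment.
        return [x for x in domains[v]
                if all(c(dict(assignment, **{v: x})) for c in constraints)]

    var = min(unassigned, key=lambda v: len(remaining(v)))   # most constrained first
    for value in remaining(var):
        result = solve(domains, constraints, dict(assignment, **{var: value}))
        if result is not None:
            return result
    return None

# Toy scheduling CSP: three activities, no two in the same slot, A before C.
domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}
constraints = [
    lambda a: len(set(a.values())) == len(a),                       # all-different
    lambda a: "A" not in a or "C" not in a or a["A"] < a["C"],      # A before C
]
print(solve(domains, constraints))
```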

  9. Inspiring Teaching: Preparing Teachers to Succeed in Mission-Driven Schools

    ERIC Educational Resources Information Center

    Feiman-Nemser, Sharon, Ed.; Tamir, Eran, Ed.; Hammerness, Karen, Ed.

    2014-01-01

    How can we best prepare pre-service teachers to succeed in the classroom--and to stay in teaching over time? The one-size-fits-all model of traditional teacher education programs has been widely criticized, yet the most popular alternative--fast-track programs--have at best a mixed record of success. An increasing number of districts and charter…

  10. The Programmers' Collective: Fostering Participatory Culture by Making Music Videos in a High School Scratch Coding Workshop

    ERIC Educational Resources Information Center

    Fields, Deborah; Vasudevan, Veena; Kafai, Yasmin B.

    2015-01-01

    We highlight ways to support interest-driven creation of digital media in Scratch, a visual-based programming language and community, within a high school programming workshop. We describe a collaborative approach, the programmers' collective, that builds on social models found in do-it-yourself and open source communities, but with scaffolding…

  11. Modeling 2D and 3D diffusion.

    PubMed

    Saxton, Michael J

    2007-01-01

    Modeling obstructed diffusion is essential to the understanding of diffusion-mediated processes in the crowded cellular environment. Simple Monte Carlo techniques for modeling obstructed random walks are explained and related to Brownian dynamics and more complicated Monte Carlo methods. Random number generation is reviewed in the context of random walk simulations. Programming techniques and event-driven algorithms are discussed as ways to speed simulations.
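
    A minimal, hedged example of the simple Monte Carlo technique described above: a 2D lattice random walk among randomly placed immobile obstacles, with blocked moves rejected and the mean-square displacement (MSD) compared to the unobstructed expectation. Lattice size, obstacle fraction, and step counts are illustrative.

```python
# Minimal example of the Monte Carlo technique described above: a 2D lattice
# random walk among randomly placed immobile obstacles (blocked moves rejected),
# with the mean-square displacement compared to the unobstructed expectation.
import random

L, OBSTACLE_FRACTION, STEPS, WALKERS = 64, 0.3, 1000, 200
rng = random.Random(1)
obstacles = {(x, y) for x in range(L) for y in range(L)
             if rng.random() < OBSTACLE_FRACTION}
obstacles.discard((L // 2, L // 2))          # keep the start site free
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walk():
    x = y = L // 2                           # wrapped lattice position
    ux = uy = 0                              # unwrapped displacement
    for _ in range(STEPS):
        dx, dy = rng.choice(moves)
        nx, ny = (x + dx) % L, (y + dy) % L  # periodic boundaries for obstacles
        if (nx, ny) not in obstacles:        # reject moves onto obstacles
            x, y = nx, ny
            ux, uy = ux + dx, uy + dy
    return ux * ux + uy * uy

msd = sum(walk() for _ in range(WALKERS)) / WALKERS
print(f"MSD after {STEPS} steps: {msd:.1f} (unobstructed walk would give ~{STEPS})")
```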

  12. A Unified Data-Driven Approach for Programming In Situ Analysis and Visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aiken, Alex

    The placement and movement of data is becoming the key limiting factor on both performance and energy efficiency of high performance computations. As systems generate more data, it is becoming increasingly difficult to actually move that data elsewhere for post-processing, as the rate of improvements in supporting I/O infrastructure is not keeping pace. Together, these trends are creating a shift in how we think about exascale computations, from a viewpoint that focuses on FLOPS to one that focuses on data and data-centric operations as fundamental to the reasoning about, and optimization of, scientific workflows on extreme-scale architectures. The overarching goal of our effort was the study of a unified data-driven approach for programming applications and in situ analysis and visualization. Our work was to understand the interplay between data-centric programming model requirements at extreme-scale and the overall impact of those requirements on the design, capabilities, flexibility, and implementation details for both applications and the supporting in situ infrastructure. In this context, we made many improvements to the Legion programming system (one of the leading data-centric models today) and demonstrated in situ analyses on real application codes using these improvements.

  13. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.

  14. Determination of optimum values for maximizing the profit in bread production: Daily bakery Sdn Bhd

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Sim, Raymond

    2015-02-01

    An integer programming problem is a mathematical optimization or feasibility problem in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integrality constraints) are linear. ILP has many applications in industrial production, including job-shop modelling. A typical objective is to maximize total production without exceeding the available resources; in some cases this can be expressed as a linear program, but the variables must be constrained to be integers. ILP is thus concerned with optimizing a linear function while satisfying a set of linear equality and inequality constraints together with integrality restrictions, and it has been used to solve optimization problems in many industries, such as banking, nutrition, agriculture, and baking. The main purpose of this study is to formulate the best combination of ingredients for producing different types of bread at Daily Bakery in order to gain the maximum profit. The study also focuses on a sensitivity analysis with respect to changes in the profit and the cost of each ingredient. The optimum result obtained from the QM software is RM 65,377.29 per day. This study will benefit Daily Bakery as well as other, similar industries: by formulating the ingredient mix, they can readily determine the total daily profit from bread production.
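
    For illustration, a hedged sketch of the kind of integer linear program the study describes, written with the PuLP modelling library; the bread types, profits, ingredient requirements, and stock levels are invented placeholders, not the Daily Bakery data.

        # pip install pulp
        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

        breads = ["white", "wholemeal", "bun"]
        profit = {"white": 1.2, "wholemeal": 1.5, "bun": 0.8}   # RM per loaf (illustrative)
        need = {  # kg of each ingredient per loaf (illustrative)
            "white":     {"flour": 0.50, "sugar": 0.05, "yeast": 0.010},
            "wholemeal": {"flour": 0.45, "sugar": 0.03, "yeast": 0.010},
            "bun":       {"flour": 0.20, "sugar": 0.04, "yeast": 0.005},
        }
        stock = {"flour": 800.0, "sugar": 60.0, "yeast": 15.0}  # kg available per day

        model = LpProblem("bread_profit", LpMaximize)
        x = {b: LpVariable(f"loaves_{b}", lowBound=0, cat="Integer") for b in breads}

        # Objective: total daily profit.
        model += lpSum(profit[b] * x[b] for b in breads)

        # Resource constraints: ingredient usage cannot exceed the daily stock.
        for ing in stock:
            model += lpSum(need[b][ing] * x[b] for b in breads) <= stock[ing]

        model.solve()
        for b in breads:
            print(b, int(value(x[b])))
        print("daily profit (RM):", value(model.objective))

    A sensitivity analysis of the kind mentioned in the abstract would then rerun the model while perturbing the profit coefficients or the ingredient costs and record how the optimal mix and objective value respond.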

  15. Gradient-based optimization with B-splines on sparse grids for solving forward-dynamics simulations of three-dimensional, continuum-mechanical musculoskeletal system models.

    PubMed

    Valentin, J; Sprenger, M; Pflüger, D; Röhrle, O

    2018-05-01

    Investigating the interplay between muscular activity and motion is the basis to improve our understanding of healthy or diseased musculoskeletal systems. To be able to analyze musculoskeletal systems, computational models are used. Despite some severe modeling assumptions, almost all existing musculoskeletal system simulations appeal to multibody simulation frameworks. Although continuum-mechanical musculoskeletal system models can compensate for some of these limitations, they are essentially not considered because of their computational complexity and cost. The proposed framework is the first activation-driven musculoskeletal system model, in which the exerted skeletal muscle forces are computed using 3-dimensional, continuum-mechanical skeletal muscle models and in which muscle activations are determined based on a constraint optimization problem. Numerical feasibility is achieved by computing sparse grid surrogates with hierarchical B-splines, and adaptive sparse grid refinement further reduces the computational effort. The choice of B-splines allows the use of all existing gradient-based optimization techniques without further numerical approximation. This paper demonstrates that the resulting surrogates have low relative errors (less than 0.76%) and can be used within forward simulations that are subject to constraint optimization. To demonstrate this, we set up several different test scenarios in which an upper limb model consisting of the elbow joint, the biceps and triceps brachii, and an external load is subjected to different optimization criteria. Even though this novel method has only been demonstrated for a 2-muscle system, it can easily be extended to musculoskeletal systems with 3 or more muscles.

  16. An Improved Multi-Objective Programming with Augmented ε-Constraint Method for Hazardous Waste Location-Routing Problems

    PubMed Central

    Yu, Hao; Solvang, Wei Deng

    2016-01-01

    Hazardous waste location-routing problems are of importance due to the potential risk for nearby residents and the environment. In this paper, an improved mathematical formulation is developed based upon a multi-objective mixed integer programming approach. The model aims at assisting decision makers in selecting locations for different facilities including treatment plants, recycling plants and disposal sites, providing appropriate technologies for hazardous waste treatment, and routing transportation. In the model, two critical factors are taken into account: system operating costs and risk imposed on local residents, and a compensation factor is introduced to the risk objective function in order to account for the fact that the risk level imposed by one type of hazardous waste or treatment technology may significantly vary from that of other types. Besides, the policy instruments for promoting waste recycling are considered, and their influence on the costs and risk of hazardous waste management is also discussed. The model is coded and calculated in Lingo optimization solver, and the augmented ε-constraint method is employed to generate the Pareto optimal curve of the multi-objective optimization problem. The trade-off between different objectives is illustrated in the numerical experiment. PMID:27258293
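
    As a simplified illustration of the ε-constraint idea used in the paper (the plain rather than the augmented variant), the sketch below traces a Pareto front for a toy bi-objective linear program by minimizing one objective while bounding the other; the two objectives, the constraint, and the ε grid are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        # Toy bi-objective LP over x1, x2 in [0, 10]: minimize cost f1 and risk f2.
        c_cost = np.array([3.0, 2.0])    # f1 coefficients
        c_risk = np.array([1.0, 4.0])    # f2 coefficients

        # Feasibility: x1 + x2 >= 10  rewritten as  -x1 - x2 <= -10.
        A_base = [[-1.0, -1.0]]
        b_base = [-10.0]
        bounds = [(0, 10), (0, 10)]

        # Range of the risk objective over the feasible set (payoff-table endpoints).
        risk_lo = linprog(c_risk, A_ub=A_base, b_ub=b_base, bounds=bounds).fun
        risk_hi = -linprog(-c_risk, A_ub=A_base, b_ub=b_base, bounds=bounds).fun

        pareto = []
        for eps in np.linspace(risk_lo, risk_hi, 11):
            # epsilon-constraint subproblem: minimize cost subject to risk <= eps.
            res = linprog(c_cost,
                          A_ub=A_base + [list(c_risk)],
                          b_ub=b_base + [eps],
                          bounds=bounds)
            if res.success:
                pareto.append((c_cost @ res.x, c_risk @ res.x))

        for cost, risk in pareto:
            print(f"cost={cost:6.2f}  risk={risk:6.2f}")

    The augmented variant additionally rewards slack in the ε-constrained objective with a small weight, which removes weakly Pareto-optimal points from the sweep.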

  17. An Improved Multi-Objective Programming with Augmented ε-Constraint Method for Hazardous Waste Location-Routing Problems.

    PubMed

    Yu, Hao; Solvang, Wei Deng

    2016-05-31

    Hazardous waste location-routing problems are of importance due to the potential risk for nearby residents and the environment. In this paper, an improved mathematical formulation is developed based upon a multi-objective mixed integer programming approach. The model aims at assisting decision makers in selecting locations for different facilities including treatment plants, recycling plants and disposal sites, providing appropriate technologies for hazardous waste treatment, and routing transportation. In the model, two critical factors are taken into account: system operating costs and risk imposed on local residents, and a compensation factor is introduced to the risk objective function in order to account for the fact that the risk level imposed by one type of hazardous waste or treatment technology may significantly vary from that of other types. Besides, the policy instruments for promoting waste recycling are considered, and their influence on the costs and risk of hazardous waste management is also discussed. The model is coded and calculated in Lingo optimization solver, and the augmented ε-constraint method is employed to generate the Pareto optimal curve of the multi-objective optimization problem. The trade-off between different objectives is illustrated in the numerical experiment.

  18. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research work conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active-set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing-assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design-concept phase, in a way similar to the so-called idea-triggers in creativity-enhancement techniques such as brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  19. SLS-SPEC-159 Cross-Program Design Specification for Natural Environments (DSNE) Revision E

    NASA Technical Reports Server (NTRS)

    Roberts, Barry C.

    2017-01-01

    The DSNE completes environment-related specifications for architecture, system-level, and lower-tier documents by specifying the ranges of environmental conditions that must be accounted for by NASA ESD Programs. To assure clarity and consistency, and to prevent requirements documents from becoming cluttered with extensive amounts of technical material, natural environment specifications have been compiled into this document. The intent is to keep a unified specification for natural environments that each Program calls out for appropriate application. This document defines the natural environments parameter limits (maximum and minimum values, energy spectra, or precise model inputs, assumptions, model options, etc.), for all ESD Programs. These environments are developed by the NASA Marshall Space Flight Center (MSFC) Natural Environments Branch (MSFC organization code: EV44). Many of the parameter limits are based on experience with previous programs, such as the Space Shuttle Program. The parameter limits contain no margin and are meant to be evaluated individually to ensure they are reasonable (i.e., do not apply unrealistic extreme-on-extreme conditions). The natural environments specifications in this document should be accounted for by robust design of the flight vehicle and support systems. However, it is understood that in some cases the Programs will find it more effective to account for portions of the environment ranges by operational mitigation or acceptance of risk in accordance with an appropriate program risk management plan and/or hazard analysis process. The DSNE is not intended as a definition of operational models or operational constraints, nor is it adequate, alone, for ground facilities which may have additional requirements (for example, building codes and local environmental constraints). "Natural environments," as the term is used here, refers to the environments that are not the result of intended human activity or intervention. It consists of a variety of external environmental factors (most of natural origin and a few of human origin) which impose restrictions or otherwise impact the development or operation of flight vehicles and destination surface systems.

  20. Autonomously Generating Operations Sequences for a Mars Rover Using Artificial Intelligence-Based Planning

    NASA Astrophysics Data System (ADS)

    Sherwood, R.; Mutz, D.; Estlin, T.; Chien, S.; Backes, P.; Norris, J.; Tran, D.; Cooper, B.; Rabideau, G.; Mishkin, A.; Maxwell, S.

    2001-07-01

    This article discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This artificial intelligence (AI)-based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules. An automated planning and scheduling system encodes rover design knowledge and uses search and reasoning techniques to automatically generate low-level command sequences while respecting rover operability constraints, science and engineering preferences, environmental predictions, and also adhering to hard temporal constraints. This prototype planning system has been field-tested using the Rocky 7 rover at JPL and will be field-tested on more complex rovers to prove its effectiveness before transferring the technology to flight operations for an upcoming NASA mission. Enabling goal-driven commanding of planetary rovers greatly reduces the requirements for highly skilled rover engineering personnel. This in turn greatly reduces mission operations costs. In addition, goal-driven commanding permits a faster response to changes in rover state (e.g., faults) or science discoveries by removing the time-consuming manual sequence validation process, allowing rapid "what-if" analyses, and thus reducing overall cycle times.

  1. Ecological transition predictably associated with gene degeneration.

    PubMed

    Wessinger, Carolyn A; Rausher, Mark D

    2015-02-01

    Gene degeneration or loss can significantly contribute to phenotypic diversification, but may generate genetic constraints on future evolutionary trajectories, potentially restricting phenotypic reversal. Such constraints may manifest as directional evolutionary trends when parallel phenotypic shifts consistently involve gene degeneration or loss. Here, we demonstrate that widespread parallel evolution in Penstemon from blue to red flowers predictably involves the functional inactivation and degeneration of the enzyme flavonoid 3',5'-hydroxylase (F3'5'H), an anthocyanin pathway enzyme required for the production of blue floral pigments. Other types of genetic mutations do not consistently accompany this phenotypic shift. This pattern may be driven by the relatively large mutational target size of degenerative mutations to this locus and the apparent lack of associated pleiotropic effects. The consistent degeneration of F3'5'H may provide a mechanistic explanation for the observed asymmetry in the direction of flower color evolution in Penstemon: Blue to red transitions are common, but reverse transitions have not been observed. Although phenotypic shifts in this system are likely driven by natural selection, internal constraints may generate predictable genetic outcomes and may restrict future evolutionary trajectories.

  2. Research Breathes New Life Into Senior Travel Program.

    ERIC Educational Resources Information Center

    Blazey, Michael

    1986-01-01

    A survey of older citizens concerning travel interests revealed constraints to participation in a travel program. A description is given of how research on attitudes and life styles indicated ways in which these constraints could be lessened. (JD)

  3. The research on thermal adaptability reinforcement technology for photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Su, Nana; Zhou, Guozhong

    2015-10-01

    Nowadays, photovoltaic modules contain more high-performance components in less space. They are also required to operate in severe temperature conditions for special uses such as aerospace. As temperature rises, the failure rate increases exponentially, which significantly reduces reliability. In order to improve the thermal adaptability of photovoltaic modules, this paper investigates reinforcement technologies. Thermoelectric coolers are widely used in aerospace, where the working environment is harsh, so theoretical formulas for computing the refrigerating efficiency, refrigerating capacity, and temperature difference are described in detail. The optimum operating current for three classical working conditions is obtained, which can be used to guide the design of the drive circuit. Taking an equipment enclosure as an example, a thermoelectric cooler is used to reinforce its thermal adaptability. Physical and thermal models are built from the physical dimensions and constraint conditions and simulated in Flotherm. The resulting temperature field contours verify the effectiveness of the reinforcement.

  4. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious, and automatic image analysis can expedite this task. Segmenting WCE video into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models prior knowledge. This prior knowledge, together with inter-frame differences, serves as a global constraint driven by the underlying observations of each WCE video, which are fitted by a Gaussian distribution to constrain the transition probabilities of a hidden Markov model. Experimental results demonstrate the effectiveness of the approach.
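
    A minimal sketch of one way such a global constraint can enter a hidden Markov model: the four anatomical segments must appear in a fixed order, so the transition matrix is forced to be "left-to-right" (stay or advance only) and Viterbi decoding can never move backwards. The class count matches the four-part segmentation, but the emission scores and probabilities are invented, and this is not the paper's Poisson/Gaussian formulation.

        import numpy as np

        n_states = 4        # esophagus, stomach, small intestine, colon (in order)
        T = 12              # number of video frames in this toy example
        rng = np.random.default_rng(1)

        # Per-frame class scores, e.g. SVM posteriors (illustrative random values).
        emission = rng.dirichlet(np.ones(n_states), size=T)

        # Left-to-right transition matrix: remain in the current segment or move
        # to the next one; moving backwards has probability zero.
        A = np.zeros((n_states, n_states))
        for s in range(n_states - 1):
            A[s, s], A[s, s + 1] = 0.9, 0.1
        A[-1, -1] = 1.0

        log_A = np.log(np.where(A > 0, A, 1e-300))
        log_e = np.log(emission + 1e-12)

        # Viterbi decoding: the zero-probability transitions enforce the global
        # constraint that the decoded state sequence is non-decreasing.
        delta = np.full((T, n_states), -np.inf)
        back = np.zeros((T, n_states), dtype=int)
        delta[0, 0] = log_e[0, 0]       # the video starts in the first segment
        for t in range(1, T):
            for j in range(n_states):
                scores = delta[t - 1] + log_A[:, j]
                back[t, j] = int(np.argmax(scores))
                delta[t, j] = scores[back[t, j]] + log_e[t, j]

        path = np.zeros(T, dtype=int)
        path[-1] = int(np.argmax(delta[-1]))
        for t in range(T - 2, -1, -1):
            path[t] = back[t + 1, path[t + 1]]
        print("decoded segment per frame:", path)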

  5. Determining the Probability of Violating Upper-Level Wind Constraints for the Launch of Minuteman III Ballistic Missiles at Vandenberg Air Force Base

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Brock, Tyler M.

    2013-01-01

    The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF requested the Applied Meteorology Unit (AMU) analyze VAFB sounding data to determine the probability of violating (PoV) upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a graphical user interface (GUI) that will calculate the PoV of each constraint on the day of launch. The AMU suggested also including forecast sounding data from the Rapid Refresh (RAP) model. This would provide further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours, and help to improve the overall upper winds forecast on launch day.

  6. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.

  7. An Optimization Model for the Selection of Bus-Only Lanes in a City.

    PubMed

    Chen, Qun

    2015-01-01

    The planning of urban bus-only lane networks is an important measure to improve bus service and bus priority. To determine the effective arrangement of bus-only lanes, a bi-level programming model for urban bus lane layout is developed in this study that considers accessibility and budget constraints. The goal of the upper-level model is to minimize the total travel time, and the lower-level model is a capacity-constrained traffic assignment model that describes the passenger flow assignment on bus lines, in which the priority sequence of the transfer times is reflected in the passengers' route-choice behaviors. Using the proposed bi-level programming model, optimal bus lines are selected from a set of candidate bus lines; thus, the corresponding bus lane network on which the selected bus lines run is determined. The solution method using a genetic algorithm in the bi-level programming model is developed, and two numerical examples are investigated to demonstrate the efficacy of the proposed model.

  8. Constraints on Nonlinear and Stochastic Growth Theories for Type 3 Solar Radio Bursts from the Corona to 1 AU

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.; Robinson, P. A.

    1998-01-01

    Existing, competing theories for coronal and interplanetary type III solar radio bursts appeal to one or more of modulational instability, electrostatic (ES) decay processes, or stochastic growth physics to preserve the electron beam, limit the levels of Langmuir-like waves driven by the beam, and produce wave spectra capable of coupling nonlinearly to generate the observed radio emission. Theoretical constraints exist on the wavenumbers and relative sizes of the wave bandwidth and nonlinear growth rate for which Langmuir waves are subject to modulational instability and the parametric and random phase versions of ES decay. A constraint also exists on whether stochastic growth theory (SGT) is appropriate. These constraints are evaluated here using the beam, plasma, and wave properties (1) observed in specific interplanetary type III sources, (2) predicted nominally for the corona, and (3) predicted at heliocentric distances greater than a few solar radii by power-law models based on interplanetary observations. It is found that the Langmuir waves driven directly by the beam have wavenumbers that are almost always too large for modulational instability but are appropriate to ES decay. Even for waves scattered to lower wavenumbers (by ES decay, for instance), the wave bandwidths are predicted to be too large and the nonlinear growth rates too small for modulational instability to occur for the specific interplanetary events studied or the great majority of Langmuir wave packets in type III sources at arbitrary heliocentric distances. Possible exceptions are for very rare, unusually intense, narrowband wave packets, predominantly close to the Sun, and for the front portion of very fast beams traveling through unusually dilute, cold solar wind plasmas. Similar arguments demonstrate that the ES decay should proceed almost always as a random phase process rather than a parametric process, with similar exceptions. These results imply that it is extremely rare for modulational instability or parametric decay to proceed in type III sources at any heliocentric distance: theories for type III bursts based on modulational instability or parametric decay are therefore not viable in general. In contrast, the constraint on SGT can be satisfied and random phase ES decay can proceed at all heliocentric distances under almost all circumstances. (The contrary circumstances involve unusually slow, broad beams moving through unusually hot regions of the Corona.) The analyses presented here strongly justify extending the existing SGT-based model for interplanetary type III bursts (which includes SGT physics, random phase ES decay, and specific electromagnetic emission mechanisms) into a general theory for type III bursts from the corona to beyond 1 AU. This extended theory enjoys strong theoretical support, explains the characteristics of specific interplanetary type III bursts very well, and can account for the detailed dynamic spectra of type III bursts from the lower corona and solar wind.

  9. Current-driven plasma acceleration versus current-driven energy dissipation. I - Wave stability theory

    NASA Technical Reports Server (NTRS)

    Kelly, A. J.; Jahn, R. G.; Choueiri, E. Y.

    1990-01-01

    The dominant unstable electrostatic wave modes of an electromagnetically accelerated plasma are investigated. The study is the first part of a three-phase program aimed at characterizing the current-driven turbulent dissipation degrading the efficiency of Lorentz force plasma accelerators such as the MPD thruster. The analysis uses a kinetic theory that includes magnetic and thermal effects as well as those of an electron current transverse to the magnetic field and collisions, thus combining all the features of previous models. Analytical and numerical solutions allow a detailed description of threshold criteria, finite growth behavior, destabilization mechanisms and maximized-growth characteristics of the dominant unstable modes. The lower hybrid current-driven instability is implicated as dominant and was found to preserve its character in the collisional plasma regime.

  10. s-Processing from MHD-induced mixing and isotopic abundances in presolar SiC grains

    NASA Astrophysics Data System (ADS)

    Palmerini, S.; Trippella, O.; Busso, M.; Vescovi, D.; Petrelli, M.; Zucchini, A.; Frondini, F.

    2018-01-01

    In the past years the observational evidence that s-process elements from Sr to Pb are produced by stars ascending the so-called Asymptotic Giant Branch (or "AGB") could not be explained by self-consistent models, forcing researchers to extensive parameterizations. The crucial point is to understand how protons can be injected from the envelope into the He-rich layers, yielding the formation of 13C and then the activation of the 13C (α,n)16O reaction. Only recently, attempts to solve this problem started to consider quantitatively physically-based mixing mechanisms. Among them, MHD processes in the plasma were suggested to yield mass transport through magnetic buoyancy. In this framework, we compare results of nucleosynthesis models for Low Mass AGB Stars (M≲ 3M⊙), developed from the MHD scenario, with the record of isotopic abundance ratios of s-elements in presolar SiC grains, which were shown to offer precise constraints on the 13C reservoir. We find that n-captures driven by magnetically-induced mixing can indeed account for the SiC data quite well and that this is due to the fact that our 13C distribution fulfils the above constraints rather accurately. We suggest that similar tests should be now performed using different physical models for mixing. Such comparisons would indeed improve decisively our understanding of the formation of the neutron source.

  11. Duality in non-linear programming

    NASA Astrophysics Data System (ADS)

    Jeyalakshmi, K.

    2018-04-01

    In this paper we consider duality and converse duality for a programming problem involving convex objective and constraint functions with finite dimensional range. We do not assume any constraint qualification. The dual is presented by reducing the problem to a standard Lagrange multiplier problem.
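
    For concreteness, the standard Lagrangian dual pair that results of this kind concern, stated for a generic convex program with inequality constraints (not the paper's specific finite-dimensional-range formulation):

        \text{(P)}\qquad \min_{x}\; f(x) \quad\text{subject to}\quad g_i(x) \le 0,\; i = 1,\dots,m

        \text{(D)}\qquad \max_{\lambda \ge 0}\; \theta(\lambda), \qquad \theta(\lambda) \;=\; \inf_{x}\Big[\, f(x) + \sum_{i=1}^{m} \lambda_i\, g_i(x) \Big]

    Weak duality, θ(λ) ≤ f(x) for every feasible x and every λ ≥ 0, holds with no constraint qualification at all; a qualification such as Slater's condition is what is usually invoked to guarantee a zero duality gap, which is why dispensing with it is the notable feature of the record above.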

  12. Inverse constraints for emission fluxes of atmospheric tracers estimated from concentration measurements and Lagrangian transport

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Patra, Prabir; Breivik, Knut

    2015-04-01

    Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as object-oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.
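
    A compact sketch of the linear-Gaussian special case behind such inverse flux estimates: with a source-receptor (footprint) matrix H from the transport model, observations y, a prior flux estimate, and error covariances B and R, the posterior fluxes and their uncertainty have a closed form. All numbers below are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        n_obs, n_flux = 40, 10
        H = rng.random((n_obs, n_flux))               # source-receptor (footprint) matrix
        x_true = rng.uniform(0.5, 2.0, n_flux)        # synthetic "true" emission fluxes
        y = H @ x_true + rng.normal(0, 0.05, n_obs)   # observations with noise

        x_prior = np.ones(n_flux)                     # prior flux estimate
        B = 0.5 ** 2 * np.eye(n_flux)                 # prior error covariance
        R = 0.05 ** 2 * np.eye(n_obs)                 # observation error covariance

        # Bayesian / generalized least-squares update:
        # x_hat = x_prior + B H^T (H B H^T + R)^{-1} (y - H x_prior)
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        x_hat = x_prior + K @ (y - H @ x_prior)
        P = (np.eye(n_flux) - K @ H) @ B              # posterior error covariance

        print("true     :", np.round(x_true, 2))
        print("posterior:", np.round(x_hat, 2))
        print("1-sigma  :", np.round(np.sqrt(np.diag(P)), 3))

    Non-linear constraints, such as non-negativity of the fluxes, lose this closed form, which is where the direct numerical optimisation mentioned above comes in.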

  13. PATSTAGS: PATRAN-To-STAGSC-1 Translator

    NASA Technical Reports Server (NTRS)

    Otte, Neil

    1993-01-01

    PATSTAGS computer program translates data from PATRAN finite-element mathematical model into STAGS input records used for engineering analysis. Reads data from PATRAN neutral file and writes STAGS input records into STAGS input file and UPRESS data file. Supports translations of nodal constraints, and of nodal, element, force, and pressure data. Written in FORTRAN 77.

  14. Modular constraints on conformal field theories with currents

    NASA Astrophysics Data System (ADS)

    Bae, Jin-Beom; Lee, Sungjay; Song, Jaewon

    2017-12-01

    We study constraints coming from the modular invariance of the partition function of two-dimensional conformal field theories. We constrain the spectrum of CFTs in the presence of holomorphic and anti-holomorphic currents using semi-definite programming. In particular, we find that the bounds on the twist gap for the non-current primaries depend dramatically on the presence of holomorphic currents, showing numerous kinks and peaks. Various rational CFTs are realized at the numerical boundary of the twist gap, saturating the upper limits on the degeneracies. Such theories include Wess-Zumino-Witten models for Deligne's exceptional series, the Monster CFT and the Baby Monster CFT. We also study modular constraints imposed by W-algebras of various types and observe that the bounds on the gap depend on the choice of W-algebra in the small central charge region.

  15. Trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Mease, Kenneth D.; Vanburen, Mark A.

    1989-01-01

    The first step in the approach to developing guidance laws for a horizontal take-off, air breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96 was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an after burning turbojet, a ramjet and a scramjet, are used in the air breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.

  16. Viscous relaxation of impact crater relief on Venus - Constraints on crustal thickness and thermal gradient

    NASA Technical Reports Server (NTRS)

    Grimm, Robert E.; Solomon, Sean C.

    1988-01-01

    Models for the viscous relaxation of impact crater topography are used to constrain the crustal thickness (H) and the mean lithospheric thermal gradient beneath the craters on Venus. A general formulation for gravity-driven flow in a linearly viscous fluid has been obtained which incorporates the densities and temperature-dependent effective viscosities of distinct crust and mantle layers. An upper limit to the crustal volume of Venus of 10 to the 10th cu km is obtained which implies either that the average rate of crustal generation has been much smaller on Venus than on earth or that some form of crustal recycling has occurred on Venus.

  17. Teaching People to Manage Constraints: Effects on Creative Problem-Solving

    ERIC Educational Resources Information Center

    Peterson, David R.; Barrett, Jamie D.; Hester, Kimberly S.; Robledo, Issac C.; Hougen, Dean F.; Day, Eric A.; Mumford, Michael D.

    2013-01-01

    Constraints often inhibit creative problem-solving. This study examined the impact of training strategies for managing constraints on creative problem-solving. Undergraduates, 218 in all, were asked to work through 1 to 4 self-paced instructional programs focused on constraint management strategies. The quality, originality, and elegance of…

  18. Developing community-driven quality improvement initiatives to enhance chronic disease care in Indigenous communities in Canada: the FORGE AHEAD program protocol.

    PubMed

    Naqshbandi Hayward, Mariam; Paquette-Warren, Jann; Harris, Stewart B

    2016-07-26

    Given the dramatic rise and impact of chronic diseases and gaps in care in Indigenous peoples in Canada, a shift from the dominant episodic and responsive healthcare model most common in First Nations communities to one that places emphasis on proactive prevention and chronic disease management is urgently needed. The Transformation of Indigenous Primary Healthcare Delivery (FORGE AHEAD) Program partners with 11 First Nations communities across six provinces in Canada to develop and evaluate community-driven quality improvement (QI) initiatives to enhance chronic disease care. FORGE AHEAD is a 5-year research program (2013-2017) that utilizes a pre-post mixed-methods observational design rooted in participatory research principles to work with communities in developing culturally relevant innovations and improved access to available services. This intensive program incorporates a series of 10 inter-related and progressive program activities designed to foster community-driven initiatives with type 2 diabetes mellitus as the action disease. Preparatory activities include a national community profile survey, best practice and policy literature review, and readiness tool development. Community-level intervention activities include community and clinical readiness consultations, development of a diabetes registry and surveillance system, and QI activities. With a focus on capacity building, all community-level activities are driven by trained community members who champion QI initiatives in their community. Program wrap-up activities include readiness tool validation, cost-analysis and process evaluation. In collaboration with Health Canada and the Aboriginal Diabetes Initiative, scale-up toolkits will be developed in order to build on lessons-learned, tools and methods, and to fuel sustainability and spread of successful innovations. The outcomes of this research program, its related cost and the subsequent policy recommendations, will have the potential to significantly affect future policy decisions pertaining to chronic disease care in First Nations communities in Canada. Current ClinicalTrial.gov protocol ID NCT02234973 . Date of Registration: July 30, 2014.

  19. Optimisation des trajectoires d'un systeme de gestion de vol d'avions pour la reduction des couts de vol

    NASA Astrophysics Data System (ADS)

    Sidibe, Souleymane

    The preparation and monitoring of operational flight plans is a major task for the crew of a commercial flight. Its purpose is to set the vertical and lateral trajectories followed by the airplane during the phases of flight: climb, cruise, descent, etc. These trajectories are subject to conflicting economic constraints, namely the minimization of flight time and of fuel consumed, as well as environmental constraints. In its mission-planning task, the crew is assisted by the Flight Management System (FMS), which is used to construct the path to follow and to predict the behaviour of the aircraft along the flight plan. The FMS considered in our research includes a flight optimization model that only computes the optimal speed profile minimizing the overall flight cost, synthesized by a cost-index criterion, at a fixed cruise altitude. A model based solely on optimization of the speed profile is not sufficient, however: the current optimization must be extended to simultaneous optimization of speed and altitude in order to determine the optimum cruise altitude that minimizes the overall cost when the path is flown with the optimal speed profile. A new program was therefore developed, based on Bellman's dynamic programming method for solving optimal path problems. In addition, the improvement involves investigating new trajectory patterns that integrate climbing (step-climb) cruise segments and use the lateral plane together with the effects of weather, namely wind and temperature. Finally, for better optimization, the program takes into account the flight envelope constraints of the aircraft on which the FMS is used.
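
    A minimal dynamic-programming sketch in the spirit of the simultaneous speed/altitude optimization described above: the cruise is split into equal-distance segments, the state is the flight level, and each transition cost combines fuel and time through a cost index, with an extra penalty for a step climb. The fuel/time model and every number below are invented placeholders, not an FMS performance database.

        segments = 8                          # cruise split into equal-distance segments
        levels = [310, 330, 350, 370, 390]    # candidate flight levels
        speeds = [0.76, 0.78, 0.80, 0.82]     # candidate cruise Mach numbers
        seg_dist_nm = 250.0
        cost_index = 30.0                     # kg of fuel equivalent per minute of flight time

        def segment_cost(level, mach):
            """Toy fuel-plus-time cost of flying one segment at (level, mach)."""
            tas_kt = 573.0 * mach                          # rough true airspeed (placeholder)
            time_min = 60.0 * seg_dist_nm / tas_kt
            fuel_per_min = 45.0 - 0.05 * (level - 310) + 120.0 * (mach - 0.76) ** 2
            return fuel_per_min * time_min + cost_index * time_min

        def climb_penalty(from_level, to_level):
            """Extra fuel for a step climb or descent between segments (placeholder)."""
            return 8.0 * abs(to_level - from_level) / 20.0

        # Backward dynamic programming over (segment, flight level).
        best = {lvl: 0.0 for lvl in levels}            # cost-to-go beyond the last segment
        policy = [dict() for _ in range(segments)]
        for seg in reversed(range(segments)):
            new_best = {}
            for lvl in levels:                         # level at which the segment is entered
                options = [(segment_cost(nxt, mach) + climb_penalty(lvl, nxt) + best[nxt], nxt, mach)
                           for nxt in levels for mach in speeds]
                cost, nxt, mach = min(options)
                new_best[lvl] = cost
                policy[seg][lvl] = (nxt, mach)
            best = new_best

        # Recover the optimal profile from an initial cruise level.
        lvl = 330
        print("total cost (fuel-equivalent kg):", round(best[lvl], 1))
        for seg in range(segments):
            lvl, mach = policy[seg][lvl]
            print(f"segment {seg + 1}: FL{lvl}, Mach {mach}")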

  20. Obstacle avoidance handling and mixed integer predictive control for space robots

    NASA Astrophysics Data System (ADS)

    Zong, Lijun; Luo, Jianjun; Wang, Mingming; Yuan, Jianping

    2018-04-01

    This paper presents a novel obstacle avoidance constraint and a mixed integer predictive control (MIPC) method for space robots that must avoid obstacles and satisfy physical limits while performing tasks. First, a novel obstacle avoidance constraint for space robots, which requires the assumption that the manipulator links and the obstacles can be represented by convex bodies, is proposed by limiting the relative velocity between the two closest points on the manipulator and the obstacle, respectively. Furthermore, logical variables are introduced into the obstacle avoidance constraint so that its form changes automatically to satisfy different obstacle avoidance requirements in different distance intervals between the space robot and the obstacle. The obstacle avoidance constraint and other physical limits of the system, such as joint angle ranges and amplitude bounds on joint velocities and joint torques, are then expressed as inequality constraints of a quadratic programming (QP) problem using the model predictive control (MPC) method. To guarantee the feasibility of the resulting multi-constraint QP problem, the constraints are treated as soft constraints and assigned priority levels based on propositional logic theory, so that the constraints with lower priorities are always violated first in order to recover the feasibility of the QP problem. Because logical variables have been introduced, the optimization problem that includes obstacle avoidance and system physical limits as prioritized inequality constraints is termed the MIPC method for space robots, and its computational complexity, as well as possible strategies for reducing the computational burden, is analyzed. Simulations of the space robot unfolding its manipulator and tracking desired end-effector trajectories in the presence of obstacles and physical limits are presented to demonstrate the effectiveness of the proposed obstacle avoidance strategy and MIPC control method.
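
    A small sketch of the prioritized soft-constraint idea, posed here as a single weighted quadratic program rather than the paper's mixed-integer formulation: each inequality receives a nonnegative slack, and slacks on higher-priority constraints carry much larger penalties, so the lower-priority constraints are the first to be violated whenever the problem would otherwise be infeasible. The sketch uses the cvxpy modelling library, and all numbers are illustrative.

        # pip install cvxpy
        import cvxpy as cp
        import numpy as np

        # Decision: two joint-velocity commands; the desired value comes from a tracking task.
        qdot = cp.Variable(2)
        qdot_des = np.array([0.8, -0.5])

        vmax = 1.0                              # hard joint-velocity limit (always enforced)

        # Soft constraints with priorities (larger weight = higher priority):
        #   1) obstacle avoidance:  a_obs @ qdot <= b_obs      (high priority)
        #   2) comfort limit:       |qdot[1]|    <= 0.3        (low priority)
        a_obs, b_obs = np.array([1.0, 1.0]), 0.1
        s_obs = cp.Variable(nonneg=True)        # slack on the obstacle constraint
        s_cmf = cp.Variable(nonneg=True)        # slack on the comfort constraint
        w_obs, w_cmf = 1e4, 1e1

        objective = cp.Minimize(cp.sum_squares(qdot - qdot_des)
                                + w_obs * cp.square(s_obs)
                                + w_cmf * cp.square(s_cmf))
        constraints = [
            cp.abs(qdot) <= vmax,               # hard physical limit
            a_obs @ qdot <= b_obs + s_obs,      # soft, high priority
            cp.abs(qdot[1]) <= 0.3 + s_cmf,     # soft, low priority
        ]
        cp.Problem(objective, constraints).solve()

        print("qdot          :", np.round(qdot.value, 3))
        print("obstacle slack:", round(float(s_obs.value), 4))
        print("comfort slack :", round(float(s_cmf.value), 4))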

  1. Consideration in selecting crops for the human-rated life support system: a Linear Programming model

    NASA Technical Reports Server (NTRS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.; Henninger, D. L. (Principal Investigator)

    1996-01-01

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste and variety driven food choices can be built into the model.
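
    A toy linear-programming sketch of the crop-selection idea: minimize the total growing area while meeting daily crew requirements for energy and protein, with a per-crop area cap standing in for variety considerations. The per-crop yields, nutrient contents, and requirements are invented placeholders, not CELSS data.

        import numpy as np
        from scipy.optimize import linprog

        crops = ["wheat", "potato", "soybean", "lettuce"]

        # Daily yield per square metre of growing area (illustrative values only).
        kcal    = np.array([250.0, 300.0, 180.0, 20.0])   # energy
        protein = np.array([8.0, 6.0, 16.0, 1.0])         # protein, g
        area_cost = np.ones(len(crops))                   # objective: total area, m^2

        # Daily crew requirements (e.g. four crew members).
        kcal_req, protein_req = 11200.0, 270.0

        # linprog minimizes c @ x subject to A_ub @ x <= b_ub, so ">=" rows are negated.
        A_ub = -np.vstack([kcal, protein])
        b_ub = -np.array([kcal_req, protein_req])
        bounds = [(0.0, 60.0)] * len(crops)               # cap each crop's area for variety

        res = linprog(area_cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        for name, area in zip(crops, res.x):
            print(f"{name:8s}: {area:6.1f} m^2")
        print("total growing area:", round(res.fun, 1), "m^2")

    Additional rows (vitamins, fat, palatability scores) and crop-specific bounds extend the same structure, which is essentially how taste- and variety-driven food choices can be built into such a model.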

  2. Consideration in selecting crops for the human-rated life support system: a linear programming model

    NASA Astrophysics Data System (ADS)

    Wheeler, E. F.; Kossowski, J.; Goto, E.; Langhans, R. W.; White, G.; Albright, L. D.; Wilcox, D.

    A Linear Programming model has been constructed which aids in selecting appropriate crops for CELSS (Controlled Environment Life Support System) food production. A team of Controlled Environment Agriculture (CEA) faculty, staff, graduate students and invited experts representing more than a dozen disciplines, provided a wide range of expertise in developing the model and the crop production program. The model incorporates nutritional content and controlled-environment based production yields of carefully chosen crops into a framework where a crop mix can be constructed to suit the astronauts' needs. The crew's nutritional requirements can be adequately satisfied with only a few crops (assuming vitamin mineral supplements are provided) but this will not be satisfactory from a culinary standpoint. This model is flexible enough that taste and variety driven food choices can be built into the model.

  3. Decision Modeling Framework to Minimize Arrival Delays from Ground Delay Programs

    NASA Astrophysics Data System (ADS)

    Mohleji, Nandita

    Convective weather and other constraints create uncertainty in air transportation, leading to costly delays. A Ground Delay Program (GDP) is a strategy to mitigate these effects. Systematic decision support can increase GDP efficacy, reduce delays, and minimize direct operating costs. In this study, a decision analysis (DA) model is constructed by combining a decision tree and Bayesian belief network. Through a study of three New York region airports, the DA model demonstrates that larger GDP scopes that include more flights in the program, along with longer lead times that provide stakeholders greater notice of a pending program, trigger the fewest average arrival delays. These findings are demonstrated to result in a savings of up to $1,850 per flight. Furthermore, when convective weather is predicted, forecast weather confidences remain the same level or greater at least 70% of the time, supporting more strategic decision making. The DA model thus enables quantification of uncertainties and insights on causal relationships, providing support for future GDP decisions.

  4. Computer program documentation for a subcritical wing design code using higher order far-field drag minimization

    NASA Technical Reports Server (NTRS)

    Kuhlman, J. M.; Shu, J. Y.

    1981-01-01

    A subsonic, linearized aerodynamic theory, wing design program for one or two planforms was developed which uses a vortex lattice near field model and a higher order panel method in the far field. The theoretical development of the wake model and its implementation in the vortex lattice design code are summarized and sample results are given. Detailed program usage instructions, sample input and output data, and a program listing are presented in the Appendixes. The far field wake model assumes a wake vortex sheet whose strength varies piecewise linearly in the spanwise direction. From this model analytical expressions for lift coefficient, induced drag coefficient, pitching moment coefficient, and bending moment coefficient were developed. From these relationships a direct optimization scheme is used to determine the optimum wake vorticity distribution for minimum induced drag, subject to constraints on lift, and pitching or bending moment. Integration spanwise yields the bound circulation, which is interpolated in the near field vortex lattice to obtain the design camber surface(s).
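
    A compact numerical sketch of the constrained minimization at the core of such a design code: writing the induced drag as a quadratic form D = g^T Q g in the spanwise circulation strengths g and the lift as a linear functional a^T g, the minimizer subject to a lift constraint follows from one KKT linear system. The influence matrix below is a crude positive-definite stand-in, not the report's far-field panel model.

        import numpy as np

        n = 20                                             # spanwise stations
        y = np.linspace(-1 + 1 / (2 * n), 1 - 1 / (2 * n), n)
        dy = 2.0 / n

        # Placeholder "induced drag" influence matrix: diagonally dominant,
        # with weak coupling between nearby stations.
        Q = np.eye(n)
        for i in range(n):
            for j in range(n):
                if i != j:
                    Q[i, j] = 0.05 / (1.0 + abs(y[i] - y[j]))

        a = dy * np.ones(n)                                # lift = weighted sum of circulations
        L_req = 1.0

        # KKT system for: minimize g^T Q g  subject to  a^T g = L_req.
        KKT = np.block([[2.0 * Q, a[:, None]],
                        [a[None, :], np.zeros((1, 1))]])
        rhs = np.concatenate([np.zeros(n), [L_req]])
        sol = np.linalg.solve(KKT, rhs)
        g = sol[:n]

        print("lift achieved:", round(float(a @ g), 6))
        print("induced drag :", round(float(g @ Q @ g), 6))

    A pitching- or bending-moment constraint simply adds another linear row to the same KKT system, mirroring the constrained optimization described in the abstract.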

  5. Request-Driven Schedule Automation for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol

    2010-01-01

    The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA s Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user s decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S(sup 3)), a web-based collaborative scheduling system now under development for deployment to all DSN users.

  6. Strategic Technology Investment Analysis: An Integrated System Approach

    NASA Technical Reports Server (NTRS)

    Adumitroaie, V.; Weisbin, C. R.

    2010-01-01

    Complex technology investment decisions within NASA are increasingly difficult to make in a way that satisfies the technical objectives and all the organizational constraints. Due to a restricted science budget environment and numerous required technology developments, the investment decisions need to take into account not only the functional impact on the program goals, but also development uncertainties and cost variations, along with maintaining a healthy workforce. This paper describes an approach for optimizing and qualifying technology investment portfolios from the perspective of an integrated system model. The methodology encompasses multi-attribute decision theory elements and sensitivity analysis. The evaluation of the degree of robustness of the recommended portfolio provides the decision-maker with an array of viable selection alternatives, which take into account input uncertainties and possibly satisfy nontechnical constraints. The methodology is presented in the context of assessing capability development portfolios for NASA technology programs.

  7. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  8. Subsonic aircraft: Evolution and the matching of size to performance

    NASA Technical Reports Server (NTRS)

    Loftin, L. K., Jr.

    1980-01-01

    Methods for estimating the approximate size, weight, and power of aircraft intended to meet specified performance requirements are presented for both jet-powered and propeller-driven aircraft. The methods are simple and require only the use of a pocket computer for rapid application to specific sizing problems. Application of the methods is illustrated by means of sizing studies of a series of jet-powered and propeller-driven aircraft with varying design constraints. Some aspects of the technical evolution of the airplane from 1918 to the present are also briefly discussed.

  9. Plasticity-Driven Self-Organization under Topological Constraints Accounts for Non-random Features of Cortical Synaptic Wiring

    PubMed Central

    Miner, Daniel; Triesch, Jochen

    2016-01-01

    Understanding the structure and dynamics of cortical connectivity is vital to understanding cortical function. Experimental data strongly suggest that local recurrent connectivity in the cortex is significantly non-random, exhibiting, for example, above-chance bidirectionality and an overrepresentation of certain triangular motifs. Additional evidence suggests a significant distance dependency to connectivity over a local scale of a few hundred microns, and particular patterns of synaptic turnover dynamics, including a heavy-tailed distribution of synaptic efficacies, a power law distribution of synaptic lifetimes, and a tendency for stronger synapses to be more stable over time. Understanding how many of these non-random features simultaneously arise would provide valuable insights into the development and function of the cortex. While previous work has modeled some of the individual features of local cortical wiring, there is no model that begins to comprehensively account for all of them. We present a spiking network model of a rodent Layer 5 cortical slice which, via the interactions of a few simple biologically motivated intrinsic, synaptic, and structural plasticity mechanisms, qualitatively reproduces these non-random effects when combined with simple topological constraints. Our model suggests that mechanisms of self-organization arising from a small number of plasticity rules provide a parsimonious explanation for numerous experimentally observed non-random features of recurrent cortical wiring. Interestingly, similar mechanisms have been shown to endow recurrent networks with powerful learning abilities, suggesting that these mechanisms are central to understanding both structure and function of cortical synaptic wiring. PMID:26866369

  10. Future equivalent of 2010 Russian heatwave intensified by weakening soil moisture constraints

    NASA Astrophysics Data System (ADS)

    Rasmijn, L. M.; van der Schrier, G.; Bintanja, R.; Barkmeijer, J.; Sterl, A.; Hazeleger, W.

    2018-05-01

    The 2010 heatwave in eastern Europe and Russia ranks among the hottest events ever recorded in the region1,2. The excessive summer warmth was related to an anomalously widespread and intense quasi-stationary anticyclonic circulation anomaly over western Russia, reinforced by depletion of spring soil moisture1,3-5. At present, high soil moisture levels and strong surface evaporation generally tend to cap maximum summer temperatures6-8, but these constraints may weaken under future warming9,10. Here, we use a data assimilation technique in which future climate model simulations are nudged to realistically represent the persistence and strength of the 2010 blocked atmospheric flow. In the future, synoptically driven extreme warming under favourable large-scale atmospheric conditions will no longer be suppressed by abundant soil moisture, leading to a disproportional intensification of future heatwaves. This implies that future mid-latitude heatwaves analogous to the 2010 event will become even more extreme than previously thought, with temperature extremes increasing by 8.4 °C over western Russia. Thus, the socioeconomic impacts of future heatwaves will probably be amplified beyond current estimates.

  11. Prognostics of Proton Exchange Membrane Fuel Cells stack using an ensemble of constraints based connectionist networks

    NASA Astrophysics Data System (ADS)

    Javed, Kamran; Gouriveau, Rafael; Zerhouni, Noureddine; Hissel, Daniel

    2016-08-01

    The Proton Exchange Membrane Fuel Cell (PEMFC) is considered the most versatile of the available fuel cell technologies and qualifies for diverse applications. However, large-scale industrial deployment of PEMFCs is limited by their short life span and high operating costs. Ensuring fuel cell service over a long duration is therefore of vital importance, which has led to Prognostics and Health Management of fuel cells. More precisely, prognostics of PEMFCs is a major area of focus nowadays; it aims at identifying degradation of the PEMFC stack at an early stage and estimating its Remaining Useful Life (RUL) for life-cycle management. This paper presents a data-driven approach for prognostics of a PEMFC stack using an ensemble of constraint-based Summation Wavelet-Extreme Learning Machine (SW-ELM) models. This development aims at improving the robustness and applicability of PEMFC prognostics in an online application with limited learning data. The proposed approach is applied to real data from two different PEMFC stacks and compared with ensembles of well-known connectionist algorithms. The comparison of results on long-term prognostics of both PEMFC stacks validates our proposition.
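
    A minimal sketch of the Extreme Learning Machine building block behind such an ensemble: a random hidden layer whose output weights are fitted by least squares, with several such networks averaged for robustness. This is the plain ELM, without the wavelet activation or the parameter constraints of SW-ELM, and the degradation-like data are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)

        # Synthetic degradation-like signal: slow drift plus noise.
        t = np.linspace(0.0, 1.0, 200)
        signal = 1.0 - 0.6 * t ** 1.5 + 0.02 * rng.standard_normal(t.size)

        # One-step-ahead regression: predict x[k] from the previous `lag` values.
        lag = 5
        X = np.column_stack([signal[i:i + len(signal) - lag] for i in range(lag)])
        y = signal[lag:]

        def train_elm(X, y, n_hidden, rng):
            """Random hidden layer + least-squares output weights (basic ELM)."""
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)
            return W, b, beta

        def predict_elm(model, X):
            W, b, beta = model
            return np.tanh(X @ W + b) @ beta

        # Ensemble of ELMs differing only in their random hidden layers.
        ensemble = [train_elm(X, y, n_hidden=30, rng=rng) for _ in range(10)]
        pred = np.mean([predict_elm(m, X) for m in ensemble], axis=0)

        rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
        print("ensemble in-sample RMSE:", round(rmse, 4))

    For remaining-useful-life estimation, the fitted one-step predictor would be iterated forward from the last observed state until the predicted health indicator crosses a failure threshold.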

  12. Programs for analysis and resizing of complex structures. [computerized minimum weight design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Prasad, B.

    1978-01-01

    The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user oriented system of programs for the minimum weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR - an efficient and modular general purpose finite element program, and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequence of Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented for demonstration of the system capabilities.
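
    A toy sketch of the SUMT idea used in PARS: the inequality-constrained problem is replaced by a sequence of unconstrained minimizations whose interior penalty term is progressively weakened. The sketch uses a classical logarithmic barrier and a derivative-free scipy minimizer rather than PARS's extended interior penalty with Newton's method, and the test problem is invented.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            """Objective to minimize (illustrative)."""
            return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

        def g(x):
            """Inequality constraints written as g_i(x) <= 0."""
            return np.array([x[0] + x[1] - 2.0, -x[0], -x[1]])

        def barrier_objective(x, r):
            gx = g(x)
            if np.any(gx >= 0.0):          # outside the feasible interior
                return np.inf
            return f(x) - r * np.sum(np.log(-gx))

        # Sequence of Unconstrained Minimization Technique: shrink the barrier
        # parameter r and re-minimize, warm-starting from the previous solution.
        x = np.array([0.5, 0.5])           # strictly feasible starting point
        for r in [1.0, 0.1, 0.01, 0.001, 1e-4]:
            res = minimize(barrier_objective, x, args=(r,), method="Nelder-Mead")
            x = res.x
            print(f"r = {r:7.4f}   x = {np.round(x, 4)}   f = {f(x):.5f}")

        # The iterates approach the constrained optimum at (1.5, 0.5).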

  13. Continued development and correlation of analytically based weight estimation codes for wings and fuselages

    NASA Technical Reports Server (NTRS)

    Mullen, J., Jr.

    1978-01-01

    The implementation of the changes to the program for Wing Aeroelastic Design and the development of a program to estimate aircraft fuselage weights are described. The equations to implement the modified planform description, the stiffened panel skin representation, the trim loads calculation, and the flutter constraint approximation are presented. A comparison of the wing model with the actual F-5A weight material distributions and loads is given. The equations and program techniques used for the estimation of aircraft fuselage weights are described. These equations were incorporated as a computer code. The weight predictions of this program are compared with data from the C-141.

  14. Measuring and Understanding Authentic Youth Engagement: The Youth-Adult Partnership Rubric

    ERIC Educational Resources Information Center

    Wu, Heng-Chieh Jamie; Kornbluh, Mariah; Weiss, John; Roddy, Lori

    2016-01-01

    Commonly described as youth-led or youth-driven, the youth-adult partnership (Y-AP) model has gained increasing popularity in out-of-school time (OST) programs in the past two decades (Larson, Walker, & Pearce, 2005; Zeldin, Christens, & Powers, 2013). The Y-AP model is defined as "the practice of (a) multiple youth and multiple…

  15. Computational challenges in modeling gene regulatory events

    PubMed Central

    Pataskar, Abhijeet; Tiwari, Vijay K.

    2016-01-01

    Cellular transcriptional programs driven by genetic and epigenetic mechanisms could be better understood by integrating “omics” data and subsequently modeling the gene-regulatory events. Toward this end, computational biology should keep pace with evolving experimental procedures and data availability. This article gives an exemplified account of the current computational challenges in molecular biology. PMID:27390891

  16. From Status to Power: New Models at the Intersection of Two Theories

    ERIC Educational Resources Information Center

    Thye, Shane R.; Willer, David; Markovsky, Barry

    2006-01-01

    The study of group processes has benefited from longstanding programs of theory-driven research on status and power. The present work constructs a bridge between two formal theories of status and power: Status Characteristics Theory and Network Exchange Theory. Two theoretical models, one for "status value" and one for "status influence,"…

  17. Explanation Constraint Programming for Model-based Diagnosis of Engineered Systems

    NASA Technical Reports Server (NTRS)

    Narasimhan, Sriram; Brownston, Lee; Burrows, Daniel

    2004-01-01

    We can expect to see an increase in the deployment of unmanned air and land vehicles for autonomous exploration of space. In order to maintain autonomous control of such systems, it is essential to track the current state of the system. When the system includes safety-critical components, failures or faults in the system must be diagnosed as quickly as possible, and their effects compensated for so that control and safety are maintained under a variety of fault conditions. The Livingstone fault diagnosis and recovery kernel and its temporal extension L2 are examples of model-based reasoning engines for health management. Livingstone has been shown to be effective, it is in demand, and it is being further developed. It was part of the successful Remote Agent demonstration on Deep Space One in 1999. It has been and is being utilized by several projects involving groups from various NASA centers, including the In Situ Propellant Production (ISPP) simulation at Kennedy Space Center, the X-34 and X-37 experimental reusable launch vehicle missions, Techsat-21, and advanced life support projects. Model-based and consistency-based diagnostic systems like Livingstone work only with discrete and finite domain models. When quantitative and continuous behaviors are involved, these are abstracted to discrete form using some mapping. This mapping from the quantitative domain to the qualitative domain is sometimes very involved and requires the design of highly sophisticated and complex monitors. We propose a diagnostic methodology that deals directly with quantitative models and behaviors, thereby mitigating the need for these sophisticated mappings. Our work brings together ideas from model-based diagnosis systems like Livingstone and concurrent constraint programming concepts. The system uses explanations derived from the propagation of quantitative constraints to generate conflicts. Fast conflict generation algorithms are used to generate and maintain multiple candidates whose consistency can be tracked across multiple time steps.
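
    The abstract's key idea, propagating quantitative constraints while recording which mode assumptions each derived value depends on so that an inconsistency directly yields a conflict set, can be sketched compactly. The component names, interval representation, and API below are illustrative assumptions and are not the Livingstone/L2 code.

      # Minimal sketch (not Livingstone/L2) of explanation-carrying propagation of
      # quantitative constraints: every derived interval remembers the mode
      # assumptions it relies on, so an empty intersection yields a conflict set.
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Estimate:
          lo: float
          hi: float
          support: frozenset  # mode assumptions this estimate depends on

          def intersect(self, other):
              lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
              support = self.support | other.support
              if lo > hi:  # empty interval: the combined supports form a conflict
                  return None, support
              return Estimate(lo, hi, support), None

      def through_component(est, gain, mode_assumption):
          """Propagate x -> gain * x through a component assumed to be healthy."""
          return Estimate(gain * est.lo, gain * est.hi, est.support | {mode_assumption})

      # Sensor readings with tolerance for the inlet and outlet flow of a pump.
      inlet = Estimate(9.8, 10.2, frozenset({"sensor_in_ok"}))
      outlet = Estimate(4.6, 4.9, frozenset({"sensor_out_ok"}))

      # Nominal model of a healthy pump: outlet flow equals inlet flow (gain 1.0).
      predicted = through_component(inlet, 1.0, "pump_ok")

      consistent, conflict = predicted.intersect(outlet)
      if conflict:
          # At least one assumption in the conflict must be wrong; candidate
          # diagnoses are built by covering (hitting) such conflict sets.
          print("conflict:", sorted(conflict))  # ['pump_ok', 'sensor_in_ok', 'sensor_out_ok']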

  18. Influences of granular constraints and surface effects on the heterogeneity of elastic, superelastic, and plastic responses of polycrystalline shape memory alloys

    DOE PAGES

    Paranjape, Harshad M.; Paul, Partha P.; Sharma, Hemant; ...

    2017-02-16

    Deformation heterogeneities at the microstructural length-scale developed in polycrystalline shape memory alloys (SMAs) during superelastic loading are studied using both experiments and simulations. In situ X-ray diffraction, specifically the far-field high energy diffraction microscopy (ff-HEDM) technique, was used to non-destructively measure the grain-averaged statistics of position, crystal orientation, elastic strain tensor, and volume for hundreds of austenite grains in a superelastically loaded nickel-titanium (NiTi) SMA. These experimental data were also used to create a synthetic microstructure within a finite element model. The development of intragranular stresses was then simulated during tensile loading of the model using anisotropic elasticity. Driving forces for phase transformation and slip were calculated from these stresses. The grain-averaged responses of individual austenite crystals examined before and after multiple stress-induced transformation events showed that grains in the specimen interior carry more axial stress than the surface grains as the superelastic response "shakes down". Examination of the heterogeneity within individual grains showed that regions near grain boundaries exhibit larger stress variation compared to the grain interiors. As a result, this intragranular heterogeneity is more strongly driven by the constraints of neighboring grains than by the initial stress state and orientation of the individual grains.
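
    As a rough illustration of the "driving force" calculation mentioned above, the transformation driving force at a material point is commonly evaluated as the work of the local stress on a candidate variant's transformation strain (a double contraction). The sketch below uses that common form with made-up stress and strain tensors; it is not the authors' model, and the tensors are not the actual NiTi correspondence variants.

      # Minimal sketch of ranking martensite variants by the driving force sigma : eps_t,
      # i.e. the work done by a local stress tensor on each variant's transformation strain.
      import numpy as np

      def driving_forces(stress, variant_strains):
          """Double contraction sigma_ij * eps_ij for each candidate variant."""
          return np.array([np.tensordot(stress, eps, axes=2) for eps in variant_strains])

      # A grain-interior stress state (MPa), e.g. taken from an FE integration point.
      sigma = np.array([[400.0, 30.0,  0.0],
                        [ 30.0, 50.0,  0.0],
                        [  0.0,  0.0, 20.0]])

      # Two illustrative transformation strain tensors (dimensionless placeholders).
      variants = [
          np.array([[ 0.05, 0.01, 0.0], [0.01, -0.02, 0.0], [0.0, 0.0, -0.02]]),
          np.array([[-0.02, 0.00, 0.0], [0.00,  0.05, 0.0], [0.0, 0.0, -0.02]]),
      ]

      f = driving_forces(sigma, variants)
      print("favoured variant:", int(np.argmax(f)), "driving force (MPa):", f.max())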

  20. Comprehensive evaluation of long-term hydrological data sets: Constraints of the Budyko framework

    NASA Astrophysics Data System (ADS)

    Greve, Peter; Orlowsky, Boris; Seneviratne, Sonia I.

    2013-04-01

    An accurate estimate of the climatological land water balance is essential for a wide range of socioeconomic issues. Despite the simplicity of the underlying water balance equation, its individual variables are of a complex nature. Global estimates of precipitation (P) and especially evapotranspiration (ET), whether derived from observations or from models, are characterized by high uncertainties. This leads to inconsistent results in determining conditions related to the land water balance and its components. In this study, we consider the Budyko framework as a constraint to evaluate long-term hydrological data sets within the period from 1984 to 2005. The Budyko framework is a well-established, empirically based relationship between ET/P and Ep/P, with Ep being the potential evaporation. We use estimates of ET associated with the LandFlux-EVAL initiative (Mueller et al., 2012), either derived from observations, from CMIP5 models, or from land-surface models (LSMs) driven with observation-based forcing or atmospheric reanalyses. Data sets of P comprise all commonly used global observation-based estimates. Ep is determined by methods of differing complexity using recent global temperature and radiation data sets. Based on this comprehensive synthesis of data sets and methods to determine Ep, more than 2000 possible combinations of ET/P in conjunction with Ep/P are created. All combinations are validated against the Budyko curve and against physical limits within the Budyko phase space. For this purpose we develop an error measure, based on the root-mean-square error, which combines both constraints. We find that uncertainties are mainly induced by the ET data sets. In particular, reanalysis and CMIP5 data sets are characterized by low realism. Moreover, the realism of LSMs is not primarily controlled by the forcing, as different LSMs driven with the same forcing show significantly different error measures. Our comprehensive approach is thus suitable for detecting uncertainties associated with individual data sets. Furthermore, combinations performing well within the Budyko phase space are identified and could be used for future studies, for example to investigate decadal changes of the land water balance. Reference: Mueller, B., Hirschi, M., Jimenez, C., Ciais, P., Dirmeyer, P. A., Dolman, A. J., Fisher, J. B., Guo, Z., Jung, M., Ludwig, F., Maignan, F., Miralles, D., McCabe, M. F., Reichstein, M., Sheffield, J., Wang, K., Wood, E. F., Zhang, Y., and Seneviratne, S. I. (2012): Benchmark products for land evapotranspiration: LandFlux-EVAL multi-dataset synthesis, Hydrol. Earth Syst. Sci., submitted.
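
    The kind of consistency check described above can be illustrated with a short sketch: compute the evaporative index ET/P and the aridity index Ep/P for a data-set combination, then score it by its distance to the Budyko curve plus any violation of the supply limit (ET ≤ P) and the demand limit (ET ≤ Ep). The classic Budyko (1974) curve form and the way the two terms are combined into a single score are assumptions; the study's exact error measure is not reproduced here.

      # Minimal sketch of a Budyko-based consistency check for (P, ET, Ep) data sets.
      import numpy as np

      def budyko_curve(aridity):
          """Evaporative index ET/P predicted by the classic Budyko curve,
          with aridity = Ep/P."""
          phi = np.asarray(aridity, dtype=float)
          return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

      def budyko_score(P, ET, Ep):
          """RMSE-style distance to the Budyko curve plus penalties for points that
          violate the supply limit (ET <= P) or the demand limit (ET <= Ep)."""
          P, ET, Ep = map(np.asarray, (P, ET, Ep))
          evap_index, aridity = ET / P, Ep / P
          curve_rmse = np.sqrt(np.mean((evap_index - budyko_curve(aridity)) ** 2))
          supply_violation = np.sqrt(np.mean(np.maximum(evap_index - 1.0, 0.0) ** 2))
          demand_violation = np.sqrt(np.mean(np.maximum(evap_index - aridity, 0.0) ** 2))
          return curve_rmse + supply_violation + demand_violation

      # Example: grid-cell climatologies (mm/yr) from one P / ET / Ep combination.
      P  = np.array([800.0, 500.0, 1200.0])
      ET = np.array([520.0, 430.0,  700.0])
      Ep = np.array([900.0, 1100.0, 750.0])
      print(budyko_score(P, ET, Ep))  # lower scores indicate more realistic combinations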
