Sample records for constraints performance steps

  1. Applying the theory of constraints in health care: Part 1--The philosophy.

    PubMed

    Breen, Anne M; Burton-Houle, Tracey; Aron, David C

    2002-01-01

    The imperative to improve both technical and service quality while simultaneously reducing costs is quite clear. The Theory of Constraints (TOC) is an emerging philosophy that rests on two assumptions: (1) systems thinking and (2) if a constraint "is anything that limits a system from achieving higher performance versus its goal," then every system must have at least one (and at most a few) constraints or limiting factors. A constraint is neither good nor bad in itself. Rather, it just is. In fact, recognition of the existence of constraints represents an excellent opportunity for improvement because it allows one to focus one's efforts in the most productive area--identifying and managing the constraints. This is accomplished by using the five focusing steps of TOC: (1) identify the system's constraint; (2) decide how to exploit it; (3) subordinate/synchronize everything else to the above decisions; (4) elevate the system's constraint; and (5) if the constraint has shifted in the above steps, go back to step 1. Do not allow inertia to become the system's constraint. TOC also refers to a series of tools termed "thinking processes" and the sequence in which they are used.
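
    Because the five focusing steps form a loop (step 5 returns to step 1), they can be sketched as an iterative procedure. Below is a minimal, illustrative Python sketch using an invented toy model of a pipeline of stages with capacities; the function name and all numbers are assumptions, not from the article.

    ```python
    # Toy model of the TOC five focusing steps: a pipeline of stages, each
    # with a finite capacity; the constraint is the lowest-capacity stage.
    def five_focusing_steps(capacities, target_throughput):
        """capacities: dict stage -> units/hour (illustrative toy numbers)."""
        while min(capacities.values()) < target_throughput:
            # Step 1: identify the system's constraint (the bottleneck stage).
            bottleneck = min(capacities, key=capacities.get)
            # Step 2: exploit it -- squeeze out waste (here, +5% utilization).
            capacities[bottleneck] *= 1.05
            # Step 3: subordinate everything else to that decision (a no-op in
            # this toy model, where non-bottleneck stages simply wait).
            # Step 4: elevate the constraint -- invest in added capacity.
            capacities[bottleneck] += 10
            # Step 5: the loop re-identifies the constraint, so inertia never
            # keeps the focus on a stage that is no longer limiting.
        return capacities

    print(five_focusing_steps({"triage": 40, "exam": 25, "discharge": 60}, 50))
    ```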

  2. An attribute-driven statistics generator for use in a G.I.S. environment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Ritter, P. R.; Kaugars, A.

    1984-01-01

    When performing research using digital geographic information it is often useful to produce quantitative characterizations of the data, usually within some constraints. In the research environment the different combinations of required data and constraints can often become quite complex. This paper describes a technique that gives the researcher a powerful and flexible way to set up many possible combinations of data and constraints without having to perform numerous intermediate steps or create temporary data bands. This method provides an efficient way to produce descriptive statistics in such situations.
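
    The abstract's core idea, combining data and constraints on the fly without creating temporary data bands, maps naturally onto boolean masking. A minimal sketch in Python/NumPy (band names and thresholds are invented; the paper's actual system predates this tooling):

    ```python
    import numpy as np

    # Descriptive statistics over one band, constrained by other bands,
    # using boolean masks instead of writing temporary data bands.
    rng = np.random.default_rng(0)
    elevation = rng.uniform(0, 3000, size=(512, 512))  # constraint band
    slope     = rng.uniform(0, 45,   size=(512, 512))  # constraint band
    ndvi      = rng.uniform(-1, 1,   size=(512, 512))  # band to summarize

    # Combine constraints on the fly; no intermediate band is materialized.
    mask = (elevation > 1000) & (slope < 15)
    sel = ndvi[mask]
    print(f"n={sel.size}  mean={sel.mean():.3f}  std={sel.std():.3f}  "
          f"min={sel.min():.3f}  max={sel.max():.3f}")
    ```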

  3. Self-Paced and Temporally Constrained Throwing Performance by Team-Handball Experts and Novices without Foreknowledge of Target Position

    PubMed Central

    Rousanoglou, Elissavet N.; Noutsos, Konstantinos S.; Bayios, Ioannis A.; Boudolos, Konstantinos D.

    2015-01-01

    The fixed duration of a team-handball game and its continuously changing situations incorporate an inherent temporal pressure. Also, the target’s position is not foreknown but determined online by the player’s interceptive processing of visual information. These ecological limitations do not favour throwing performance, particularly in novice players, and are not reflected in previous experimental settings of self-paced throws with foreknowledge of target position. The study investigated the self-paced and temporally constrained throwing performance without foreknowledge of target position, in team-handball experts and novices in three shot types (Standing Shot, 3Step Shot, Jump Shot). The target position was randomly illuminated on a tabloid surface before (self-paced condition) and after (temporally constrained condition) shot initiation. Response time, throwing velocity and throwing accuracy were measured. A mixed 2 (experience) × 2 (temporal constraint condition) ANOVA was applied. The novices performed with significantly lower throwing velocity and worse throwing accuracy in all shot types (p = 0.000) and longer response time only in the 3Step Shot (p = 0.013). The temporal constraint (significantly shorter response times in all shot types at p = 0.000) had a shot-specific effect, with lower throwing velocity only in the 3Step Shot (p = 0.001) and an unexpected greater throwing accuracy only in the Standing Shot (p = 0.002). The significant interaction between experience and temporal constraint condition in throwing accuracy (p = 0.003) revealed a significant temporal constraint effect in the novices (p = 0.002) but not in the experts (p = 0.798). The main findings of the study are the shot specificity of the temporal constraint effect, as well as that, depending on the shot, the novices’ throwing accuracy may benefit rather than worsen under temporal pressure. Key points: (1) the temporal constraint induced a shot-specific significant difference in throwing velocity in both the experts and the novices; (2) it induced a shot-specific significant difference in throwing accuracy only in the novices; (3) depending on the shot demands, the throwing accuracy of the novices may benefit under temporally constrained situations. PMID:25729288

  4. Space Shuttle capabilities, constraints, and cost

    NASA Technical Reports Server (NTRS)

    Lee, C. M.

    1980-01-01

    The capabilities, constraints, and costs of the Space Transportation System (STS), which combines reusable and expendable components, are reviewed, and an overview of the current planning activities for operating the STS in an efficient and cost-effective manner is presented. Traffic forecasts, performance constraints and enhancements, and potential new applications are discussed. Attention is given to operating costs, pricing policies, and the steps involved in 'getting on board', which includes all the interfaces between NASA and the users necessary to come to launch service agreements.

  5. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE PAGES

    Chen, Bo; Chen, Chen; Wang, Jianhui; ...

    2017-07-07

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.
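
    The structure of such a model, binary energization decisions per time step, an inter-temporal "stay energized" constraint, and a per-step capacity limit, can be illustrated with a deliberately tiny MILP. The sketch below uses PuLP with invented loads and capacities; it is not the paper's full formulation (no switches, DG dispatch, or CLPU dynamics).

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    loads = {"L1": 80, "L2": 50, "L3": 120}   # kW per load (invented)
    capacity = {0: 100, 1: 180, 2: 260}       # generation per step (invented)
    T = sorted(capacity)

    prob = LpProblem("restoration", LpMaximize)
    x = {(l, t): LpVariable(f"x_{l}_{t}", cat="Binary") for l in loads for t in T}

    # Objective: maximize total served load over all time steps.
    prob += lpSum(loads[l] * x[l, t] for l in loads for t in T)
    for t in T:
        # Operational constraint: served load within capacity at each step.
        prob += lpSum(loads[l] * x[l, t] for l in loads) <= capacity[t]
    for l in loads:
        for t in T[1:]:
            # Inter-temporal constraint: a restored load stays energized.
            prob += x[l, t] >= x[l, t - 1]

    prob.solve()
    for t in T:
        print(t, [l for l in loads if x[l, t].varValue > 0.5])
    ```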

  6. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bo; Chen, Chen; Wang, Jianhui

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  7. Method and apparatus for automated assembly

    DOEpatents

    Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.

    1999-01-01

    A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
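
    The patent's two feasibility tests can be pictured as a pair of predicates filtering candidate next steps. A toy Python sketch (the parts, prerequisites, and process rule are all invented for illustration):

    ```python
    parts = ["base", "gasket", "pump", "cover", "bolt"]

    def geometrically_feasible(part, assembled):
        # Toy stand-in for geometric reasoning: required supporting parts.
        prerequisites = {"gasket": {"base"}, "pump": {"gasket"},
                         "cover": {"pump"}, "bolt": {"cover"}}
        return prerequisites.get(part, set()) <= assembled

    def constraint_feasible(part, assembled):
        # Toy user-definable process constraint: fasteners go in last.
        return part != "bolt" or len(assembled) == len(parts) - 1

    sequence, assembled = [], set()
    while len(assembled) < len(parts):
        step = next(p for p in parts if p not in assembled
                    and geometrically_feasible(p, assembled)
                    and constraint_feasible(p, assembled))
        sequence.append(step)
        assembled.add(step)
    print(sequence)  # each step passes both feasibility tests
    ```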

  8. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

    rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an... repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis... legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes

  9. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging, or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integration of half-Fourier reconstruction into iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint enables superior performance both in reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint is reported. Its seamless combination with parallel imaging and compressed sensing enables use of greater acceleration in 3D MR imaging.

  10. Aspect-object alignment with Integer Linear Programming in opinion mining.

    PubMed

    Zhao, Yanyan; Qin, Bing; Liu, Ting; Yang, Wei

    2015-01-01

    Target extraction is an important task in opinion mining. In this task, a complete target consists of an aspect and its corresponding object. However, previous work has always simply regarded the aspect as the target itself and has ignored the important "object" element. Thus, these studies have addressed incomplete targets, which are of limited use for practical applications. This paper proposes a novel and important sentiment analysis task, termed aspect-object alignment, to solve the "object neglect" problem. The objective of this task is to obtain the correct corresponding object for each aspect. We design a two-step framework for this task. We first provide an aspect-object alignment classifier that incorporates three sets of features, namely, the basic, relational, and special target features. However, the objects that are assigned to aspects in a sentence often contradict each other and possess many complicated features that are difficult to incorporate into a classifier. To resolve these conflicts, we impose two types of constraints in the second step: intra-sentence constraints and inter-sentence constraints. These constraints are encoded as linear formulations, and Integer Linear Programming (ILP) is used as an inference procedure to obtain a final global decision that is consistent with the constraints. Experiments on a corpus in the camera domain demonstrate that the three feature sets used in the aspect-object alignment classifier are effective in improving its performance. Moreover, the classifier with ILP inference performs better than the classifier without it, thereby illustrating that the two types of constraints that we impose are beneficial.
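
    The second step, ILP inference over classifier scores with consistency constraints, can be shown in miniature. The sketch below uses PuLP with invented aspects, objects, scores, and a single toy intra-sentence constraint; the paper's actual features and constraint sets are richer.

    ```python
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum

    aspects = ["zoom", "battery"]
    objects = ["camera_A", "camera_B"]
    score = {("zoom", "camera_A"): 0.9, ("zoom", "camera_B"): 0.4,
             ("battery", "camera_A"): 0.3, ("battery", "camera_B"): 0.5}

    prob = LpProblem("aspect_object_alignment", LpMaximize)
    y = {(a, o): LpVariable(f"y_{a}_{o}", cat="Binary")
         for a in aspects for o in objects}
    prob += lpSum(score[a, o] * y[a, o] for a in aspects for o in objects)
    for a in aspects:
        prob += lpSum(y[a, o] for o in objects) == 1  # one object per aspect

    # Toy intra-sentence constraint: both aspects sit in the same clause,
    # so they must align to the same object.
    for o in objects:
        prob += y["zoom", o] == y["battery", o]

    prob.solve()
    print({a: next(o for o in objects if y[a, o].varValue > 0.5)
           for a in aspects})
    ```

    Here the constraint overrides the locally best choice for "battery", exactly the kind of conflict resolution the abstract describes.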

  11. RSM 1.0 user's guide: A resupply scheduler using integer optimization

    NASA Technical Reports Server (NTRS)

    Viterna, Larry A.; Green, Robert D.; Reed, David M.

    1991-01-01

    The Resupply Scheduling Model (RSM) is a PC based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes properties such as mass and volume and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.

  12. Motor Cortex Activity During Functional Motor Skills: An fNIRS Study.

    PubMed

    Nishiyori, Ryota; Bisconti, Silvia; Ulrich, Beverly

    2016-01-01

    Assessments of brain activity during motor task performance have been limited to fine motor movements due to technological constraints presented by traditional neuroimaging techniques, such as functional magnetic resonance imaging. Functional near-infrared spectroscopy (fNIRS) offers a promising method by which to overcome these constraints and investigate motor performance of functional motor tasks. The current study used fNIRS to quantify hemodynamic responses within the primary motor cortex in twelve healthy adults as they performed unimanual right, unimanual left, and bimanual reaching, and stepping in place. Results revealed that during both unimanual reaching tasks, the contralateral hemisphere showed significant activation in channels located approximately 3 cm medial to the C3 (for right-hand reach) and C4 (for left-hand reach) landmarks. Bimanual reaching and stepping showed activation in similar channels, which were located bilaterally across the primary motor cortex. The medial channels, surrounding Cz, showed significantly higher activations during stepping when compared to bimanual reaching. Our results extend the viability of fNIRS to study motor function and build a foundation for future investigation of motor development in infants during nascent functional behaviors and monitor how they may change with age or practice.

  13. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that allows the scheduling to be performed in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation of the integrated approach takes considerably less computational effort than the previous approach.
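
    The flavor of such an integrated constraint programming model can be conveyed with a minimal single round-robin example in OR-Tools CP-SAT (a stand-in solver; the paper's model, divisional structure, and league-specific constraints are far richer):

    ```python
    from ortools.sat.python import cp_model

    # Single round-robin: n teams, n-1 rounds, every pair meets exactly
    # once, each team plays exactly once per round.
    n = 6
    rounds = range(n - 1)
    model = cp_model.CpModel()
    x = {}  # x[i, j, r] == 1 iff team i plays team j in round r (i < j)
    for i in range(n):
        for j in range(i + 1, n):
            for r in rounds:
                x[i, j, r] = model.NewBoolVar(f"x_{i}_{j}_{r}")

    for i in range(n):
        for j in range(i + 1, n):
            model.AddExactlyOne(x[i, j, r] for r in rounds)  # pair meets once
    for r in rounds:
        for t in range(n):
            model.AddExactlyOne(x[min(t, u), max(t, u), r]
                                for u in range(n) if u != t)  # one game/round

    solver = cp_model.CpSolver()
    solver.Solve(model)
    for r in rounds:
        print(r, [(i, j) for (i, j, rr), v in x.items()
                  if rr == r and solver.Value(v)])
    ```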

  14. Cascade Optimization Strategy for Aircraft and Air-Breathing Propulsion System Concepts

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Lavelle, Thomas M.; Hopkins, Dale A.; Coroneos, Rula M.

    1996-01-01

    Design optimization for subsonic and supersonic aircraft and for air-breathing propulsion engine concepts has been accomplished by soft-coupling the Flight Optimization System (FLOPS) and the NASA Engine Performance Program analyzer (NEPP) to the NASA Lewis multidisciplinary optimization tool COMETBOARDS. Aircraft and engine design problems, with their associated constraints and design variables, were cast as nonlinear optimization problems with aircraft weight and engine thrust as the respective merit functions. Because of the diversity of constraint types and the overall distortion of the design space, the most reliable single optimization algorithm available in COMETBOARDS could not produce a satisfactory feasible optimum solution. Some of COMETBOARDS' unique features, which include a cascade strategy, variable and constraint formulations, and scaling devised especially for difficult multidisciplinary applications, successfully optimized the performance of both aircraft and engines. The cascade method has two principal steps: in the first, the solution initiates from a user-specified design and optimizer; in the second, the optimum design obtained in the first step, with some random perturbation, is used to begin the next specified optimizer. The second step is repeated for a specified sequence of optimizers or until a successful solution of the problem is achieved. A successful solution should satisfy the specified convergence criteria and have several active constraints but no violated constraints. The cascade strategy available in the combined COMETBOARDS, FLOPS, and NEPP design tool converges to the same global optimum solution even when it starts from different design points. This reliable and robust design tool eliminates manual intervention in the design of aircraft and of air-breathing propulsion engines, and it eases the cycle analysis procedures. The combined code is also much easier to use, which is an added benefit. This paper describes COMETBOARDS and its cascade strategy and illustrates the capability of the combined design tool through the optimization of a subsonic aircraft and a high-bypass-turbofan wave-rotor-topped engine.
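
    The cascade idea, running a sequence of optimizers with each restarted from the previous optimum plus a small random perturbation, is easy to sketch on a toy constrained problem with SciPy (the objective, constraint, and optimizer sequence below are illustrative assumptions, not FLOPS/NEPP):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):  # toy merit function (Rosenbrock)
        return (x[0] - 1) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

    constraints = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]
    rng = np.random.default_rng(1)

    x = np.array([-1.0, 2.0])             # user-specified starting design
    for method in ["COBYLA", "SLSQP"]:    # user-specified optimizer cascade
        res = minimize(objective, x, method=method, constraints=constraints)
        x = res.x + rng.normal(scale=1e-3, size=x.shape)  # random perturbation
    print(res.x, res.fun)
    ```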

  15. Bed crisis and elective surgery late cancellations: An approach using the theory of constraints.

    PubMed

    Sahraoui, Abderrazak; Elarref, Mohamed

    2014-01-01

    Late cancellations of scheduled elective surgery limit the ability of the surgical care service to achieve its goals. Attributes of these cancellations differ between hospitals and regions. The rate of late cancellations of elective surgery conducted in Hamad General Hospital, Doha, Qatar was found to be 13.14%, which is similar to rates reported in hospitals elsewhere in the world, although elective surgery is performed six days a week from 7:00 am to 10:00 pm in our hospital. Simple and systematic analysis of these attributes typically provides limited solutions to the cancellation problem. Alternatively, the application of the theory of constraints with its five focusing steps, which analyze the system in its totality, is more likely to provide a better solution to the cancellation problem. To find the constraint, as a first focusing step, we carried out a retrospective and descriptive study using a quantitative approach combined with the Pareto Principle to find the main causes of cancellations, followed by a qualitative approach to find the main and ultimate underlying cause, which pointed to the bed crisis. The remaining four focusing steps provided workable and effective solutions to reduce the cancellation rate of elective surgery.
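
    The quantitative first step can be sketched as a Pareto analysis over cancellation causes: sort causes by frequency and keep the vital few that account for roughly 80% of cases. The counts below are invented for illustration, not the hospital's data.

    ```python
    causes = {"no bed available": 210, "patient unfit": 90,
              "no surgeon/theatre time": 55, "patient no-show": 40,
              "equipment failure": 25, "other": 30}
    total = sum(causes.values())
    cumulative = 0.0
    for cause, n in sorted(causes.items(), key=lambda kv: -kv[1]):
        cumulative += n / total
        print(f"{cause:25s} {n:4d}  cumulative {cumulative:5.1%}")
        if cumulative >= 0.8:   # Pareto Principle: stop at the vital few
            break
    ```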

  16. Bed crisis and elective surgery late cancellations: An approach using the theory of constraints

    PubMed Central

    Sahraoui, Abderrazak; Elarref, Mohamed

    2014-01-01

    Late cancellations of scheduled elective surgery limit the ability of the surgical care service to achieve its goals. Attributes of these cancellations differ between hospitals and regions. The rate of late cancellations of elective surgery conducted in Hamad General Hospital, Doha, Qatar was found to be 13.14%, which is similar to rates reported in hospitals elsewhere in the world, although elective surgery is performed six days a week from 7:00 am to 10:00 pm in our hospital. Simple and systematic analysis of these attributes typically provides limited solutions to the cancellation problem. Alternatively, the application of the theory of constraints with its five focusing steps, which analyze the system in its totality, is more likely to provide a better solution to the cancellation problem. To find the constraint, as a first focusing step, we carried out a retrospective and descriptive study using a quantitative approach combined with the Pareto Principle to find the main causes of cancellations, followed by a qualitative approach to find the main and ultimate underlying cause, which pointed to the bed crisis. The remaining four focusing steps provided workable and effective solutions to reduce the cancellation rate of elective surgery. PMID:25320686

  17. Pareto Tracer: a predictor-corrector method for multi-objective optimization problems

    NASA Astrophysics Data System (ADS)

    Martín, Adanay; Schütze, Oliver

    2018-03-01

    This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.

  18. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, which yields faster computation and more scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
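
    The two-step split can be illustrated with one QoS dimension and a deliberately simple decomposition rule (proportional scaling stands in for the paper's MILP; all service data are invented):

    ```python
    candidates = {  # task -> [(service, response_ms, price)]
        "pay":  [("payA", 120, 3.0), ("payB", 60, 5.0)],
        "ship": [("shipA", 400, 1.0), ("shipB", 250, 2.5)],
    }
    GLOBAL_BUDGET_MS = 400.0

    # Step 1: decompose the global constraint into local constraints,
    # here proportionally to each task's fastest candidate.
    fastest = {t: min(r for _, r, _ in cs) for t, cs in candidates.items()}
    scale = GLOBAL_BUDGET_MS / sum(fastest.values())
    local_budget = {t: fastest[t] * scale for t in candidates}

    # Step 2: local selection -- cheapest service meeting its local budget.
    plan = {t: min((c for c in cs if c[1] <= local_budget[t]),
                   key=lambda c: c[2])
            for t, cs in candidates.items()}
    print(local_budget, plan)
    ```

    The point of the decomposition is that step 2 never has to reason about the global budget, which is what keeps local selection fast.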

  19. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set exclusive of complex optimization steps are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated.
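
    The setting, a PI law stabilizing an open-loop unstable first-order process under magnitude and rate limits on the manipulated variable, can be reproduced in a few lines of simulation. The gains and limits below are invented toy values, not the paper's analytically designed ones.

    ```python
    import numpy as np

    # Plant: dx/dt = a*x + b*u with a > 0 (open-loop unstable).
    a, b = 0.5, 1.0
    Kp, Ki = 3.0, 1.5                        # two-term (PI) gains
    u_min, u_max, du_max = -5.0, 5.0, 2.0    # operational constraints on u
    dt, setpoint = 0.01, 1.0

    x, integral, u_prev = 0.0, 0.0, 0.0
    for _ in range(int(10 / dt)):
        e = setpoint - x
        integral += e * dt
        u = Kp * e + Ki * integral                                  # PI law
        u = np.clip(u, u_prev - du_max * dt, u_prev + du_max * dt)  # rate limit
        u = np.clip(u, u_min, u_max)                                # magnitude limit
        x += (a * x + b * u) * dt            # Euler step of the process
        u_prev = u
    print(f"final process variable: {x:.3f} (setpoint {setpoint})")
    ```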

  20. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints and simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved a near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of approximately 1-2 angstroms from the native end state structures.

  1. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov's proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
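
    The block coordinate-descent skeleton, a projected-gradient step on one block alternated with an L-BFGS-B step on the other, is shown below with a toy quadratic standing in for the Poisson NLL (SciPy's L-BFGS-B supports exactly the box constraints mentioned above; everything else is invented for illustration):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 10))
    y = A @ np.abs(rng.normal(size=10)) + 2.0

    def cost(x, s):  # toy surrogate for the penalized Poisson NLL
        return 0.5 * np.sum((A @ x + s - y) ** 2)

    x = np.zeros(10)   # "density map" block, kept nonnegative
    s = np.zeros(1)    # "incident spectrum" block, box-constrained
    for _ in range(50):
        # Projected-gradient step on x (nonnegativity projection).
        grad_x = A.T @ (A @ x + s - y)
        x = np.maximum(x - 0.01 * grad_x, 0.0)
        # L-BFGS-B step on s with box constraints.
        s = minimize(lambda sv: cost(x, sv), s, method="L-BFGS-B",
                     bounds=[(0.0, 10.0)]).x
    print(cost(x, s))
    ```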

  2. Precision reconstruction of manufactured free-form components

    NASA Astrophysics Data System (ADS)

    Ristic, Mihailo; Brujic, Djordje; Ainsworth, Iain

    2000-03-01

    Manufacturing needs in many industries, especially the aerospace and the automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms are able to perform in the presence of measurement noise, while making no special assumptions about regularity of data distribution. In order to deal with real life situations, a number of supporting functions for geometric modeling were required and these are described. The presented methodology was applied using real aeroengine parts and the experimental results are presented.

  3. Assessment of power step performances of variable speed pump-turbine unit by means of hydro-electrical system simulation

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Nicolet, C.; Hell, J.; Moreira, C.

    2017-04-01

    The paper explores the improvement in ancillary services that variable speed technologies can provide for the case of an existing pumped storage power plant of 2x210 MVA whose conversion from fixed speed to variable speed is investigated with a focus on the power step performances of the units. First, two motor-generator variable speed technologies are introduced, namely the Doubly Fed Induction Machine (DFIM) and the Full Scale Frequency Converter (FSFC). Then a detailed numerical simulation model of the investigated power plant used to simulate power step response and comprising the waterways, the pump-turbine unit, the motor-generator, the grid connection and the control systems is presented. Hydroelectric system time domain simulations are performed in order to determine the shortest response time achievable, taking into account the constraints from the maximum penstock pressure and from the rotational speed limits. It is shown that the maximum instantaneous power step response up and down depends on the hydro-mechanical characteristics of the pump-turbine unit and on the motor-generator speed limits. As a result, for the investigated test case, the FSFC solution offers the best power step response performance.

  4. Development of Response Surface Models for Rapid Analysis & Multidisciplinary Optimization of Launch Vehicle Design Concepts

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1999-01-01

    Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. The Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system level design analysis, MDO and fast sensitivity simulations. A second-order response surface model of the standard form sketched below has been commonly used in RSM, since in many cases it can provide an adequate approximation, especially if the region of interest is sufficiently limited.
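
    The record refers to a model form that is not reproduced above; what is commonly meant is the standard second-order polynomial of response surface methodology, shown here for k design variables:

    ```latex
    % Standard second-order (quadratic) response surface model
    \hat{y} = \beta_0
            + \sum_{i=1}^{k} \beta_i x_i
            + \sum_{i=1}^{k} \beta_{ii} x_i^2
            + \sum_{i<j} \beta_{ij} x_i x_j
    ```

    where the beta coefficients are estimated by least squares from designed experiments or analysis-code runs.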

  5. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller, simpler models and of having better starting points, which improves solution efficiency. The set of nonlinear constraints (termed complicating constraints) which makes the solution of the model rather complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results obtained are compared with solutions determined by solving the complete model directly in one single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This potential gain of efficiency can be extremely important for work in progress, and it can be particularly useful for cases where computation time is a critical factor in obtaining an optimized solution in due time.

  6. Radio Resource Allocation on Complex 4G Wireless Cellular Networks

    NASA Astrophysics Data System (ADS)

    Psannis, Kostas E.

    2015-09-01

    In this article we consider a heuristic algorithm which improves, step by step, wireless data delivery over LTE cellular networks by using the total transmit power with a constraint on users' data rates, and the total throughput with constraints on the total transmit power as well as users' data rates; these are jointly integrated into a hybrid-layer design framework to perform radio resource allocation for multiple users and to effectively decide optimal system parameters such as the modulation and coding scheme (MCS) in order to adapt to the varying channel quality. We propose a new heuristic algorithm which balances the accessible data rate, the initial data rates of each user allocated by the LTE scheduler, the priority indicator which signals the delay, throughput and packet-loss awareness of the user, and the buffer fullness, by achieving maximization of radio resource allocation for multiple users. It is noted that the overall performance improves with the increase in the number of users, due to multiuser diversity. Experimental results illustrate and validate the accuracy of the proposed methodology.

  7. Practice increases procedural errors after task interruption.

    PubMed

    Altmann, Erik M; Hambrick, David Z

    2017-05-01

    Positive effects of practice are ubiquitous in human performance, but a finding from memory research suggests that negative effects are possible also. The finding is that memory for items on a list depends on the time interval between item presentations. This finding predicts a negative effect of practice on procedural performance under conditions of task interruption. As steps of a procedure are performed more quickly, memory for past performance should become less accurate, increasing the rate of skipped or repeated steps after an interruption. We found this effect, with practice generally improving speed and accuracy, but impairing accuracy after interruptions. The results show that positive effects of practice can interact with architectural constraints on episodic memory to have negative effects on performance. In practical terms, the results suggest that practice can be a risk factor for procedural errors in task environments with a high incidence of task interruption.

  8. Effects of emotionally charged auditory stimulation on gait performance in the elderly: a preliminary study.

    PubMed

    Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe

    2015-04-01

    OBJECTIVE: To evaluate the effect of a novel divided attention task-walking under auditory constraints-on gait performance in older adults and to determine whether this effect was moderated by cognitive status. DESIGN: Validation cohort. SETTING: General community. PARTICIPANTS: Ambulatory older adults without dementia (N=104). INTERVENTIONS: Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68cm/s in gait velocity (F(1.86,191.71)=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F(1.75,180.42)=10.12; P<.001) as compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45cm/s in gait velocity (F(1.87,190.83)=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F(1.79,183.10)=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences. CONCLUSIONS: Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints.

  9. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
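
    The essence of the fix, that a patch may not take a time step too much larger than its neighbors', can be sketched as a relaxation loop over locally chosen CFL limits (the 1D layout and factor of 2 are illustrative choices, not the paper's exact rule):

    ```python
    # Each patch starts at its local CFL limit; steps are then shrunk
    # until no patch exceeds twice the step of any neighbor.
    cfl_dt = [1.0, 0.9, 0.1, 0.8, 1.2]   # local CFL limits, 1D row of patches
    dt = cfl_dt[:]
    changed = True
    while changed:
        changed = False
        for i in range(len(dt)):
            neighbors = [dt[j] for j in (i - 1, i + 1) if 0 <= j < len(dt)]
            limit = min([cfl_dt[i]] + [2.0 * d for d in neighbors])
            if dt[i] > limit:
                dt[i] = limit
                changed = True
    print(dt)  # the small local step forces nearby patches to shrink theirs
    ```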

  10. Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes

    PubMed Central

    Dobos, László; Király, András; Abonyi, János

    2012-01-01

    Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
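
    A minimal version of the framework, direct search over a setpoint grid scored by Monte-Carlo simulation of constraint violations, fits in a few lines (all process numbers are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    LIMIT, VIOLATION_COST, N = 100.0, 50.0, 10_000

    def expected_score(setpoint):
        # Simulated process output: setpoint plus stochastic disturbance.
        output = setpoint + rng.normal(0.0, 4.0, size=N)
        profit = output                              # higher output, more profit
        penalty = VIOLATION_COST * (output > LIMIT)  # cost of violating the limit
        return np.mean(profit - penalty)

    best = max(np.arange(85.0, 100.0, 0.5), key=expected_score)
    print(f"optimal setpoint ~ {best}")  # settles below the hard limit
    ```

    The optimum sits below the constraint by a margin that grows with the disturbance spread and the violation cost, which is exactly the risk-level trade-off the abstract describes.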

  11. Joint L2,1 Norm and Fisher Discrimination Constrained Feature Selection for Rational Synthesis of Microporous Aluminophosphates.

    PubMed

    Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong

    2017-04-01

    Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method using joint l2,1-norm and Fisher discrimination constraints (JNFDC). In order to obtain a more effective feature subset, the proposed method proceeds in two steps. The first step is to rank the features according to sparse and discriminative constraints. The second step is to establish a predictive model with the ranked features, and select the most significant features in light of their contribution to improving the predictive accuracy. To the best of our knowledge, JNFDC is the first work which employs sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that our proposed method can select significant features affecting the specified structural property and improve the predictive accuracy. Moreover, comparison results show that JNFDC can obtain better predictive performance than some other state-of-the-art feature selection methods.
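
    The generic two-step scheme, rank features and then keep them greedily while a predictive model improves, can be sketched with scikit-learn (a plain univariate F-score stands in for the paper's l2,1/Fisher criterion; the data are synthetic):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=300, n_features=20,
                               n_informative=5, random_state=0)

    # Step 1: rank features by a discrimination-style score.
    scores, _ = f_classif(X, y)
    ranked = np.argsort(scores)[::-1]

    # Step 2: greedily keep ranked features while CV accuracy improves.
    selected, best_acc = [], 0.0
    for f in ranked:
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, selected + [f]], y, cv=5).mean()
        if acc > best_acc:
            selected, best_acc = selected + [f], acc
    print(selected, round(best_acc, 3))
    ```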

  12. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics, including high performance computing, low power consumption, large-capacity data storage and high speed data transmission, which make it able to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is counted. Based on the statistics of the time consumption, we present a new parallel processing strategy to distribute the task of DOA estimation to different cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time DOA estimation.

  13. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

    Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State space H∞ approaches have proved to be a robust method for PET image reconstruction where, however, temporal constraints are not considered during the reconstruction process. In addition, state space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem where a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.

  14. Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil

    1992-01-01

    This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
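
    The min-conflicts repair loop on n-queens is compact enough to show in full; this is the standard textbook formulation of the approach the abstract describes:

    ```python
    import random

    def conflicts(rows, col, row):
        """Queens attacking (col, row): same row or same diagonal."""
        return sum(1 for c, r in enumerate(rows)
                   if c != col and (r == row or abs(r - row) == abs(c - col)))

    def min_conflicts(n=50, max_steps=100_000, seed=0):
        random.seed(seed)
        rows = [random.randrange(n) for _ in range(n)]  # inconsistent start
        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(rows, c, rows[c])]
            if not conflicted:
                return rows                      # all constraints satisfied
            col = random.choice(conflicted)      # pick a violated variable
            # Repair: choose the value minimizing constraint violations.
            rows[col] = min(range(n), key=lambda r: conflicts(rows, col, r))
        return None

    print(min_conflicts())
    ```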

  15. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value inevitably caused by adding constraints. It can be regarded as an upgrade of the traditional dose sorting technique. A geometric explanation for the proposed method is also given, and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including head-neck, prostate, lung and oropharyngeal cases, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions are in non-convex shapes, and it is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.

  16. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust-regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the specific inclusion of the constraints (SQP type of algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region may, however, have no feasible solution when explicit constraints are present. In order to remedy this problem the mathematical community has developed different versions of a composite steps approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential steps algorithms. In this paper, a description of the similarities is presented and an expansion of the two steps algorithm is presented for the case of approximations.

  17. Design principles and optimal performance for molecular motors under realistic constraints

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai; Cao, Yuansheng

    2018-02-01

    The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.

  18. Design and Implementation of a Threaded Search Engine for Tour Recommendation Systems

    NASA Astrophysics Data System (ADS)

    Lee, Junghoon; Park, Gyung-Leen; Ko, Jin-Hee; Shin, In-Hye; Kang, Mikyung

    This paper implements a threaded scan engine for the O(n!) search space and measures its performance, aiming at providing a responsive tour recommendation and scheduling service. As a preliminary step of integrating POI ontology, mobile object database, and personalization profile for the development of new vehicular telematics services, this implementation can give a useful guideline to design a challenging and computation-intensive vehicular telematics service. The implemented engine allocates the subtree to the respective threads and makes them run concurrently exploiting the primitives provided by the operating system and the underlying multiprocessor architecture. It also makes it easy to add a variety of constraints, for example, the search tree is pruned if the cost of partial allocation already exceeds the current best. The performance measurement result shows that the service can run even in the low-power telematics device when the number of destinations does not exceed 15, with an appropriate constraint processing.
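
    The core mechanism, one worker per first-level subtree of the O(n!) permutation tree with branches pruned once their partial cost exceeds the best known tour, can be sketched as follows (the distance matrix is invented; CPython threads illustrate the structure but would need processes for true CPU parallelism):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    D = [[0, 2, 9, 10, 7], [2, 0, 6, 4, 3], [9, 6, 0, 8, 5],
         [10, 4, 8, 0, 6], [7, 3, 5, 6, 0]]   # invented distances
    n = len(D)

    def search(path, cost, best):
        if cost >= best[0]:          # prune: bound already exceeded
            return
        if len(path) == n:
            best[0], best[1] = cost, path
            return
        for nxt in range(n):
            if nxt not in path:
                search(path + [nxt], cost + D[path[-1]][nxt], best)

    def explore_subtree(first):      # one subtree per worker
        best = [float("inf"), None]
        search([0, first], D[0][first], best)
        return best

    with ThreadPoolExecutor() as pool:
        results = pool.map(explore_subtree, range(1, n))
    print(min(results))              # best open tour starting from node 0
    ```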

  19. Behavioural variability and motor performance: Effect of practice specialization in front crawl swimming.

    PubMed

    Seifert, L; De Jesus, K; Komar, J; Ribeiro, J; Abraldes, J A; Figueiredo, P; Vilas-Boas, J P; Fernandes, R J

    2016-06-01

    The aim was to examine behavioural variability within and between individuals, especially in a swimming task, to explore how swimmers of various specialties (competitive short distance swimming vs. triathlon) adapt to repetitive events of sub-maximal intensity, controlled in speed but of various distances. Five swimmers and five triathletes randomly performed three variants (with steps of 200, 300 and 400m distances) of a front crawl incremental step test until exhaustion. A multi-camera system was used to collect and analyse eight kinematic and swimming efficiency parameters. Analysis of variance showed significant differences between swimmers and triathletes, with a significant individual effect. Cluster analysis grouped these parameters to investigate whether each individual used the same pattern(s), and whether one or several patterns were used to achieve the task goal. Results exhibited ten patterns for the whole population, with only two behavioural patterns shared between swimmers and triathletes. Swimmers tended to use higher hand velocity and index of coordination than triathletes. Mono-stability occurred in swimmers whatever the task constraint, showing high stability, while triathletes revealed bi-stability because they switched to another pattern at mid-distance of the task. Finally, our analysis helped to explain and understand the effect of specialty and, more broadly, individual adaptation to task constraints.

  20. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach. © 2016 Elsevier B.V. All rights reserved.
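
    The integrated model itself is not reproduced here; as a flavor of the constraint programming formulation, the sketch below schedules a plain double round-robin for four teams with Google OR-Tools CP-SAT (an assumed stand-in for the authors' solver, with the Elitserien-specific and symmetry-breaking constraints omitted).

        from ortools.sat.python import cp_model

        n, rounds = 4, 6  # a double round-robin needs 2*(n-1) rounds
        model = cp_model.CpModel()

        # m[i, j, r] == 1 iff team i hosts team j in round r.
        m = {(i, j, r): model.NewBoolVar(f"m_{i}_{j}_{r}")
             for i in range(n) for j in range(n) if i != j
             for r in range(rounds)}

        # Each ordered pair meets exactly once: one home and one away leg.
        for i in range(n):
            for j in range(n):
                if i != j:
                    model.Add(sum(m[i, j, r] for r in range(rounds)) == 1)

        # Each team plays exactly one game in every round.
        for t in range(n):
            for r in range(rounds):
                model.Add(sum(m[t, j, r] for j in range(n) if j != t)
                          + sum(m[i, t, r] for i in range(n) if i != t) == 1)

        solver = cp_model.CpSolver()
        if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            for r in range(rounds):
                print(r, [(i, j) for i in range(n) for j in range(n)
                          if i != j and solver.Value(m[i, j, r])])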

  1. The role of a vertical reference point in changing gait regulation in cricket run-ups.

    PubMed

    Greenwood, Daniel; Davids, Keith; Renshaw, Ian

    2016-10-01

    The need to identify information sources which facilitate a functional coupling of perception and action in representative practice contexts is an important challenge for sport scientists and coaches. The current study investigated the role of visual information in regulating athlete gait behaviours during a locomotor pointing task in cricket. Integration of the experiential knowledge of elite coaches and theoretical understanding from previous empirical research led us to investigate whether the presence of an umpire would act as a vertical informational constraint shaping the emergent coordination tendencies of cricket bowlers' run-up patterns. To test this idea, umpire presence was manipulated during the run-ups of 10 elite medium-fast bowlers. As hypothesised, removal of the umpire from the performance environment did not result in an inability to regulate gait to intercept a target; however, the absence of this informational constraint resulted in the emergence of different movement patterns in participants' run-ups. Significantly lower standard deviations of heel-to-crease distances were observed in the umpire condition at multiple steps, compared to performance in the no-umpire condition. Manipulation of this informational constraint altered the gait regulation of participants, offering a mechanism for understanding how perception-action couplings can be varied during performance in locomotor pointing tasks in sport.

  2. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low resolution (LR) images are commonly obtained in real applications, but many existing image magnification algorithms cannot produce good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods. The first step is based on total variation (TV) regularization and the second step is based on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while simultaneously suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries that are dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe. The proposed algorithm can also provide better visual quality on natural LR images.
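
    The first step can be illustrated with off-the-shelf tools: the sketch below upscales a synthetic noisy LR image and applies TV denoising to suppress noise while preserving edges, using scikit-image as a stand-in for the authors' constrained TV model (the order-changed dictionary learning of the second step is not reproduced).

        import numpy as np
        from skimage import data
        from skimage.restoration import denoise_tv_chambolle
        from skimage.transform import resize

        rng = np.random.default_rng(0)
        lr = data.camera()[::4, ::4] / 255.0             # synthetic LR input
        lr_noisy = lr + 0.05 * rng.normal(size=lr.shape)

        # Step 1 (sketch): magnify, then let TV regularization remove noise
        # while keeping step edges sharp.
        hr0 = resize(lr_noisy, (lr.shape[0] * 4, lr.shape[1] * 4), order=3)
        hr = denoise_tv_chambolle(hr0, weight=0.08)
        print(hr.shape)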

  3. Web-based software tool for constraint-based design specification of synthetic biological systems.

    PubMed

    Oberortner, Ernst; Densmore, Douglas

    2015-06-19

    miniEugene provides computational support for solving combinatorial design problems, enabling users to specify and enumerate designs for novel biological systems based on sets of biological constraints. This technical note presents a brief tutorial for biologists and software engineers in the field of synthetic biology on how to use miniEugene. After reading this technical note, users should know which biological constraints are available in miniEugene, understand the syntax and semantics of these constraints, and be able to follow a step-by-step guide to specify the design of a classical synthetic biological system, the genetic toggle switch. We also provide links and references to more information on the miniEugene web application and the integration of the miniEugene software library into sophisticated Computer-Aided Design (CAD) tools for synthetic biology (www.eugenecad.org).

  4. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language 𝒞. The use of 𝒞 allows for a natural and concise modeling of the business process and the associated security policy, and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  5. First spin-parity constraint of the 306 keV resonance in 35Cl for nova nucleosynthesis

    DOE PAGES

    Chipps, K. A.; Rutgers Univ., New Brunswick, NJ; Pain, S. D.; ...

    2017-04-28

    Of particular interest in astrophysics is the 34S(p,γ)35Cl reaction, which serves as a stepping stone in thermonuclear runaway reaction chains during a nova explosion. Although the isotopes involved are all stable, the reaction rate of this significant step is not well known, due to a lack of experimental spectroscopic information on states within the Gamow window above the proton separation threshold of 35Cl. Furthermore, measurements of level spins and parities provide input for the calculation of resonance strengths, which ultimately determine the astrophysical reaction rate of the 34S(p,γ)35Cl proton capture reaction. By performing the 37Cl(p,t)35Cl reaction in normal kinematics at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory, we have conducted a study of the region of astrophysical interest in 35Cl, and have made the first-ever constraint on the spin and parity assignment for a level at 6677 ± 15 keV (Er = 306 keV), inside the Gamow window for novae.

  6. First spin-parity constraint of the 306 keV resonance in 35Cl for nova nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Chipps, K. A.; Pain, S. D.; Kozub, R. L.; Bardayan, D. W.; Cizewski, J. A.; Chae, K. Y.; Liang, J. F.; Matei, C.; Moazen, B. H.; Nesaraja, C. D.; O'Malley, P. D.; Peters, W. A.; Pittman, S. T.; Schmitt, K. T.; Smith, M. S.

    2017-04-01

    Of particular interest in astrophysics is the 34S(p,γ)35Cl reaction, which serves as a stepping stone in thermonuclear runaway reaction chains during a nova explosion. Though the isotopes involved are all stable, the reaction rate of this significant step is not well known, due to a lack of experimental spectroscopic information on states within the Gamow window above the proton separation threshold of 35Cl. Measurements of level spins and parities provide input for the calculation of resonance strengths, which ultimately determine the astrophysical reaction rate of the 34S(p,γ)35Cl proton capture reaction. By performing the 37Cl(p,t)35Cl reaction in normal kinematics at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory, we have conducted a study of the region of astrophysical interest in 35Cl, and have made the first-ever constraint on the spin and parity assignment for a level at 6677 ± 15 keV (Er = 306 keV), inside the Gamow window for novae.

  7. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and for municipal and industrial (M&I) use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the values of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization at each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints, and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research has been initiated on a wrapper that will allow a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions while accounting for uncertainty, at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real-world examples.
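
    As a toy illustration of the per-time-step optimization, the sketch below solves a single-step allocation MILP with PuLP; the marginal values, demands, and storage numbers are hypothetical, and the conditional constraint logic and multi-reservoir physics of OASIS are far richer.

        from pulp import LpMaximize, LpProblem, LpVariable, lpSum

        inflow, storage0, capacity = 40.0, 100.0, 120.0    # hypothetical volumes
        value = {"crops": 2.0, "municipal": 5.0}           # marginal value of water
        demand = {"crops": 30.0, "municipal": 25.0}

        prob = LpProblem("one_step_allocation", LpMaximize)
        deliver = {u: LpVariable(f"deliver_{u}", 0, demand[u]) for u in value}
        spill = LpVariable("spill", 0)
        storage1 = LpVariable("storage1", 0, capacity)

        # Short-term surrogate objective: value of deliveries this time step.
        prob += lpSum(value[u] * deliver[u] for u in value)
        # Mass balance linking this step's storage to the next.
        prob += storage1 == storage0 + inflow - lpSum(deliver.values()) - spill
        prob.solve()
        print({u: deliver[u].value() for u in value}, storage1.value())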

  8. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among the rules representing those constraints. To overcome these shortcomings, there is a recent trend toward enabling control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through the use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.

  9. Constraint Embedding for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for the handling of local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert the system into a tree-topology system. This approach allows the direct derivation of recursive O(N) techniques for solving the system dynamics while avoiding the expensive steps that would otherwise be required for handling the closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  10. Stochastic online appointment scheduling of multi-step sequential procedures in nuclear medicine.

    PubMed

    Pérez, Eduardo; Ntaimo, Lewis; Malavé, César O; Bailey, Carla; McCormack, Peter

    2013-12-01

    The increased demand for medical diagnosis procedures has been recognized as one of the contributors to the rise of health care costs in the U.S. in the last few years. Nuclear medicine is a subspecialty of radiology that uses advanced technology and radiopharmaceuticals for the diagnosis and treatment of medical conditions. Procedures in nuclear medicine require the use of radiopharmaceuticals, are multi-step, and have to be performed under strict time window constraints. These characteristics make the scheduling of patients and resources in nuclear medicine challenging. In this work, we derive a stochastic online scheduling algorithm for patient and resource scheduling in nuclear medicine departments that takes into account the time constraints imposed by the decay of the radiopharmaceuticals and the stochastic nature of the system when scheduling patients. We report on a computational study of the new methodology applied to a real clinic, using both patient and clinic performance measures. The results show that the new method schedules about 600 more patients per year on average than a scheduling policy that was used in practice, by improving the way limited resources are managed at the clinic. The new methodology finds the best start time and resources to be used for each appointment. Furthermore, the new method decreases patient waiting time for an appointment by about two days on average.

  11. A fast algorithm for solving a linear feasibility problem with application to Intensity-Modulated Radiation Therapy.

    PubMed

    Herman, Gabor T; Chen, Wei

    2008-03-01

    The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
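
    ART3 and ART3+ use a refined control scheme with reflections and interval constraints that is not reproduced here; the sketch below shows only the underlying sequential idea, cyclically projecting onto violated half-spaces of a toy dose system (all numbers hypothetical).

        import numpy as np

        def cyclic_feasibility(A, b, x0, sweeps=200, tol=1e-9):
            """Visit constraints a_i . x <= b_i in cyclic order, projecting onto
            each violated half-space. ART3+ additionally learns to skip
            constraints that are likely satisfied; here all are checked."""
            x = np.array(x0, dtype=float)
            norms2 = np.einsum("ij,ij->i", A, A)
            for _ in range(sweeps):
                violated = False
                for i in range(len(b)):
                    v = A[i] @ x - b[i]
                    if v > tol:
                        x -= (v / norms2[i]) * A[i]
                        violated = True
                x = np.maximum(x, 0.0)   # beam weights stay nonnegative
                if not violated:
                    break
            return x

        rng = np.random.default_rng(1)
        D = rng.uniform(0.0, 1.0, size=(6, 3))    # toy dose-influence matrix
        A = np.vstack([D[:3], -D[3:]])            # organ upper / tumor lower bounds
        b = np.concatenate([np.full(3, 2.0), np.full(3, -1.0)])
        w = cyclic_feasibility(A, b, np.zeros(3))
        print(bool(np.all(A @ w <= b + 1e-6)), w)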

  12. Limits of thermochemical and photochemical syntheses of gaseous fuels: a finite-time thermodynamic analysis. Annual report, September 1983-February, 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, R.S.

    The objectives of this project are to develop methods for the evaluation of syntheses of gaseous fuels in terms of their optimum possible performance, particularly when they are required to supply those fuels at nonzero rates. The first objective is entirely in the tradition of classical thermodynamics: to set limits on processes, given the characteristics and constraints that define them. The new element which this project introduces is the capability to set limits more realistic than those from classical thermodynamics, by including the influence of the rate or duration of a process on its performance. The development of these analyses is a natural step in the evolution represented by the evaluative papers of Appendix IV, e.g., by Funk et al., Abraham, Shinnar, Bilgen and Fletcher. A second objective is to determine how any given process should be carried out, within its constraints, in order to yield its optimum performance, and to use this information whenever possible to help guide the design of that process.

  13. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

    Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been recognized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of numerical phase retrieval from experimental diffraction patterns, a fact which has stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes, applying constraints in both the object and the detector plane. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America
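
    For context, a minimal error-reduction loop of the kind the paper improves upon is sketched below: it alternates the standard detector-plane amplitude constraint with an object-plane support constraint. The novel holography-inspired Fourier-domain constraint itself is not reproduced, and the test object and support are made up.

        import numpy as np

        def error_reduction(measured_amp, support, iters=200, seed=0):
            """Alternate the detector constraint (replace the Fourier amplitude
            with the measured one) with the object constraint (real,
            nonnegative, confined to the known support)."""
            rng = np.random.default_rng(seed)
            phase = np.exp(2j * np.pi * rng.random(measured_amp.shape))
            obj = np.fft.ifft2(measured_amp * phase)
            for _ in range(iters):
                F = np.fft.fft2(obj)
                F = measured_amp * np.exp(1j * np.angle(F))  # detector plane
                obj = np.fft.ifft2(F)
                obj = np.where(support, np.clip(obj.real, 0, None), 0.0)
            return obj

        true = np.zeros((64, 64)); true[28:36, 30:34] = 1.0
        support = np.zeros((64, 64), dtype=bool); support[24:40, 26:38] = True
        rec = error_reduction(np.abs(np.fft.fft2(true)), support)
        print(float(rec.max()))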

  14. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in two respects: the memory and time costs should be reduced, and the step edges produced by existing methods are not sufficiently sharp. We address both. First, we propose a method to extract midfrequency features for dictionary learning, which reduces memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off details and artifacts and to sharpen step edges. Finally, the step edges produced by DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  15. A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia.

    PubMed

    Chang, Sarah R; Nandor, Mark J; Li, Lu; Kobetic, Rudi; Foglyano, Kevin M; Schnellenberger, John R; Audu, Musa L; Pinault, Gilles; Quinn, Roger D; Triolo, Ronald J

    2017-05-30

    Functional neuromuscular stimulation, lower limb orthoses, powered lower limb exoskeletons, and hybrid neuroprosthesis (HNP) technologies can restore stepping in individuals with paraplegia due to spinal cord injury (SCI). However, a self-contained muscle-driven controllable exoskeleton based on an implanted neural stimulator has not previously been demonstrated for restoring walking; such an approach could enable system use outside the laboratory and make long-term use or clinical testing viable. In this work, we designed and evaluated an untethered muscle-driven controllable exoskeleton to restore stepping in three individuals with paralysis from SCI. The self-contained HNP combined neural stimulation, to activate the paralyzed muscles and generate joint torques for limb movements, with a controllable lower limb exoskeleton to stabilize and support the user. An onboard controller processed exoskeleton sensor signals, determined appropriate exoskeletal constraints and stimulation commands for a finite state machine (FSM), and transmitted data over Bluetooth to an off-board computer for real-time monitoring and data recording. The FSM coordinated stimulation and exoskeletal constraints to enable functions, selected with a wireless finger-switch user interface, for standing up, standing, stepping, or sitting down. In the stepping function, the FSM used a sensor-based gait event detector to determine transitions between the gait phases of double stance, early swing, late swing, and weight acceptance. The HNP restored stepping in three individuals with motor complete paralysis due to SCI. The controller appropriately coordinated stimulation and exoskeletal constraints using the sensor-based FSM for subjects with different stimulation systems. The average ranges of motion at the hip and knee joints during walking were 8.5°-20.8° and 14.0°-43.6°, respectively. Walking speeds varied from 0.03 to 0.06 m/s, and cadences from 10 to 20 steps/min. A self-contained muscle-driven exoskeleton was thus a feasible intervention to restore stepping in individuals with paraplegia due to SCI. The untethered hybrid system was capable of adjusting to different individuals' needs, appropriately coordinating exoskeletal constraints with muscle activation using a sensor-driven FSM for stepping. Further improvements for out-of-the-laboratory use should include implanted stimulation of the plantar flexor muscles to improve walking speed, and power assist as needed at the hips and knees to maintain walking as muscles fatigue.
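
    The stepping controller's core can be pictured as a small finite state machine over the four gait phases; the sketch below uses hypothetical sensor-event names (the paper's detector derives such events from exoskeleton sensor signals, and each state additionally dispatches stimulation and joint-constraint commands, which are omitted here).

        from enum import Enum, auto

        class GaitPhase(Enum):
            DOUBLE_STANCE = auto()
            EARLY_SWING = auto()
            LATE_SWING = auto()
            WEIGHT_ACCEPTANCE = auto()

        # Hypothetical event names standing in for sensor-derived gait events.
        TRANSITIONS = {
            (GaitPhase.DOUBLE_STANCE, "toe_off"): GaitPhase.EARLY_SWING,
            (GaitPhase.EARLY_SWING, "peak_knee_flexion"): GaitPhase.LATE_SWING,
            (GaitPhase.LATE_SWING, "heel_strike"): GaitPhase.WEIGHT_ACCEPTANCE,
            (GaitPhase.WEIGHT_ACCEPTANCE, "load_transferred"): GaitPhase.DOUBLE_STANCE,
        }

        def step_fsm(phase, event):
            """Advance the gait FSM; unknown events leave the phase unchanged."""
            return TRANSITIONS.get((phase, event), phase)

        phase = GaitPhase.DOUBLE_STANCE
        for event in ["toe_off", "peak_knee_flexion", "heel_strike", "load_transferred"]:
            phase = step_fsm(phase, event)
            print(event, "->", phase.name)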

  16. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Matching large size high resolution (HR) satellite images is a challenging task due to local distortion, repetitive structures, intensity changes, and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem, and in geometric SIFT the area constraints help validate candidate matches and decrease search complexity. To further improve matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensed image is rectified to the coordinate frame of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments were designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy.
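
    The coarse step alone can be sketched with OpenCV: match SIFT features at reduced resolution, apply Lowe's ratio test, and fit a RANSAC homography that then serves as the geometric constraint for the fine stage. This is only an assumed approximation; the paper's SR SIFT, block division, and geometric SIFT refinement are not reproduced.

        import cv2
        import numpy as np

        def coarse_geometric_constraint(ref, sen, scale=0.25):
            """Estimate a full-resolution homography from a low-resolution match."""
            small_ref = cv2.resize(ref, None, fx=scale, fy=scale)
            small_sen = cv2.resize(sen, None, fx=scale, fy=scale)
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(small_ref, None)
            k2, d2 = sift.detectAndCompute(small_sen, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            src = np.float32([k1[m.queryIdx].pt for m in good]) / scale
            dst = np.float32([k2[m.trainIdx].pt for m in good]) / scale
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H  # guides block division and fine matching at full resolution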

  17. Stereoscopic filming for investigating evasive side-stepping and anterior cruciate ligament injury risk

    NASA Astrophysics Data System (ADS)

    Lee, Marcus J. C.; Bourke, Paul; Alderson, Jacqueline A.; Lloyd, David G.; Lay, Brendan

    2010-02-01

    Non-contact anterior cruciate ligament (ACL) injuries are serious and debilitating, often resulting from the performance of evasive side-stepping (Ssg) by team sport athletes. Previous laboratory-based investigations of evasive Ssg have used generic visual stimuli to simulate the realistic time and space constraints that athletes experience in the preparation and execution of the manoeuvre. However, the use of unrealistic visual stimuli to impose these constraints may prevent accurate identification of the relationship between perceptual demands and ACL loading during Ssg in actual game environments. We propose that stereoscopically filmed footage featuring sport-specific opposing defenders simulating a tackle on the viewer, when used as visual stimuli, could improve the ecological validity of laboratory-based investigations of evasive Ssg. Because these scenarios require precision and not just the experience of viewing depth, a rigorous filming process had to be undertaken, built on key geometric considerations and on equipment development enabling a separation of 6.5 cm between two commodity cameras. Within safety limits, this could be an invaluable tool enabling more accurate investigations of the associations between evasive Ssg and ACL injury risk.

  18. Kinematic constraints associated with the acquisition of overarm throwing part I: step and trunk actions.

    PubMed

    Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R

    2006-12-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Results indicated stride length (69.3%) and time from stride foot contact to ball release (39.7%) provided substantial contributions to ball velocity (p < .001). All trunk kinematic measures increased significantly with increasing component levels (p < .001). Results suggest that trunk linear and rotational velocities, degree of trunk tilt, time from stride foot contact to ball release, and ball velocity represented potential control parameters and, therefore, constraints on overarm throwing acquisition.

  19. Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors

    DTIC Science & Technology

    2008-03-13

    [Fragmented record; only excerpts of the technical background survive:] A strong incentive exists to reduce airfoil count in aircraft engines ... (Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried ... (on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iterations at each time step.

  20. Displacement based multilevel structural optimization

    NASA Technical Reports Server (NTRS)

    Striz, Alfred G.

    1995-01-01

    Multidisciplinary design optimization (MDO) is expected to play a major role in the competitive transportation industries of tomorrow, i.e., in the design of aircraft and spacecraft, of high speed trains, boats, and automobiles. All of these vehicles require maximum performance at minimum weight to keep fuel consumption low and conserve resources. Here, MDO can deliver mathematically based design tools to create systems with optimum performance subject to the constraints of disciplines such as structures, aerodynamics, controls, etc. Although some applications of MDO are beginning to surface, the key to a widespread use of this technology lies in the improvement of its efficiency. This aspect is investigated here for the MDO subset of structural optimization, i.e., for the weight minimization of a given structure under size, strength, and displacement constraints. Specifically, finite element based multilevel optimization of structures (here, statically indeterminate trusses and beams for proof of concept) is performed. In the system-level optimization, the design variables are the coefficients of assumed displacement functions, and the load unbalance resulting from the solution of the stiffness equations is minimized; constraints are placed on the deflection amplitudes and the weight of the structure. In the subsystems-level optimizations, the weight of each element is minimized under the action of stress constraints, with the cross-sectional dimensions as design variables. This approach is expected to prove very efficient, especially for complex structures, since the design task is broken down into a large number of small and efficiently handled subtasks, each with only a small number of variables. This partitioning also allows for the use of parallel computing: first by sending the system-level and subsystems-level computations to two different processors, and ultimately by performing all subsystems-level optimizations in a massively parallel manner on separate processors. The subsystems-level optimizations can likely be further improved through the use of controlled growth, a method which reduces an optimization to a more efficient analysis with only a slight degradation in accuracy. The efficiency of all proposed techniques is being evaluated relative to the standard single-level optimization approach, where the complete structure is weight-minimized under the action of all given constraints by one processor, and relative to simultaneous analysis and design, which combines analysis and optimization into a single step. The present approach is expected to be extendable to additional structural constraints (buckling, free and forced vibration, etc.) or other disciplines (passive and active controls, aerodynamics, etc.) for true MDO.
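
    At the subsystems level, each element sizing reduces to a tiny constrained minimization. The sketch below sizes one truss element under a stress constraint with SciPy; all numbers are hypothetical, and the analytic optimum is simply A = F/σ_allow, which the optimizer should recover.

        import numpy as np
        from scipy.optimize import minimize

        rho, L = 2700.0, 1.5           # density (kg/m^3) and element length (m)
        F, sigma_allow = 8.0e4, 1.5e8  # axial force (N) and allowable stress (Pa)

        res = minimize(
            lambda A: rho * L * A[0],                    # element weight
            x0=[1e-3],
            constraints=[{"type": "ineq",                # F/A <= sigma_allow
                          "fun": lambda A: sigma_allow - F / A[0]}],
            bounds=[(1e-6, None)],
        )
        print(res.x[0], F / sigma_allow)  # optimized vs. analytic cross section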

  1. γ parameter and Solar System constraint in chameleon-Brans-Dicke theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaidi, Kh.; Mohammadi, A.; Sheikhahmadi, H.

    2011-05-15

    The post-Newtonian parameter is considered in the chameleon-Brans-Dicke model. In the first step, the general form of this parameter and also the effective gravitational constant are obtained. An arbitrary function f(Φ), which encodes the coupling between matter and the scalar field, is introduced to investigate the validity of the Solar System constraint. It is shown that the chameleon-Brans-Dicke model can satisfy the Solar System constraint and gives an ω parameter of order 10^4, comparable to the constraint indicated in [19].

  2. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles in developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  3. The min-conflicts heuristic: Experimental and theoretical results

    NASA Technical Reports Server (NTRS)

    Minton, Steven; Philips, Andrew B.; Johnston, Mark D.; Laird, Philip

    1991-01-01

    This paper describes a simple heuristic method for solving large-scale constraint satisfaction and scheduling problems. Given an initial assignment for the variables in a problem, the method operates by searching through the space of possible repairs. The search is guided by an ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. We demonstrate empirically that the method performs orders of magnitude better than traditional backtracking techniques on certain standard problems. For example, the one million queens problem can be solved rapidly using our approach. We also describe practical scheduling applications where the method has been successfully applied. A theoretical analysis is presented to explain why the method works so well on certain types of problems and to predict when it is likely to be most effective.
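
    The repair loop is compact enough to state directly; below is a plain min-conflicts solver for n-queens (a generic textbook rendering, not the authors' code): start from a random complete assignment and repeatedly move a conflicted queen to the row in its column with the fewest conflicts.

        import random

        def min_conflicts_nqueens(n, max_steps=100_000, seed=0):
            rng = random.Random(seed)
            rows = [rng.randrange(n) for _ in range(n)]  # rows[c] = queen row in column c

            def conflicts(col, row):
                return sum(1 for c in range(n) if c != col and
                           (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

            for _ in range(max_steps):
                conflicted = [c for c in range(n) if conflicts(c, rows[c])]
                if not conflicted:
                    return rows                          # solution found
                col = rng.choice(conflicted)
                counts = [conflicts(col, r) for r in range(n)]
                best = min(counts)
                rows[col] = rng.choice([r for r, k in enumerate(counts) if k == best])
            return None

        print(min_conflicts_nqueens(40))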

  4. Using activity-based costing and theory of constraints to guide continuous improvement in managed care.

    PubMed

    Roybal, H; Baxendale, S J; Gupta, M

    1999-01-01

    Activity-based costing and the theory of constraints have been applied successfully in many manufacturing organizations. Recently, those concepts have been applied in service organizations. This article describes the application of activity-based costing and the theory of constraints in a managed care mental health and substance abuse organization. One of the unique aspects of this particular application was the integration of activity-based costing and the theory of constraints to guide process improvement efforts. This article describes the activity-based costing model and the application of the theory of constraints' focusing steps, with an emphasis on the unused capacities of activities in the organization.

  5. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

  6. Two-agent cooperative search using game models with endurance-time constraints

    NASA Astrophysics Data System (ADS)

    Sujit, P. B.; Ghose, Debasish

    2010-07-01

    In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths are formed using these cells which the game theoretical strategies use to select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over greedy strategy for different look ahead step length paths. Within the game theoretical strategies, non-cooperative Nash and cooperative strategy perform similarly in an ideal case, but Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning of the search space into sectors to reduce computational overhead without performance degradation.

  7. Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Amity; Govind, Niranjan; Subramanian, Lalitha

    Periodic density functional theory calculations were performed to investigate the origin of the favorable para-selective nitration of toluene exhibited by zeolite H-beta with an acetyl nitrate nitration agent. Energy calculations were performed for each of the 32 crystallographically unique Brønsted acid sites of a beta polymorph B zeolite unit cell, with multiple Brønsted acid sites of comparable stability. However, one particular aluminum T-site with three favorable Brønsted site oxygens embedded in a straight 12-T channel wall provides multiple favorable proton transfer sites. Transition state searches around this aluminum site were performed to determine the barrier to reaction for both para and ortho nitration of toluene. A three-step process was assumed for the nitration of toluene with two organic intermediates: the pi- and sigma-complexes. The rate limiting step is the proton transfer from the sigma-complex to a zeolite Brønsted site. The barrier for this step in ortho nitration is shown to be nearly 2.5 times that in para nitration. This discrepancy appears to be due to steric constraints imposed by the curvature of the large 12-T pore channels of beta and the toluene methyl group in the ortho approach that are not present in the para approach.

  8. Information constraints in medical encounters.

    PubMed

    Hollander, R D

    1984-01-01

    This article describes three kinds of information constraints in medical encounters that have not been discussed at length in the medical ethics literature: constraints from the concept of a disease, from the diffusion of medical innovation, and from withholding information. It describes how these limit the reliance rational people can justifiably put in their doctors, and even the reliance doctors can have on their own advice. It notes the implications of these constraints for the value of informed consent, identifies several procedural steps that could increase the value of the latter and improve diffusion of innovation, and argues that recognition of these constraints should lead us to devise protections which intrude on but can improve these encounters.

  9. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation. As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.

  10. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the original problem constraints plus a new constraint allowing performance within a specified tolerance of the optimal objective function value; MGA identified a few maximally different alternatives from this region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point, and repeat until the desired number of alternatives is generated. The key step at each iterate is to run a random distance along the line through the current hit point in the specified direction. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms, because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
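
    A bare-bones version of the sampler is easy to state; the sketch below uses a membership oracle for the near-optimal region and rejection sampling along the hit line as a crude stand-in for the paper's bounded slice-sampling step (the toy region is a tolerance ball of f(x) = x1^2 + x2^2, though the oracle could equally encode non-convex constraints).

        import numpy as np

        def hit_and_run(feasible, x0, steps=2000, reach=1.0, tries=50, seed=0):
            """Each iterate: random direction through the current hit point,
            then a random feasible point on that line becomes the new hit."""
            rng = np.random.default_rng(seed)
            x = np.array(x0, dtype=float)
            out = [x.copy()]
            for _ in range(steps):
                d = rng.normal(size=x.size)
                d /= np.linalg.norm(d)
                for _ in range(tries):                 # rejection along the line
                    y = x + rng.uniform(-reach, reach) * d
                    if feasible(y):
                        x = y
                        break
                out.append(x.copy())
            return np.array(out)

        feasible = lambda z: z @ z <= 0.09   # near-optimal region of min x1^2 + x2^2
        pts = hit_and_run(feasible, np.array([0.1, 0.0]))
        print(pts.mean(axis=0), np.abs(pts).max())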

  11. STEP: Satellite Test of the Equivalence Principle. Report on the phase A study

    NASA Technical Reports Server (NTRS)

    Blaser, J. P.; Bye, M.; Cavallo, G.; Damour, T.; Everitt, C. W. F.; Hedin, A.; Hellings, R. W.; Jafry, Y.; Laurance, R.; Lee, M.

    1993-01-01

    During Phase A, the STEP Study Team identified three types of experiments that can be accommodated on the STEP satellite within the mission constraints and whose performance is orders of magnitude better than any present or planned future experiment of the same kind on the ground. The scientific objectives of the STEP mission are to: test the Equivalence Principle to one part in 10^17, six orders of magnitude better than has been achieved on the ground; search for a new interaction between quantum-mechanical spin and ordinary matter with a sensitivity of the mass-spin coupling constant g_p g_s = 6 x 10^-34 at a range of 1 mm, which represents a seven order-of-magnitude improvement over comparable ground-based measurements; and determine the constant of gravity G with a precision of one part in 10^6 and test the validity of the inverse square law with the same precision, both two orders of magnitude better than has been achieved on the ground.

  12. Retrospective Analysis of an Ongoing Group-Based Modified Constraint-Induced Movement Therapy Program for Children with Acquired Brain Injury.

    PubMed

    Komar, Alyssa; Ashley, Kelsey; Hanna, Kelly; Lavallee, Julia; Woodhouse, Janet; Bernstein, Janet; Andres, Matthew; Reed, Nick

    2016-01-01

    A pretest-posttest retrospective design was used to evaluate the impact of a group-based modified constraint-induced movement therapy (mCIMT) program on upper extremity function and occupational performance. 20 children ages 3 to 18 years with hemiplegia following an acquired brain injury participated in a 2-week group mCIMT program. Upper extremity function was measured with the Assisting Hand Assessment (AHA) and subtests from the Quality of Upper Extremity Skills Test (QUEST). Occupational performance and satisfaction were assessed using the Canadian Occupational Performance Measure (COPM). Data were analyzed using a Wilcoxon signed-ranks test. Group-based analysis revealed statistically significant improvements in upper extremity function and occupational performance from pre- to postintervention on all outcome measures (AHA: Z = -3.63, p < .001; QUEST Grasps: Z = -3.10, p = .002; QUEST Dissociated Movement: Z = -2.51, p = .012; COPM Performance: Z = -3.64, p < .001; COPM Satisfaction: Z = -3.64, p < .001). Across individuals, clinically significant improvements were found in 65% of participants' AHA scores; 80% of COPM Performance scores and 70% of COPM Satisfaction scores demonstrated clinically significant improvements in at least one identified goal. This study is an initial step in evaluating a group-based mCIMT program for children with hemiplegia following an acquired brain injury, and provides preliminary evidence supporting its effectiveness.

  13. antaRNA: ant colony-based RNA sequence design.

    PubMed

    Kleinkauf, Robert; Mann, Martin; Backofen, Rolf

    2015-10-01

    RNA sequence design has been studied for at least as long as the classical folding problem. Whereas the latter seeks the functional fold of a given RNA molecule, inverse folding tries to identify RNA sequences that fold into a function-specific target structure. In combination with RNA-based biotechnology and synthetic biology, reliable RNA sequence design becomes a crucial step in generating novel biochemical components. In this article, the computational tool antaRNA is presented. It is capable of compiling RNA sequences for a given structure that comply, in addition, with an adjustable full-range objective GC-content distribution, specific sequence constraints, and additional fuzzy structure constraints. antaRNA applies ant colony optimization meta-heuristics, and its superior performance is shown on biological datasets. Availability: http://www.bioinf.uni-freiburg.de/Software/antaRNA. Contact: backofen@informatik.uni-freiburg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  14. Online learning in optical tomography: a stochastic approach

    NASA Astrophysics Data System (ADS)

    Chen, Ke; Li, Qin; Liu, Jian-Guo

    2018-07-01

    We study the inverse problem of the radiative transfer equation (RTE) using the stochastic gradient descent (SGD) method. Mathematically, optical tomography amounts to recovering the optical parameters in the RTE from incoming-outgoing pairs of light intensity. We formulate it as a PDE-constrained optimization problem, in which the mismatch between computed and measured outgoing data is minimized, with the same initial data, subject to the RTE constraint. The memory and computation this requires, however, are typically prohibitive, especially in high-dimensional spaces, calling for smart iterative solvers that use only partial information in each step. Stochastic gradient descent is an online learning algorithm that randomly selects data for minimizing the mismatch; it requires minimal memory and computation and advances quickly, and therefore serves this purpose well. In this paper we formulate the problem, in both the nonlinear and the linearized setting, apply the SGD algorithm, and analyze its convergence performance.
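
    The flavor of the method carries over to a plain linear least-squares toy, sketched below: each step touches a single randomly selected measurement (standing in for one incoming-outgoing pair), so memory and per-step cost stay minimal. The RTE constraint and the nonlinear forward map are not reproduced.

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 50, 400
        sigma_true = rng.normal(size=n)                # "optical parameters"
        A = rng.normal(size=(m, n))                    # toy linearized forward map
        y = A @ sigma_true + 0.01 * rng.normal(size=m) # measured data

        sigma = np.zeros(n)
        for k in range(20_000):
            i = rng.integers(m)                        # random data selection
            resid = A[i] @ sigma - y[i]                # mismatch on one datum
            step = 1.0 / (1.0 + k / 2000.0)            # decaying relaxation
            sigma -= step * resid / (A[i] @ A[i]) * A[i]
        print(np.linalg.norm(sigma - sigma_true) / np.linalg.norm(sigma_true))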

  15. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.

  16. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data, where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and to an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and that yield high resolution at discontinuities.
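
    The median-function trick is easy to demonstrate: since median(0, x, y) equals minmod(x, y), a monotonicity-constrained slope can be coded without branching. The sketch below applies the classic minmod constraint to a piecewise-linear reconstruction; it only illustrates the coding device, since the paper's actual constraint is sharper and preserves uniform second-order accuracy.

        import numpy as np

        def median3(a, b, c):
            """Elementwise median; note median3(0, x, y) == minmod(x, y)."""
            return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

        def minmod_slopes(u):
            """Monotonicity-preserving slopes for interior cells of u."""
            du_minus = u[1:-1] - u[:-2]   # backward differences
            du_plus = u[2:] - u[1:-1]     # forward differences
            return median3(np.zeros_like(du_minus), du_minus, du_plus)

        u = np.array([0.0, 0.0, 1.0, 2.0, 2.0, 1.0])
        print(minmod_slopes(u))  # zero slope at extrema, limited slope elsewhere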

  17. PSR Injection Line Upgrade

    NASA Astrophysics Data System (ADS)

    Blind, Barbara; Jason, Andrew J.

    1997-05-01

    We describe the new injection line to be implemented for the Los Alamos Proton Storage Ring in the change from a two-step process to direct H- injection. While obeying all geometrical constraints imposed by the existing structures, the new line has properties not found in the present injection line. In particular, it features decoupled transverse phase spaces downstream of the skew bend and a high degree of tunability of the beam at the injection foil. A comprehensive set of error studies has dictated the component tolerances imposed and has indicated the expected performance of the system.

  18. Animal Construction as a Free Boundary Problem: Evidence of Fractal Scaling Laws

    NASA Astrophysics Data System (ADS)

    Nicolis, S. C.

    2014-12-01

    We suggest that the main features of animal construction can be understood as the sum of locally independent actions of non-interacting individuals subjected to the global constraints imposed by the nascent structure. We first formulate an analytically tractable macroscopic description of construction which predicts a 1/3 power law for how the length of the structure grows with time. We further show how the power law is modified when biases in the random walk performed by the constructors, as well as halting times between consecutive construction steps, are included.

  19. Taking Proof based Verified Computation a Few Steps Closer to Practicality (extended version)

    DTIC Science & Technology

    2012-06-27

    general s2 + s, in general V’s per-instance CPU costs Issue commit queries (e + 2c) · n/β (e + 2c) · n/β Process commit responses d d Issue PCP...size (# of instances) (§2.3) e: cost of encrypting an element in F d : cost of decrypting an encrypted element f : cost of multiplying in F h: cost of...domain D (such as the integers, Z, or the rationals, Q) to equivalent constraints over a finite field, the programmer or compiler performs 3We suspect

  20. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). We first discuss the combination of bundle adjustment and GAs that we have proposed in order to improve the efficiency and success of 3D reconstruction. We use GAs to improve the fitness quality of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length, which helps in the initialization step. Matched homologous points and the constraints are then used to estimate the 3D model. Our new method offers several advantages: it reduces the number of parameters estimated in the optimization step, decreases the number of images required, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand. Various data and examples are used to highlight the efficiency and competitiveness of the present approach.
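
    The GA-seeded initialization can be pictured with a toy sketch: a tiny real-coded GA searches for a good starting focal length before a local optimizer refines it. The objective function below is a hypothetical surrogate; in the paper's setting it would be the bundle-adjustment reprojection error.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def reprojection_error(focal):
        """Hypothetical surrogate objective; in practice this would be the
        bundle-adjustment reprojection error for a candidate focal length."""
        return (np.log(focal) - np.log(1200.0)) ** 2 + 0.05 * np.sin(focal / 30.0) ** 2

    def ga_seed(obj, lo, hi, pop_size=30, generations=40, mut_frac=0.05):
        """Tiny real-coded GA returning a good initial value for 'obj'."""
        pop = rng.uniform(lo, hi, pop_size)
        for _ in range(generations):
            fit = np.array([obj(p) for p in pop])
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = np.where(fit[i] < fit[j], pop[i], pop[j])  # tournaments
            partners = rng.permutation(parents)
            alpha = rng.uniform(0.0, 1.0, pop_size)
            children = alpha * parents + (1.0 - alpha) * partners  # blend crossover
            children += rng.normal(0.0, mut_frac * (hi - lo), pop_size)  # mutation
            pop = np.clip(children, lo, hi)
        return pop[np.argmin([obj(p) for p in pop])]

    f0 = ga_seed(reprojection_error, 300.0, 3000.0)
    print(f"GA seed for focal length: {f0:.1f}  (a local optimizer refines it)")
    ```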

  1. The gauge transformations of the constrained q-deformed KP hierarchy

    NASA Astrophysics Data System (ADS)

    Geng, Lumin; Chen, Huizhan; Li, Na; Cheng, Jipeng

    2018-06-01

    In this paper, we mainly study the gauge transformations of the constrained q-deformed Kadomtsev-Petviashvili (q-KP) hierarchy. Different from the usual case, we have to consider additional constraints on the Lax operator of the constrained q-deformed KP hierarchy, since the form of the Lax operator must be preserved when constructing the gauge transformations. For this reason, the generating functions in the elementary gauge transformation operators TD and TI must be chosen in a very particular way, determined by the constraints on the Lax operator. Finally, we consider the successive application of n steps of TD and k steps of TI gauge transformations.

  2. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in the stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.

  3. Cruise performance and range prediction reconsidered

    NASA Astrophysics Data System (ADS)

    Torenbeek, Egbert

    1997-05-01

    A unified analytical treatment of the cruise performance of subsonic transport aircraft is derived, valid for gas turbine powerplant installations: turboprop, turbojet and turbofan powered aircraft. Unlike the classical treatment, the present article accounts for compressibility effects on the aerodynamic characteristics. Analytical criteria are derived for the optimum cruise lift coefficient and Mach number, with and without constraints on the altitude and engine rating. A simple alternative to the Bréguet range equation is presented which applies to several practical cruising flight techniques: flight at constant altitude and Mach number, and stepped cruise/climb. A practical non-iterative procedure for computing mission and reserve fuel loads in the preliminary design stage is proposed.
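
    For reference, the classical Bréguet relation to which the article proposes an alternative can be evaluated directly; the sketch below uses the standard jet-aircraft form R = (V/c_T)(L/D) ln(W1/W2), and every numerical input is an assumed, illustrative value.

    ```python
    import math

    def breguet_range_jet(V, c_T, L_over_D, W1, W2):
        """Classical Bréguet range for jet aircraft, R = (V/c_T)(L/D) ln(W1/W2).
        V: cruise speed [m/s]; c_T: thrust-specific fuel consumption expressed
        as fuel weight flow per unit thrust [1/s]; W1, W2: initial and final
        aircraft weights [N]."""
        return (V / c_T) * L_over_D * math.log(W1 / W2)

    # Illustrative (assumed) numbers for a mid-size twinjet at Mach 0.8, 11 km:
    R = breguet_range_jet(V=236.0, c_T=1.67e-4, L_over_D=16.0,
                          W1=70e4, W2=56e4)      # weights in newtons
    print(f"Bréguet range: {R / 1000.0:.0f} km")  # roughly 5,000 km here
    ```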

  4. Neonatal Atlas Construction Using Sparse Representation

    PubMed Central

    Shi, Feng; Wang, Li; Wu, Guorong; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang

    2014-01-01

    Atlas construction generally includes first an image registration step to normalize all images into a common space and then an atlas building step to fuse the information from all the aligned images. Although numerous atlas construction studies have been performed to improve the accuracy of the image registration step, an unweighted or simply weighted average is often used in the atlas building step. In this article, we propose a novel patch-based sparse representation method for atlas construction after all images have been registered into the common space. By taking advantage of local sparse representation, more anatomical details can be recovered in the built atlas. To make the anatomical structures spatially smooth in the atlas, anatomical feature constraints on the group structure of representations, together with the overlapping of neighboring patches, are imposed to ensure anatomical consistency between neighboring patches. The proposed method has been applied to 73 neonatal MR images with poor spatial resolution and low tissue contrast, for constructing a neonatal brain atlas with sharp anatomical details. Experimental results demonstrate that the proposed method can significantly enhance the quality of the constructed atlas by discovering more anatomical details, especially in the highly convoluted cortical regions. The resulting atlas demonstrates superior performance when used to spatially normalize three different neonatal datasets, compared with other state-of-the-art neonatal brain atlases. PMID:24638883

  5. Feasibility of intra-acquisition motion correction for 4D DSA reconstruction for applications in the thorax and abdomen

    NASA Astrophysics Data System (ADS)

    Wagner, Martin; Laeseke, Paul; Harari, Colin; Schafer, Sebastian; Speidel, Michael; Mistretta, Charles

    2018-03-01

    The recently proposed 4D DSA technique enables reconstruction of time resolved 3D volumes from two C-arm CT acquisitions. This provides information on the blood flow in neurovascular applications and can be used for the diagnosis and treatment of vascular diseases. For applications in the thorax and abdomen, respiratory motion can prevent successful 4D DSA reconstruction and cause severe artifacts. The purpose of this work is to propose a novel technique for motion compensated 4D DSA reconstruction to enable applications in the thorax and abdomen. The approach uses deformable 2D registration to align the projection images of a non-contrast and a contrast enhanced scan. A subset of projection images acquired in a similar respiratory state is then selected, and an iterative simultaneous multiplicative algebraic reconstruction is applied to determine a 3D constraint volume. A 2D-3D registration step then aligns the remaining projection images with the 3D constraint volume. Finally, a constrained back-projection is performed to create a 3D volume for each projection image. A pig study was conducted in which 4D DSA acquisitions were performed with and without respiratory motion to evaluate the feasibility of the approach. The dice similarity coefficient between the reference 3D constraint volume and the motion compensated reconstruction was 51.12 % compared to 35.99 % without motion compensation. This technique could improve the workflow for procedures in interventional radiology, e.g. liver embolizations, where changes in blood flow have to be monitored carefully.
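
    The evaluation metric quoted above is the standard Dice similarity coefficient; a minimal sketch of its computation on binary masks follows (the generic definition, not the authors' code).

    ```python
    import numpy as np

    def dice_coefficient(a, b):
        """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
        a = a.astype(bool)
        b = b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Toy volumes: two partially overlapping cubes stand in for vessel masks.
    ref = np.zeros((64, 64, 64), dtype=bool); ref[20:40, 20:40, 20:40] = True
    test = np.zeros_like(ref);                test[24:44, 20:40, 20:40] = True
    print(f"DSC = {dice_coefficient(ref, test):.2%}")
    ```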

  6. The synchronisation of lower limb responses with a variable metronome: the effect of biomechanical constraints on timing.

    PubMed

    Chen, Hui-Ya; Wing, Alan M; Pratt, David

    2006-04-01

    Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, expressed in terms of the temporal difference between the response and the metronome pulse (the asynchrony), and the standard deviations of the asynchronies and inter-response intervals during steady-state synchronisation. The speed of compensation decreased as the demands of maintaining balance increased. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
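
    The first-order linear correction account mentioned in the discussion can be made concrete with a small simulation: each asynchrony is reduced by a fraction alpha on the next response, so a metronome phase shift decays geometrically, and slower compensation corresponds to a smaller alpha. All parameter values are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_phase_correction(alpha, n_taps=40, shift_at=20, shift_ms=50.0,
                                  noise_sd=10.0):
        """First-order linear correction: each asynchrony is reduced by a
        fraction alpha on the next tap.  A metronome phase shift at 'shift_at'
        perturbs the asynchrony, which then decays as (1 - alpha)**n."""
        A = np.zeros(n_taps)
        for n in range(1, n_taps):
            A[n] = (1.0 - alpha) * A[n - 1] + rng.normal(0.0, noise_sd)
            if n == shift_at:
                A[n] += shift_ms  # the shifted pulse perturbs the asynchrony
        return A

    # Higher alpha means faster compensation; the study found slower
    # compensation as balance demands grew, i.e. effectively smaller alpha.
    for alpha in (0.2, 0.5, 0.8):
        tail = simulate_phase_correction(alpha)[21:26].round(1)
        print(f"alpha={alpha}: asynchronies after shift -> {tail}")
    ```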

  7. Experimental tests and radiometric calculations for the feasibility of fluorescence LIDAR-based discrimination of oil spills from UAV

    NASA Astrophysics Data System (ADS)

    Raimondi, Valentina; Palombi, Lorenzo; Lognoli, David; Masini, Andrea; Simeone, Emilio

    2017-09-01

    This paper presents experimental tests and radiometric calculations assessing the feasibility of an ultra-compact fluorescence LIDAR operated from an Unmanned Air Vehicle (UAV) for the characterisation of oil spills in natural waters. The first step of this study was to define the experimental conditions for such a LIDAR and its budget constraints on the basis of the specifications of small UAVs already available on the market. The second step consisted of a set of fluorescence LIDAR measurements on oil spills in the laboratory, carried out in order to propose a simplified discrimination method and to calculate the oil fluorescence conversion efficiency. Lastly, the main technical specifications of the payload were defined and radiometric calculations were carried out to evaluate the performance of both the payload and the proposed discrimination method.
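
    A back-of-the-envelope version of such radiometric calculations can be sketched with a simplified fluorescence lidar photon budget; every numerical value below is an assumption for illustration, not a figure from the study.

    ```python
    import math

    def fluorescence_photons(E_pulse, lam_nm, eta_fl, A_tel, R, T_atm=0.9,
                             eta_opt=0.5):
        """Very simplified lidar photon budget: emitted photons times the
        fluorescence conversion efficiency, collected over the telescope solid
        angle A / (4*pi*R**2), with two-way atmospheric and optics losses."""
        h, c = 6.626e-34, 3.0e8
        n_emitted = E_pulse / (h * c / (lam_nm * 1e-9))
        solid_angle_frac = A_tel / (4.0 * math.pi * R ** 2)
        return n_emitted * eta_fl * solid_angle_frac * T_atm ** 2 * eta_opt

    # Assumed: 1 mJ pulse at 355 nm, 1e-4 conversion efficiency,
    # 5 cm aperture, 100 m range from a small UAV.
    n = fluorescence_photons(E_pulse=1e-3, lam_nm=355.0, eta_fl=1e-4,
                             A_tel=math.pi * 0.025 ** 2, R=100.0)
    print(f"detected fluorescence photons per pulse ~ {n:.0f}")
    ```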

  8. A Gauge Invariant Description for the General Conic Constrained Particle from the FJBW Iteration Algorithm

    NASA Astrophysics Data System (ADS)

    Barbosa, Gabriel D.; Thibes, Ronaldo

    2018-06-01

    We consider a second-degree algebraic curve describing a general conic constraint imposed on the motion of a massive spinless particle. The problem is trivial at the classical level but becomes involved and interesting in its quantum counterpart, with subtleties in its symplectic structure and symmetries. We start with a second-class version of the general conic constrained particle, which encompasses previous versions of circular and elliptical paths discussed in the literature. By applying the symplectic FJBW iteration program, we proceed to show how a gauge invariant version of the model can be achieved from the originally second-class system. We pursue the complete constraint analysis in phase space and perform the Faddeev-Jackiw symplectic quantization following the Barcelos-Wotzasek iteration program to unravel the essential aspects of the constraint structure. While in the standard Dirac-Bergmann approach there are four second-class constraints, in the FJBW approach they reduce to two. By using the symplectic potential obtained in the last step of the FJBW iteration process, we construct a gauge invariant model exhibiting explicitly its BRST symmetry. We obtain the quantum BRST charge and write the Green functions generator for the gauge invariant version. Our results reproduce and neatly generalize the known BRST symmetry of the rigid rotor, clearly showing that the latter constitutes a particular case of a broader class of theories.

  9. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    NASA Astrophysics Data System (ADS)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for the constraint potentials, density and temperature, which makes it possible to account for mixing alongside chemical reaction without operator splitting. RCCE is a dimension-reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by the RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. RCCE is applied to a spatially homogeneous, constant-pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprising 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed the detailed kinetics model (DKM). RCCE shows accurate prediction of combustion in the PaSR for different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
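
    The core RCCE computation, maximizing entropy subject to the instantaneous constraint values, can be illustrated on a toy problem. The sketch below maximizes an ideal mixing entropy for three hypothetical species under linear constraints; a real RCCE solve would also involve species Gibbs energies, which are omitted here.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy constrained-equilibrium step: maximize the mixing entropy
    # S = -sum(x * ln x) subject to fixed linear constraint values A @ x = c
    # (normalization plus one hypothetical atom-count constraint).
    A = np.array([[1.0, 1.0, 1.0],    # mole fractions sum to one
                  [0.0, 1.0, 2.0]])   # hypothetical constraint (atom count)
    c = np.array([1.0, 0.8])

    def neg_entropy(x):
        x = np.clip(x, 1e-12, None)   # keep the logarithm well defined
        return np.sum(x * np.log(x))

    res = minimize(neg_entropy, x0=np.full(3, 1.0 / 3.0), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq", "fun": lambda x: A @ x - c}])
    print("constrained-equilibrium composition:", res.x.round(4))
    ```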

  10. Real-time inextensible surgical thread simulation.

    PubMed

    Xu, Lang; Liu, Qian

    2018-03-27

    This paper discusses a real-time simulation method for inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while capturing inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Owing to the one-dimensional linear structure of surgical thread, a direct solution of the distance constraints based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. In comparisons with existing methods, the surgical thread maintains constant length under large deformation after applying the direct distance constraint in our method, and the twining and knotting of multiple threads yield stable solutions for contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
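
    For readers unfamiliar with PBD distance projection, the sketch below shows the textbook Gauss-Seidel variant for an inextensible chain. The paper replaces this sweep with a direct tridiagonal solve over all chain constraints, which this simplified example does not reproduce.

    ```python
    import numpy as np

    def project_distance_constraints(p, rest_len, inv_mass, iterations=20):
        """Gauss-Seidel PBD projection enforcing |p[i+1] - p[i]| = rest_len.
        Particles with inv_mass 0 are pinned and never move."""
        for _ in range(iterations):
            for i in range(len(p) - 1):
                d = p[i + 1] - p[i]
                dist = np.linalg.norm(d)
                w_sum = inv_mass[i] + inv_mass[i + 1]
                if dist < 1e-12 or w_sum == 0.0:
                    continue
                corr = (dist - rest_len) / (dist * w_sum) * d
                p[i] += inv_mass[i] * corr
                p[i + 1] -= inv_mass[i + 1] * corr
        return p

    # A 10-segment thread, initially over-stretched, pinned at one end.
    pts = np.cumsum(np.tile([0.12, 0.0, 0.0], (11, 1)), axis=0)
    inv_m = np.ones(11)
    inv_m[0] = 0.0
    pts = project_distance_constraints(pts, 0.1, inv_m)
    print(np.linalg.norm(np.diff(pts, axis=0), axis=1).round(3))  # ~0.1 each
    ```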

  11. Using system-of-systems simulation modeling and analysis to measure energy KPP impacts for brigade combat team missions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Welch, Kimberly M.; Kerper, Jessica

    2010-06-01

    The Department of Defense's (DoD) Energy Posture identified the dependence of the US military on fossil fuel energy as a key issue facing the military. Inefficient energy consumption leads to increased costs and affects operational performance and warfighter protection through large and vulnerable logistics support infrastructures. The military's use of energy is a critical national security problem. DoD's proposed metrics, the Fully Burdened Cost of Fuel and the Energy Efficiency Key Performance Parameter (FBCF and Energy KPP), are a positive step toward forcing energy-use accountability onto military programs. The ability to measure the impacts of sustainment is required to fully measure the Energy KPP. Sandia's work with the Army demonstrates the capability to measure performance in a way that includes energy constraints.

  12. Active Semi-Supervised Community Detection Based on Must-Link and Cannot-Link Constraints

    PubMed Central

    Cheng, Jianjun; Leng, Mingwei; Li, Longjie; Zhou, Hanhai; Chen, Xiaoyun

    2014-01-01

    Community structure detection is of great importance because it can help in discovering the relationship between the function and the topology structure of a network. Many community detection algorithms have been proposed, but how to incorporate prior knowledge in the detection process remains a challenging problem. In this paper, we propose a semi-supervised community detection algorithm, which makes full use of must-link and cannot-link constraints to guide the process of community detection and thereby extracts high-quality community structures from networks. To acquire high-quality must-link and cannot-link constraints, we also propose a semi-supervised component generation algorithm based on active learning, which actively selects nodes with maximum utility for the proposed semi-supervised community detection algorithm step by step, and then generates the must-link and cannot-link constraints by accessing a noiseless oracle. Extensive experiments were carried out, and the experimental results show that the introduction of active learning into the problem of community detection is successful. Our proposed method can extract high-quality community structures from networks and significantly outperforms the other methods compared. PMID:25329660
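
    The must-link/cannot-link semantics can be made concrete with a COP-KMeans-style feasibility check; this is a generic illustration, whereas the paper embeds the constraints in community detection rather than in k-means.

    ```python
    def violates_constraints(node, cluster, assignment, must_link, cannot_link):
        """Return True if placing 'node' into 'cluster' breaks any pairwise
        constraint, given a partial 'assignment' dict {node: cluster}."""
        for a, b in must_link:
            other = b if a == node else a if b == node else None
            if other is not None and other in assignment \
                    and assignment[other] != cluster:
                return True  # must-link partner sits in a different cluster
        for a, b in cannot_link:
            other = b if a == node else a if b == node else None
            if other is not None and assignment.get(other) == cluster:
                return True  # cannot-link partner already in this cluster
        return False

    assignment = {"u1": 0, "u2": 1}
    print(violates_constraints("u3", 0, assignment,
                               must_link=[("u3", "u2")], cannot_link=[]))  # True
    ```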

  13. Quantitative Susceptibility Mapping using Structural Feature based Collaborative Reconstruction (SFCR) in the Human Brain

    PubMed Central

    Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.

    2017-01-01

    The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can give erroneous estimates in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM that includes both magnitude- and susceptibility-based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both structure edges and tiny features to be recovered while reducing noise and artifacts. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map produced by the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed for multiple head positions, with the supine position agreeing most closely with the gold-standard COSMOS result. PMID:27019480

  14. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels is compared with a problem with several severe constraints and with a problem whose constraint thresholds are less severe. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single-year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine whether the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity in the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.

  15. Kinematic Constraints Associated with the Acquisition of Overarm Throwing Part I: Step and Trunk Actions

    ERIC Educational Resources Information Center

    Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.

    2006-01-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine…

  16. Applying high-throughput methods to develop a purification process for a highly glycosylated protein.

    PubMed

    Sanaie, Nooshafarin; Cecchini, Douglas; Pieracci, John

    2012-10-01

    Micro-scale chromatography formats are becoming more routinely used in purification process development because of their ability to rapidly screen a large number of process conditions at a time with minimal material. Given the usual constraints on development timelines and resources, these systems can provide a means to maximize process knowledge and process robustness compared to traditional packed-column formats. In this work, a high-throughput, 96-well filter plate format was used in the development of the cation exchange and hydrophobic interaction chromatography steps of a purification process designed to alter the glycoform distribution of a small protein. The significant input parameters affecting process performance were rapidly determined for both steps, and preliminary operating conditions were identified. These ranges were verified in a packed chromatography column in order to assess the ability of the 96-well plate to predict packed-column performance. In both steps, the 96-well plate format consistently underestimated glycoform-enrichment levels and overestimated product recovery rates compared to the column-based approach. These studies demonstrate that the plate format can be used as a screening tool to narrow the operating ranges prior to further optimization on packed chromatography columns. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Reconstruction of sparse-view X-ray computed tomography using adaptive iterative algorithms.

    PubMed

    Liu, Li; Lin, Weikai; Jin, Mingwu

    2015-01-01

    In this paper, we propose two reconstruction algorithms for sparse-view X-ray computed tomography (CT). Treating the reconstruction problem as data-fidelity-constrained total variation (TV) minimization, both algorithms adopt an alternating two-stage strategy: projection onto convex sets (POCS) for the data fidelity and non-negativity constraints, and steepest descent for TV minimization. The novelty of this work is to determine the iterative parameters automatically from the data, thus avoiding tedious manual parameter tuning. In TV minimization, the step sizes of steepest descent are adaptively adjusted according to the difference from the POCS update in either the projection domain or the image domain, while the step size of the algebraic reconstruction technique (ART) in POCS is determined based on the data noise level. In addition, projection errors are compared with an error bound to decide whether to perform ART, so as to reduce computational costs. The performance of the proposed methods is studied and evaluated using both simulated and physical phantom data. Our methods with automatic parameter tuning achieve similar, if not better, reconstruction performance compared to a representative two-stage algorithm. Copyright © 2014 Elsevier Ltd. All rights reserved.
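
    The alternating two-stage strategy can be sketched in one dimension: an ART/POCS pass enforces data fidelity and non-negativity, then TV steepest descent runs with a step size scaled to the magnitude of the POCS update, which is the adaptive ingredient described above. This is a generic ASD-POCS-style illustration, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def tv_grad(x, eps=1e-8):
        """Gradient of the smoothed 1-D total variation sum |x[i+1] - x[i]|."""
        d = np.diff(x)
        s = d / np.sqrt(d * d + eps)
        g = np.zeros_like(x)
        g[:-1] -= s
        g[1:] += s
        return g

    def asd_pocs_1d(A, b, n_iter=50, n_tv=10, tv_frac=0.2):
        """Alternate an ART/POCS data-fidelity pass with TV steepest descent,
        tying the TV step size to the size of the preceding POCS update."""
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x_prev = x.copy()
            for i in range(A.shape[0]):          # ART: row-wise projections
                ai = A[i]
                x += (b[i] - ai @ x) / (ai @ ai) * ai
            x = np.maximum(x, 0.0)               # non-negativity constraint
            dp = np.linalg.norm(x - x_prev)      # magnitude of POCS update
            for _ in range(n_tv):                # TV minimization stage
                g = tv_grad(x)
                ng = np.linalg.norm(g)
                if ng > 0:
                    x -= tv_frac * dp * g / ng   # step tied to POCS change
        return x

    A = rng.normal(size=(30, 60))                # underdetermined "sparse view"
    x_true = np.zeros(60); x_true[20:35] = 1.0   # piecewise-constant phantom
    x_rec = asd_pocs_1d(A, A @ x_true)
    print("relative error:",
          np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```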

  18. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimation of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turnaround times, to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular, this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities, while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace, along with projected improvements to airframe, engine and navigational equipment.

  19. Data fusion for target tracking and classification with wireless sensor network

    NASA Astrophysics Data System (ADS)

    Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2016-10-01

    In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to satisfy embedded processing. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles moving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve the performance of tracking under constraints with classification fusion. Our algorithm also includes multiple dynamic models to handle target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real exercise for an intelligence operation ("hunter hunt" scenario).

  20. A distributed model predictive control scheme for leader-follower multi-agent systems

    NASA Astrophysics Data System (ADS)

    Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco

    2018-02-01

    In this paper, we present a novel receding horizon control scheme for solving the formation problem of leader-follower configurations. The algorithm is based on set-theoretic ideas and is tuned for agents described by linear time-invariant (LTI) systems subject to input and state constraints. The novelty of the proposed framework lies in the capability to jointly use sequences of one-step controllable sets and polyhedral piecewise state-space partitions in order to apply online the 'better' control action in a distributed receding horizon fashion. Moreover, we prove that the design of both robust positively invariant sets and one-step-ahead controllable regions is achieved in a distributed sense. Simulations and numerical comparisons with respect to centralised and local-based strategies are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed control strategy.

  1. Data on the configuration design of internet-connected home cooling systems by engineering students.

    PubMed

    McComb, Christopher; Cagan, Jonathan; Kotovsky, Kenneth

    2017-10-01

    This experiment was carried out to record the step-by-step actions that humans take in solving a configuration design problem, either in small teams or individually. Specifically, study participants were tasked with configuring an internet-connected system of products to maintain temperature within a home, subject to cost constraints. Every participant was given access to a computer-based design interface that allowed them to construct and assess solutions. The interface was also used to record the data that is presented here. In total, data was collected for 68 participants, and each participant was allowed to perform 50 design actions in solving the configuration design problem. Major results based on the data presented here have been reported separately, including initial behavioral analysis (McComb et al.) [1], [2] and design pattern assessments via Markovian modeling (McComb et al., 2017; McComb et al., 2017) [3], [4].

  2. Analysis of two-equation turbulence models for recirculating flows

    NASA Technical Reports Server (NTRS)

    Thangam, S.

    1991-01-01

    The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.
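
    The recalibration referred to follows from the textbook result that, for decaying isotropic turbulence, the standard kappa-epsilon model yields k ~ t^(-n) with n = 1/(C_eps2 - 1), so the measured decay exponent fixes the constant. A two-line check, with illustrative decay exponents:

    ```python
    # For decaying isotropic turbulence the standard k-epsilon model gives
    # k ~ t**(-n) with n = 1 / (C_eps2 - 1); the decay exponent fixes C_eps2.
    def c_eps2_from_decay_exponent(n):
        return 1.0 + 1.0 / n

    for n in (1.1, 1.2, 1.3):  # illustrative range of measured exponents
        print(f"n = {n}: C_eps2 = {c_eps2_from_decay_exponent(n):.3f}")
    # The standard value 1.92 corresponds to n of about 1.09; matching a
    # decay exponent near 1.2 lowers C_eps2 to roughly 1.83, the kind of
    # recalibration the abstract refers to.
    ```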

  3. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that the split-step backward Euler method preserves mean-square stability without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
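
    The step-size constraint for Euler-Maruyama is easiest to see on the classical scalar linear test equation dX = lambda*X dt + mu*X dW, a simpler problem than the delay integro-differential class studied here, for which the method is mean-square stable exactly when (1 + h*lambda)^2 + h*mu^2 < 1:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Euler-Maruyama on dX = lam*X dt + mu*X dW; the exact solution is
    # mean-square stable because 2*lam + mu**2 < 0.
    lam, mu = -3.0, 1.0

    def em_second_moment(h, n_steps=200, n_paths=20000):
        x = np.ones(n_paths)
        for _ in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(h), n_paths)
            x = x + h * lam * x + mu * x * dw
        return np.mean(x ** 2)

    for h in (0.1, 0.6):  # the first satisfies the constraint, the second not
        factor = (1.0 + h * lam) ** 2 + h * mu ** 2
        print(f"h={h}: stability factor = {factor:.2f}, "
              f"E[X^2] ~ {em_second_moment(h):.2e}")
    ```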

  4. Molecular Dynamics Simulations of Hydrophobic Residues

    NASA Astrophysics Data System (ADS)

    Caballero, Diego; Zhou, Alice; Regan, Lynne; O'Hern, Corey

    2013-03-01

    Molecular recognition and protein-protein interactions are involved in important biological processes. However, despite recent improvements in computational methods for protein design, we still lack a predictive understanding of protein structure and interactions. To begin to address these shortcomings, we performed molecular dynamics simulations of hydrophobic residues modeled as hard spheres with stereo-chemical constraints initially at high temperature, and then quenched to low temperature to obtain local energy minima. We find that there is a range of quench rates over which the probabilities of side-chain dihedral angles for hydrophobic residues match the probabilities obtained for known protein structures. In addition, we predict the side-chain dihedral angle propensities in the core region of the proteins T4, ROP, and several mutants. These studies serve as a first step in developing the ability to quantitatively rank the energies of designed protein constructs. The success of these studies suggests that only hard-sphere dynamics with geometrical constraints are needed for accurate protein structure prediction in hydrophobic cavities and binding interfaces. NSF Grant PHY-1019147

  5. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.

  6. Measuring Constraint-Set Utility for Partitional Clustering Algorithms

    NASA Technical Reports Server (NTRS)

    Davidson, Ian; Wagstaff, Kiri L.; Basu, Sugato

    2006-01-01

    Clustering with constraints is an active area of machine learning and data mining research. Previous empirical work has convincingly shown that adding constraints to clustering improves the performance of a variety of algorithms. However, in most of these experiments, results are averaged over different randomly chosen constraint sets from a given set of labels, thereby masking interesting properties of individual sets. We demonstrate that constraint sets vary significantly in how useful they are for constrained clustering; some constraint sets can actually decrease algorithm performance. We create two quantitative measures, informativeness and coherence, that can be used to identify useful constraint sets. We show that these measures can also help explain differences in performance for four particular constrained clustering algorithms.

  7. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and to estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, the heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints of pedestrian gait and of any other valuable heading-error-reducing information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensor"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed, which incorporates only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
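
    A standard stance-phase (zero-velocity) detector of the kind that triggers ZUPT measurement updates can be sketched as follows; the thresholds and window length are illustrative assumptions, not the paper's values.

    ```python
    import numpy as np

    def zupt_detector(acc, gyro, g=9.81, acc_tol=0.4, gyro_tol=0.5, win=5):
        """Flag stance-phase samples: specific-force magnitude close to gravity
        and low angular rate, sustained over 'win' consecutive samples."""
        acc_ok = np.abs(np.linalg.norm(acc, axis=1) - g) < acc_tol
        gyro_ok = np.linalg.norm(gyro, axis=1) < gyro_tol
        still = (acc_ok & gyro_ok).astype(float)
        return np.convolve(still, np.ones(win), mode="same") >= win

    rng = np.random.default_rng(4)
    acc = np.tile([0.0, 0.0, 9.81], (100, 1)) + rng.normal(0.0, 0.05, (100, 3))
    gyro = rng.normal(0.0, 0.02, (100, 3))
    # During flagged samples the EKF receives a zero-velocity pseudo-measurement,
    # which bounds the velocity error and helps make sensor biases observable.
    print(f"{zupt_detector(acc, gyro).mean():.0%} of samples flagged as stance")
    ```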

  8. Advanced timeline systems

    NASA Technical Reports Server (NTRS)

    Bulfin, R. L.; Perdue, C. A.

    1994-01-01

    The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource constrained scheduling. For each experiment, a computer model with detailed information of the steps involved in running the experiment, including crew requirements, processing times, and resource requirements, is created. These models are then loaded into the Experiment Scheduling Program (ESP), which attempts to create a schedule that satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). When there are two criteria, schedules which perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion, and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by solving an appropriate optimization problem. Generally there are two approaches: the first is a hierarchical approach while the second requires optimizing a weighting or scoring function.
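
    The dominance notion described above translates directly into code; a minimal sketch for criterion vectors to be minimized (the criteria names in the demo are hypothetical):

    ```python
    def dominates(a, b):
        """True if score vector 'a' dominates 'b' (all criteria minimized):
        no worse on every criterion and strictly better on at least one."""
        return all(x <= y for x, y in zip(a, b)) and \
               any(x < y for x, y in zip(a, b))

    def nondominated(schedules):
        """Filter a list of criterion vectors down to the nondominated set."""
        return [s for s in schedules
                if not any(dominates(t, s) for t in schedules if t is not s)]

    scores = [(3, 7), (4, 6), (5, 5), (4, 8)]  # (resource violations, crew idle)
    print(nondominated(scores))                # -> [(3, 7), (4, 6), (5, 5)]
    ```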

  9. In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle

    PubMed Central

    Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin

    2016-01-01

    Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing the ATP and actin binding sites, and mechanical elements coupling the motor impulse to the myosin filament backbone, providing transduction and mechanical coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. The linear actin displacement due to lever-arm rotation is the myosin step-size. A high-throughput quantum dot labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant-velocity constraint for myosins interacting with one actin filament. In a cardiac myosin producing multiple step-sizes, a "second characterization" is step-frequency, which adjusts a longer step-size to a lower frequency so as to maintain a linear actin velocity identical to that from a shorter step-size, higher frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single myosin step-size and step-frequency assaying in vivo, combining single-myosin mechanical and whole-muscle physiological characterizations in one model organism. The Qdot and Z assays cover "bottom-up" and "top-down" assaying of myosin characteristics. PMID:26728749

  10. High-throughput screening of chromatographic separations: IV. Ion-exchange.

    PubMed

    Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon

    2008-08-01

    Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.

  11. RNAiFOLD: a constraint programming algorithm for RNA inverse folding and molecular design.

    PubMed

    Garcia-Martin, Juan Antonio; Clote, Peter; Dotu, Ivan

    2013-04-01

    Synthetic biology is a rapidly emerging discipline with long-term ramifications that range from single-molecule detection within cells to the creation of synthetic genomes and novel life forms. Truly phenomenal results have been obtained by pioneering groups--for instance, the combinatorial synthesis of genetic networks, genome synthesis using BioBricks, and hybridization chain reaction (HCR), in which stable DNA monomers assemble only upon exposure to a target DNA fragment, biomolecular self-assembly pathways, etc. Such work strongly suggests that nanotechnology and synthetic biology together seem poised to constitute the most transformative development of the 21st century. In this paper, we present a Constraint Programming (CP) approach to solve the RNA inverse folding problem. Given a target RNA secondary structure, we determine an RNA sequence which folds into the target structure; i.e. whose minimum free energy structure is the target structure. Our approach represents a step forward in RNA design--we produce the first complete RNA inverse folding approach which allows for the specification of a wide range of design constraints. We also introduce a Large Neighborhood Search approach which allows us to tackle larger instances at the cost of losing completeness, while retaining the advantages of meeting design constraints (motif, GC-content, etc.). Results demonstrate that our software, RNAiFold, performs as well or better than all state-of-the-art approaches; nevertheless, our approach is unique in terms of completeness, flexibility, and the support of various design constraints. The algorithms presented in this paper are publicly available via the interactive webserver http://bioinformatics.bc.edu/clotelab/RNAiFold; additionally, the source code can be downloaded from that site.

  12. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility, while its speed, stability and accuracy also meet the requirements of a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of the elastic force. The elastic force is then used in the constraint projection step to naturally reduce the constraint potential. The node position is updated under the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integration method. Our approach enables stable, fast and large-step simulation while freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870

  14. The MVM imaging system and its spacecraft interactions. [Mariner Venus/Mercury TV system performance

    NASA Technical Reports Server (NTRS)

    Vescelus, F. E.

    1975-01-01

    The present work describes the main considerations and steps taken in determining the functional design of the imaging system of the Mariner Venus/Mercury (MVM) spacecraft and gives examples of some of the interactions between the spacecraft and the imaging instrument during the design and testing phases. Stringent cost and scheduling constraints dictated the use of the previous Mariner 9 dual-camera TV system. The TV parameters laid the groundwork for the imaging definition. Based on the flyby distances from Venus and Mercury and the goal of surface resolution better than 500 meters per sample pair, calculations were performed for the focal length, format size, planetary coverage, and data rates. Some problems encountered in initial mechanical operation and as a result of spacecraft drift during the mission are also discussed.

  15. Nonlinear robust controller design for multi-robot systems with unknown payloads

    NASA Technical Reports Server (NTRS)

    Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.

    1992-01-01

    This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.

  16. Direct Sensor Orientation of a Land-Based Mobile Mapping System

    PubMed Central

    Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua

    2011-01-01

    A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road-environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This dependence is the major drawback, owing to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as on the validity of the system calibration (i.e., calibration of the individual sensors as well as of the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with a relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the mounting parameters estimated using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system, using the single-step system calibration method, can achieve high 3D positioning accuracy. PMID:22164015

  17. An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.

    2017-01-01

    In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. With linearization of the rotation matrix, the alignment is modeled as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that, using our method, the dental models can be successfully articulated with only small deviations from the occlusion achieved with the gold-standard method. PMID:20529735
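
    The point-matching initial positioning in the first step is typically done with a rigid least-squares fit; the sketch below shows the standard Kabsch algorithm on corresponding landmarks (a generic method, not necessarily the authors' exact matcher).

    ```python
    import numpy as np

    def kabsch_align(P, Q):
        """Best-fit rotation R and translation t mapping point set P onto Q
        in the least-squares sense, given one-to-one correspondences."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        return R, cQ - R @ cP

    # Toy check: recover a known rigid motion from matched landmark points.
    rng = np.random.default_rng(6)
    P = rng.normal(size=(12, 3))
    theta = 0.4
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
    R, t = kabsch_align(P, Q)
    print(np.allclose(P @ R.T + t, Q))  # True: landmarks aligned
    ```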

  18. Balanced sections and the propagation of décollement: A Jura perspective

    NASA Astrophysics Data System (ADS)

    Laubscher, Hans

    2003-12-01

    The propagation of thrusting is an important problem in tectonics that is usually approached by forward (kinematical) modeling of balanced sections. Although modeling techniques are similar in most foreland fold-thrust belts, it turns out that in the Jura, there are modeling problems that require modifications of widely used techniques. In particular, attention is called to the role of model constraints that complement the set of observational constraints in order to fully define the model. In the eastern Jura, such model constraints may be inferred from the regional geology, which shows a peculiar noncoaxial relation between thrusts and subsequent folds. This relation implies changes in the direction of translation and the mode of deformation in the course of the propagation of décollement. These changes are conjectured to be the result of a change in partial decoupling between the thin-skinned fold-thrust system (nappe) and the obliquely subducted foreland. As a particularly instructive case in point, a cross section through the Weissenstein range is discussed. A two-step forward (kinematical) model is proposed that uses both local observational constraints as well as model constraints inferred from regional data. As a first step, a fault bend fold is generated in the hanging wall of a thrust of 1500 m shortening. As a second step, this structure is transferred by flexural slip into the actual fold observed at the surface. This requires an additional 1600 m of shortening and leads to folding of the original thrust. Thereafter, the footwall is deformed so as to respect the constraint that this deformation must fit into the space defined by the folded thrust as the upper boundary and the décollement surface as the lower boundary, and that, in addition, should be confined to the area immediately below the fold. In modeling the footwall deformation a mix of balancing methods is used: fault propagation folds for the competent intervals of the stratigraphic column and area balancing for the incompetent ones. Further propagation of décollement into the foreland is made possible by the folding process, which is dominated by a sort of kinking and which is the main contribution to structural elevation and hence to producing a sort of critical taper of the moving thin-skinned wedge.

  19. Computationally optimized ECoG stimulation with local safety constraints.

    PubMed

    Guler, Seyhmus; Dannhauer, Moritz; Roig-Solvas, Biel; Gkogkidis, Alexis; Macleod, Rob; Ball, Tonio; Ojemann, Jeffrey G; Brooks, Dana H

    2018-06-01

    Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation, in which a dense array of electrodes is placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and the individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and a third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots, based solely on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.
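
    The two-step hot-spot strategy amounts to constraint generation: solve with only the global constraints, check the local current densities, and add norm constraints only where violations appear. A minimal sketch of that loop, assuming a precomputed linear map J (from a forward FE model) sending electrode currents to the 3-component current density at each brain node, and a matrix D giving the directional current in the ROI; all names and shapes are illustrative, not the paper's code.

        import numpy as np
        import cvxpy as cp

        def optimize_stimulus(J, D, i_total, i_elec, j_max, n_nodes):
            # J: (3*n_nodes, n_elec) maps electrode currents to nodal current density;
            # D: (n_roi, n_elec) maps electrode currents to ROI directional current.
            n_elec = J.shape[1]
            w = cp.Variable(n_elec)
            cons = [cp.sum(w) == 0,               # injected and returned currents balance
                    cp.norm(w, 1) <= 2 * i_total, # ||w||_1 = 2 x total injected current
                    cp.norm(w, "inf") <= i_elec]  # per-electrode bound
            while True:
                prob = cp.Problem(cp.Maximize(cp.sum(D @ w)), cons)
                prob.solve()
                dens = (J @ w.value).reshape(n_nodes, 3)
                mags = np.linalg.norm(dens, axis=1)
                hot = np.argsort(mags)[::-1]
                hot = hot[mags[hot] > j_max * (1 + 1e-6)]
                if hot.size == 0:
                    return w.value                # no hot-spots remain anywhere
                for v in hot[:20]:                # add constraints at worst offenders
                    cons.append(cp.norm(J[3*v:3*v+3, :] @ w, 2) <= j_max)

    Because each added constraint is a small second-order cone, the problem stays tractable even though the full constraint set over every node never has to be instantiated.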

  20. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

    This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. Therefore, the rendezvous process naturally includes two steps: the first step is to transfer the chaser into the LOS cone and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework named Mixed MPC (M-MPC) is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than artificially separated, and its computational workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints are satisfied with a prescribed probability. It improves on the robust technique proposed by Gavilan et al. because it eliminates unnecessary conservativeness by explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.
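
    The chance-constrained idea can be illustrated by the standard Gaussian tightening of a linear constraint: requiring Pr(aᵀx ≤ b) ≥ 1 − ε under additive Gaussian uncertainty with covariance Σ is equivalent to the deterministic constraint aᵀx ≤ b − Φ⁻¹(1−ε)·sqrt(aᵀΣa). A small sketch of that back-off computation, not the paper's exact formulation (which is tied to the rendezvous dynamics), with illustrative numbers:

        import numpy as np
        from scipy.stats import norm

        def tightened_bound(a, b, Sigma, eps):
            # Deterministic right-hand side enforcing Pr(a^T x <= b) >= 1 - eps
            # when the state is Gaussian with covariance Sigma about its mean.
            backoff = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
            return b - backoff

        # Example: a LOS-cone half-space tightened for a 1% violation probability.
        a = np.array([1.0, -0.5, 0.0])
        Sigma = np.diag([0.04, 0.04, 0.01])   # navigation covariance (assumed)
        print(tightened_bound(a, 10.0, Sigma, 0.01))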

  1. Effect of data quality on a hybrid Coulomb/STEP model for earthquake forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, Sandy; Jimenez, Abigail; Gerstenberger, Matt; Christophersen, Annemarie

    2014-05-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as the b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip. Specifically, we consider slip models based on the NEIC location, the CMT solution, surface rupture, and published inversions, and find significant variation in the relative performance of the models depending upon the input data.

  2. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors, especially as heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update steps of the Extended Kalman Filter (EKF). This motivates the use of other motion constraints of pedestrian gait and of any other valuable heading information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called a "virtual sensor"), though considerably reducing drift in PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed, incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
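
    A common way to realize magnetic anomaly detection is to gate each magnetometer sample on the norm and dip angle of the measured field before letting it into the EKF update. A minimal sketch of such a gate; the thresholds and reference values below are illustrative assumptions, not the paper's parameters.

        import numpy as np

        def magnetometer_is_healthy(m_body, g_body, b_ref=50.0, dip_ref=53.0,
                                    norm_tol=5.0, dip_tol=5.0):
            # Accept a magnetometer sample (microtesla, body frame) only if its
            # magnitude and its dip angle w.r.t. the gravity vector match the
            # local Earth-field reference within tolerances.
            mag = np.linalg.norm(m_body)
            if abs(mag - b_ref) > norm_tol:
                return False                      # magnitude anomaly
            g_unit = g_body / np.linalg.norm(g_body)
            # Dip angle: angle between the field and the horizontal plane.
            dip = np.degrees(np.arcsin(np.clip(m_body @ g_unit / mag, -1, 1)))
            return abs(dip - dip_ref) <= dip_tol  # inclination anomaly check

    Samples failing either test would simply be skipped in the EKF update step, so distorted indoor fields never corrupt the heading estimate.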

  3. Constraints on muscle performance provide a novel explanation for the scaling of posture in terrestrial animals.

    PubMed

    Usherwood, James R

    2013-08-23

    Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportional scaling of requirements for work and push-off power are revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g. top walking speed)-which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals.

  4. Constraints on muscle performance provide a novel explanation for the scaling of posture in terrestrial animals

    PubMed Central

    Usherwood, James R.

    2013-01-01

    Larger terrestrial animals tend to support their weight with more upright limbs. This makes structural sense, reducing the loading on muscles and bones, which is disproportionately challenging in larger animals. However, it does not account for why smaller animals are more crouched; instead, they could enjoy relatively more slender supporting structures or higher safety factors. Here, an alternative account for the scaling of posture is proposed, with close parallels to the scaling of jump performance. If the costs of locomotion are related to the volume of active muscle, and the active muscle volume required depends on both the work and the power demanded during the push-off phase of each step (not just the net positive work), then the disproportional scaling of requirements for work and push-off power are revealing. Larger animals require relatively greater active muscle volumes for dynamically similar gaits (e.g. top walking speed)—which may present an ultimate constraint to the size of running animals. Further, just as for jumping, animals with shorter legs and briefer push-off periods are challenged to provide the power (not the work) required for push-off. This can be ameliorated by having relatively long push-off periods, potentially accounting for the crouched stance of small animals. PMID:23825086

  5. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, at marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between the matter density parameter and the normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
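
    The core trick, Gaussianizing each parameter with a Box-Cox transform fitted to an initial likelihood evaluation and then working with the Gaussian (Fisher-like) approximation in the transformed space, can be sketched as follows. This is a toy illustration on posterior samples, not the authors' pipeline:

        import numpy as np
        from scipy.stats import boxcox

        def gaussianize(samples):
            # Fit a Box-Cox transform per (positive, shifted) parameter and
            # return transformed samples plus the fitted lambdas and shifts.
            transformed, lams, shifts = [], [], []
            for col in samples.T:
                shift = 1e-6 - col.min() if col.min() <= 0 else 0.0  # enforce positivity
                y, lam = boxcox(col + shift)
                transformed.append(y); lams.append(lam); shifts.append(shift)
            return np.column_stack(transformed), np.array(lams), np.array(shifts)

        # The inverse covariance in the transformed space plays the role of the
        # Fisher matrix; mapping its contours back yields non-Gaussian posteriors.
        rng = np.random.default_rng(0)
        samples = rng.lognormal(mean=[0.0, 1.0], sigma=0.3, size=(5000, 2))
        y, lams, shifts = gaussianize(samples)
        fisher_like = np.linalg.inv(np.cov(y, rowvar=False))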

  6. Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems

    PubMed Central

    Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) with suboptimal power allocation (SPA). To reduce the computational complexity, the resource allocation problem in OFDMA systems is separated into two steps. In the first step, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sum of the power and the reciprocal of the channel-to-noise ratio for each user is equal across that user's subchannels. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
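
    The quoted power-allocation principle, that p_k + 1/CNR_k takes the same value on every subchannel of a user, is a water-filling-type rule with a closed form once a user's subcarriers are fixed. A minimal sketch of that per-user step, ignoring the proportional-rate coupling across users that the full SPA handles:

        import numpy as np

        def spa_user_power(cnr, p_total):
            # Split p_total over one user's subchannels so that
            # p_k + 1/cnr_k equals the same "water level" mu on each subchannel.
            inv = 1.0 / cnr
            mu = (p_total + inv.sum()) / len(cnr)  # common level
            p = mu - inv
            # Note: the closed form can go negative on very poor subchannels;
            # practical schemes clip at zero and redistribute (omitted here).
            return p

        print(spa_user_power(np.array([8.0, 4.0, 2.0]), p_total=3.0))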

  7. Back-stepping active disturbance rejection control design for integrated missile guidance and control system via reduced-order ESO.

    PubMed

    Xingling, Shao; Honglun, Wang

    2015-07-01

    This paper proposes a novel composite integrated guidance and control (IGC) law for a missile intercepting an unknown maneuvering target under multiple uncertainties and control constraints. First, by using the back-stepping technique, the proposed IGC law design is separated into a guidance loop and a control loop. The unknown target maneuvers and variations of aerodynamic parameters in the guidance and control loops are viewed as uncertainties, which are estimated and compensated by a model-assisted reduced-order extended state observer (ESO). Second, based on the principle of active disturbance rejection control (ADRC), an enhanced feedback linearization (FL) based control law is implemented for the IGC model using the estimates generated by the reduced-order ESO. In addition, performance analysis and comparisons between the ESO and the reduced-order ESO are examined. A nonlinear tracking differentiator is employed to construct the derivative of the virtual control command in the control loop. Third, the closed-loop stability of the considered system is established. Finally, the effectiveness of the proposed IGC law in enhancing interception performance, such as a smooth interception course, improved robustness against multiple uncertainties, and reduced control consumption during the initial phase, is demonstrated through simulations. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Research on millisecond load recovery strategy in the late period of UHVDC fault dispose

    NASA Astrophysics Data System (ADS)

    Qiu, Chenguang; Qian, Tiantian; Cheng, Jinmin; Wang, Ke

    2018-06-01

    When a UHVDC fault occurs, load must be cut off quickly so that the entire system can keep its balance. In the late period of fault disposal, the load needs to be recovered step by step. This paper studies the recovery strategy for millisecond-level load. Aiming at the maximum load recovered in one step, and combined with grid security constraints, a recovery model for millisecond load is built and then solved by a genetic algorithm. A simulation example is established to verify the effectiveness of the proposed method.
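
    The one-step recovery model is essentially a constrained selection problem: choose which millisecond-level loads to restore so that the restored power is maximized without violating the security limit. A toy genetic-algorithm sketch of that knapsack-like core, with the paper's grid security constraints reduced to a single capacity limit and all data invented:

        import numpy as np

        rng = np.random.default_rng(1)
        loads = rng.uniform(5, 50, size=20)      # MW of each sheddable load (toy data)
        capacity = 300.0                         # security limit on restored power

        def fitness(pop):
            restored = pop @ loads
            return np.where(restored <= capacity, restored, 0.0)  # infeasible -> 0

        pop = rng.integers(0, 2, size=(60, loads.size))           # random bitstrings
        for gen in range(200):
            f = fitness(pop)
            # Tournament selection.
            i, j = rng.integers(0, len(pop), (2, len(pop)))
            parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
            # One-point crossover with a rolled partner.
            cut = rng.integers(1, loads.size, len(pop))
            mask = np.arange(loads.size) < cut[:, None]
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))
            # Bit-flip mutation.
            children ^= (rng.random(children.shape) < 0.01).astype(children.dtype)
            pop = children
        best = pop[np.argmax(fitness(pop))]
        print("restored MW:", best @ loads)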

  9. Spatial effect of new municipal solid waste landfill siting using different guidelines.

    PubMed

    Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian

    2014-01-01

    Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have their own guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine suitable new landfill sites under different criterion parameters, using a constraint mapping technique and weighted linear combination. Application of the Macro Modeler provided in the GIS-IDRISI Andes software helps in building and executing multi-step models. In addition, the analytic hierarchy process technique was included to determine the criterion weights from the decision maker's preferences as part of the weighted linear combination procedure. The differences in the spatial results for suitable sites signify that dissimilarity in guideline specifications and requirements will have an effect on the decision-making process.
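
    The decision model combines an analytic hierarchy process (AHP) weight vector with a weighted linear combination (WLC) masked by Boolean constraints. A compact sketch of both pieces; the 3-criterion pairwise judgments and raster data below are invented for illustration, not taken from the article.

        import numpy as np

        def ahp_weights(pairwise):
            # Principal right eigenvector of the pairwise comparison matrix,
            # normalized to sum to one (the standard AHP weighting).
            vals, vecs = np.linalg.eig(pairwise)
            w = np.real(vecs[:, np.argmax(np.real(vals))])
            return w / w.sum()

        # Criteria: distance to roads, distance to rivers, slope (toy judgments).
        P = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w = ahp_weights(P)

        # WLC suitability on standardized criterion rasters, zeroed by constraints.
        criteria = np.random.rand(3, 100, 100)             # standardized factor maps
        constraint_mask = np.random.rand(100, 100) > 0.2   # 1 = allowed, 0 = excluded
        suitability = np.tensordot(w, criteria, axes=1) * constraint_mask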

  10. The design of an ultra-low sidelobe offset-fed 1.22m antenna for use in the broadcasting satellite service

    NASA Technical Reports Server (NTRS)

    Janky, J. M.

    1981-01-01

    A feed design and reflector geometry were determined for an ultra-low-sidelobe offset-fed 1.22-meter antenna suitable for use in the 12 GHz broadcasting satellite service. Arbitrary constraints used to evaluate the relative merits of the feed horns and the range of f/D geometries are: minimum efficiency of 55 percent, -30 dB first sidelobe level (relative to on-axis gain), a 0 dBi plateau beyond the near-in sidelobe region, and a Chebyshev polynomial based envelope (borrowed from filter theory) for the region from the -3 dB beamwidth points to the 0 dBi plateau region. This envelope is extremely stringent, but the results of this research effort indicate that two steps of corrugated feed and a cluster array of small 1-lambda horns do meet the constraints. A set of performance specifications and a mechanical design suitable for a consumer-oriented market in the broadcasting satellite service were developed. Costs for production quantities of 10,000 units/yr are estimated to be around $150.

  11. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control of the evolution path, suppression of departures from the truncated model subspace, and minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.

  12. Polychromatic sparse image reconstruction and mass attenuation spectrum estimation via B-spline basis function expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Renliang; Dogandžić, Aleksandar

    2015-03-31

    We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov's proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
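
    The image-update half of the block coordinate-descent, a Nesterov proximal-gradient step promoting sparsity and nonnegativity, has the familiar FISTA form. A generic sketch for min 0.5·||Ax − b||² + λ||x||₁ with x ≥ 0, standing in for the paper's penalized negative log-likelihood:

        import numpy as np

        def fista_nonneg_l1(A, b, lam, n_iter=200):
            # Nesterov (FISTA) proximal-gradient for
            # 0.5*||A x - b||^2 + lam*||x||_1 subject to x >= 0.
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                grad = A.T @ (A @ z - b)
                v = z - grad / L
                x_new = np.maximum(v - lam / L, 0.0)  # prox of lam*||.||_1 with x >= 0
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                z = x_new + ((t - 1) / t_new) * (x_new - x)  # Nesterov momentum
                x, t = x_new, t_new
            return x

    In the paper the sparsity penalty acts on wavelet coefficients of the density map, so the soft-threshold would be applied in the wavelet domain rather than directly on x as here.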

  13. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data are focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  14. Network-driven design principles for neuromorphic systems.

    PubMed

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify usability of the connectivity resources in these systems.

  15. Network-driven design principles for neuromorphic systems

    PubMed Central

    Partzsch, Johannes; Schüffny, Rene

    2015-01-01

    Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on the basis of technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step-by-step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components to reduce total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and verify usability of the connectivity resources in these systems. PMID:26539079

  16. Arm motion coupling during locomotion-like actions: An experimental study and a dynamic model

    PubMed Central

    Shapkova, E.Yu; Terekhov, A.V.; Latash, M.L.

    2010-01-01

    We studied the coordination of arm movements in standing persons who performed an out-of-phase arm-swinging task while stepping in place or while standing. The subjects were instructed to stop one of the arms in response to an auditory signal while trying to keep the rest of the movement pattern unchanged. A significant increase was observed in the amplitude of the arm that continued swinging under both the stepping and standing conditions. This increase was similar between the right and left arms. A dynamic model was developed including two coupled non-linear van der Pol oscillators. We assumed that stopping an arm did not eliminate the coupling but introduced a new constraint. Within the model, superposition of two factors, a command to stop the ongoing movement of one arm and the coupling between the two oscillators, has been able to account for the observed effects. The model makes predictions for future experiments. PMID:21628725
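
    The proposed model, two coupled non-linear van der Pol oscillators in which stopping one arm introduces a new constraint rather than removing the coupling, can be reproduced in a few lines. A sketch with illustrative damping and coupling values, not the paper's fitted parameters:

        import numpy as np
        from scipy.integrate import solve_ivp

        MU, K = 1.0, 0.5          # van der Pol damping and coupling (assumed values)

        def coupled_vdp(t, y):
            x1, v1, x2, v2 = y
            a1 = MU * (1 - x1**2) * v1 - x1 + K * (x2 - x1)
            a2 = MU * (1 - x2**2) * v2 - x2 + K * (x1 - x2)
            return [v1, a1, v2, a2]

        def arm2_stopped(t, y):
            # "Stop" constraint on arm 2: clamp it at zero; the coupling term
            # K*(0 - x1) still feeds back into arm 1 and alters its limit cycle.
            x1, v1, _, _ = y
            a1 = MU * (1 - x1**2) * v1 - x1 + K * (0.0 - x1)
            return [v1, a1, 0.0, 0.0]

        free = solve_ivp(coupled_vdp, (0, 40), [0.1, 0, -0.1, 0], max_step=0.01)
        stopped = solve_ivp(arm2_stopped, (0, 40), [0.1, 0, 0.0, 0.0], max_step=0.01)

    Comparing the amplitude of x1 between the two runs is the kind of prediction the model makes for the arm that continues swinging after the other arm is stopped.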

  17. Active traffic management : the next step in congestion management

    DOT National Transportation Integrated Search

    2007-07-01

    The combination of continued travel growth and budget constraints makes it difficult for transportation agencies to provide sufficient roadway capacity in major metropolitan areas. The Federal Highway Administration, American Association of State Hig...

  18. Age-related modifications in steering behaviour: effects of base-of-support constraints at the turn point.

    PubMed

    Paquette, Maxime R; Fuller, Jason R; Adkin, Allan L; Vallis, Lori Ann

    2008-09-01

    This study investigated the effects of altering the base of support (BOS) at the turn point on anticipatory locomotor adjustments during voluntary changes in travel direction in healthy young and older adults. Participants were required to walk at their preferred pace along a 3-m straight travel path and continue to walk straight ahead or turn 40 degrees to the left or right for an additional 2 m. The starting foot and occasionally the gait starting point were adjusted so that participants had to execute the turn using a cross-over step with a narrow BOS or a lead-out step with a wide BOS. Spatial and temporal gait variables, magnitudes of angular segmental movement, and timing and sequencing of body segment reorientation were similar despite executing the turn with a narrow or wide BOS. A narrow BOS during turning generated an increased step width in the step prior to the turn for both young and older adults. Age-related changes when turning included reduced step velocity and step length for older compared to young adults. Age-related changes in the timing and sequencing of body segment reorientation prior to the turn point were also observed. A reduction in walking speed and an increase in step width just prior to the turn, combined with a delay in motion of the center of mass, suggests that older adults used a more cautious combined foot placement and hip strategy to execute changes in travel direction compared to young adults. The results of this study provide insight into mobility constraints during a common locomotor task in older adults.

  19. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806

  20. Study of adaptive methods for data compression of scanner data

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The performance of adaptive image compression techniques and the applicability of a variety of techniques to the various steps in the data dissemination process are examined in depth. It is concluded that the bandwidth of imagery generated by scanners can be reduced without introducing significant degradation such that the data can be transmitted over an S-band channel. This corresponds to a compression ratio equivalent to 1.84 bits per pixel. It is also shown that this can be achieved using at least two fairly simple techniques with weight-power requirements well within the constraints of the LANDSAT-D satellite. These are the adaptive 2D DPCM and adaptive hybrid techniques.

  1. Deep Search for Satellites Around the Lucy Mission Targets

    NASA Astrophysics Data System (ADS)

    Noll, Keith

    2017-08-01

    By performing the first deep search for Trojan satellites with HST we will obtain unique constraints on satellite-forming processes in this population. We have selected the targets from NASA's Lucy mission because they represent a taxonomically and physically diverse set of targets that allows intercomparisons from a small survey. Also, by searching now to identify any orbiting material around the Lucy targets, it will be possible to impact hardware decisions and to plan for maximum scientific return from the mission. This search is also a necessary step to assure mission safety, as the Lucy spacecraft will fly within 1000 km of the targets, well within the region where stable orbits can exist.

  2. An extraction method for mountainous-area settlement information from GF-1 high-resolution optical remote sensing imagery under semantic constraints

    NASA Astrophysics Data System (ADS)

    Guo, H., II

    2016-12-01

    The spatial distribution of settlements in mountainous areas is of great significance to earthquake emergency work because most of the key earthquake-hazard areas of China are located in mountainous terrain. Remote sensing has the advantages of large coverage and low cost, and it is an important way to obtain the spatial distribution of mountainous settlements. At present, most studies apply object-oriented methods that fully consider geometric, spectral and texture information; in this article, semantic constraints are added on top of the object-oriented method. The experimental data are one scene from a domestic high-resolution satellite (GF-1) with a resolution of 2 meters. The main processing consists of three steps: the first is preprocessing, including orthorectification and image fusion; the second is object-oriented information extraction, including image segmentation and information extraction; the last is removing erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlements must be analyzed and the spatial-logic relations between settlements and other objects must be considered. The accuracy assessment shows that the extraction accuracy of the object-oriented method alone is 49% and rises to 86% after the use of semantic constraints. As these figures show, the extraction method under semantic constraints can effectively improve the accuracy of mountainous settlement extraction. The results show that it is feasible to extract mountainous settlement information from GF-1 imagery, demonstrating that domestic high-resolution optical remote sensing imagery has practical value for earthquake emergency preparedness.
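
    The semantic-constraint step can be thought of as a rule-based post-filter over the segmented objects. A schematic sketch with invented rules and attribute names; the article's actual constraints come from analyzing settlement distribution characteristics, not from these values.

        def passes_semantic_rules(obj):
            # Hypothetical rules: settlements sit on gentle slopes, do not
            # overlap water bodies, and lie within reach of a road.
            return (obj["slope_deg"] < 25.0
                    and not obj["overlaps_water"]
                    and obj["dist_to_road_m"] < 500.0)

        candidates = [
            {"slope_deg": 8.0, "overlaps_water": False, "dist_to_road_m": 120.0},
            {"slope_deg": 40.0, "overlaps_water": False, "dist_to_road_m": 60.0},
        ]
        settlements = [c for c in candidates if passes_semantic_rules(c)]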

  3. Experiential knowledge of expert coaches can help identify informational constraints on performance of dynamic interceptive actions.

    PubMed

    Greenwood, Daniel; Davids, Keith; Renshaw, Ian

    2014-01-01

    Coordination of dynamic interceptive movements is predicated on cyclical relations between an individual's actions and information sources from the performance environment. To identify dynamic informational constraints, which are interwoven with individual and task constraints, coaches' experiential knowledge provides a complementary source to support empirical understanding of performance in sport. In this study, 15 expert coaches from 3 sports (track and field, gymnastics and cricket) participated in a semi-structured interview process to identify potential informational constraints which they perceived to regulate action during run-up performance. Expert coaches' experiential knowledge revealed multiple information sources which may constrain performance adaptations in such locomotor pointing tasks. In addition to the locomotor pointing target, coaches' knowledge highlighted two other key informational constraints: vertical reference points located near the locomotor pointing target and a check mark located prior to the locomotor pointing target. This study highlights opportunities for broadening the understanding of perception and action coupling processes, and the identified information sources warrant further empirical investigation as potential constraints on athletic performance. Integration of experiential knowledge of expert coaches with theoretically driven empirical knowledge represents a promising avenue to drive future applied science research and pedagogical practice.

  4. Autonomy for Constellation

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    The newer types of space systems, which are planned for the future, are placing challenging demands on newer autonomy concepts and techniques. Motivating these challenges are resource constraints. Even though onboard computing power will surely increase in the coming years, the resource constraints associated with space-based processes will continue to be a major factor that needs to be considered when dealing with, for example, agent-based spacecraft autonomy. To realize "economical intelligence", i.e., constrained computational intelligence that can reside within a process under severe resource constraints (time, power, space, etc.), is a major goal for such space systems as the Nanosat constellations. To begin to address the new challenges, we are developing approaches to constellation autonomy with constraints in mind. Within the Agent Concepts Testbed (ACT) at the Goddard Space Flight Center we are currently developing a Nanosat-related prototype for the first step of the two-step program.

  5. Radiofrequency pulse design in parallel transmission under strict temperature constraints.

    PubMed

    Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre

    2014-09-01

    To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time of flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.

  6. Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview

    PubMed Central

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696

  7. Conformational sampling in template-free protein loop structure modeling: an overview.

    PubMed

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized.

  8. The Effect of Intensity on 3-Dimensional Kinematics and Coordination in Front-Crawl Swimming.

    PubMed

    de Jesus, Kelly; Sanders, Ross; de Jesus, Karla; Ribeiro, João; Figueiredo, Pedro; Vilas-Boas, João P; Fernandes, Ricardo J

    2016-09-01

    Coaches are often challenged to optimize swimmers' technique at different training and competition intensities, but 3-dimensional (3D) analysis has not been conducted for a wide range of training zones. To analyze front-crawl 3D kinematics and interlimb coordination from low to severe swimming intensities. Ten male swimmers performed a 200-m front crawl at 7 incrementally increasing paces until exhaustion (0.05-m/s increments and 30-s intervals), with images from 2 cycles in each step (at the 25- and 175-m laps) being recorded by 2 surface and 4 underwater video cameras. Metabolic anaerobic threshold (AnT) was also assessed using the lactate-concentration-velocity curve-modeling method. Stroke frequency increased, stroke length decreased, hand and foot speed increased, and the index of interlimb coordination increased (within a catch-up mode) from low to severe intensities (P ≤ .05) and within the 200-m steps performed above the AnT (at or closer to the 4th step; P ≤ .05). Concurrently, intracyclic velocity variations and propelling efficiency remained similar between and within swimming intensities (P > .05). Swimming intensity has a significant impact on swimmers' segmental kinematics and interlimb coordination, with modifications being more evident after the point when AnT is reached. As competitive swimming events are conducted at high intensities (in which anaerobic metabolism becomes more prevalent), coaches should implement specific training series that lead swimmers to adapt their technique to the task constraints that exist in nonhomeostatic race conditions.

  9. Aiding USAF/UPT (Undergraduate Pilot Training) Aircrew Scheduling Using Network Flow Models.

    DTIC Science & Technology

    1986-06-01

    [Garbled report front matter; recoverable table-of-contents excerpt: 3.4 Heuristic Modifications; Chapter 4, Student Scheduling Problem (Level 2): Introduction, Constraints, "Covering" Complete Enumeration, Heuristics; Heuristic Method for the Level 2 Problem: Step 1, Step 2, Advantages of the Heuristic Method, Problems with the Heuristic Method.]

  10. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in urban streets, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's two-step EKF method, the algorithm is more accurate and meets the needs of real-time, accurate localization in cities.

  11. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    [Garbled report documentation page; recoverable content: subject terms — high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction. Table-of-contents excerpt: Constraint Optimization Problems; Constraint Satisfaction Problems; Constraint Optimization Algorithms; Constraint Satisfaction Algorithms: Brute-Force Search, Constraint Propagation, Depth-First Search, Local Search.]

  12. Lower Tropospheric Ozone Retrievals from Infrared Satellite Observations Using a Self-Adapting Regularization Method

    NASA Astrophysics Data System (ADS)

    Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.

    2017-12-01

    Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the limited vertical sensitivity of the measurements, especially towards the lowest layers. Although improvements have been made during the last decade, it is still important to explore possibilities to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms have relied on a fixed constraint, determined beforehand on the basis of sensitivity tests. This does not allow one to take advantage of the full capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting and altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the use of a priori constraints to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and the self-adapting one). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. Biases are reduced and the spatial correlation is improved. A tentative application to real observations from IASI, currently onboard the Metop satellite, will also be presented.
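
    The self-adapting regularization can be sketched as a Tikhonov retrieval in which the constraint strength λ is tuned iteratively against the degrees of freedom for signal (the trace of the averaging kernel). A toy version for a linear forward model y = Kx + ε; the real scheme is altitude-dependent and embedded in an iterative non-linear retrieval.

        import numpy as np

        def retrieve(K, y, Se_inv, L, lam):
            # Tikhonov retrieval for y = K x + eps: returns the estimate and the
            # averaging kernel A = G K, whose trace gives the degrees of freedom.
            H = K.T @ Se_inv @ K + lam * (L.T @ L)
            G = np.linalg.solve(H, K.T @ Se_inv)      # gain matrix
            return G @ y, G @ K

        def self_adapting_lambda(K, y, Se_inv, L, dof_target, lam=1.0, n_iter=25):
            # Crude fixed-point tuning of the constraint strength: more degrees
            # of freedom than the measurement supports -> tighten the constraint;
            # fewer -> relax it.
            for _ in range(n_iter):
                x, A = retrieve(K, y, Se_inv, L, lam)
                lam *= np.trace(A) / dof_target
            return x, lam

    A first-difference matrix L (smoothing constraint) and a diagonal Se_inv (measurement noise) are the usual inputs; the point of the scheme is that lam is no longer fixed beforehand.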

  13. Dispersal constraints for stream invertebrates: setting realistic timescales for biodiversity restoration.

    PubMed

    Parkyn, Stephanie M; Smith, Brian J

    2011-09-01

    Biodiversity goals are becoming increasingly important in stream restoration. Typical models of stream restoration are based on the assumption that if habitat is restored then species will return and ecological processes will re-establish. However, a range of constraints at different scales can affect restoration success. Much of the research in stream restoration ecology has focused on habitat constraints, namely the in-stream and riparian conditions required to restore biota. Dispersal constraints are also integral to determining the timescales, trajectory and potential endpoints of a restored ecosystem. Dispersal is both a means of organism recolonization of restored sites and a vital ecological process that maintains viable populations. We review knowledge of dispersal pathways and explore the factors influencing stream invertebrate dispersal. From empirical and modeling studies of restoration in warm-temperate zones of New Zealand, we make predictions about the timescales of stream ecological restoration under differing levels of dispersal constraints. This process of constraints identification and timescale prediction is proposed as a practical step for resource managers to prioritize and appropriately monitor restoration sites and highlights that in some instances, natural recolonization and achievement of biodiversity goals may not occur.

  14. Dispersal Constraints for Stream Invertebrates: Setting Realistic Timescales for Biodiversity Restoration

    NASA Astrophysics Data System (ADS)

    Parkyn, Stephanie M.; Smith, Brian J.

    2011-09-01

    Biodiversity goals are becoming increasingly important in stream restoration. Typical models of stream restoration are based on the assumption that if habitat is restored then species will return and ecological processes will re-establish. However, a range of constraints at different scales can affect restoration success. Much of the research in stream restoration ecology has focused on habitat constraints, namely the in-stream and riparian conditions required to restore biota. Dispersal constraints are also integral to determining the timescales, trajectory and potential endpoints of a restored ecosystem. Dispersal is both a means of organism recolonization of restored sites and a vital ecological process that maintains viable populations. We review knowledge of dispersal pathways and explore the factors influencing stream invertebrate dispersal. From empirical and modeling studies of restoration in warm-temperate zones of New Zealand, we make predictions about the timescales of stream ecological restoration under differing levels of dispersal constraints. This process of constraints identification and timescale prediction is proposed as a practical step for resource managers to prioritize and appropriately monitor restoration sites and highlights that in some instances, natural recolonization and achievement of biodiversity goals may not occur.

  15. Analysis, design, fabrication, and performance of three-dimensional braided composites

    NASA Astrophysics Data System (ADS)

    Kostar, Timothy D.

    1998-11-01

    Cartesian 3-D (track and column) braiding as a method of composite preforming has been investigated. A complete analysis of the process was conducted to understand the limitations and potentials of the process. Knowledge of the process was enhanced through development of a computer simulation, and it was discovered that individual control of each track and column and multiple-step braid cycles greatly increases possible braid architectures. Derived geometric constraints coupled with the fundamental principles of Cartesian braiding resulted in an algorithm to optimize preform geometry in relation to processing parameters. The design of complex and unusual 3-D braids was investigated in three parts: grouping of yarns to form hybrid composites via an iterative simulation; design of composite cross-sectional shape through implementation of the Universal Method; and a computer algorithm developed to determine the braid plan based on specified cross-sectional shape. Several 3-D braids, which are the result of variations or extensions to Cartesian braiding, are presented. An automated four-step braiding machine with axial yarn insertion has been constructed and used to fabricate two-step, double two-step, four-step, and four-step with axial and transverse yarn insertion braids. A working prototype of a multi-step braiding machine was used to fabricate four-step braids with surrogate material insertion, unique hybrid structures from multiple track and column displacement and multi-step cycles, and complex-shaped structures with constant or varying cross-sections. Braid materials include colored polyester yarn to study the yarn grouping phenomena, Kevlar, glass, and graphite for structural reinforcement, and polystyrene, silicone rubber, and fasteners for surrogate material insertion. A verification study for predicted yarn orientation and volume fraction was conducted, and a topological model of 3-D braids was developed. The solid model utilizes architectural parameters, generated from the process simulation, to determine the composite elastic properties. Methods of preform consolidation are investigated and the results documented. The extent of yarn deformation (packing) resulting from preform consolidation was investigated through cross-sectional micrographs. The fiber volume fraction of select hybrid composites was measured and representative unit cells are suggested. Finally, a comparison study of the elastic performance of Kevlar/epoxy and carbon/Kevlar hybrid composites was conducted.

  16. Advancing the climate data driven crop-modeling studies in the dry areas of Northern Syria and Lebanon: an important first step for assessing impact of future climate.

    PubMed

    Dixit, Prakash N; Telleria, Roberto

    2015-04-01

    Inter-annual and seasonal variability in climatic parameters, most importantly rainfall, has the potential to cause climate-induced risk in long-term crop production. Short-term field studies do not capture the full nature of such risk or the extent to which modifications to crop, soil and water management recommendations may be made to mitigate it. Crop modeling studies driven by long-term daily weather data can predict the impact of climate-induced risk on crop growth and yield; however, the availability of long-term daily weather data can present serious constraints to the use of crop models. To tackle this constraint, two weather generators, namely LARS-WG and MarkSim, were evaluated in order to assess their capability of reproducing frequency distributions, means, variances, dry spells and wet chains of observed daily precipitation, maximum and minimum temperature, and solar radiation for eight locations across the cropping areas of Northern Syria and Lebanon. Further, the application of the generated long-term daily weather data from both weather generators to simulating barley growth and yield was also evaluated. We found that overall LARS-WG performed better than MarkSim, both in generating daily weather parameters and in a 50-year continuous simulation of barley growth and yield. Our findings suggest that LARS-WG does not necessarily require long-term (e.g., >30 years) observed weather data for calibration, as the generated results proved satisfactory with >10 years of observed data, except in areas at higher altitude. Evaluating these weather generators and the ability of the generated weather data to support long-term simulation of crop growth and yield is an important first step to assess the impact of future climate on yields and to identify promising technologies to make agricultural systems more resilient in the region. Copyright © 2015 Elsevier B.V. All rights reserved.
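
    Stochastic weather generators of this family descend from the classic two-part daily rainfall model: a two-state Markov chain decides wet/dry occurrence, and a skewed distribution draws amounts on wet days. A minimal sketch of that core; the transition probabilities and gamma parameters are illustrative, not calibrated values from either generator.

        import numpy as np

        def generate_rainfall(n_days, p_wd=0.25, p_ww=0.60, shape=0.8, scale=6.0,
                              seed=0):
            # Two-state Markov chain (dry->wet p_wd, wet->wet p_ww) with
            # gamma-distributed amounts (mm) on wet days.
            rng = np.random.default_rng(seed)
            rain = np.zeros(n_days)
            wet = False
            for d in range(n_days):
                wet = rng.random() < (p_ww if wet else p_wd)
                if wet:
                    rain[d] = rng.gamma(shape, scale)
            return rain

        series = generate_rainfall(365)
        print(f"wet days: {(series > 0).sum()}, total: {series.sum():.0f} mm")

    In practice the parameters are fitted per month or per season from the observed record, which is exactly where a short observation series limits the generator's skill.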

  17. Planning a sports training program using Adaptive Particle Swarm Optimization with emphasis on physiological constraints.

    PubMed

    Kumyaito, Nattapon; Yupapin, Preecha; Tamee, Kreangsak

    2018-01-08

    An effective training plan is an important factor in sports training to enhance athletic performance. A poorly considered training plan may result in injury to the athlete and overtraining. Good training plans normally require expert input, which may have a cost too great for many athletes, particularly amateurs. The objectives of this research were to create a practical cycling training plan that substantially improves athletic performance while satisfying essential physiological constraints. Adaptive Particle Swarm Optimization with ɛ-constraint methods was used to formulate such a plan and simulate the likely performance outcomes. The physiological constraints considered in this study were monotony, chronic training load ramp rate and daily training impulse. A comparison of results from our simulations against a training plan from British Cycling, which we used as our standard, showed that our training plan outperformed the benchmark in terms of both athletic performance and satisfying all physiological constraints.
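
    The ɛ-constraint handling in a particle swarm can be sketched as a comparison rule: solutions whose constraint violation is below the current ɛ level compete on the objective, otherwise the smaller violation wins, with ɛ tightening toward zero over the run. A toy sketch with an invented objective and a single invented physiological constraint, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(2)

        def objective(x):          # stand-in for (negative) predicted performance
            return -np.sum(x)

        def violation(x):          # stand-in physiological limit (toy): mean TRIMP <= 100
            return max(0.0, np.mean(x) - 100.0)

        def better(fa, va, fb, vb, eps):
            # epsilon-constraint comparison rule.
            if va <= eps and vb <= eps:
                return fa < fb
            return va < vb

        n, dim = 30, 14                               # swarm size, days in the plan
        x = rng.uniform(0, 150, (n, dim)); v = np.zeros((n, dim))
        pbest = x.copy(); pf = np.array([objective(p) for p in x])
        pv = np.array([violation(p) for p in x])
        g = pbest[np.lexsort((pf, pv))[0]].copy()
        for it in range(200):
            eps = 5.0 * max(0.0, 1 - it / 100)        # epsilon tightens to zero
            w_in = 0.9 - 0.5 * it / 200               # adaptive inertia weight
            v = (w_in * v + 1.5 * rng.random((n, dim)) * (pbest - x)
                 + 1.5 * rng.random((n, dim)) * (g - x))
            x = np.clip(x + v, 0, 150)
            for i in range(n):
                f, viol = objective(x[i]), violation(x[i])
                if better(f, viol, pf[i], pv[i], eps):
                    pbest[i], pf[i], pv[i] = x[i].copy(), f, viol
                    if better(f, viol, objective(g), violation(g), eps):
                        g = x[i].copy()
        print("feasible:", violation(g) <= 0, "objective:", objective(g))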

  18. Chromatography process development in the quality by design paradigm I: Establishing a high-throughput process development platform as a tool for estimating "characterization space" for an ion exchange chromatography step.

    PubMed

    Bhambure, R; Rathore, A S

    2013-01-01

    This article describes the development of a high-throughput process development (HTPD) platform for developing chromatography steps. An assessment of the platform as a tool for establishing the "characterization space" for an ion exchange chromatography step has been performed by using design of experiments. Case studies involving use of a biotech therapeutic, granulocyte colony-stimulating factor, have been used to demonstrate the performance of the platform. We discuss the various challenges that arise when working at such small volumes, along with the solutions that we propose to alleviate these challenges and make the HTPD data suitable for empirical modeling. Further, we have validated the scalability of this platform by comparing the results from the HTPD platform (2 and 6 μL resin volumes) against those obtained at the traditional laboratory scale (resin volume, 0.5 mL). We find that, after integration of the proposed correction factors, the HTPD platform is capable of performing process optimization studies at 170-fold higher productivity. The platform is capable of providing a semi-quantitative assessment of the effects of the various input parameters under consideration. We think that a platform such as the one presented is an excellent tool for examining the "characterization space" and reducing the extensive experimentation at the traditional lab scale that is otherwise required for establishing the "design space." Thus, this platform will specifically aid in the successful implementation of quality by design in biotech process development. This is especially significant in view of the constraints with respect to time and resources that the biopharma industry faces today. Copyright © 2013 American Institute of Chemical Engineers.

  19. Configuration optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David

    1991-01-01

    The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.
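
    For context, the volume-constrained compliance minimization that underlies the homogenization approach has this standard form (a sketch in our notation, not a quotation from the viewgraphs):

    ```latex
    \[
      \min_{\rho}\; \mathbf{f}^{T}\mathbf{u}(\rho)
      \quad \text{subject to} \quad
      \int_{\Omega} \rho \, d\Omega \;\le\; \bar{V},
      \qquad 0 < \rho_{\min} \le \rho \le 1,
    \]
    % where u(rho) solves the equilibrium equations K(rho) u = f and the
    % volume inequality is the constraint whose treatment the talk addresses.
    ```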

  20. A methodology for analysing lateral coupled behavior of high speed railway vehicles and structures

    NASA Astrophysics Data System (ADS)

    Antolín, P.; Goicolea, J. M.; Astiz, M. A.; Alonso, A.

    2010-06-01

    Continuing increases in the speed of high-speed trains entail corresponding increases in kinetic energy. The main goal of this article is to study the coupled lateral behavior of vehicle-structure systems for high-speed trains. Nonlinear finite element methods are used for the structures, whereas multibody dynamics methods are employed for the vehicles. Special attention must be paid to the rolling contact constraints coupling bridge decks and train wheels: the dynamic models must include mixed variables (displacements and creepages), and the contact algorithms must be adequate for wheel-rail contact. The coupled vehicle-structure system is studied in an implicit dynamic framework. Because very different systems (trains and bridges) are present, widely separated frequencies are involved in the problem, leading to stiff systems. For normal contact between train wheels and bridge decks, the penalty method is studied. For tangential contact, the FastSim algorithm solves the contact at each time step via a differential equation involving relative displacements and creepage variables. Integration of the total forces over the contact ellipse domain is performed for each train wheel at each solver iteration. Coupling between trains and bridges requires special treatment of the kinematic constraints imposed at the wheel-rail pair and of the load transmission. A numerical example is presented.
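
    As a sketch of the penalty treatment of normal contact mentioned above (our notation; the paper's exact formulation may differ), the normal force grows with the wheel-rail interpenetration δ:

    ```latex
    \[
      F_n \;=\;
      \begin{cases}
        k_p\,\delta, & \delta > 0 \;\text{(penetration)}\\
        0,           & \delta \le 0 \;\text{(separation)}
      \end{cases}
    \]
    % The tangential (creep) forces are then obtained, e.g., by FastSim,
    % by integrating over the contact ellipse from the creepage variables.
    ```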

  1. Representative learning design in springboard diving: Is dry-land training representative of a pool dive?

    PubMed

    Barris, Sian; Davids, Keith; Farrow, Damian

    2013-01-01

    Two distinctly separate training facilities (dry-land and aquatic) are routinely used in springboard diving and pose an interesting problem for learning, given the inherent differences in landing (head first vs. feet first) imposed by the different task constraints. Although divers may practise the same preparation phase, take-off and initial aerial rotation in both environments, there is no evidence to suggest that the tasks completed in the dry-land training environment are representative of those performed in the aquatic competition environment. The aim of this study was to compare the kinematics of the preparation phase of reverse dives routinely practised in each environment. Despite their high skill level, it was predicted that individual analyses of elite springboard divers would reveal differences in the joint coordination and board-work between take-offs. The two-dimensional kinematic characteristics were recorded during normal training sessions and used for intra-individual analysis. Kinematic characteristics of the preparatory take-off phase revealed differences in board-work (step lengths, jump height, board depression angles) for all participants at key events. However, the presence of scaled global topological characteristics suggested that all participants adopted similar joint coordination patterns in both environments. These findings suggest that the task constraints of wet and dry training environments are not similar, and highlight the need for coaches to consider representative learning designs in high performance diving programmes.

  2. Transport coefficients of dense fluids composed of globular molecules. Equilibrium molecular dynamics investigations using more-center Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1988-09-01

    Equilibrium molecular dynamics calculations with constraints have been performed for model liquids SF6 and CF4. The computations were carried out with four- and six-center Lennard-Jones potentials and up to 2×10^5 integration steps. Shear viscosity, bulk viscosity and thermal conductivity have been calculated using Green-Kubo relations in the formulation of "molecule variables." Various thermodynamic states were investigated. For SF6, a detailed comparison with experimental data was possible. For CF4, the MD results could be compared with experiment only for one liquid state. For the latter liquid, a complementary comparison was performed using MD results obtained with a one-center Lennard-Jones potential. A limited test of the particle-number dependence of the results is presented. Partial and total correlation functions are shown and discussed with respect to findings obtained for the one-center Lennard-Jones liquid.
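
    The Green-Kubo relation for the shear viscosity, evaluated from the stress autocorrelation function, has the standard form below; the bulk viscosity and thermal conductivity follow from analogous integrals over their respective flux autocorrelation functions:

    ```latex
    \[
      \eta \;=\; \frac{V}{k_{B}T}\int_{0}^{\infty}
      \left\langle P_{xy}(0)\,P_{xy}(t)\right\rangle \, dt
    \]
    ```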

  3. Sensitivity analysis of an optimization-based trajectory planner for autonomous vehicles in urban environments

    NASA Astrophysics Data System (ADS)

    Hardy, Jason; Campbell, Mark; Miller, Isaac; Schimpf, Brian

    2008-10-01

    The local path planner implemented on Cornell's 2007 DARPA Urban Challenge entry vehicle Skynet utilizes a novel mixture of discrete and continuous path planning steps to facilitate safe, smooth, and human-like driving behavior. The planner first solves for a feasible path through the local obstacle map using a grid-based search algorithm. The resulting path is then refined using a cost-based nonlinear optimization routine with both hard and soft constraints. The behavior of this optimization is influenced by tunable weighting parameters, which govern the relative cost contributions assigned to different path characteristics. This paper studies the sensitivity of the vehicle's performance to these path planner weighting parameters using a data-driven simulation based on logged data from the National Qualifying Event. The performance of the path planner in both the National Qualifying Event and the Urban Challenge is also presented and analyzed.

  4. Task Assignment Heuristics for Parallel and Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, Noe; Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    This paper proposes a task graph (TG) model to represent a single discrete step of multi-block overset grid computational fluid dynamics (CFD) applications. The TG model is then used not only to balance the computational workload across the overset grids but also to reduce inter-grid communication costs. We have developed a set of task assignment heuristics based on the constraints inherent in this class of CFD problems. Two basic assignments, the smallest task first (STF) and the largest task first (LTF), are first presented. They are then systematically refined to further reduce inter-grid communication costs. To predict the performance of the proposed task assignment heuristics, extensive performance evaluations are conducted on a synthetic TG with tasks defined in terms of the number of grid points in predetermined overlapping grids. A TG derived from a realistic problem with eight million grid points is also used as a test case.
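
    A minimal sketch of the LTF assignment named above (an illustrative implementation, not the paper's code): the largest remaining task goes to the least-loaded processor; reversing the sort order gives STF.

    ```python
    import heapq

    def largest_task_first(task_weights, num_procs):
        """Greedy LTF: assign each task to the least-loaded processor."""
        heap = [(0.0, p) for p in range(num_procs)]  # (load, processor)
        heapq.heapify(heap)
        assignment = {p: [] for p in range(num_procs)}
        for i in sorted(range(len(task_weights)),
                        key=lambda i: task_weights[i], reverse=True):
            load, p = heapq.heappop(heap)
            assignment[p].append(i)
            heapq.heappush(heap, (load + task_weights[i], p))
        return assignment

    # Task weights standing in for per-block grid-point counts (illustrative).
    grids = [8.0e6, 3.2e6, 2.5e6, 1.1e6, 0.9e6, 0.4e6]
    print(largest_task_first(grids, num_procs=3))
    ```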

  5. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprising a seven-degree-of-freedom (DOF) manipulator designed for human-scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  6. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches, since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
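
    In sketch form (our notation), the reformulation works through the union bound: the probability of ever leaving the feasible region F is at most the expected number of infeasible stages, so enforcing the expectation bound guarantees the joint chance constraint:

    ```latex
    \[
      \Pr\!\left[\exists\, t:\; x_t \notin \mathcal{F}\right]
      \;\le\;
      \mathbb{E}\!\left[\sum_{t=0}^{T}
        \mathbf{1}\!\left\{x_t \notin \mathcal{F}\right\}\right]
      \;\le\; \Delta .
    \]
    % The indicator-sum constraint is dualized with a multiplier lambda >= 0
    % and folded into the stage costs; standard dynamic programming then
    % optimizes the primal policy while a root-finding iteration updates lambda.
    ```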

  7. Data sensitivity in a hybrid STEP/Coulomb model for aftershock forecasting

    NASA Astrophysics Data System (ADS)

    Steacy, S.; Jimenez Lloret, A.; Gerstenberger, M.

    2014-12-01

    Operational earthquake forecasting is rapidly becoming a 'hot topic' as civil protection authorities seek quantitative information on likely near-future earthquake distributions during seismic crises. At present, most of the models in the public domain are statistical and use information about past and present seismicity as well as b-value and Omori's law to forecast future rates. A limited number of researchers, however, are developing hybrid models which add spatial constraints from Coulomb stress modeling to existing statistical approaches. Steacy et al. (2013), for instance, recently tested a model that combines Coulomb stress patterns with the STEP (short-term earthquake probability) approach against seismicity observed during the 2010-2012 Canterbury earthquake sequence. They found that the new model performed at least as well as, and often better than, STEP when tested against retrospective data, but that STEP was generally better in pseudo-prospective tests that involved data actually available within the first 10 days of each event of interest. They suggested that the major reason for this discrepancy was uncertainty in the slip models and, in particular, in the geometries of the faults involved in each complex major event. Here we test this hypothesis by developing a number of retrospective forecasts for the Landers earthquake using hypothetical slip distributions developed by Steacy et al. (2004) to investigate the sensitivity of Coulomb stress models to fault geometry and earthquake slip, and we also examine how the choice of receiver plane geometry affects the results. We find that the results are strongly sensitive to the slip models and moderately sensitive to the choice of receiver orientation. We further find that comparing the stress fields (resulting from the slip models) with the locations of events in the learning period provides advance information on whether or not a particular hybrid model will perform better than STEP.

  8. Evaluation of atomic pressure in the multiple time-step integration algorithm.

    PubMed

    Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu

    2017-04-15

    In molecular dynamics (MD) calculations, reduction in calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, the RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), enables reductions in calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints. It is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the Velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value from the STS, whereas pressures calculated using the conventional ad hoc equations deviated from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
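
    For reference, the atomic (virial) pressure whose evaluation is at issue has the standard form below; the paper's contribution concerns which intermediate and constraint forces enter the force term at each RESPA inner step (sketch in our notation):

    ```latex
    \[
      P \;=\; \frac{1}{3V}\left(\sum_{i} m_i\,\mathbf{v}_i\cdot\mathbf{v}_i
            \;+\; \sum_{i} \mathbf{r}_i\cdot\mathbf{F}_i\right)
    \]
    ```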

  9. Methodology for the specification of communication activities within the framework of a multi-layered architecture: Toward the definition of a knowledge base

    NASA Astrophysics Data System (ADS)

    Amyay, Omar

    A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue for the design of computer networks. The aim is to obtain an operational specification of the protocol-service couple of a given layer. The planned synthesis and verification steps constitute a specification trajectory, based on the progressive integration of the 'initial data' constraints and on verification of the specification resulting from each synthesis step against validity constraints that characterize an admissible solution. Two types of trajectory are proposed, according to the style of the initial specification of the service-protocol couple: an operational type, from the service-supplier viewpoint; and a knowledge-property-oriented type, from the service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic and epistemic logic. The originality of the second specification trajectory and the use of epistemic logic are shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge-base system implementing the proposed method. It is structured in three levels of representation: the knowledge relating to the domain, the reasoning characterizing synthesis and verification activities, and the planning of the steps of a specification trajectory.

  10. Enforcement of entailment constraints in distributed service-based business processes.

    PubMed

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
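
    A hedged, minimal sketch of what runtime enforcement of the two entailment constraints amounts to (illustrative names and structures, not the authors' WS-BPEL/DSL machinery):

    ```python
    # 'log' records which subject performed which task in one process
    # instance; the constraint sets below are hypothetical examples.
    MUTUAL_EXCLUSION = {("approve_payment", "request_payment")}
    BINDING = {("collect_data", "review_data")}

    def allowed(log, subject, task):
        """Check whether `subject` may perform `task` given the instance log."""
        for t1, t2 in MUTUAL_EXCLUSION:
            other = {t1, t2} - {task}
            if task in (t1, t2) and any(
                    s == subject and t in other for s, t in log):
                return False  # same subject on mutually exclusive tasks
        for t1, t2 in BINDING:
            if task == t2:
                performers = [s for s, t in log if t == t1]
                if performers and subject not in performers:
                    return False  # bound task must go to the same subject
        return True

    log = [("alice", "request_payment"), ("bob", "collect_data")]
    print(allowed(log, "alice", "approve_payment"))  # False: mutual exclusion
    print(allowed(log, "bob", "review_data"))        # True: binding satisfied
    ```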

  11. Linking Bibliographic Data Bases: A Discussion of the Battelle Technical Report.

    ERIC Educational Resources Information Center

    Jones, C. Lee

    This document establishes the context, summarizes the contents, and discusses the Battelle technical report, noting certain constraints of the study. Further steps for the linking of bibliographic databases for use by academic and public libraries are suggested. (RAA)

  12. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.
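
    A numerical sketch of the tangency idea above: for each target average speed, restrict the (v0, φm) grid to the corresponding velocity contour and keep the minimum-power point. The speed and power models below are hypothetical placeholders, not the paper's equations:

    ```python
    import numpy as np

    def avg_speed(v0, phi_m):   # hypothetical model of v_x,avg(v0, phi_m)
        return v0 * np.cos(phi_m)

    def mech_power(v0, phi_m):  # hypothetical step-transition power model
        return 0.5 * v0**3 * np.tan(phi_m)**2 / (2.0 * np.sin(phi_m) + 1e-9)

    v0s = np.linspace(0.2, 2.0, 200)            # mid-stance speeds (m/s)
    phis = np.radians(np.linspace(5, 35, 200))  # step angles
    V0, PHI = np.meshgrid(v0s, phis)

    def optimal_point(target_v, tol=0.02):
        """Minimum-power operating point on one velocity contour."""
        mask = np.abs(avg_speed(V0, PHI) - target_v) < tol
        P = np.where(mask, mech_power(V0, PHI), np.inf)
        i, j = np.unravel_index(np.argmin(P), P.shape)
        return V0[i, j], np.degrees(PHI[i, j]), P[i, j]

    for v in (0.6, 0.9, 1.2):  # tracing out the optimal locus
        v0, phi, p = optimal_point(v)
        print(f"v_avg={v:.1f}: v0={v0:.2f} m/s, phi_m={phi:.1f} deg, P={p:.3f}")
    ```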

  13. Constraints influencing sports wheelchair propulsion performance and injury risk.

    PubMed

    Churton, Emily; Keogh, Justin Wl

    2013-03-28

    The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given.

  14. Constraints influencing sports wheelchair propulsion performance and injury risk

    PubMed Central

    2013-01-01

    The Paralympic Games are the pinnacle of sport for many athletes with a disability. A potential issue for many wheelchair athletes is how to train hard to maximise performance while also reducing the risk of injuries, particularly to the shoulder due to the accumulation of stress placed on this joint during activities of daily living, training and competition. The overall purpose of this narrative review was to use the constraints-led approach of dynamical systems theory to examine how various constraints acting upon the wheelchair-user interface may alter hand rim wheelchair performance during sporting activities, and to a lesser extent, their injury risk. As we found no studies involving Paralympic athletes that have directly utilised the dynamical systems approach to interpret their data, we have used this approach to select some potential constraints and discussed how they may alter wheelchair performance and/or injury risk. Organism constraints examined included player classifications, wheelchair setup, training and intrinsic injury risk factors. Task constraints examined the influence of velocity and types of locomotion (court sports vs racing) in wheelchair propulsion, while environmental constraints focused on forces that tend to oppose motion such as friction and surface inclination. Finally, the ecological validity of the research studies assessing wheelchair propulsion was critiqued prior to recommendations for practice and future research being given. PMID:23557065

  15. Kinetic measures of restabilisation during volitional stepping reveal age-related alterations in the control of mediolateral dynamic stability.

    PubMed

    Singer, Jonathan C; McIlroy, William E; Prentice, Stephen D

    2014-11-07

    Research examining age-related changes in dynamic stability during stepping has recognised the importance of the restabilisation phase, subsequent to foot-contact. While regulation of the net ground reaction force (GRFnet) line of action is believed to influence dynamic stability during steady-state locomotion, such control during restabilisation remains unknown. This work explored the origins of age-related decline in mediolateral dynamic stability by examining the line of action of GRFnet relative to the centre of mass (COM) during restabilisation following voluntary stepping. Healthy younger and older adults (n=20 per group) performed three single-step tasks (varying speed and step placement), altering the challenge to stability control. Age-related differences in magnitude and intertrial variability of the angle of divergence of GRFnet line of action relative to the COM were quantified, along with the peak mediolateral and vertical GRFnet components. The angle of divergence was further examined at discrete points during restabilisation, to uncover events of potential importance to stability control. Older adults exhibited a reduced angle of divergence throughout restabilisation. Temporal and spatial constraints on stepping increased the magnitude and intertrial variability of the angle of divergence, although not differentially among the older adults. Analysis of the time-varying angle of divergence revealed age-related reductions in magnitude, with increases in timing and intertrial timing variability during the later phase of restabilisation. This work further supports the idea that age-related challenges in lateral stability control emerge during restabilisation. Age-related alterations during the later phase of restabilisation may signify challenges with reactive control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  17. Brief assessments and screening for geriatric conditions in older primary care patients: a pragmatic approach.

    PubMed

    Seematter-Bagnoud, Laurence; Büla, Christophe

    2018-01-01

    This paper discusses the rationale behind performing a brief geriatric assessment as a first step in the management of older patients in primary care practice. While geriatric conditions are considered by older patients and health professionals as particularly relevant for health and well-being, they remain too often overlooked due to many patient- and physician-related factors. These include time constraints and lack of specific training to undertake comprehensive geriatric assessment. This article discusses the epidemiologic rationale for screening functional, cognitive, affective, hearing and visual impairments, and nutritional status as well as fall risk and social status. It proposes using brief screening tests in primary care practice to identify patients who may need further comprehensive geriatric assessment or specific interventions.

  18. Earth Observatory Satellite system definition study. Report no. 2: Instrument constraints and interface specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The instruments to be flown on the Earth Observatory Satellite (EOS) system are defined. The instruments will be used to support the Land Resources Management (LRM) mission of the EOS. Program planning information and suggested acquisition activities for obtaining the instruments are presented. The subjects considered are as follows: (1) the performance and interface of the Thematic Mapper (TM) and the High Resolution Pointing Imager (HRPI), (2) procedure for interfacing the TM and HRPI with the EOS satellite, (3) a space vehicle integration plan suggesting the steps and sequence of events required to carry out the interface activities, and (4) suggested agreements between the contractors for providing timely and equitable solution of problems at minimum cost.

  19. A New Reynolds Stress Algebraic Equation Model

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    A general turbulent constitutive relation is directly applied to propose a new Reynolds stress algebraic equation model. In the development of this model, constraints based on rapid distortion theory and realizability (i.e., the positivity of the normal Reynolds stresses and the Schwarz inequality between turbulent velocity correlations) are imposed. Model coefficients are calibrated using well-studied basic flows such as homogeneous shear flow and the surface flow in the inertial sublayer. The performance of this model is then tested in complex turbulent flows including the separated flow over a backward-facing step and the flow in a confined jet. The calculation results are encouraging and point to the success of the present model in modeling turbulent flows with complex geometries.
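
    The realizability constraints named above have standard statements: non-negative normal stresses and the Schwarz inequality between velocity correlations,

    ```latex
    \[
      \overline{u_{\alpha}^{2}} \;\ge\; 0,
      \qquad
      \left(\overline{u_{\alpha}u_{\beta}}\right)^{2}
      \;\le\; \overline{u_{\alpha}^{2}}\;\overline{u_{\beta}^{2}}
      \qquad (\text{no summation over } \alpha, \beta).
    \]
    ```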

  20. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, in which too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe them. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distributions), conventional model calibration techniques designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint ensuring that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating-directions optimization algorithm, the regularized objective function is split into a continuous model calibration problem, followed by a mapping of the solution onto the feasible set. The feasibility constraint honoring the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.
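
    In sketch form (our notation, with g the forward flow model and m the facies field), the regularized formulation and its alternating-directions split might read:

    ```latex
    \[
      \min_{m}\;\;
      \underbrace{\|d_{\mathrm{obs}} - g(m)\|_2^2}_{\text{data mismatch}}
      \;+\; \lambda_1\, \mathcal{R}_{\mathrm{conn}}(m)
      \;+\; \lambda_2\, \|m - \mathcal{P}_{\mathcal{F}}(m)\|_2^2 ,
    \]
    % alternating a continuous update of m with the projection P_F onto the
    % set of models that honor the prior higher-order spatial statistics
    % (here implemented by the supervised learner), until convergence.
    ```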

  1. Dynamic Non-Rigid Objects Reconstruction with a Single RGB-D Sensor

    PubMed Central

    Zuo, Xinxin; Du, Chao; Wang, Runxiao; Zheng, Jiangbin; Yang, Ruigang

    2018-01-01

    This paper deals with the 3D reconstruction problem for dynamic non-rigid objects with a single RGB-D sensor. It is a challenging task, given the almost inevitable accumulation-error issue in some previous sequential fusion methods and the possible failure of surface tracking in a long sequence. Therefore, we propose a global non-rigid registration framework and tackle the drifting problem via an explicit loop closure. Our novel scheme starts with a fusion step to get multiple partial scans from the input sequence, followed by a pairwise non-rigid registration and loop detection step to obtain correspondences between neighboring partial pieces and those pieces that form a loop. Then, we perform a global registration procedure to align all those pieces together into a consistent canonical space as guided by the matches that we have established. Finally, our proposed model-update step helps fix potential misalignments that still exist after the global registration. Both geometric and appearance constraints are enforced during our alignment; therefore, we are able to obtain the recovered model with accurate geometry as well as high-fidelity color maps for the mesh. Experiments on both synthetic and various real datasets have demonstrated the capability of our approach to reconstruct complete and watertight deformable objects. PMID:29547562

  2. Insight into the ten-penny problem: guiding search by constraints and maximization.

    PubMed

    Öllinger, Michael; Fedor, Anna; Brodt, Svenja; Szathmáry, Eörs

    2017-09-01

    For a long time, insight problem solving has been understood either as nothing special or as a particular class of problem solving. The first view implies the necessity of finding efficient heuristics that restrict the search space; the second, the necessity of overcoming self-imposed constraints. Recently, promising hybrid cognitive models have attempted to merge both approaches. In this vein, we were interested in the interplay of constraints and heuristic search when problem solvers were asked to solve a difficult multi-step problem, the ten-penny problem. In three experimental groups and one control group (N = 4 × 30), we aimed to reveal what constraints drive problem difficulty in this problem, and how relaxing constraints and providing an efficient search criterion facilitate the solution. We also investigated how the search behavior of successful problem solvers and non-solvers differs. We found that relaxing constraints was necessary but not sufficient to solve the problem. Without efficient heuristics that facilitate the restriction of the search space and the testing of progress during the problem solving process, the relaxation of constraints was not effective. Relaxing constraints and applying the search criterion are both necessary to effectively increase solution rates. We also found that successful solvers showed promising moves earlier and had higher maximization and variation rates across solution attempts. We propose that this finding sheds light on how different strategies contribute to solving difficult problems. Finally, we speculate about the implications of our findings for insight problem solving.

  3. An adaptive model for vanadium redox flow battery and its application for online peak power estimation

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria

    2017-03-01

    An accurate battery model is the prerequisite for reliable state estimation of the vanadium redox battery (VRB). As the battery model parameters are time-varying with operating condition variation and battery aging, common methods in which model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of voltage limit, SOC limit and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
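
    A hedged sketch of recursive least squares with a forgetting factor, the identification scheme the comparison above favors; the ARX-style regressor below is an illustrative stand-in for the actual VRB equivalent-circuit regressor:

    ```python
    import numpy as np

    def rls_update(theta, P, phi, y, lam=0.995):
        """One RLS step with forgetting factor lam; returns (theta, P)."""
        phi = phi.reshape(-1, 1)
        K = P @ phi / (lam + (phi.T @ P @ phi).item())   # gain vector
        err = y - (phi.T @ theta).item()                 # prediction error
        theta = theta + K * err
        P = (P - K @ phi.T @ P) / lam
        return theta, P

    rng = np.random.default_rng(1)
    true_theta = np.array([[0.8], [0.15]])   # synthetic "battery" parameters
    theta, P = np.zeros((2, 1)), np.eye(2) * 100.0
    y_prev, u = 0.0, rng.standard_normal(500)
    for k in range(1, 500):
        phi = np.array([y_prev, u[k]])
        y = (phi @ true_theta).item() + 0.01 * rng.standard_normal()
        theta, P = rls_update(theta, P, phi, y)
        y_prev = y
    print(theta.ravel())  # converges toward [0.8, 0.15]
    ```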

  4. Analysis of Cell Wall-Related Genes in Organs of Medicago sativa L. under Different Abiotic Stresses.

    PubMed

    Behr, Marc; Legay, Sylvain; Hausman, Jean-Francois; Guerriero, Gea

    2015-07-16

    Abiotic constraints are a source of concern in agriculture, because they can have a strong impact on plant growth and development, thereby affecting crop yield. The response of plants to abiotic constraints varies depending on the type of stress, on the species and on the organs. Although many studies have addressed different aspects of the plant response to abiotic stresses, only a handful has focused on the role of the cell wall. A targeted approach has been used here to study the expression of cell wall-related genes in different organs of alfalfa plants subjected for four days to three different abiotic stress treatments, namely salt, cold and heat stress. Genes involved in different steps of cell wall formation (cellulose biosynthesis, monolignol biosynthesis and polymerization) have been analyzed in different organs of Medicago sativa L. Prior to this analysis, an in silico classification of dirigent/dirigent-like proteins and class III peroxidases has been performed in Medicago truncatula and M. sativa. The final goal of this study is to infer and compare the expression patterns of cell wall-related genes in response to different abiotic stressors in the organs of an important legume crop.

  5. Analysis of Cell Wall-Related Genes in Organs of Medicago sativa L. under Different Abiotic Stresses

    PubMed Central

    Behr, Marc; Legay, Sylvain; Hausman, Jean-Francois; Guerriero, Gea

    2015-01-01

    Abiotic constraints are a source of concern in agriculture, because they can have a strong impact on plant growth and development, thereby affecting crop yield. The response of plants to abiotic constraints varies depending on the type of stress, on the species and on the organs. Although many studies have addressed different aspects of the plant response to abiotic stresses, only a handful has focused on the role of the cell wall. A targeted approach has been used here to study the expression of cell wall-related genes in different organs of alfalfa plants subjected for four days to three different abiotic stress treatments, namely salt, cold and heat stress. Genes involved in different steps of cell wall formation (cellulose biosynthesis, monolignol biosynthesis and polymerization) have been analyzed in different organs of Medicago sativa L. Prior to this analysis, an in silico classification of dirigent/dirigent-like proteins and class III peroxidases has been performed in Medicago truncatula and M. sativa. The final goal of this study is to infer and compare the expression patterns of cell wall-related genes in response to different abiotic stressors in the organs of an important legume crop. PMID:26193255

  6. Performance constraints and compensation for teleoperation with delay

    NASA Technical Reports Server (NTRS)

    Mclaughlin, J. S.; Staunton, B. D.

    1989-01-01

    A classical control perspective is used to characterize performance constraints and evaluate compensation techniques for teleoperation with delay. Control concepts such as open- and closed-loop performance, stability, and bandwidth yield insight into the delay problem. Teleoperator performance constraints are viewed as an open-loop time delay lag and as a delay-induced closed-loop bandwidth constraint. These constraints are illustrated with a simple analytical tracking example, which is corroborated by a real-time, 'man-in-the-loop' tracking experiment. The experiment also provides insight into those controller characteristics which are unique to a human operator. Predictive displays and feedforward commands are shown to provide open-loop compensation for delay lag. Low-pass filtering of telemetry or feedback signals is interpreted as closed-loop compensation used to maintain a sufficiently low bandwidth for stability. A new closed-loop compensation approach is proposed that uses a reactive (or force feedback) hand controller to restrict system bandwidth by impeding operator inputs.
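
    The delay-induced bandwidth constraint can be sketched with a standard loop-shaping argument (our formulation, not a quotation from the paper): a round-trip delay T contributes phase lag ωT with no attenuation, so the crossover frequency ω_c must satisfy

    ```latex
    \[
      \omega_c\, T \;\le\; \pi \;-\; \phi_{\mathrm{PM}}
      \;-\; \phi_{\mathrm{plant}}(\omega_c),
    \]
    % so for delays on the order of seconds the usable closed-loop bandwidth
    % drops to a fraction of a hertz, motivating the low-pass feedback
    % filtering described above.
    ```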

  7. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used to simulate the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors in thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Put differently, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in both entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  8. "Wood already touched by fire is not hard to set alight": Comment on "Constraints to applying systems thinking concepts in health systems: A regional perspective from surveying stakeholders in Eastern Mediterranean countries".

    PubMed

    Agyepong, Irene Akua

    2015-03-01

    A major constraint to the application of any form of knowledge and principles is the awareness, understanding and acceptance of that knowledge and those principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and about how to make and implement decisions within health systems to maximize desired and minimize undesired effects. A major constraint to applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be awareness and understanding of ST and of how to apply it. This is a fundamental constraint: given the increasing desire to enable the application of ST concepts in LMIC health systems and to understand and evaluate the effects, an essential first step is to enable a widespread and deeper understanding of ST and how to apply it.

  9. Open | SpeedShop: An Open Source Infrastructure for Parallel Performance Analysis

    DOE PAGES

    Schulz, Martin; Galarowicz, Jim; Maghrak, Don; ...

    2008-01-01

    Over the last decades a large number of performance tools have been developed to analyze and optimize high-performance applications. Their acceptance by end users, however, has been slow: each tool alone is often limited in scope and comes with widely varying interfaces and workflow constraints, requiring different changes in the often complex build and execution infrastructure of the target application. We started the Open | SpeedShop project about 3 years ago to overcome these limitations and provide efficient, easy-to-apply, and integrated performance analysis for parallel systems. Open | SpeedShop has two different faces: it provides an interoperable tool set covering the most common analysis steps as well as a comprehensive plugin infrastructure for building new tools. In both cases, the tools can be deployed to large-scale parallel applications using DPCL/Dyninst for distributed binary instrumentation. Further, all tools developed within or on top of Open | SpeedShop are accessible through multiple fully equivalent interfaces, including an easy-to-use GUI as well as an interactive command line interface, reducing the usage threshold for those tools.

  10. Image Reconstruction from Highly Undersampled (k, t)-Space Data with Joint Partial Separability and Sparsity Constraints

    PubMed Central

    Zhao, Bo; Haldar, Justin P.; Christodoulou, Anthony G.; Liang, Zhi-Pei

    2012-01-01

    Partial separability (PS) and sparsity have been previously used to enable reconstruction of dynamic images from undersampled (k, t)-space data. This paper presents a new method to use PS and sparsity constraints jointly for enhanced performance in this context. The proposed method combines the complementary advantages of PS and sparsity constraints using a unified formulation, achieving significantly better reconstruction performance than using either of these constraints individually. A globally convergent computational algorithm is described to efficiently solve the underlying optimization problem. Reconstruction results from simulated and in vivo cardiac MRI data are also shown to illustrate the performance of the proposed method. PMID:22695345
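
    In sketch form (standard notation from the partial separability literature), the PS constraint models the dynamic image as an order-L separable expansion, equivalently a rank-L Casorati matrix, while sparsity penalizes a transform of the same image:

    ```latex
    \[
      \rho(\mathbf{x}, t) \;=\;
      \sum_{\ell=1}^{L} u_{\ell}(\mathbf{x})\, v_{\ell}(t),
      \qquad
      \min_{\rho}\; \|d - \Omega \mathcal{F} \rho\|_2^2
      \;+\; \lambda \|\Psi \rho\|_1
      \quad \text{s.t. } \rho \text{ is PS of order } L,
    \]
    % where Omega is the (k,t)-space sampling operator, F the spatial
    % Fourier transform, and Psi a sparsifying transform (notational
    % assumptions on our part).
    ```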

  11. Premature Mobility of Boulders in Constructed Step-pool River Structures in the Carmel River, CA: The Role of Fish-centric Design Constraints, and Flow on Structural Stability

    NASA Astrophysics Data System (ADS)

    Smith, D. P.; Chow, K.; Luna, L.

    2017-12-01

    The 32 m tall San Clemente Dam (Carmel River, CA) was removed in 2015 to eliminate seismic risk and to improve fish passage for all life stages of steelhead (O. mykiss). Reservoir sediment was sequestered in place, rather than released, and a new 1000 m long channel/floodplain system was constructed to circumvent the stored sediment. The channel comprised a 250 m long meandering low-gradient reach and a 750 m reach with alternating step-pool sections, plane beds, and resting pools. The floodprone surfaces were compacted, wrapped in geotechnical fabric, and vegetated. This study analyzes the geomorphic evolution of the new channel system during its first two years of service, based upon detailed field inspection, SfM photogrammetry, orthophoto analysis, and 2D hydraulic modeling. A significant proportion of the step-pool structures experienced premature mobility, and several reaches of engineered stream banks eroded in the first year. Individual six-tonne boulders were mobilized despite experiencing less than the 3-yr flow. The channel and floodplain were fully repaired following the first year. Strong flows (two 10-yr floods and a 30-yr flood) during the second year catastrophically altered the constructed channel and floodplain. While the low-gradient reach remained intact, each of the original step-pool structures was either completely mobilized and destroyed, buried by gravel, or bypassed by the subsequent channel. Despite the overall structural failure of the constructed channel, the new channel does not block steelhead migration and can serendipitously be considered an ecological success. Step-pool design was constrained by a fish-centric requirement that steps be 1 ft tall or less. Some constructed "resting pools" filled with sediment rather than transporting it. Using fish-centric constraints in the design, rather than strictly fluvial geomorphic principles, may have contributed to the early failure of the step-pool structures and other parts of the system.

  12. System engineering techniques for establishing balanced design and performance guidelines for the advanced telerobotic testbed

    NASA Technical Reports Server (NTRS)

    Zimmerman, W. F.; Matijevic, J. R.

    1987-01-01

    Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.

  13. High-throughput process development: I. Process chromatography.

    PubMed

    Rathore, Anurag S; Bhambure, Rahul

    2014-01-01

    Chromatographic separation serves as "a workhorse" for downstream process development and plays a key role in removal of product-related, host cell-related, and process-related impurities. Complex and poorly characterized raw materials and feed material, low feed concentration, product instability, and poor mechanistic understanding of the processes are some of the critical challenges faced during development of a chromatographic step. Traditional process development is performed as a trial-and-error-based evaluation and often leads to a suboptimal process. A high-throughput process development (HTPD) platform involves the integration of miniaturization, automation, and parallelization, and provides a systematic approach for time- and resource-efficient chromatography process development. Creation of such platforms requires integration of mechanistic knowledge of the process with various statistical tools for data analysis. The relevance of such a platform is high in view of the constraints with respect to time and resources that the biopharma industry faces today. This protocol describes the steps involved in performing HTPD of a process chromatography step. It describes the operation of a commercially available device (PreDictor™ plates from GE Healthcare), which is available in 96-well format with 2 or 6 μL well sizes. We also discuss the challenges that one faces when performing such experiments, as well as possible solutions to alleviate them. Besides describing the operation of the device, the protocol also presents an approach for statistical analysis of the data gathered from such a platform. A case study involving use of the protocol for examining ion-exchange chromatography of granulocyte colony-stimulating factor (GCSF), a therapeutic product, is briefly discussed. This is intended to demonstrate the usefulness of this protocol in generating data that are representative of the data obtained at the traditional lab scale. The agreement in the data is indeed very significant (regression coefficient 0.93). We think that this protocol will be of significant value to those involved in performing high-throughput process development of process chromatography.

  14. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
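
    The additive splitting has the standard IMEX ARK structure (a sketch; the coefficient tables a^E, a^I distinguish ARS232, ARK324, etc.): the stiff acoustic terms f_I are treated implicitly and the remainder f_E explicitly,

    ```latex
    \[
      \frac{dy}{dt} = f_E(y) + f_I(y), \qquad
      Y_i = y^n
          + \Delta t \sum_{j=1}^{i-1} a^{E}_{ij}\, f_E(Y_j)
          + \Delta t \sum_{j=1}^{i}   a^{I}_{ij}\, f_I(Y_j),
    \]
    % with HEVI variants placing only the vertically propagating acoustic
    % dynamics in f_I, so each implicit solve decouples by vertical column.
    ```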

  15. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  16. Form of the compensatory stepping response to repeated laterally directed postural disturbances.

    PubMed

    Hurt, Christopher P; Rosenblatt, Noah J; Grabiner, Mark D

    2011-10-01

    A compensatory stepping response (CSR) is a common strategy to restore dynamic stability in response to a postural disturbance. To date, few studies have investigated the CSR to laterally directed disturbances delivered to subjects during quiet standing. The purpose of this study was to characterize the CSR of younger adults following exposure to a series of similar laterally directed disturbances for which no instructions were given with regard to the recovery response. We hypothesized that, in the absence of externally applied constraints on the recovery response, subjects would be as likely to perform a crossover step as a sidestep sequence (SSS). We further hypothesized that there would be an asymmetry in arm abduction that depended on the disturbance direction. Finally, we were interested in characterizing the effect of practice on the CSR to repeated disturbances. Ten younger adults were exposed to thirty laterally directed platform disturbances that forced a stepping response. Subjects responded primarily with a SSS, which differs from previously reported results. Further, five of the ten subjects used a recovery response that depended on the direction of the disturbance (i.e., left or right). Greater arm abduction was observed for the arm in the direction of the external disturbance compared with the contralateral arm. Lastly, subjects modified their recovery response to this task within 12 disturbances. Taken together, these results suggest that recovery responses to laterally directed disturbances can be quickly modified but can be quite variable between and within subjects.

  17. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  18. Relations between affective music and speech: evidence from dynamics of affective piano performance and speech production.

    PubMed

    Liu, Xiaoluan; Xu, Yi

    2015-01-01

    This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, it uses finger force and articulatory effort as indices of the dynamics of affective piano performance and speech production, respectively. Moreover, for the first time, physical constraints such as piano fingerings and speech articulatory constraints are included because of their potential contribution to different patterns of dynamics. A piano performance experiment and a speech production experiment were conducted in four emotions: anger, fear, happiness and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics while sadness has the lowest dynamics. Fingerings interact with fear in the piano experiment and articulatory constraints interact with anger in the speech experiment, i.e., large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under fear and in speech production under anger. Using production experiments, this study provides the first support for previous perception studies on the relations between affective music and speech. Moreover, it is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics when comparing music performance and speech production, in which motor mechanisms play a crucial role.

  19. Relations between affective music and speech: evidence from dynamics of affective piano performance and speech production

    PubMed Central

    Liu, Xiaoluan; Xu, Yi

    2015-01-01

    This study compares affective piano performance with speech production from the perspective of dynamics: unlike previous research, it uses finger force and articulatory effort as indices of the dynamics of affective piano performance and speech production, respectively. Moreover, for the first time, physical constraints such as piano fingerings and speech articulatory constraints are included because of their potential contribution to different patterns of dynamics. A piano performance experiment and a speech production experiment were conducted in four emotions: anger, fear, happiness and sadness. The results show that in both piano performance and speech production, anger and happiness generally have high dynamics while sadness has the lowest dynamics. Fingerings interact with fear in the piano experiment and articulatory constraints interact with anger in the speech experiment, i.e., large physical constraints produce significantly higher dynamics than small physical constraints in piano performance under fear and in speech production under anger. Using production experiments, this study provides the first support for previous perception studies on the relations between affective music and speech. Moreover, it is the first study to show quantitative evidence for the importance of considering motor aspects such as dynamics when comparing music performance and speech production, in which motor mechanisms play a crucial role. PMID:26217252

  20. Appointment Template Redesign in a Women's Health Clinic Using Clinical Constraints to Improve Service Quality and Efficiency.

    PubMed

    Huang, Y; Verduzco, S

    2015-01-01

    Patient wait time is a critical element of access to care that has long been recognized as a major problem in modern outpatient health care delivery systems. It impacts patient and medical staff productivity, stress, quality and efficiency of medical care, as well as health-care cost and availability. This study was conducted in a Women's Health Clinic. The objective was to improve clinic service quality by redesigning the patient appointment template using clinical constraints. The proposed scheduling template consisted of two key elements: the redesign of appointment types and the determination of the length of time slots using defined constraints. A re-classification technique was used to redesign appointment visit types so as to capture service variation for scheduling purposes. The appointment length was then determined by incorporating clinic constraints or goals, such as patient wait time, physician idle time, overtime, finish time, lunch hours, the time at which the last appointment was scheduled, and the desired number of appointment slots, to converge on the optimal length of appointment slots for each visit type. The redesigned template was implemented and the results indicated a 73% reduction in average patient wait time, from the reported 40 minutes to 11 minutes. The patient no-show rate was reduced by 4 percentage points, from 24% to 20%. The morning session finished, on average, at about 11:50 am and the clinic day at around 4:45 pm. Provider idle time was estimated at about 5 minutes on average, which can be used for charting/documenting patients. This study provides an alternative method of redesigning appointment scheduling templates using only clinical constraints rather than the traditional approach that requires an objective function. The paper also documents the methods step by step in a real clinic setting. The implementation results demonstrated a significant improvement in patient wait time and no-show rate.
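
    The slot-length determination described here lends itself to a small simulation: scan candidate slot lengths and check each against wait-time and idle-time goals. The sketch below does this for a single provider; the arrival model, service-time distribution, no-show rate and targets are all invented for illustration and are not the paper's data.

      import random

      def simulate_day(slot_min, n_slots, mean_service=18.0, no_show=0.20, reps=500):
          """Average patient wait and provider idle time (minutes) for one
          candidate slot length, under hypothetical arrival/service assumptions."""
          waits, idles = [], []
          for _ in range(reps):
              free_at = wait = idle = 0.0
              served = 0
              for k in range(n_slots):
                  if random.random() < no_show:
                      continue                     # no-show leaves the slot empty
                  arrive = k * slot_min            # punctual arrivals assumed
                  start = max(arrive, free_at)
                  wait += start - arrive
                  idle += max(0.0, arrive - free_at)
                  free_at = start + random.expovariate(1.0 / mean_service)
                  served += 1
              if served:
                  waits.append(wait / served)
              idles.append(idle)
          return sum(waits) / len(waits), sum(idles) / len(idles)

      for slot in (15, 20, 25, 30):                # candidate appointment lengths
          w, i = simulate_day(slot, n_slots=14)
          print(f"slot {slot:2d} min: avg wait {w:5.1f} min, provider idle {i:5.1f} min")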

  1. Appointment Template Redesign in a Women’s Health Clinic Using Clinical Constraints to Improve Service Quality and Efficiency

    PubMed Central

    Huang, Y.; Verduzco, S.

    2015-01-01

    Background: Patient wait time is a critical element of access to care that has long been recognized as a major problem in modern outpatient health care delivery systems. It impacts patient and medical staff productivity, stress, quality and efficiency of medical care, as well as health-care cost and availability. Objectives: This study was conducted in a Women’s Health Clinic; the objective was to improve clinic service quality by redesigning the patient appointment template using clinical constraints. Methods: The proposed scheduling template consisted of two key elements: the redesign of appointment types and the determination of the length of time slots using defined constraints. A re-classification technique was used to redesign appointment visit types so as to capture service variation for scheduling purposes. The appointment length was then determined by incorporating clinic constraints or goals, such as patient wait time, physician idle time, overtime, finish time, lunch hours, the time at which the last appointment was scheduled, and the desired number of appointment slots, to converge on the optimal length of appointment slots for each visit type. Results: The redesigned template was implemented and the results indicated a 73% reduction in average patient wait time, from the reported 40 minutes to 11 minutes. The patient no-show rate was reduced by 4 percentage points, from 24% to 20%. The morning session finished, on average, at about 11:50 am and the clinic day at around 4:45 pm. Provider idle time was estimated at about 5 minutes on average, which can be used for charting/documenting patients. Conclusions: This study provides an alternative method of redesigning appointment scheduling templates using only clinical constraints rather than the traditional approach that requires an objective function. The paper also documents the methods step by step in a real clinic setting. The implementation results demonstrated a significant improvement in patient wait time and no-show rate. PMID:26171075

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement since they require either analytical Hessians or the solution of nonlinear systems from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or solving constraint systems. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift less than 1% on a 50 ps simulation.

  3. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
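
    A minimal sketch of the linearly implicit θ-method the abstract contrasts with Newton-type iteration, applied to a 1D heat equation standing in for the transformed Black-Scholes operator; the grid sizes and the projection comment are illustrative assumptions only.

      import numpy as np

      # theta-method for u_t = u_xx on (0,1) with zero boundary values.
      n, dt, theta = 50, 1e-3, 0.5      # theta = 0.5: Crank-Nicolson; 1.0: implicit Euler
      dx = 1.0 / (n + 1)
      x = np.linspace(dx, 1.0 - dx, n)
      L = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
           + np.diag(np.ones(n - 1), 1)) / dx**2

      A = np.eye(n) - theta * dt * L           # frozen matrices: each step is a
      B = np.eye(n) + (1.0 - theta) * dt * L   # linear solve, no Newton iteration
      u = np.sin(np.pi * x)                    # initial data
      for _ in range(200):
          u = np.linalg.solve(A, B @ u)
          # for an American option one would project here, e.g.
          # u = np.maximum(u, payoff), to impose the positivity/obstacle constraint

      print(u.max(), np.exp(-np.pi**2 * 0.2))  # numerical vs exact decay at t = 0.2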

  4. STEPS: lean thinking, theory of constraints and identifying bottlenecks in an emergency department.

    PubMed

    Ryan, A; Hunter, K; Cunningham, K; Williams, J; O'Shea, H; Rooney, P; Hickey, F

    2013-04-01

    This study aimed to identify the bottlenecks in patients' journeys through an emergency department (ED). For each stage of the patient journey, the average times were compared between two groups divided according to the four-hour time frame, and disproportionate delays were identified using a significance test. These bottlenecks were evaluated with reference to a lean thinking value-stream map and the five focusing steps of the theory of constraints. A total of 434 (72.5%) ED patients were tracked over one week. Logistic regression showed that patients who had radiological tests, blood tests or who were admitted were 4.4, 4.1 and 7.7 times more likely, respectively, to stay over four hours in the ED than those who didn't. The stages that were significantly delayed were the time spent waiting for radiology (p = 0.001), waiting for the in-patient team (p = 0.004), waiting for a bed (p < 0.001) and ED doctor turnaround time (p < 0.001).

  5. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) The kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per one RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
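
    The geometrical constraint (ii) is easy to state in code: one adsorption attempt per RSA time step, accepted only if the new disc overlaps no previously adsorbed disc. The sketch below runs plain RSA on a unit square; the coupling of attempt number to physical time through bulk diffusion, the paper's main point, is left out, and the radius and step count are arbitrary.

      import math
      import random

      r, steps = 0.03, 20000
      adsorbed = []                       # centres of discs already on the surface
      for _ in range(steps):              # one attempt per RSA time step
          x, y = random.random(), random.random()
          if all((x - a)**2 + (y - b)**2 >= (2.0 * r)**2 for a, b in adsorbed):
              adsorbed.append((x, y))     # attempt succeeds: no overlap

      coverage = len(adsorbed) * math.pi * r**2
      print(f"{len(adsorbed)} discs adsorbed, surface coverage {coverage:.3f}")
      # long runs approach the RSA jamming coverage for discs (about 0.547)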

  6. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented that applies the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then detail the algorithms embodied in the code EXSHALL, particularly algorithms related to the efficiency and stability of the T-Z scheme and the quadratic constraint restoration method, which is based on a variational approach. In particular, we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code, with emphasis on the algorithms implemented, and present flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height and velocity fields.
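
    Of the filters listed, the Shapiro filter is compact enough to sketch. Below is the lowest-order (1-2-1) version on a periodic grid, written in Python although EXSHALL itself is FORTRAN and may apply a higher-order variant; the grid size and test signal are arbitrary.

      import numpy as np

      def shapiro(u, passes=1):
          # lowest-order Shapiro smoothing: eliminates the two-grid-interval
          # wave exactly while only weakly damping well-resolved waves
          for _ in range(passes):
              u = 0.25 * (np.roll(u, 1) + 2.0 * u + np.roll(u, -1))
          return u

      n = 64
      x = np.arange(n)
      smooth = np.sin(2.0 * np.pi * x / n)
      u = smooth + (-1.0) ** x            # well-resolved wave plus 2-dx noise
      print(np.abs(shapiro(u) - smooth).max())  # noise removed, signal nearly intact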

  7. Solving the MHD equations by the space time conservation element and solution element method

    NASA Astrophysics Data System (ADS)

    Zhang, Moujin; John Yu, S.-T.; Henry Lin, S.-C.; Chang, Sin-Chung; Blankson, Isaiah

    2006-05-01

    We apply the space-time conservation element and solution element (CESE) method to solve the ideal MHD equations with special emphasis on satisfying the divergence-free constraint of the magnetic field, i.e., ∇ · B = 0. In the setting of the CESE method, four approaches are employed: (i) the original CESE method without any additional treatment, (ii) a simple corrector procedure to update the spatial derivatives of the magnetic field B after each time-marching step to enforce ∇ · B = 0 at all mesh nodes, (iii) a constrained-transport method using a special staggered mesh to calculate the magnetic field B, and (iv) a projection method that solves a Poisson equation after each time-marching step. To demonstrate the capabilities of these methods, two benchmark MHD flows are calculated: (i) a rotated one-dimensional MHD shock tube problem and (ii) an MHD vortex problem. The results show no differences between the different approaches, and all results compare favorably with previously reported data.
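
    Approach (iv) is the easiest to sketch in isolation: solve a Poisson equation for a potential whose gradient removes the divergence. The fragment below does this spectrally on a doubly periodic grid with an invented random field; the CESE discretization itself is not reproduced.

      import numpy as np

      n = 64
      k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # wavenumbers on [0,1)^2
      kx, ky = np.meshgrid(k, k, indexing="ij")
      Bx, By = np.random.default_rng(0).standard_normal((2, n, n))

      Bxh, Byh = np.fft.fft2(Bx), np.fft.fft2(By)
      div_h = 1j * kx * Bxh + 1j * ky * Byh            # Fourier transform of div B
      k2 = kx**2 + ky**2
      k2[0, 0] = 1.0                                   # avoid 0/0 at the mean mode
      phi_h = -div_h / k2                              # solves lap(phi) = div B
      Bxh -= 1j * kx * phi_h                           # B <- B - grad(phi)
      Byh -= 1j * ky * phi_h

      div = np.fft.ifft2(1j * kx * Bxh + 1j * ky * Byh).real
      print(f"max |div B| after projection: {np.abs(div).max():.2e}")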

  8. Robust fuzzy control subject to state variance and passivity constraints for perturbed nonlinear systems with multiplicative noises.

    PubMed

    Chang, Wen-Jer; Huang, Bo-Jyun

    2014-11-01

    The multi-constrained robust fuzzy control problem is investigated in this paper for perturbed continuous-time nonlinear stochastic systems. The nonlinear system considered is represented by a Takagi-Sugeno fuzzy model with perturbations and state multiplicative noises. The multiple performance constraints considered include stability, passivity and individual state variance constraints. Lyapunov stability theory is employed to derive sufficient conditions for achieving these performance constraints. By solving these sufficient conditions, the contribution of this paper is to develop a parallel distributed compensation based robust fuzzy control approach that satisfies multiple performance constraints for perturbed nonlinear systems with multiplicative noises. Finally, a numerical example for the control of a perturbed inverted pendulum system is provided to illustrate the applicability and effectiveness of the proposed multi-constrained robust fuzzy control method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Energy Performance Monitoring and Optimization System for DoD Campuses

    DTIC Science & Technology

    2014-02-01

    The EPMO system exceeded the energy consumption reduction target of 20% and improved occupant thermal comfort by reducing the number of instances outside thermal comfort constraints, while addressing plant efficiency in the same framework [30-33]. In this framework, optimization is performed in conjunction with information such as thermal comfort constraints, equipment constraints, and energy performance objectives.

  10. Support vector methods for survival analysis: a comparison between ranking and regression approaches.

    PubMed

    Van Belle, Vanya; Pelckmans, Kristiaan; Van Huffel, Sabine; Suykens, Johan A K

    2011-10-01

    To compare and evaluate ranking, regression and combined machine learning approaches for the analysis of survival data. The literature describes two approaches based on support vector machines to deal with censored observations. In the first approach the key idea is to rephrase the task as a ranking problem via the concordance index, a problem which can be solved efficiently in a context of structural risk minimization and convex optimization techniques. In the second approach, one uses a regression approach, dealing with censoring by means of inequality constraints. The goal of this paper is then twofold: (i) introducing a new model combining the ranking and regression strategies, which retains the link with existing survival models such as the proportional hazards model via transformation models; and (ii) comparing the three techniques on 6 clinical and 3 high-dimensional datasets and discussing the relevance of these techniques compared with classical approaches for survival data. We compare svm-based survival models based on ranking constraints, on regression constraints, and on both ranking and regression constraints. The performance of the models is compared by means of three different measures: (i) the concordance index, measuring the model's discriminating ability; (ii) the logrank test statistic, indicating whether patients with a prognostic index lower than the median prognostic index have significantly different survival than patients with a prognostic index higher than the median; and (iii) the hazard ratio after normalization to restrict the prognostic index between 0 and 1. Our results indicate a significantly better performance for models including regression constraints over models based only on ranking constraints. This work gives empirical evidence that svm-based models using regression constraints perform significantly better than svm-based models based on ranking constraints. Our experiments show a comparable performance for methods including only regression or both regression and ranking constraints on clinical data. On high-dimensional data, the former model performs better. However, this approach does not have a theoretical link with standard statistical models for survival data. This link can be made by means of transformation models when ranking constraints are included. Copyright © 2011 Elsevier B.V. All rights reserved.
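
    The first of the three performance measures, the concordance index, can be written down directly: over all comparable pairs under right censoring, count the fraction ordered correctly by the prognostic score. The sketch below uses the common conventions that a higher score means higher risk and that score ties count one half; the toy numbers are invented.

      def concordance_index(times, events, scores):
          num = den = 0.0
          n = len(times)
          for i in range(n):
              for j in range(n):
                  # a pair is comparable if the shorter time is an observed event
                  if times[i] < times[j] and events[i] == 1:
                      den += 1
                      if scores[i] > scores[j]:
                          num += 1.0          # correctly ranked
                      elif scores[i] == scores[j]:
                          num += 0.5          # tie convention
          return num / den

      t = [5, 8, 12, 3, 9]                    # survival times
      e = [1, 0, 1, 1, 0]                     # 1 = event observed, 0 = censored
      s = [2.1, 1.0, 0.4, 3.3, 0.9]           # prognostic index (higher = riskier)
      print(concordance_index(t, e, s))       # 1.0 here: perfectly concordant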

  11. Dynamical aspects of behavior generation under constraints

    PubMed Central

    Harter, Derek; Achunala, Srinivas

    2007-01-01

    Dynamic adaptation is a key feature of brains helping to maintain the quality of their performance in the face of increasingly difficult constraints. How to achieve high-quality performance under demanding real-time conditions is an important question in the study of cognitive behaviors. Animals and humans are embedded in and constrained by their environments. Our goal is to improve the understanding of the dynamics of the interacting brain–environment system by studying human behaviors when completing constrained tasks and by modeling the observed behavior. In this article we present results of experiments with humans performing tasks on the computer under variable time and resource constraints. We compare various models of behavior generation in order to describe the observed human performance. Finally we speculate on mechanisms how chaotic neurodynamics can contribute to the generation of flexible human behaviors under constraints. PMID:19003514

  12. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    PubMed Central

    Aburahma, Mona Hassan

    2015-01-01

    Like most of the pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant increase in students’ enrollment annually due to the large youth population, accompanied with the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures in relation to student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported feebleness underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints, nevertheless, it can be applied in lectures presented in any educational environment to improve active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures as well as its limitations and ways to overcome them are presented briefly. The impact of applying this model on students’ engagement and learning is currently being investigated. PMID:28975906

  13. Incorporating Demand and Supply Constraints into Economic Evaluations in Low‐Income and Middle‐Income Countries

    PubMed Central

    Mangham‐Jefferies, Lindsay; Gomez, Gabriela B.; Pitt, Catherine; Foster, Nicola

    2016-01-01

    Global guidelines for new technologies are based on cost and efficacy data from a limited number of trial locations. Country-level decision makers need to consider whether the cost-effectiveness analyses used to inform global guidelines are sufficient for their situation or whether to use models that adjust cost-effectiveness results to take into account setting-specific epidemiological and cost heterogeneity. However, demand and supply constraints will also impact cost-effectiveness by influencing the standard of care and the use and implementation of any new technology. These constraints may also vary substantially by setting. We present two case studies of economic evaluations of the introduction of new diagnostics for malaria and tuberculosis control. These case studies are used to analyse how the scope of economic evaluations of each technology expanded to account for, and then address, demand and supply constraints over time. We use these case studies to inform a conceptual framework that can be used to explore the characteristics of intervention complexity and the influence of demand and supply constraints. Finally, we describe a number of feasible steps for researchers who wish to apply our framework in cost-effectiveness analyses. PMID:26786617

  14. [Addictions: Motivated or forced care].

    PubMed

    Cottencin, Olivier; Bence, Camille

    2016-12-01

    Patients presenting with addictions are often obliged to consult. This constraint can be explicit (partner, children, parents, doctor, police, justice) or implicit (for their children, for their families, or for their health). Thus, beyond the paradox of caring for subjects who do not ask for treatment, the caregiver also faces a double bind: being seen either as an enforcer of the social order or as a helper of patients. The transtheoretical model of change is complex, showing that change is neither fixed in time nor perpetual for a given individual. The model accommodates ambivalence, resistance and even relapse, yet it still treats constraint more as a brake than as an effective tool. The therapist must have adequate communication tools to enable everyone (coerced or not) to understand that involvement in care will allow him or her to regain free will, even if it took coercion to get there. In this article we detail the first steps with a patient presenting with an addiction: looking for constraint (implicit or explicit), working with constraint, avoiding creating resistance ourselves, and making constraint a powerful motivator for change. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Simplicity constraints: A 3D toy model for loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Charles, Christoph

    2018-05-01

    In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory by gauge unfixing can be defined out of it, completely equivalent to the standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model as it could help as a stepping stone to define full-fledged covariant loop quantum gravity.

  16. Performance Constraints in Early Language: The Case of Subjectless Sentences.

    ERIC Educational Resources Information Center

    Gerken, LouAnn

    A discussion of English-speaking children's use of subjectless sentences contrasts the competence and performance explanations for the phenomenon. In particular, it reviews evidence indicating that the phenomenon does not reflect linguistic competence, but rather performance constraints. A tentative model of children's production is presented…

  17. Performance analysis of cross-layer design with average PER constraint over MIMO fading channels

    NASA Astrophysics Data System (ADS)

    Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin

    2015-12-01

    In this article, a cross-layer design (CLD) scheme for multiple-input multiple-output systems with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is evaluated over a wireless Rayleigh fading channel. Under the constraints of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With the different thresholds available, analytical expressions of the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for CLD to improve system performance. It is shown that numerical simulations of the average PER and SE are consistent with the theoretical analysis, and that the developed CLD with an average PER constraint can meet the target PER requirement and performs better than the conventional CLD with an instantaneous PER constraint. In particular, the CLD based on the MOE method can appreciably increase the system SE and greatly reduce the impact of feedback delay.

  18. A universal constraint-based formulation for freely moving immersed bodies in fluids

    NASA Astrophysics Data System (ADS)

    Patankar, Neelesh A.

    2012-11-01

    Numerical simulation of moving immersed bodies in fluids is now practiced routinely. A variety of variants of these approaches have been published, most of which rely on using a background mesh for the fluid equations and tracking the body using Lagrangian points. In this talk, generalized constraint-based governing equations will be presented that provide a unified framework for various immersed body techniques. The key idea that is common to these methods is to assume that the entire fluid-body domain is a ``fluid'' and then to constrain the body domain to move in accordance with its governing equations. The immersed body can be rigid or deforming. The governing equations are developed so that they are independent of the nature of temporal or spatial discretization schemes. Specific choices of time stepping and spatial discretization then lead to techniques developed in prior literature ranging from freely moving rigid to elastic self-propelling bodies. To simulate Brownian systems, thermal fluctuations can be included in the fluid equations via additional random stress terms. Solving the fluctuating hydrodynamic equations coupled with the immersed body results in the Brownian motion of that body. The constraint-based formulation leads to fractional time stepping algorithms a la Chorin-type schemes that are suitable for fast computations of rigid or self-propelling bodies whose deformation kinematics are known. Support from NSF is gratefully acknowledged.

  19. Self-organized Evaluation of Dynamic Hand Gestures for Sign Language Recognition

    NASA Astrophysics Data System (ADS)

    Buciu, Ioan; Pitas, Ioannis

    Two main theories exist with respect to face encoding and representation in the human visual system (HVS). The first one refers to the dense (holistic) representation of the face, where faces have "holon"-like appearance. The second one claims that a more appropriate face representation is given by a sparse code, where only a small fraction of the neural cells corresponding to face encoding is activated. Theoretical and experimental evidence suggest that the HVS performs face analysis (encoding, storing, face recognition, facial expression recognition) in a structured and hierarchical way, where both representations have their own contribution and goal. According to neuropsychological experiments, it seems that encoding for face recognition, relies on holistic image representation, while a sparse image representation is used for facial expression analysis and classification. From the computer vision perspective, the techniques developed for automatic face and facial expression recognition fall into the same two representation types. Like in Neuroscience, the techniques which perform better for face recognition yield a holistic image representation, while those techniques suitable for facial expression recognition use a sparse or local image representation. The proposed mathematical models of image formation and encoding try to simulate the efficient storing, organization and coding of data in the human cortex. This is equivalent with embedding constraints in the model design regarding dimensionality reduction, redundant information minimization, mutual information minimization, non-negativity constraints, class information, etc. The presented techniques are applied as a feature extraction step followed by a classification method, which also heavily influences the recognition results.

  20. Effect of Task Constraint on Reaching Performance in Children with Spastic Diplegic Cerebral Palsy

    ERIC Educational Resources Information Center

    Ju, Yun-Huei; You, Jia-Yuan; Cherng, Rong-Ju

    2010-01-01

    The purposes of the study were to examine the effect of task constraint on the reaching performance in children with spastic cerebral palsy (CP) and to examine the correlations between the reaching performance and postural control. Eight children with CP and 16 typically developing (TD) children participated in the study. They performed a…

  1. A proto-Data Processing Center for LISA

    NASA Astrophysics Data System (ADS)

    Cavet, Cécile; Petiteau, Antoine; Le Jeune, Maude; Plagnol, Eric; Marin-Martholaz, Etienne; Bayle, Jean-Baptiste

    2017-05-01

    Preparation for the LISA project requires studying and defining a new data analysis framework, capable of dealing with highly heterogeneous CPU needs and of exploiting emergent information technologies. In this context, a prototype of the mission’s Data Processing Center (DPC) has been initiated. The DPC is designed to efficiently manage computing constraints and to offer a common infrastructure to which the whole collaboration can contribute development work. Several tools such as continuous integration (CI) have already been delivered to the collaboration and are presently used for simulations and performance studies. This article presents the progress made on this collaborative environment and discusses possible next steps towards an on-demand computing infrastructure. This activity is supported by CNES as part of the French contribution to LISA.

  2. Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

    NASA Astrophysics Data System (ADS)

    Kestener, Pierre

    2017-10-01

    RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) which drops the adaptive mesh refinement (AMR) features to optimize 3D uniform grid algorithms for modern graphics processing units (GPUs), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI implementation of a second-order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and parallel-netcdf.

  3. GEM detector development for tokamak plasma radiation diagnostics: SXR poloidal tomography

    NASA Astrophysics Data System (ADS)

    Chernyshova, Maryna; Malinowski, Karol; Ziółkowski, Adam; Kowalska-Strzeciwilk, Ewa; Czarski, Tomasz; Poźniak, Krzysztof T.; Kasprowicz, Grzegorz; Zabołotny, Wojciech; Wojeński, Andrzej; Kolasiński, Piotr; Krawczyk, Rafał D.

    2015-09-01

    The increased attention to tungsten is related to the fact that it has become the main candidate for the plasma-facing material in ITER and future fusion reactors. The work presented here concerns studies of the influence of W on plasma performance, developing new detectors based on Gas Electron Multiplier (GEM) technology for tomographic studies of tungsten transport in ITER-oriented tokamaks, e.g. the WEST project. It presents the current stage of the design and development of a cylindrically bent SXR GEM detector for horizontal port implementation. A concept to overcome the influence of constraints on the vertical port is also presented. It is expected that the detecting unit under development, when implemented, will contribute to the safe operation of tokamaks, bringing the creation of sustainable nuclear fusion reactors a step closer.

  4. The promise of advanced technology for future air transports

    NASA Technical Reports Server (NTRS)

    Bower, R. E.

    1978-01-01

    Progress in all-weather 4-D navigation and wake vortex attenuation research is discussed, and the concept of time-based metering of aircraft is recommended for increased emphasis. The far-term advances in aircraft efficiency were shown to be skin friction reduction and advanced configuration types. The promise of very large aircraft, possibly all-wing aircraft, is discussed, as is an advanced concept for an aerial relay transportation system. Very significant technological developments were identified that can improve supersonic transport performance and reduce noise. The hypersonic transport was proposed as the ultimate step in air transportation in the atmosphere. Progress in the key technology areas of propulsion and structures was reviewed. Finally, the impact of alternate fuels on future air transports was considered and shown not to be a growth constraint.

  5. Variations in task constraints shape emergent performance outcomes and complexity levels in balancing.

    PubMed

    Caballero Sánchez, Carla; Barbado Murillo, David; Davids, Keith; Moreno Hernández, Francisco J

    2016-06-01

    This study investigated the extent to which specific interacting constraints of performance might increase or decrease the emergent complexity in a movement system, and whether this could affect the relationship between observed movement variability and the central nervous system's capacity to adapt to perturbations during balancing. Fifty-two healthy volunteers performed eight trials where different performance constraints were manipulated: task difficulty (three levels) and visual biofeedback conditions (with and without the center of pressure (COP) displacement and a target displayed). Balance performance was assessed using COP-based measures: mean velocity magnitude (MVM) and bivariate variable error (BVE). To assess the complexity of COP, fuzzy entropy (FE) and detrended fluctuation analysis (DFA) were computed. ANOVAs showed that MVM and BVE increased when task difficulty increased. During biofeedback conditions, individuals showed higher MVM but lower BVE at the easiest level of task difficulty. Overall, higher FE and lower DFA values were observed when biofeedback was available. On the other hand, FE reduced and DFA increased as difficulty level increased, in the presence of biofeedback. However, when biofeedback was not available, the opposite trend in FE and DFA values was observed. Regardless of changes to task constraints and the variable investigated, balance performance was positively related to complexity in every condition. Data revealed how specificity of task constraints can result in an increase or decrease in complexity emerging in a neurobiological system during balance performance.
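
    Of the two complexity measures, DFA is the shorter to sketch: integrate the centred series, detrend it in windows of increasing size, and read the scaling exponent off a log-log fit. The implementation below uses plain first-order detrending with arbitrary window sizes, not necessarily the exact variant used in the study.

      import numpy as np

      def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
          y = np.cumsum(x - np.mean(x))            # integrated profile
          F = []
          for n in scales:
              f2 = []
              for i in range(len(y) // n):         # non-overlapping windows
                  seg = y[i * n:(i + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)
                  f2.append(np.mean((seg - trend) ** 2))
              F.append(np.sqrt(np.mean(f2)))
          return np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope = alpha

      rng = np.random.default_rng(1)
      print(dfa_alpha(rng.standard_normal(4096)))  # ~0.5 expected for white noise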

  6. Updated tomographic analysis of the integrated Sachs-Wolfe effect and implications for dark energy

    NASA Astrophysics Data System (ADS)

    Stölzner, Benjamin; Cuoco, Alessandro; Lesgourgues, Julien; Bilicki, Maciej

    2018-03-01

    We derive updated constraints on the integrated Sachs-Wolfe (ISW) effect through cross-correlation of the cosmic microwave background with galaxy surveys. We improve with respect to similar previous analyses in several ways. First, we use the most recent versions of extragalactic object catalogs, SDSS DR12 photometric redshift (photo-z ) and 2MASS Photo-z data sets, as well as those employed earlier for ISW, SDSS QSO photo-z and NVSS samples. Second, we use for the first time the WISE × SuperCOSMOS catalog, which allows us to perform an all-sky analysis of the ISW up to z ˜0.4 . Third, thanks to the use of photo-z s , we separate each data set into different redshift bins, deriving the cross-correlation in each bin. This last step leads to a significant improvement in sensitivity. We remove cross-correlation between catalogs using masks which mutually exclude common regions of the sky. We use two methods to quantify the significance of the ISW effect. In the first one, we fix the cosmological model, derive linear galaxy biases of the catalogs, and then evaluate the significance of the ISW using a single parameter. In the second approach we perform a global fit of the ISW and of the galaxy biases varying the cosmological model. We find significances of the ISW in the range 4.7 - 5.0 σ thus reaching, for the first time in such an analysis, the threshold of 5 σ . Without the redshift tomography we find a significance of ˜4.0 σ , which shows the importance of the binning method. Finally we use the ISW data to infer constraints on the dark energy redshift evolution and equation of state. We find that the redshift range covered by the catalogs is still not optimal to derive strong constraints, although this goal will be likely reached using future datasets such as from Euclid, LSST, and SKA.

  7. FINAL MASTER PLAN FOR STELLA, MISSOURI

    EPA Science Inventory

    The application of sustainability to place is the outcome of responding to human needs and expectations within economic, social, and environmental constraints and the desired performance of these systems. These constraints and performance requirements provide a way ...

  8. Reformulating Constraints for Compilability and Efficiency

    NASA Technical Reports Server (NTRS)

    Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin

    1992-01-01

    KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hillclimbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability-- Can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) Efficiency-- If the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.

  9. An Optimal Method for Detecting Internal and External Intrusion in MANET

    NASA Astrophysics Data System (ADS)

    Rafsanjani, Marjan Kuchaki; Aliahmadipour, Laya; Javidi, Mohammad M.

    Mobile Ad hoc Networks (MANETs) are formed by a set of mobile hosts which communicate among themselves through radio waves. The hosts establish infrastructure and cooperate to forward data in a multi-hop fashion without a central administration. Due to their communication type and resource constraints, MANETs are vulnerable to diverse types of attacks and intrusions. In this paper, we propose a method for preventing internal intrusion and detecting external intrusion in mobile ad hoc networks using game theory. One way to reduce the resource consumption of external intrusion detection is to elect a leader for each cluster to provide the intrusion detection service to the other nodes in its cluster; we call this moderate mode. Moderate mode is only suitable when the probability of attack is low. Once the probability of attack is high, victim nodes should launch their own IDS to detect and thwart intrusions; we call this robust mode. The leader should be neither malicious nor selfish and must detect external intrusion in its cluster at minimum cost. Our proposed method has three steps: the first step builds trust relationships between nodes and estimates a trust value for each node to prevent internal intrusion; in the second step we propose an optimal method for leader election using the trust values; and in the third step we find the threshold value for notifying the victim node to launch its IDS once the probability of attack exceeds that value. In the first and third steps we apply Bayesian game theory. By using game theory, trust values and an honest leader, our method can effectively improve network security and performance and reduce resource consumption.

  10. Mobility and Position Error Analysis of a Complex Planar Mechanism with Redundant Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Qipeng; Li, Gangyan

    2018-03-01

    Mechanisms with redundant constraints are now widely created and have attracted much attention for their merits. This paper analyzes the role of redundant constraints in a mechanical system. An analysis method for planar linkages with a repetitive structure is proposed to obtain the number and type of constraints. According to differences in application and constraint characteristics, redundant constraints are divided into theoretical planar redundant constraints and space-planar redundant constraints, and a formula for calculating the number of redundant constraints and a method for judging their type are derived. A complex mechanism with redundant constraints is then analyzed to determine the influence of redundant constraints on mechanical performance. Combining theoretical derivation with simulation, an analysis method is put forward for the position error of complex mechanisms with redundant constraints, pointing the way toward eliminating or reducing the influence of redundant constraints.

  11. Gas Chromatographic Determination of Fatty Acid Compositions.

    ERIC Educational Resources Information Center

    Heinzen, Horacio; And Others

    1985-01-01

    Describes an experiment that: (1) has a derivatization step using readily available reagents; (2) requires limited manipulative skills, centering attention on methodology; (3) can be completed within the time constraints of a normal laboratory period; and (4) investigates materials that are easy to acquire and are of great technical/biological…

  12. Opportunity Foregone: Education in Brazil.

    ERIC Educational Resources Information Center

    Birdsall, Nancy, Ed.; Sabot, Richard H., Ed.

    The studies presented in this volume help readers to understand the constraints faced in addressing the key problems within the Brazilian education system. Steps to address the issues and benefits to be gained by addressing those issues are discussed. Forty-two authors reiterate that the success of Brazil's education reform will have an important…

  13. Stoichiometric network constraints on xylose metabolism by recombinant Saccharomyces cerevisiae

    Treesearch

    Yong-Su Jin; Thomas W. Jeffries

    2004-01-01

    Metabolic pathway engineering is constrained by the thermodynamic and stoichiometric feasibility of enzymatic activities of introduced genes. Engineering of xylose metabolism in Saccharomyces cerevisiae has focused on introducing genes for the initial xylose assimilation steps from Pichia stipitis, a xylose-fermenting yeast, into S. cerevisiae, a yeast traditionally...

  14. Theory of constraints for publicly funded health systems.

    PubMed

    Sadat, Somayeh; Carter, Michael W; Golden, Brian

    2013-03-01

    Originally developed in the context of publicly traded for-profit companies, theory of constraints (TOC) improves system performance through leveraging the constraint(s). While the theory seems to be a natural fit for resource-constrained publicly funded health systems, there is a lack of literature addressing the modifications required to adopt TOC and define the goal and performance measures. This paper develops a system dynamics representation of the classical TOC's system-wide goal and performance measures for publicly traded for-profit companies, which forms the basis for developing a similar model for publicly funded health systems. The model is then expanded to include some of the factors that affect system performance, providing a framework to apply TOC's process of ongoing improvement in publicly funded health systems. Future research is required to more accurately define the factors affecting system performance and populate the model with evidence-based estimates for various parameters in order to use the model to guide TOC's process of ongoing improvement.

  15. Hot hands, cold feet? Investigating effects of interacting constraints on place kicking performance at the 2015 Rugby Union World Cup.

    PubMed

    Pocock, Chris; Bezodis, Neil E; Davids, Keith; North, Jamie S

    2018-06-23

    Place kicks in Rugby Union present valuable opportunities to score points outside the spatiotemporal dynamics of open play but are executed under varying performance constraints. We analysed effects of specific task constraints and relevant contextual factors on place kick performance in the 2015 Rugby Union World Cup. Data were collected from television broadcasts for each place kick. In addition to kick outcomes, contextual factors, including time of the kick in the match, score margin at the time of the kick, and outcome of the kicker's previous kick, were recorded. Effects of spatial task constraints were analysed for each kick, using distance (m) and angle (°) of the kick to the goalposts. A binomial logistic regression model revealed that distance from, and angle to, the goalposts were significant predictors of place kick outcome. Furthermore, the success percentage of kickers who missed their previous kick was 7% lower than those who scored their previous kick. Place kick success percentage in the 10 minutes before half-time was 8% lower than the mean tournament success percentage, which was 75% (95% CI 71-78%). The highest kick success percentage was recorded when scores were level (83%; 95% CI 72-91%). Our data highlighted how subtle changes in task constraints and contextual factors can influence performance outcomes in elite performers in international competition. Fluctuations in place kick success suggested that individual constraints, such as thoughts, emotions and fatigue, induced during competition, could interact with perceptions to influence emergent performance behaviours.
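
    The binomial logistic regression reported here is straightforward to reproduce in outline: model kick success as a function of distance and angle and inspect the fitted coefficients. The data below are synthetic (the tournament data set is not reproduced), and the generating coefficients are invented purely so the example runs.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n = 600
      dist = rng.uniform(20.0, 55.0, n)            # kick distance (m)
      angle = rng.uniform(0.0, 45.0, n)            # angle to the goalposts (deg)
      logit = 4.0 - 0.08 * dist - 0.03 * angle     # invented true model
      success = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

      model = LogisticRegression().fit(np.column_stack([dist, angle]), success)
      print("coefficients (distance, angle):", model.coef_[0])
      print("P(success | 30 m, 15 deg):", model.predict_proba([[30.0, 15.0]])[0, 1])

    Negative fitted coefficients on both predictors recover the pattern in the abstract: longer and wider kicks are less likely to succeed.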

  16. Effects of modified constraint-induced movement therapy on reach-to-grasp movements and functional performance after chronic stroke: a randomized controlled study.

    PubMed

    Lin, K-C; Wu, C-Y; Wei, T-H; Lee, C-Y; Liu, J-S

    2007-12-01

    To evaluate changes in (1) motor control characteristics of the hemiparetic hand during the performance of a functional reach-to-grasp task and (2) functional performance of daily activities in patients with stroke treated with modified constraint-induced movement therapy. Two-group randomized controlled trial with pretreatment and posttreatment measures. Rehabilitation clinics. Thirty-two chronic stroke patients (21 men, 11 women; mean age=57.9 years, range=43-81 years) 13-26 months (mean 16.3 months) after onset of a first-ever cerebrovascular accident. Thirty-two patients were randomized to receive modified constraint-induced movement therapy (restraint of the unaffected limb combined with intensive training of the affected limb) or traditional rehabilitation for three weeks. Kinematic analysis was used to assess motor control characteristics as patients reached to grasp a beverage can. Functional outcomes were evaluated using the Motor Activity Log and Functional Independence Measure. There were moderate and significant effects of modified constraint-induced movement therapy on some aspects of motor control of reach-to-grasp and on functional ability. The modified constraint-induced movement therapy group preplanned reaching and grasping (P=0.018) more efficiently and depended more on the feedforward control of reaching (P=0.046) than did the traditional rehabilitation group. The modified constraint-induced movement therapy group also showed significantly improved functional performance on the Motor Activity Log (P<0.0001) and the Functional Independence Measure (P=0.016). In addition to improving functional use of the affected arm and daily functioning, modified constraint-induced movement therapy improved motor control strategy during goal-directed reaching, a possible mechanism for the improved movement performance of stroke patients undergoing this therapy.

  17. Constraining the ensemble Kalman filter for improved streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James

    2018-05-01

    Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
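    To make the mass/flux distinction concrete, here is a minimal Python sketch (our illustration, not the authors' code) of an ensemble Kalman analysis step with both families of constraint applied after the update. The function name, the scalar-observation setup, and the simple clipping strategy are assumptions for illustration.

    ```python
    import numpy as np

    def constrained_enkf_update(X, y, H, obs_err_std, lower, upper, max_flux, dt):
        """Sketch of an EnKF analysis step with mass and flux constraints.

        X           : (n_states, n_members) ensemble of model states (ndarray)
        y           : scalar observation (e.g. streamflow)
        H           : (1, n_states) linear observation operator
        lower/upper : (n_states,) physical bounds on each state (mass constraints)
        max_flux    : maximum allowed rate of change per state (flux constraint)
        """
        n = X.shape[1]
        # Perturbed observations, one per ensemble member
        Y = y + obs_err_std * np.random.randn(1, n)
        A = X - X.mean(axis=1, keepdims=True)
        HA = H @ A
        # Kalman gain from ensemble (co)variances
        P_hh = HA @ HA.T / (n - 1) + obs_err_std**2
        P_xh = A @ HA.T / (n - 1)
        K = P_xh / P_hh
        increment = K @ (Y - H @ X)
        # Flux constraint: cap the rate of water moved into/out of each store
        increment = np.clip(increment, -max_flux * dt, max_flux * dt)
        Xa = X + increment
        # Mass constraints: non-negativity and capacity thresholds
        return np.clip(Xa, lower[:, None], upper[:, None])
    ```

    The unconstrained implementation in the study corresponds to skipping both clip steps; the 'typical' implementation keeps only the final mass clip.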

  18. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Taken together, the fixed time step constraint imposed by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
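    The time step constraint mentioned here follows directly from the characteristic form of the equations. In standard water-hammer notation (Q flow rate, H head, a wave speed, A pipe area, D diameter, f Darcy friction factor), the MOC compatibility relations and the resulting grid constraint are commonly written as:

    ```latex
    % Compatibility equations along the C+ and C- characteristics:
    C^{\pm}:\quad \frac{dQ}{dt} \pm \frac{gA}{a}\frac{dH}{dt}
              + \frac{f\,Q\lvert Q\rvert}{2DA} = 0
    \quad\text{along}\quad \frac{dx}{dt} = \pm a,
    % which, on a fixed spatial grid of spacing \Delta x, forces the MOC time step
    \Delta t = \frac{\Delta x}{a}.
    ```

    It is this fixed $\Delta t = \Delta x / a$ that the variable-step integrator must be reconciled with when dynamic components demand smaller steps.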

  19. Walking-adaptability assessments with the Interactive Walkway: Between-systems agreement and sensitivity to task and subject variations.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2017-05-01

    The ability to adapt walking to environmental circumstances is an important aspect of walking, yet difficult to assess. The Interactive Walkway was developed to assess walking adaptability by augmenting a multi-Kinect-v2 10-m walkway with gait-dependent visual context (stepping targets, obstacles) using real-time processed markerless full-body kinematics. In this study we determined Interactive Walkway's usability for walking-adaptability assessments in terms of between-systems agreement and sensitivity to task and subject variations. Under varying task constraints, 21 healthy subjects performed obstacle-avoidance, sudden-stops-and-starts and goal-directed-stepping tasks. Various continuous walking-adaptability outcome measures were concurrently determined with the Interactive Walkway and a gold-standard motion-registration system: available response time, obstacle-avoidance and sudden-stop margins, step length, stepping accuracy and walking speed. The same holds for dichotomous classifications of success and failure for obstacle-avoidance and sudden-stops tasks and performed short-stride versus long-stride obstacle-avoidance strategies. Continuous walking-adaptability outcome measures generally agreed well between systems (high intraclass correlation coefficients for absolute agreement, low biases and narrow limits of agreement) and were highly sensitive to task and subject variations. Success and failure ratings varied with available response times and obstacle types and agreed between systems for 85-96% of the trials while obstacle-avoidance strategies were always classified correctly. We conclude that Interactive Walkway walking-adaptability outcome measures are reliable and sensitive to task and subject variations, even in high-functioning subjects. We therefore deem Interactive Walkway walking-adaptability assessments usable for obtaining an objective and more task-specific examination of one's ability to walk, which may be feasible for both high-functioning and fragile populations since walking adaptability can be assessed at various levels of difficulty.

  20. Free energy from molecular dynamics with multiple constraints

    NASA Astrophysics Data System (ADS)

    den Otter, W. K.; Briels, W. J.

    In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively, the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force (PMCF) method for systems in which only the reaction coordinate is constrained was published recently. Here we consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
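    For orientation, the single-constraint relation referred to here is commonly quoted in the following blue-moon form (a sketch in standard notation; the paper's multiple-constraint generalization replaces the scalar Z by a matrix):

    ```latex
    % PMCF / blue-moon relation for one constrained reaction coordinate \xi:
    \frac{dA}{d\xi} =
    \frac{\left\langle Z^{-1/2}\left(\lambda + k_B T\, G\right)\right\rangle_{\xi}}
         {\left\langle Z^{-1/2}\right\rangle_{\xi}},
    \qquad
    Z = \sum_i \frac{1}{m_i}\,\lvert \nabla_i \xi \rvert^{2},
    ```

    where $\lambda$ is the Lagrange multiplier of the constraint force, $G$ collects curvature terms built from second derivatives of $\xi$, and the metric factor $Z^{-1/2}$ undoes the bias that the constraint imposes on the sampled phase-space distribution.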

  1. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
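    A minimal Python sketch of the coordinate hit-and-run kernel (our illustration, not the published implementation): the real algorithm samples the rounded polytope {v : Sv = 0, lb <= v <= ub} and maps samples back through the rounding transform; a simple box domain stands in here so the chord endpoints along each coordinate are trivial.

    ```python
    import numpy as np

    def coordinate_hit_and_run(lb, ub, n_samples, T=None, burn_in=1000):
        """Approximately uniform samples from the box {x : lb <= x <= ub}.

        T : optional rounding matrix (e.g. from an ellipsoidal approximation
            of an anisotropic feasible set); samples are mapped back via T.
        """
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        rng = np.random.default_rng(0)
        d = len(lb)
        x = (lb + ub) / 2.0                      # start at a feasible point
        samples = []
        for it in range(burn_in + n_samples):
            i = rng.integers(d)                  # random coordinate direction
            # Extent of the feasible chord through x along coordinate i
            lo, hi = lb[i] - x[i], ub[i] - x[i]
            x[i] += rng.uniform(lo, hi)          # uniform point on the chord
            if it >= burn_in:
                samples.append(x.copy())
        S = np.array(samples)
        return S if T is None else S @ T.T       # undo the rounding transform
    ```

    The rounding step matters because, on a highly anisotropic set, unrounded chords are extremely short in most directions and mixing becomes impractically slow.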

  2. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
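    The second step, turning estimated source parameters into weighting functions, can be sketched as follows. The power-law depth weighting and the box-shaped spatial weight below are common choices in potential-field inversion and are assumptions for illustration, not necessarily the authors' exact functions.

    ```python
    import numpy as np

    def depth_weights(z, z0, beta):
        """Depth weighting w(z) = (z + z0)^(-beta/2), counteracting kernel decay.

        z    : depths of model cells
        z0   : offset related to observation height / cell size
        beta : decay exponent; can be tied to the estimated structural index
        """
        return (z + z0) ** (-beta / 2.0)

    def spatial_weights(xy, edges, inside=1.0, outside=10.0):
        """Damp density/magnetization outside the estimated source edges.

        xy    : (n_cells, 2) horizontal cell centres
        edges : (xmin, xmax, ymin, ymax) box estimated from the field analysis
        """
        xmin, xmax, ymin, ymax = edges
        in_box = ((xy[:, 0] >= xmin) & (xy[:, 0] <= xmax)
                  & (xy[:, 1] >= ymin) & (xy[:, 1] <= ymax))
        return np.where(in_box, inside, outside)
    ```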

  3. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper combines the epipolar constraint with an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm then optimizes the stereo-matching feature search function within that reduced range. Through the establishment of a stereo-matching optimization process analysis model for the ant colony algorithm, a globally optimized solution of stereo matching in binocular-vision-based 3D reconstruction is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo-matching search range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo-matching process are improved.
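    The search-range reduction works because a point in one image constrains its match to a single line in the other. A small numpy sketch (our illustration; the fundamental matrix F is assumed known from calibration):

    ```python
    import numpy as np

    def epipolar_line(F, x):
        """Epipolar line l' = F @ x in the second image for a point x in the first.

        F : (3, 3) fundamental matrix
        x : (2,) pixel coordinates in the first image
        Returns (a, b, c) with matches constrained to a*u + b*v + c = 0.
        """
        xh = np.array([x[0], x[1], 1.0])         # homogeneous coordinates
        l = F @ xh
        return l / np.hypot(l[0], l[1])          # normalize so |(a, b)| = 1

    def candidate_band(l, pts, tol=1.5):
        """Keep only candidate matches within `tol` pixels of the epipolar line."""
        ph = np.c_[pts, np.ones(len(pts))]
        return pts[np.abs(ph @ l) < tol]
    ```

    The ant colony search then only has to explore candidates inside this narrow band rather than the whole image.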

  4. In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories

    PubMed Central

    Maia, Paulo; Rocha, Miguel

    2015-01-01

    Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052
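    At their core, the constraint-based models these algorithms build on are linear programs of the flux-balance form: maximize c^T v subject to Sv = 0 and lb <= v <= ub. A self-contained toy instance (illustrative network, not from the review):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Flux balance analysis on a toy 3-reaction chain: uptake (v0),
    # conversion (v1), biomass (v2). Maximize v2 at steady state S v = 0.
    S = np.array([[ 1.0, -1.0,  0.0],    # metabolite A: made by v0, used by v1
                  [ 0.0,  1.0, -1.0]])   # metabolite B: made by v1, used by v2
    bounds = [(0, 10), (0, 10), (0, 10)]
    c = np.array([0.0, 0.0, -1.0])       # linprog minimizes, so negate

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("optimal fluxes:", res.x)      # all fluxes driven to the uptake limit
    ```

    Strain design algorithms then search over gene/reaction knockouts, i.e. over which bounds are forced to zero, to couple production flux to growth.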

  5. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  6. GROMACS 4:  Algorithms for Highly Efficient, Load-Balanced, and Scalable Molecular Simulation.

    PubMed

    Hess, Berk; Kutzner, Carsten; van der Spoel, David; Lindahl, Erik

    2008-03-01

    Molecular simulation is an extremely useful, but computationally very expensive tool for studies of chemical and biomolecular systems. Here, we present a new implementation of our molecular simulation toolkit GROMACS which now both achieves extremely high performance on single processors from algorithmic optimizations and hand-coded routines and simultaneously scales very well on parallel machines. The code encompasses a minimal-communication domain decomposition algorithm, full dynamic load balancing, a state-of-the-art parallel constraint solver, and efficient virtual site algorithms that allow removal of hydrogen atom degrees of freedom to enable integration time steps up to 5 fs for atomistic simulations also in parallel. To improve the scaling properties of the common particle mesh Ewald electrostatics algorithms, we have in addition used a Multiple-Program, Multiple-Data approach, with separate node domains responsible for direct and reciprocal space interactions. Not only does this combination of algorithms enable extremely long simulations of large systems but also it provides that simulation performance on quite modest numbers of standard cluster nodes.

  7. Concurrent planning and execution for a walking robot

    NASA Astrophysics Data System (ADS)

    Simmons, Reid

    1990-07-01

    The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.

  8. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  9. Rapid design and optimization of low-thrust rendezvous/interception trajectory for asteroid deflection missions

    NASA Astrophysics Data System (ADS)

    Li, Shuang; Zhu, Yongsheng; Wang, Yukai

    2014-02-01

    Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
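    The final SQP step of such a pipeline can be illustrated with a toy problem. Everything below is a stand-in, not the paper's transcription: after a pseudospectral transcription, the decision vector stacks discretized states and controls; here a two-variable surrogate with one boundary-condition equality and a thrust-magnitude inequality shows the solver pattern.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def objective(z):                 # e.g. a fuel-like quadratic cost
        return z[0]**2 + 0.5 * z[1]**2

    cons = [
        {"type": "eq",   "fun": lambda z: z[0] + z[1] - 1.0},    # boundary condition
        {"type": "ineq", "fun": lambda z: 0.81 - z[1]**2},       # |u| <= 0.9
    ]
    res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=cons)
    print(res.x, res.fun)             # -> [1/3, 2/3], cost 1/3
    ```

    In the real problem the shape-based/GA solution supplies x0, which is what lets the SQP converge quickly and reliably.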

  10. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot D.; Thomas, George Lindsey; Culley, Dennis E.; Kratz, Jonathan L.

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  11. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones using energy-efficient primitives. A model of Cornell Ranger (a passive-dynamics inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT, defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, and ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the location of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
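    The final planning step amounts to a table lookup over the precomputed TCOT surface. A sketch with invented placeholder values (the real table comes from the offline optimal control described above; grid ranges and stone gaps are illustrative assumptions):

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    step_lengths = np.linspace(0.2, 0.8, 7)          # m
    step_speeds  = np.linspace(0.2, 1.0, 5)          # m/s
    # Placeholder TCOT surface with a minimum near (0.5 m, 0.6 m/s)
    tcot_table = (0.05 + 0.3 * (step_lengths[:, None] - 0.5)**2
                       + 0.2 * (step_speeds[None, :] - 0.6)**2)
    tcot = RegularGridInterpolator((step_lengths, step_speeds), tcot_table)

    stone_gaps = [0.35, 0.6, 0.45]                   # step lengths forced by terrain
    plan = []
    for gap in stone_gaps:                           # min-TCOT speed per forced step
        best = min(step_speeds, key=lambda v: tcot((gap, v)))
        plan.append((gap, best))
    print(plan)
    ```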

  12. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving using the linear method or in using the KS function to replace constraints.
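    The Kreisselmeier-Steinhauser function referred to here aggregates many constraints g_i(x) <= 0 into one smooth, conservative envelope of their maximum. A minimal implementation (the stabilizing max-shift is standard practice):

    ```python
    import numpy as np

    def ks_aggregate(g, rho=50.0):
        """Kreisselmeier-Steinhauser aggregate of constraints g_i(x) <= 0.

        KS(g) = gmax + (1/rho) * log(sum(exp(rho * (g_i - gmax))))
        is a smooth upper bound on max(g_i); shifting by gmax keeps the
        exponentials numerically stable. As rho grows, KS -> max(g_i).
        """
        g = np.asarray(g, dtype=float)
        gmax = g.max()
        return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

    print(ks_aggregate([-0.2, 0.05, -0.1]))   # slightly above max(g) = 0.05
    ```

    Replacing many constraints with the single KS constraint reduces gradient evaluations per iteration, which is the cost saving the study set out to measure.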

  13. Genetic programming over context-free languages with linear constraints for the knapsack problem: first results.

    PubMed

    Bruhn, Peter; Geyer-Schulz, Andreas

    2002-01-01

    In this paper, we introduce genetic programming over context-free languages with linear constraints for combinatorial optimization, apply this method to several variants of the multidimensional knapsack problem, and discuss its performance relative to Michalewicz's genetic algorithm with penalty functions. With respect to Michalewicz's approach, we demonstrate that genetic programming over context-free languages with linear constraints improves convergence. A final result is that genetic programming over context-free languages with linear constraints is ideally suited to modeling complementarities between items in a knapsack problem: The more complementarities in the problem, the stronger the performance in comparison to its competitors.

  14. Modified constraint-induced movement therapy for clients with chronic stroke: interrupted time series (ITS) design.

    PubMed

    Park, JuHyung; Lee, NaYun; Cho, YongHo; Yang, YeongAe

    2015-03-01

    [Purpose] The purpose of this study was to investigate the impact that modified constraint-induced movement therapy has on upper extremity function and the daily life of chronic stroke patients. [Subjects and Methods] Modified constraint-induced movement therapy was conducted for 2 stroke patients with hemiplegia. It was performed 5 days a week for 2 weeks, and the participants performed their daily living activities wearing mittens for 6 hours a day, including the 2 hours of the therapy program. The assessment was conducted 5 times in the 3 weeks before and after the intervention. Upper extremity function was measured using the box and block test and a dynamometer, and performance of daily living activities was assessed using the modified Barthel index. The results were analyzed using a scatterplot and linear regression. [Results] The upper extremity function of both participants improved after the modified constraint-induced movement therapy. Performance of daily living activities by participant 1 showed no change, but that of participant 2 improved after the intervention. [Conclusion] These results indicate that modified constraint-induced movement therapy is effective at improving the upper extremity function and the performance of daily living activities of chronic stroke patients.

  15. The effects of modified constraint-induced movement therapy and mirror therapy on upper extremity function and its influence on activities of daily living.

    PubMed

    Ju, Yumi; Yoon, In-Jin

    2018-01-01

    [Purpose] Modified constraint-induced movement therapy and mirror therapy are recognized stroke rehabilitation methods. The aim of the present study was to determine whether these therapies influence upper extremity function, and whether upper extremity function in turn influences the ability to perform activities of daily living. [Subjects and Methods] Twenty-eight stroke patients participated in the study. Interventions were administered five times per week for 3 weeks. Activities of daily living or self-exercise were performed after modified constraint-induced movement therapy or mirror therapy, respectively. Analyses were performed on the results of the Manual Function Test and the Korean version of the Modified Barthel Index to determine the factors influencing activities of daily living. [Results] Both groups showed improvement in upper extremity function, but only the modified constraint-induced movement therapy group showed a correlation between upper extremity function and performance in hygiene, eating, and dressing. The improved hand manipulation function found in the modified constraint-induced movement therapy group had statistically significant influences on eating and dressing. [Conclusion] Our results suggest that a patient's attempts to move the affected side result in improved performance in activities of daily living as well as improved physical function.

  16. The effects of modified constraint-induced movement therapy and mirror therapy on upper extremity function and its influence on activities of daily living

    PubMed Central

    Ju, Yumi; Yoon, In-Jin

    2018-01-01

    [Purpose] Modified constraint-induced movement therapy and mirror therapy are recognized stroke rehabilitation methods. The aim of the present study was to determine whether these therapies influence upper extremity function, and whether upper extremity function in turn influences the ability to perform activities of daily living. [Subjects and Methods] Twenty-eight stroke patients participated in the study. Interventions were administered five times per week for 3 weeks. Activities of daily living or self-exercise were performed after modified constraint-induced movement therapy or mirror therapy, respectively. Analyses were performed on the results of the Manual Function Test and the Korean version of the Modified Barthel Index to determine the factors influencing activities of daily living. [Results] Both groups showed improvement in upper extremity function, but only the modified constraint-induced movement therapy group showed a correlation between upper extremity function and performance in hygiene, eating, and dressing. The improved hand manipulation function found in the modified constraint-induced movement therapy group had statistically significant influences on eating and dressing. [Conclusion] Our results suggest that a patient's attempts to move the affected side result in improved performance in activities of daily living as well as improved physical function. PMID:29410571

  17. DRAFT MASTER PLAN FOR STELLA, MISSOURI - 20 FEBRUARY 2007, 1900 HOURS

    EPA Science Inventory

    The application of sustainability to place is the outcome of responding to human needs and expectations within economic, social, and environmental constraints and the desired performance of these systems. These constraints and performance requirements provide a way ...

  18. Incorporating Demand and Supply Constraints into Economic Evaluations in Low-Income and Middle-Income Countries.

    PubMed

    Vassall, Anna; Mangham-Jefferies, Lindsay; Gomez, Gabriela B; Pitt, Catherine; Foster, Nicola

    2016-02-01

    Global guidelines for new technologies are based on cost and efficacy data from a limited number of trial locations. Country-level decision makers need to consider whether the cost-effectiveness analyses used to inform global guidelines are sufficient for their situation or whether to use models that adjust cost-effectiveness results to take into account setting-specific epidemiological and cost heterogeneity. However, demand and supply constraints will also impact cost-effectiveness by influencing the standard of care and the use and implementation of any new technology. These constraints may also vary substantially by setting. We present two case studies of economic evaluations of the introduction of new diagnostics for malaria and tuberculosis control. These case studies are used to analyse how the scope of economic evaluations of each technology expanded to account for and then address demand and supply constraints over time. We use these case studies to inform a conceptual framework that can be used to explore the characteristics of intervention complexity and the influence of demand and supply constraints. Finally, we describe a number of feasible steps for researchers who wish to apply our framework in cost-effectiveness analyses.

  19. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison w/ LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, William; Krider, E. Philip; Murray, Natalie; Boccippio, Dennis

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight to just four. The four unknowns are found by performing a numerical minimization of a chi-squared goodness-of-fit function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source". In this way, all 8 parameters are found, yet a numerical search of only 4 parameters is required. The inversion method is applied to the understanding of lightning charge retrievals. The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) instrument. Because lightning effectively deposits charge within thundercloud charge centers and because LDAR traces the geometrical development of the lightning channel with high precision, the LDAR data provides an ideal constraint for finding the best model charge solutions. In particular, LDAR data can be used to help determine both the horizontal and vertical positions of the model charges, thereby eliminating dipole ambiguities. The results of the LDAR-constrained charge retrieval method have been compared to the locations of optical pulses/flash locations detected by the Lightning Imaging Sensor (LIS).

  20. incaRNAfbinv: a web server for the fragment-based design of RNA sequences

    PubMed Central

    Drory Retwitzer, Matan; Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme; Barash, Danny

    2016-01-01

    In recent years, new methods for computational RNA design have been developed and applied to various problems in synthetic biology and nanotechnology. Lately, there is considerable interest in incorporating essential biological information when solving the inverse RNA folding problem. Correspondingly, RNAfbinv aims at including biologically meaningful constraints and is the only program to-date that performs a fragment-based design of RNA sequences. In doing so it allows the design of sequences that do not necessarily exactly fold into the target, as long as the overall coarse-grained tree graph shape is preserved. Augmented by the weighted sampling algorithm of incaRNAtion, our web server called incaRNAfbinv implements the method devised in RNAfbinv and offers an interactive environment for the inverse folding of RNA using a fragment-based design approach. It takes as input: a target RNA secondary structure; optional sequence and motif constraints; optional target minimum free energy, neutrality and GC content. In addition to the design of synthetic regulatory sequences, it can be used as a pre-processing step for the detection of novel natural occurring RNAs. The two complementary methodologies RNAfbinv and incaRNAtion are merged together and fully implemented in our web server incaRNAfbinv, available at http://www.cs.bgu.ac.il/incaRNAfbinv. PMID:27185893

  1. Performance of Low-Density Parity-Check Coded Modulation

    NASA Astrophysics Data System (ADS)

    Hamkins, J.

    2011-02-01

    This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error-floor region. Among these are quantization dynamic range and step size, clipping of degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
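    In the usual AWGN setting, the "general formula which applies to arbitrary modulations" is a log-sum-exp ratio over the constellation points whose labels carry a 0 versus a 1 in each bit position. A sketch (the Gray-labelled QPSK mapping below is an illustrative stand-in, not the article's tables):

    ```python
    import numpy as np

    def bit_llrs(y, symbols, bit_map, noise_var):
        """Exact code-bit LLRs from one received complex symbol y.

        symbols : (M,) complex constellation points
        bit_map : (M, m) bit labels of each constellation point
        Returns LLR_i = log P(b_i=0 | y) - log P(b_i=1 | y) for any modulation.
        """
        # Symbol log-likelihoods under AWGN (common factors cancel in the ratio)
        ll = -np.abs(y - symbols) ** 2 / noise_var
        llrs = []
        for i in range(bit_map.shape[1]):
            num = ll[bit_map[:, i] == 0]
            den = ll[bit_map[:, i] == 1]
            # log-sum-exp over symbols labelled b_i = 0 and b_i = 1
            llrs.append(np.logaddexp.reduce(num) - np.logaddexp.reduce(den))
        return np.array(llrs)

    # Gray-labelled QPSK (illustrative mapping)
    syms = np.array([1+1j, -1+1j, -1-1j, 1-1j]) / np.sqrt(2)
    bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
    print(bit_llrs(0.9 + 0.8j, syms, bits, noise_var=0.5))
    ```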

  2. Modeling the main fungal diseases of winter wheat: constraints and possible solutions

    USDA-ARS?s Scientific Manuscript database

    The first step in the formulation of disease management strategy for any cropping system is to identify the most important risk factors among those on the long list of possible candidates. This is facilitated by basic epidemiological studies of pathogen life cycles, and an understanding of the way i...

  3. Calibrating Urgency: Triage Decision-Making in a Pediatric Emergency Department

    ERIC Educational Resources Information Center

    Patel, Vimla L.; Gutnik, Lily A.; Karlin, Daniel R.; Pusic, Martin

    2008-01-01

    Triage, the first step in the assessment of emergency department patients, occurs in a highly dynamic environment that functions under constraints of time, physical space, and patient needs that may exceed available resources. Through triage, patients are placed into one of a limited number of categories using a subset of diagnostic information.…

  4. 78 FR 23778 - Quivira National Wildlife Refuge, Stafford, KS; Comprehensive Conservation Plan and Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ..., Parks and Tourism. Level of Service staffing at the GPNC would remain the same. Alternative B--Proposed... the constraints imposed by biological, economic, social, political, and legal considerations... meetings are yet to be determined, but will be announced via local media and a planning update. Next Steps...

  5. rSPACE: Spatially based power analysis for conservation and ecology

    Treesearch

    Martha M. Ellis; Jacob S. Ivan; Jody M. Tucker; Michael K. Schwartz

    2015-01-01

    1.) Power analysis is an important step in designing effective monitoring programs to detect trends in plant or animal populations. Although project goals often focus on detecting changes in population abundance, logistical constraints may require data collection on population indices, such as detection/non-detection data for occupancy estimation. 2.) We describe the...

  6. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulty when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are initially inverted for using the least squares method without a positivity constraint, and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique, with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with fault geometry parameters fixed. We first used a designed model with a 45 degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. The details will be reported at the meeting.

  7. Optimum structural design with static aeroelastic constraints

    NASA Technical Reports Server (NTRS)

    Bowman, Keith B; Grandhi, Ramana V.; Eastep, F. E.

    1989-01-01

    The static aeroelastic performance characteristics, divergence velocity, control effectiveness and lift effectiveness are considered in obtaining an optimum weight structure. A typical swept wing structure is used with upper and lower skins, spar and rib thicknesses, and spar cap and vertical post cross-sectional areas as the design parameters. Incompressible aerodynamic strip theory is used to derive the constraint formulations, and aerodynamic load matrices. A Sequential Unconstrained Minimization Technique (SUMT) algorithm is used to optimize the wing structure to meet the desired performance constraints.

  8. Constraints in distortion-invariant target recognition system simulation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Razzaque, Md A.

    2000-11-01

    Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and the identification of the objects using LVQNN. In this current paper, we extend the previous approach for recognition of targets varying in rotation, translation, scale, and combination of all three distortions. We obtain the analytical results of the system level design to show that the approach performs well with some constraints. The first constraint determines the size of the input images and input filters. The second constraint shows the limits on amount of rotation, translation, and scale of input objects. We present the simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system level design.

  9. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  11. Shuttle/TDRSS modelling and link simulation study

    NASA Technical Reports Server (NTRS)

    Braun, W. R.; Mckenzie, T. M.; Biederman, L.; Lindsey, W. C.

    1979-01-01

    A Shuttle/TDRSS S-band and Ku-band link simulation package called LinCsim was developed for the evaluation of link performance for specific Shuttle signal designs. The link models were described in detail and the transmitter distortion parameters or user constraints were carefully defined. The overall link degradation (excluding hardware degradations) relative to an ideal BPSK channel were given for various sets of user constraint values. The performance sensitivity to each individual user constraint was then illustrated. The effect of excessive Spacelab clock jitter on the return link BER performance was also investigated as was the problem of subcarrier recovery for the K-band Shuttle return link signal.

  12. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the point of view of every market agent resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to have the capability of imposing constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid limitations provoked by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which allows computing binary variables through the master problem and continuous variables through the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  13. Starting a new residency program: a step-by-step guide for institutions, hospitals, and program directors

    PubMed Central

    Barajaz, Michelle; Turner, Teri

    2016-01-01

    Although our country faces a looming shortage of doctors, constraints of space, funding, and patient volume in many existing residency programs limit training opportunities for medical graduates. New residency programs need to be created for the expansion of graduate medical education training positions. Partnerships between existing academic institutions and community hospitals with a need for physicians can be a very successful means toward this end. Baylor College of Medicine and The Children's Hospital of San Antonio were affiliated in 2012, and subsequently, we developed and received accreditation for a new categorical pediatric residency program at that site in 2014. We share below a step-by-step guide through the process that includes building of the infrastructure, educational development, accreditation, marketing, and recruitment. It is our hope that the description of this process will help others to spur growth in graduate medical training positions. PMID:27507541

  14. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. Firstly, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that embed the motion-path constraint, the method produces natural, artifact-free frame interpolation. We tested different types of old film sequences and compared with other methods; the results prove that our method achieves the desired performance without hole or ghost effects.

  15. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  16. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
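    For reference, the augmented Lagrangian merit function behind such a step-acceptance test has the standard textbook form for equality constraints (the pairing with a separate violation metric ||c(x)|| is what makes it a filter method):

    ```latex
    % Augmented Lagrangian for  min f(x)  s.t.  c(x) = 0:
    L_{\rho}(x,\lambda) = f(x) + \lambda^{\top} c(x)
                        + \frac{\rho}{2}\,\lVert c(x)\rVert^{2}
    ```

    A trial step is accepted if it sufficiently reduces either $L_{\rho}$ or $\lVert c(x)\rVert$, which avoids the penalty-parameter tuning of a single merit function.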

  17. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  18. Designing a multiroute synthesis scheme in combinatorial chemistry.

    PubMed

    Akavia, Adi; Senderowitz, Hanoch; Lerner, Alon; Shamir, Ron

    2004-01-01

    Solid-phase mix-and-split combinatorial synthesis is often used to produce large arrays of compounds to be tested during the various stages of the drug development process. This method can be represented by a synthesis graph in which nodes correspond to grow operations and arcs to beads transferred among the different reaction vessels. In this work, we address the problem of designing such a graph which maximizes the number of produced target compounds (namely, compounds out of an input library of desired molecules), given constraints on the number of beads used for library synthesis and on the number of reaction vessels available for concurrent grow steps. We present a heuristic based on a discrete search for solving this problem, test our solution on several data sets, explore its behavior, and show that it achieves good performance.

  19. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of the performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.

  20. Performance measures for transform data coding.

    NASA Technical Reports Server (NTRS)

    Pearl, J.; Andrews, H. C.; Pratt, W. K.

    1972-01-01

    This paper develops performance criteria for evaluating transform data coding schemes under computational constraints. Computational constraints that conform with the proposed basis-restricted model give rise to suboptimal coding efficiency characterized by a rate-distortion relation R(D) similar in form to the theoretical rate-distortion function. Numerical examples of this performance measure are presented for Fourier, Walsh, Haar, and Karhunen-Loeve transforms.

  1. Reinforcement, Behavior Constraint, and the Overjustification Effect.

    ERIC Educational Resources Information Center

    Williams, Bruce W.

    1980-01-01

    Four levels of the behavior constraint-reinforcement variable were manipulated: attractive reward, unattractive reward, request to perform, and a no-reward control. Only the unattractive reward and request groups showed the performance decrements that suggest the overjustification effect. It is concluded that reinforcement does not cause the…

  2. Constraint Preserving Schemes Using Potential-Based Fluxes. I. Multidimensional Transport Equations (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    $u^{*}_{i,j} = u^{n}_{i,j} - \Delta t^{n}\,E^{n}_{i,j}$, $u^{**}_{i,j} = u^{*}_{i,j} - \Delta t^{n}\,E^{*}_{i,j}$, $u^{n+1}_{i,j} = \tfrac{1}{2}\big(u^{n}_{i,j} + u^{**}_{i,j}\big)$ (2.26). An alternative first-order accurate, genuinely multi-dimensional time stepping is the extended Lax–Friedrichs type time stepping, $u^{n+1}_{i,j} = \tfrac{1}{8}\big(4u^{n}_{i,j} + u^{n}_{i+1,j} + u^{n}_{i,j+1} + u^{n}_{i-1,j} + u^{n}_{i,j-1}\big) - \Delta t^{n}\,E^{n}_{i,j}$ (2.27).
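
    Under the reconstruction above, the two-stage update (2.26) and the extended Lax–Friedrichs step (2.27) can be sketched in NumPy. The flux term E is a placeholder assumption here (the scheme's potential-based flux is problem-specific), as is the toy periodic grid.

      import numpy as np

      def step_two_stage(u, dt, E):
          """Eq. (2.26): predictor-corrector substeps averaged with the old field."""
          u1 = u - dt * E(u)
          u2 = u1 - dt * E(u1)
          return 0.5 * (u + u2)

      def step_extended_lax_friedrichs(u, dt, E):
          """Eq. (2.27): weighted five-point average of neighbours minus the flux term."""
          avg = (4.0 * u
                 + np.roll(u, -1, axis=0) + np.roll(u, 1, axis=0)
                 + np.roll(u, -1, axis=1) + np.roll(u, 1, axis=1)) / 8.0
          return avg - dt * E(u)

      # Toy example: a linear transport term on a doubly periodic grid.
      def E_toy(u, dx=0.01):
          return (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * dx)

      u0 = np.sin(np.linspace(0.0, 2.0 * np.pi, 100))[:, None] * np.ones((1, 100))
      u1 = step_two_stage(u0, dt=1e-3, E=E_toy)
      u2 = step_extended_lax_friedrichs(u0, dt=1e-3, E=E_toy)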

  3. Finite-time sliding surface constrained control for a robot manipulator with an unknown deadzone and disturbance.

    PubMed

    Ik Han, Seong; Lee, Jangmyung

    2016-11-01

    This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Trade-off between Multiple Constraints Enables Simultaneous Formation of Modules and Hubs in Neural Systems

    PubMed Central

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C.; Zhou, Changsong

    2013-01-01

    The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between physical cost and functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatory optimization of wiring cost and processing efficiency constraints, using a control parameter that weights the two constraints, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization across a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt network matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real networks. The discrepancy suggests that there are further relevant factors that are not yet captured here. PMID:23505352
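
    The cost-efficiency trade-off described above can be made concrete with a toy objective: for a candidate network, combine total wiring length with the average number of processing steps, weighted by the control parameter (called alpha below purely as an assumption). A sketch using networkx:

      import numpy as np
      import networkx as nx

      def tradeoff_cost(adj, coords, alpha):
          """Toy combined objective: alpha * wiring cost + (1 - alpha) * processing steps."""
          G = nx.from_numpy_array(adj)
          # Wiring cost: total Euclidean length of all links.
          wiring = sum(np.linalg.norm(coords[u] - coords[v]) for u, v in G.edges())
          # Processing-efficiency proxy: mean shortest-path length (number of steps).
          steps = nx.average_shortest_path_length(G)
          return alpha * wiring + (1.0 - alpha) * steps

      # Toy usage: a ring of 10 spatially embedded nodes.
      n = 10
      coords = np.array([[np.cos(2 * np.pi * i / n), np.sin(2 * np.pi * i / n)]
                         for i in range(n)])
      adj = np.zeros((n, n))
      for i in range(n):
          adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1.0
      print(tradeoff_cost(adj, coords, alpha=0.5))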

  5. Segmentation of radiographic images under topological constraints: application to the femur.

    PubMed

    Gamage, Pavan; Xie, Sheng Quan; Delmas, Patrice; Xu, Wei Liang

    2010-09-01

    A framework for radiographic image segmentation under topological control based on two-dimensional (2D) image analysis was developed. The system is intended for use in common radiological tasks including fracture treatment analysis, osteoarthritis diagnostics and osteotomy management planning. The segmentation framework utilizes a generic three-dimensional (3D) model of the bone of interest to define the anatomical topology. Non-rigid registration is performed between the projected contours of the generic 3D model and extracted edges of the X-ray image to achieve the segmentation. For fractured bones, the segmentation requires an additional step where a region-based active contours curve evolution is performed with a level set Mumford-Shah method to obtain the fracture surface edge. The application of the segmentation framework to analysis of human femur radiographs was evaluated. The proposed system has two major innovations. First, definition of the topological constraints does not require a statistical learning process, so the method is generally applicable to a variety of bony anatomy segmentation problems. Second, the methodology is able to handle both intact and fractured bone segmentation. Testing on clinical X-ray images yielded an average root mean squared distance (between the automatically segmented femur contour and the manual segmented ground truth) of 1.10 mm with a standard deviation of 0.13 mm. The proposed point correspondence estimation algorithm was benchmarked against three state-of-the-art point matching algorithms, demonstrating successful non-rigid registration for the cases of interest. A topologically constrained automatic bone contour segmentation framework was developed and tested, providing robustness to noise, outliers, deformations and occlusions.

  6. Effects of scaling task constraints on emergent behaviours in children's racquet sports performance.

    PubMed

    Fitzpatrick, Anna; Davids, Keith; Stone, Joseph A

    2018-04-01

    Manipulating task constraints by scaling key features like space and equipment is considered an effective method for enhancing performance development and refining movement patterns in sport. Despite this, it is currently unclear whether scaled manipulation of task constraints would impact emergent movement behaviours in young children, affording learners opportunities to develop relevant skills. Here, we sought to investigate how scaling task constraints during 8 weeks of mini tennis training shaped backhand stroke development. Two groups, control (n = 8, age = 7.2 ± 0.6 years) and experimental (n = 8, age 7.4 ± 0.4 years), underwent practice using constraints-based manipulations, with a specific field of affordances designed for backhand strokes as the experimental treatment. To evaluate intervention effects, pre- and post-test match-play characteristics (e.g. forehand and backhand percentage strokes) and measures from a tennis-specific skills test (e.g. forehand and backhand technical proficiency) were evaluated. Post intervention, the experimental group performed a greater percentage of backhand strokes out of the total number of shots played (46.7 ± 3.3%). There was also a significantly greater percentage of backhand winners out of all backhand strokes (5.5 ± 3.0%), compared to the control group during match-play (backhands = 22.4 ± 6.5%; backhand winners = 1.0 ± 3.6%). The experimental group also demonstrated improvements in forehand and backhand technical proficiency and the ability to maintain a rally with a coach, compared to the control group. In conclusion, the scaled manipulations implemented here elicited more functional performance behaviours than standard Mini Tennis Red constraints. Results suggested how human movement scientists may scale task constraint manipulations to augment young athletes' performance development. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.

  7. A comparison of American and Nepalese children's concepts of freedom of choice and social constraint.

    PubMed

    Chernyak, Nadia; Kushnir, Tamar; Sullivan, Katherine M; Wang, Qi

    2013-01-01

    Recent work has shown that preschool-aged children and adults understand freedom of choice regardless of culture, but that adults across cultures differ in perceiving social obligations as constraints on action. To investigate the development of these cultural differences and universalities, we interviewed school-aged children (4-11) in Nepal and the United States regarding beliefs about people's freedom of choice and constraint to follow preferences, perform impossible acts, and break social obligations. Children across cultures and ages universally endorsed the choice to follow preferences but not to perform impossible acts. Age and culture effects also emerged: Young children in both cultures viewed social obligations as constraints on action, but American children did so less as they aged. These findings suggest that while basic notions of free choice are universal, recognitions of social obligations as constraints on action may be culturally learned. © 2013 Cognitive Science Society, Inc.

  8. The Efficacy of Multidimensional Constraint Keys in Database Query Performance

    ERIC Educational Resources Information Center

    Cardwell, Leslie K.

    2012-01-01

    This work is intended to introduce a database design method to resolve the two-dimensional complexities inherent in the relational data model and its resulting performance challenges through abstract multidimensional constructs. A multidimensional constraint is derived and utilized to implement an indexed Multidimensional Key (MK) to abstract a…

  9. Optimal modified tracking performance for MIMO networked control systems with communication constraints.

    PubMed

    Wu, Jie; Zhou, Zhu-Jun; Zhan, Xi-Sheng; Yan, Huai-Cheng; Ge, Ming-Feng

    2017-05-01

    This paper investigates the optimal modified tracking performance of multi-input multi-output (MIMO) networked control systems (NCSs) with packet dropouts and bandwidth constraints. Some explicit expressions are obtained by using co-prime factorization and the spectral decomposition technique. The obtained results show that the optimal modified tracking performance is related to the intrinsic properties of a given plant such as non-minimum phase (NMP) zeros, unstable poles, and their directions. Furthermore, the modified factor, packet dropout probability and bandwidth also impact the optimal modified tracking performance of the NCSs. The optimal modified tracking performance with channel input power constraint is obtained by searching through all stabilizing two-parameter compensators. Finally, some typical examples are given to illustrate the effectiveness of the theoretical results. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  10. Improving spacecraft design using a multidisciplinary design optimization methodology

    NASA Astrophysics Data System (ADS)

    Mosher, Todd Jon

    2000-10-01

    Spacecraft design has gone from maximizing performance under technology constraints to minimizing cost under performance constraints. This is characteristic of the "faster, better, cheaper" movement that has emerged within NASA. Currently spacecraft are "optimized" manually through a tool-assisted evaluation of a limited set of design alternatives. With this approach there is no guarantee that a systems-level focus will be taken and "feasibility" rather than "optimality" is commonly all that is achieved. To improve spacecraft design in the "faster, better, cheaper" era, a new approach using multidisciplinary design optimization (MDO) is proposed. Using MDO methods brings structure to conceptual spacecraft design by casting a spacecraft design problem into an optimization framework. Then, through the construction of a model that captures design and cost, this approach facilitates a quicker and more straightforward option synthesis. The final step is to automatically search the design space. As computer processor speed continues to increase, enumeration of all combinations, while not elegant, is one method that is straightforward to perform. As an alternative to enumeration, genetic algorithms find solutions by evaluating far fewer candidate designs, albeit with some limitations. Both methods increase the likelihood of finding an optimal design, or at least the most promising area of the design space. This spacecraft design methodology using MDO is demonstrated on three examples. A retrospective test for validation is performed using the Near Earth Asteroid Rendezvous (NEAR) spacecraft design. For the second example, the premise that aerobraking was needed to minimize mission cost and was mission enabling for the Mars Global Surveyor (MGS) mission is challenged. While one might expect no feasible design space for an MGS mission without aerobraking, a counterintuitive result is discovered. Several design options that don't use aerobraking are feasible and cost effective. The third example is an original commercial lunar mission entitled Eagle-eye. This example shows how an MDO approach is applied to an original mission with a larger feasible design space. It also incorporates a simplified business case analysis.

  11. Managing temporal relations

    NASA Technical Reports Server (NTRS)

    Britt, Daniel L.; Geoffroy, Amy L.; Gohring, John R.

    1990-01-01

    Various temporal constraints on the execution of activities are described, and their representation in the scheduling system MAESTRO is discussed. Initial examples are presented using a sample activity. Those examples are expanded to include a second activity, and the types of temporal constraints that can obtain between two activities are explored. Soft constraints, or preferences, in activity placement are discussed. Multiple performances of activities are considered, with respect to both hard and soft constraints. The primary methods used in MAESTRO to handle temporal constraints are described, as are certain aspects of contingency handling with respect to temporal constraints. A discussion of the overall approach, with indications of future directions for this research, concludes the study.

  12. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
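
    Step (3), adjusting the projected stand table so that its totals satisfy the constraints, amounts to an equality-constrained least-squares problem, which can be solved in one shot via the KKT system. The diameter-class numbers below are invented for illustration:

      import numpy as np

      def adjust_stand_table(x_hat, A, b):
          """Minimize ||x - x_hat||^2 subject to A x = b by solving the KKT system."""
          n, m = len(x_hat), len(b)
          K = np.block([[np.eye(n), A.T],
                        [A, np.zeros((m, m))]])
          rhs = np.concatenate([x_hat, b])
          return np.linalg.solve(K, rhs)[:n]  # adjusted trees per diameter class

      # Illustrative data: four diameter classes; constrain total survival and basal area.
      x_hat = np.array([120.0, 80.0, 40.0, 10.0])       # predicted trees/ha per class
      ba_per_tree = np.array([0.01, 0.03, 0.07, 0.12])  # basal area per tree (m^2), invented
      A = np.vstack([np.ones(4), ba_per_tree])
      b = np.array([245.0, 6.5])                        # target totals, invented
      print(adjust_stand_table(x_hat, A, b))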

  13. 78 FR 69079 - Midcontinent Independent System Operator, Inc.; Supplemental Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

    ... Schedule 46. a. For step one, define the terms "Hourly Real-Time RSG MWP" and "Resource CMC Real-time... RSG credits and the difference between one and the Constraint Management Charge Allocation Factor... and Headroom Need is (1) less than or equal to zero, (2) greater than or equal to the Economic...

  14. Dynamic control of moisture during hot pressing of wood composites

    Treesearch

    Cheng Piao; Todd F. Shupe; Chung Y. Hse

    2006-01-01

    Hot pressing is an important step in the manufacture of wood composites. In the conventional pressing system, hot press output often acts as a constraint to increased production. Severe drying of the furnish (e.g., particles, flakes, or fibers) required by this process substantially increases the manufacturing cost and creates air-polluting emissions of volatile...

  15. Kriging Direct and Indirect Estimates of Sulfate Deposition: A Comparison

    Treesearch

    Gregory A. Reams; Manuela M.P. Huso; Richard J. Vong; Joseph M. McCollum

    1997-01-01

    Due to logistical and cost constraints, acidic deposition is rarely measured at forest research or sampling locations. A crucial first step to assessing the effects of acid rain on forests is an accurate estimate of acidic deposition at forest sample sites. We examine two methods (direct and indirect) for estimating sulfate deposition at atmospherically unmonitored...

  16. Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer

    ERIC Educational Resources Information Center

    Jenkins, Kristin P.; Bielec, Barbara

    2006-01-01

    Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…

  17. Materials Selection for Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Arnold, Steven M.; Cebon, David; Ashby, Mike

    2012-01-01

    A systematic design-oriented, five-step approach to material selection is described: 1) establishing design requirements, 2) material screening, 3) ranking, 4) researching specific candidates and 5) applying specific cultural constraints to the selection process. At the core of this approach is the definition of performance indices (i.e., particular combinations of material properties that embody the performance of a given component) in conjunction with material property charts. These material selection charts, which plot one property against another, are introduced and shown to provide a powerful graphical environment wherein one can apply and analyze quantitative selection criteria, such as those captured in performance indices, and make trade-offs between conflicting objectives. Finding a material with a high value of these indices maximizes the performance of the component. Two specific examples pertaining to aerospace (engine blades and pressure vessels) are examined, both at room temperature and elevated temperature (where time-dependent effects are important) to demonstrate the methodology. The discussion then turns to engineered/hybrid materials and how these can be effectively tailored to fill in holes in the material property space, so as to enable innovation and increases in performance as compared to monolithic materials. Finally, a brief discussion is presented on managing the data needed for materials selection, including collection, analysis, deployment, and maintenance issues.
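
    As a toy illustration of a performance index, the sketch below ranks candidate materials for a stiffness-limited light beam using the classic index M = E^(1/2)/rho; the property values are rough, illustrative numbers only:

      # Rank materials by the stiffness-limited beam index M = sqrt(E) / rho.
      materials = {
          #               E (GPa)  rho (Mg/m^3) -- rough illustrative values
          "Al alloy":     (70.0,   2.7),
          "Ti alloy":     (110.0,  4.5),
          "Steel":        (210.0,  7.8),
          "CFRP":         (100.0,  1.6),
      }

      def beam_index(E, rho):
          return E ** 0.5 / rho

      for name, (E, rho) in sorted(materials.items(),
                                   key=lambda kv: -beam_index(*kv[1])):
          print(f"{name:10s} M = {beam_index(E, rho):.2f}")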

  18. MIMO radar waveform design with peak and sum power constraints

    NASA Astrophysics Data System (ADS)

    Arulraj, Merline; Jeyaraman, Thiruvengadam S.

    2013-12-01

    Optimal power allocation for multiple-input multiple-output radar waveform design subject to combined peak and sum power constraints using two different criteria is addressed in this paper. The first one is by maximizing the mutual information between the random target impulse response and the reflected waveforms, and the second one is by minimizing the mean square error in estimating the target impulse response. It is assumed that the radar transmitter has knowledge of the target's second-order statistics. Conventionally, the power is allocated to transmit antennas based on the sum power constraint at the transmitter. However, the wide power variations across the transmit antennas pose a severe constraint on the dynamic range and peak power of the power amplifier at each antenna. In practice, each antenna has the same absolute peak power limitation. So it is desirable to consider the peak power constraint on the transmit antennas. A generalized constraint that jointly meets both the peak power constraint and the average sum power constraint, bounding the dynamic range of the power amplifier at each transmit antenna, was recently proposed. The optimal power allocation using the concept of waterfilling, based on the sum power constraint, is the special case of p = 1. The optimal solution for maximizing the mutual information and minimizing the mean square error is obtained through the Karush-Kuhn-Tucker (KKT) approach, and the numerical solutions are found through a nested Newton-type algorithm. The simulation results show that the system with both sum and peak power constraints gives better detection performance than one considering only the sum power constraint at low signal-to-noise ratio.
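
    The joint sum-and-peak allocation can be illustrated as capped waterfilling: standard waterfilling over the channel, with each antenna's power clipped at the peak limit and the water level found by bisection. Variable names and the toy numbers are assumptions:

      import numpy as np

      def capped_waterfilling(inv_gains, p_sum, p_peak, iters=100):
          """Powers p_k = clip(mu - inv_gains[k], 0, p_peak) with sum(p) = p_sum.
          Assumes feasibility, i.e. p_sum <= len(inv_gains) * p_peak."""
          lo, hi = 0.0, float(np.max(inv_gains)) + p_sum + p_peak
          for _ in range(iters):       # bisection on the water level mu
              mu = 0.5 * (lo + hi)
              p = np.clip(mu - inv_gains, 0.0, p_peak)
              lo, hi = (mu, hi) if p.sum() < p_sum else (lo, mu)
          return np.clip(0.5 * (lo + hi) - inv_gains, 0.0, p_peak)

      # Toy example: four channels, total power 4, per-antenna peak 1.5.
      p = capped_waterfilling(np.array([0.2, 0.5, 1.0, 2.0]), p_sum=4.0, p_peak=1.5)
      print(p, p.sum())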

  19. Reanalysis of Clause Boundaries in Japanese as a Constraint-Driven Process.

    ERIC Educational Resources Information Center

    Miyamoto, Edson T.

    2003-01-01

    Reports on two experiments that focus on clause boundaries in Japanese that suggest that minimal change restriction is unnecessary to characterize reanalysis. Proposes that the data and previous observations are more naturally explained by a constraint-driven model in which revisions are performed only when required by parsing constraints.…

  20. Temporal Constraints of the Word Blindness Posthypnotic Suggestion on Stroop Task Performance

    ERIC Educational Resources Information Center

    Parris, Benjamin A.; Dienes, Zoltan; Hodgson, Timothy L.

    2012-01-01

    The present work investigated possible temporal constraints on the posthypnotic word blindness suggestion effect. In a completely within-subjects and counterbalanced design 19 highly suggestible individuals performed the Stroop task both with and without a posthypnotic suggestion that they would be unable to read the word dimension of the Stroop…

  1. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.

  2. Variational Optimization of the Second-Order Density Matrix Corresponding to a Seniority-Zero Configuration Interaction Wave Function.

    PubMed

    Poelmans, Ward; Van Raemdonck, Mario; Verstichel, Brecht; De Baerdemacker, Stijn; Torre, Alicia; Lain, Luis; Massaccesi, Gustavo E; Alcoba, Diego R; Bultinck, Patrick; Van Neck, Dimitri

    2015-09-08

    We perform a direct variational determination of the second-order (two-particle) density matrix corresponding to a many-electron system, under a restricted set of the two-index N-representability P-, Q-, and G-conditions. In addition, we impose a set of necessary constraints that the two-particle density matrix must be derivable from a doubly occupied many-electron wave function, i.e., a singlet wave function for which the Slater determinant decomposition only contains determinants in which spatial orbitals are doubly occupied. We rederive the two-index N-representability conditions first found by Weinhold and Wilson and apply them to various benchmark systems (linear hydrogen chains, He, N2, and CN(-)). This work is motivated by the fact that a doubly occupied many-electron wave function captures in many cases the bulk of the static correlation. Compared to the general case, the structure of doubly occupied two-particle density matrices causes the associated semidefinite program to have a very favorable scaling as L^3, where L is the number of spatial orbitals. Since the doubly occupied Hilbert space depends on the choice of the orbitals, variational calculation steps of the two-particle density matrix are interspersed with orbital-optimization steps (based on Jacobi rotations in the space of the spatial orbitals). We also point to the importance of symmetry breaking of the orbitals when performing calculations in a doubly occupied framework.

  3. Issues in knowledge representation to support maintainability: A case study in scientific data preparation

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Kandt, R. Kirk; Roden, Joseph; Burleigh, Scott; King, Todd; Joy, Steve

    1992-01-01

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and describes how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.

  4. Dynamic whole-body robotic manipulation

    NASA Astrophysics Data System (ADS)

    Abe, Yeuhi; Stephens, Benjamin; Murphy, Michael P.; Rizzi, Alfred A.

    2013-05-01

    The creation of dynamic manipulation behaviors for high degree of freedom, mobile robots will allow them to accomplish increasingly difficult tasks in the field. We are investigating how the coordinated use of the body, legs, and integrated manipulator, on a mobile robot, can improve the strength, velocity, and workspace when handling heavy objects. We envision that such a capability would aid in a search and rescue scenario when clearing obstacles from a path or searching a rubble pile quickly. Manipulating heavy objects is especially challenging because the dynamic forces are high and a legged system must coordinate all its degrees of freedom to accomplish tasks while maintaining balance. To accomplish these types of manipulation tasks, we use trajectory optimization techniques to generate feasible open-loop behaviors for our 28 dof quadruped robot (BigDog) by planning trajectories in a 13 dimensional space. We apply the Covariance Matrix Adaptation (CMA) algorithm to solve for trajectories that optimize task performance while also obeying important constraints such as torque and velocity limits, kinematic limits, and center of pressure location. These open-loop behaviors are then used to generate desired feed-forward body forces and foot step locations, which enable tracking on the robot. Some hardware results for cinderblock throwing are demonstrated on the BigDog quadruped platform augmented with a human-arm-like manipulator. The results are analogous to how a human athlete maximizes distance in the discus event by performing a precise sequence of choreographed steps.

  5. Specific arithmetic calculation deficits in children with Turner syndrome.

    PubMed

    Rovet, J; Szekely, C; Hockenberry, M N

    1994-12-01

    Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of arithmetic, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.

  6. Modulating Excitonic Recombination Effects through One-Step Synthesis of Perovskite Nanoparticles for Light-Emitting Diodes.

    PubMed

    Kulkarni, Sneha A; Muduli, Subas; Xing, Guichuan; Yantara, Natalia; Li, Mingjie; Chen, Shi; Sum, Tze Chien; Mathews, Nripan; White, Tim J; Mhaisalkar, Subodh G

    2017-10-09

    The primary advantages of halide perovskites for light-emitting diodes (LEDs) are solution processability, direct band gap, good charge-carrier diffusion lengths, low trap density, and reasonable carrier mobility. The luminescence in 3D halide perovskite thin films originates from free electron-hole bimolecular recombination. However, the slow bimolecular recombination rate is a fundamental performance limitation. Perovskite nanoparticles could result in improved performance but processability and cumbersome synthetic procedures remain challenges. Herein, these constraints are overcome by tailoring the 3D perovskite as a near monodisperse nanoparticle film prepared through a one-step in situ deposition method. Replacing methyl ammonium bromide (CH3NH3Br, MABr) partially by octyl ammonium bromide [CH3(CH2)7NH3Br, OABr] in defined mole ratios in the perovskite precursor proved crucial for the nanoparticle formation. Films consisting of the in situ formed nanoparticles displayed signatures associated with excitonic recombination, rather than the bimolecular recombination associated with 3D perovskites. This transition was accompanied by enhanced photoluminescence quantum yield (PLQY ≈ 20.5% vs. 3.40%). Perovskite LEDs fabricated from the nanoparticle films exhibit a one order of magnitude improvement in current efficiency and a doubling in luminance efficiency. The material processing systematics derived from this study provides the means to control perovskite morphologies through the selection and mixing of appropriate additives. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
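
    A minimal model-free learner for such a task can be sketched as below. The transition and reward probabilities and the learning parameters are illustrative; the point is that the value update ignores whether the transition was common or rare, which is exactly what distinguishes it from a model-based strategy:

      import numpy as np

      rng = np.random.default_rng(0)
      P_COMMON, ALPHA, EPS = 0.7, 0.1, 0.1   # illustrative task/learning parameters
      q1 = np.zeros(2)                       # values of the two first-step actions
      reward_prob = np.array([0.8, 0.2])     # second-step reward probabilities (toy)

      for _ in range(1000):
          # Epsilon-greedy choice at step one.
          a = int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(q1))
          common = rng.random() < P_COMMON
          s2 = a if common else 1 - a            # common vs. rare transition
          r = float(rng.random() < reward_prob[s2])
          q1[a] += ALPHA * (r - q1[a])           # update ignores transition type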

  8. Reproductive management of dairy herds in New Zealand: attitudes, priorities and constraints perceived by farmers managing seasonal-calving, pasture-based herds in four regions.

    PubMed

    Brownlie, T S; Weir, A M; Tarbotton, I; Morton, J M; Heuer, C; McDougall, S

    2011-01-01

    To examine attitudes, priorities, and constraints pertaining to herd reproductive management perceived by farmers managing seasonal-calving, pasture-based dairy herds in four regions of New Zealand, and to explore how these varied with demographic and biophysical factors. Key decision makers (KDM) on 133 dairy herds in four dairy regions (Waikato, Taranaki, and north and south Canterbury) were interviewed between May and July 2009. They were asked to provide demographic and biophysical data about the farm, and to rate their attitude in relation to their own personality traits, management issues and priorities, and likely constraints affecting reproductive performance in their herds. Associations between demographic factors and attitudes, priorities and constraints were analysed using univariable and multivariable proportional-odds regression models. Farms in the regions studied in the South Island were larger, had larger herds and more staff than farms in the regions studied in the North Island. The farms in the South Island were more likely to be owned by a corporation, managed by younger people or people who had more education, and the herds were more likely to be fed a higher percentage of supplementary feed. The majority of KDM rated the current genetics, milksolids performance and reproductive performance of their herds as high or very high, and >70% believed that the reproductive performance had remained the same or improved over the preceding 3 years. Despite this, improving reproductive performance was the most highly rated priority for the next 3 years. The constraints considered most likely to have affected reproductive performance in the last 2 years were anoestrous cows, protracted calving periods, and low body condition scores; those considered least likely were artificial breeding and heat detection. Of the variables examined related to attitudes, priorities and likely constraints, there were significant differences between region for 10/40, and with age and occupation of the KDM for 24/40 and 5/40, respectively (p<0.05). The majority of KDM reported the current reproductive performance of their herds to be high or very high, yet rated improving reproductive performance as a very high priority for the next 3 years. Mismatch between perceived and actual performance may result in reduced uptake of extension programmes designed to improve performance, and accurate benchmarking may help increase uptake and engagement. Further work is needed to determine whether the attitudes and perceptions about performance of farmers affect the likelihood of changes in their management behaviour which translate to measurable change in the actual reproductive performance of their herds. The variation in attitude, priorities and perceived constraints among age groups and region indicates that design of extension programmes may need to vary with these demographics.

  9. Gold-Catalyzed Solid-Phase Synthesis of 3,4-Dihydropyrazin-2(1H)-ones: Relevant Pharmacophores and Peptide Backbone Constraints.

    PubMed

    Přibylka, Adam; Krchňák, Viktor

    2017-11-13

    Here, we report the efficient solid-phase synthesis of N-propargyl peptides using Fmoc-amino acids and propargyl alcohol as key building blocks. Gold-catalyzed nucleophilic addition to the triple bond induced C-N bond formation, which triggered intramolecular cyclization, yielding 1,3,4-trisubstituted-5-methyl-3,4-dihydropyrazin-2(1H)-ones. Conformations of acyclic and constrained peptides were compared using a two-step conformer distribution analysis at the molecular mechanics level and density functional theory. The results indicated that incorporation of the heterocyclic molecular scaffold into a short peptide sequence led the peptide chain to adopt an extended conformation. The amide bond adjacent to the constraint did not show significant preference for either cis or trans isomerism. The prepared model compounds demonstrate a proof of concept for gold-catalyzed polymer-supported synthesis of variously substituted 3,4-dihydropyrazin-2(1H)-ones for applications in drug discovery and peptide backbone constraints.

  10. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, or even garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially at a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation is able to achieve a plausible cloth drape as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process by eliminating the iterative procedure.
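
    The position-correction idea — enforcing a spring's rest length in a single pass by moving one endpoint — can be sketched as follows. This is a generic reading of the abstract, not the authors' code:

      import numpy as np

      def enforce_rest_length(p_anchor, p_free, rest_length):
          """Move the free endpoint so the spring is exactly at its rest length."""
          d = p_free - p_anchor
          dist = np.linalg.norm(d)
          if dist == 0.0:
              return p_free          # degenerate case: leave the point untouched
          return p_anchor + d * (rest_length / dist)

      # One corrective pass down a chain hanging from a fixed anchor point.
      points = np.array([[0.0, 0.0], [0.1, -0.4], [0.1, -0.9]])  # anchor + two masses
      for i in range(1, len(points)):
          points[i] = enforce_rest_length(points[i - 1], points[i], rest_length=0.3)
      print(points)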

  11. Solving an inverse eigenvalue problem with triple constraints on eigenvalues, singular values, and diagonal elements

    NASA Astrophysics Data System (ADS)

    Wu, Sheng-Jhih; Chu, Moody T.

    2017-08-01

    An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.

  12. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

    This paper presents a method for economic generation scheduling of a multi-zone power system having inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for each zone having surplus or deficient generation, with proper spinning reserve, using the coordination equation. The power exchange required for the deficit zones and the zones having no generation is estimated based on the load demand and generation of each zone. The incremental transmission loss formulas for the transmission lines participating in the power transfer process among the zones are formulated. Using these incremental transmission loss expressions in the coordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
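
    The first step — equal incremental cost across the units of a zone, i.e. the coordination equation with losses ignored — can be sketched as a bisection search for the system lambda over quadratic cost curves. All unit data below are invented:

      import numpy as np

      # Unit costs C_i(P) = a_i + b_i P + c_i P^2, so dC_i/dP = b_i + 2 c_i P;
      # the (lossless) coordination equation requires b_i + 2 c_i P_i = lambda.
      b = np.array([8.0, 7.0, 9.0])        # $/MWh, invented
      c = np.array([0.010, 0.020, 0.015])  # $/MWh^2, invented
      p_min, p_max = 20.0, 300.0           # unit limits (MW), invented
      demand = 500.0                       # zonal load (MW), invented

      lo, hi = 0.0, 50.0
      for _ in range(100):                 # bisection on the incremental cost lambda
          lam = 0.5 * (lo + hi)
          P = np.clip((lam - b) / (2.0 * c), p_min, p_max)
          lo, hi = (lam, hi) if P.sum() < demand else (lo, lam)
      print(P, P.sum())                    # dispatch meeting the zonal demand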

  13. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    PubMed Central

    2011-01-01

    Background: A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure relies on the lack of accuracy in transferring CT planning information to surgical field through custom-made stereo-lithographic surgical guides. Methods: In this work, a novel methodology is proposed for monitoring loss of accuracy in transferring CT dental information into periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results: A clinical case, relative to a fully edentulous jaw patient, has been used as test case to assess the accuracy of the various steps concurring in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors, which have been occurring step by step manufacturing the physical templates. Conclusions: The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has demonstrated to be a valid support to control the precision of the various physical models adopted and to point out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology. PMID:21338504

  14. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit the full wavefield information which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, big size of the derivative stencil and huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we have introduced a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme and thereby reducing the computational run-time considerably. Here we demonstrate an application of this operator to simulate the seismic wave propagation in elastic media (Marmousi model), by discretizing the equations on a staggered grid. We have compared the performance of this operator on three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 times for FORTRAN and MATLAB; and nearly 100 times for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run-time. We find that there is an additional, though small, computational overhead for each step and it depends on total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
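
    The vectorized-operator idea — updating every node of a field at once by shifting whole matrices instead of looping over grid indices — can be sketched in NumPy. The fourth-order staggered-grid coefficients are standard Taylor weights, but the sketch is generic rather than the paper's FDwave code:

      import numpy as np

      C1, C2 = 9.0 / 8.0, -1.0 / 24.0   # 4th-order staggered-grid coefficients

      def dx_staggered(f, dx):
          """Vectorized staggered derivative along axis 0: all nodes in one expression."""
          return (C1 * (np.roll(f, -1, axis=0) - f)
                  + C2 * (np.roll(f, -2, axis=0) - np.roll(f, 1, axis=0))) / dx

      # A loop version would visit every (i, j) node separately; the rolled-matrix
      # form updates the whole field at once, which is where the speedup comes from.
      f = np.random.rand(200, 200)
      g = dx_staggered(f, dx=1.0)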

  15. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solve each local sub-problem through very fast linear network programming algorithms, and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.

  16. A new method for solving routing and wavelength assignment problems under inaccurate routing information in optical networks with conversion capability

    NASA Astrophysics Data System (ADS)

    Luo, Yanting; Zhang, Yongjun; Gu, Wanyi

    2009-11-01

    In large dynamic networks it is extremely difficult to maintain accurate routing information on all network nodes. Existing studies have illustrated the impact of imprecise state information on the performance of dynamic routing and wavelength assignment (RWA) algorithms. An algorithm called Bypass Based Optical Routing (BBOR), proposed by Xavier Masip-Bruin et al., can reduce the effects of inaccurate routing information in networks operating under the wavelength-continuity constraint. They later extended the BBOR mechanism (hereafter called the EBBOR mechanism) to networks with sparse and limited wavelength conversion. However, EBBOR considers the characteristics of wavelength conversion only in the step of computing the bypass-paths, so its performance may decline as the degree of wavelength translation increases (a concept explained again in the introduction). We demonstrate this issue through theoretical analysis and introduce a novel algorithm which modifies both the lightpath selection and the bypass-path computation relative to the EBBOR algorithm. Simulations show that the Modified EBBOR (MEBBOR) algorithm improves the blocking performance significantly in optical networks with conversion capability.

  17. Achieving energy efficiency during collective communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundriyal, Vaibhav; Sosonkina, Masha; Zhang, Zhao

    2012-09-13

    Energy consumption has become a major design constraint in modern computing systems. With the advent of petaflops architectures, power-efficient software stacks have become imperative for scalability. Techniques such as dynamic voltage and frequency scaling (called DVFS) and CPU clock modulation (called throttling) are often used to reduce the power consumption of the compute nodes. To avoid significant performance losses, these techniques should be used judiciously during parallel application execution. For example, an application's communication phases may be good candidates to apply the DVFS and CPU throttling without incurring a considerable performance loss. They are often considered as indivisible operations although little attention is being devoted to the energy saving potential of their algorithmic steps. In this work, two important collective communication operations, all-to-all and allgather, are investigated as to their augmentation with energy saving strategies on the per-call basis. The experiments prove the viability of such a fine-grain approach. They also validate a theoretical power consumption estimate for multicore nodes proposed here. While keeping the performance loss low, the obtained energy savings were always significantly higher than those achieved when DVFS or throttling were switched on across the entire application run.

  18. Flexible Fusion Structure-Based Performance Optimization Learning for Multisensor Target Tracking

    PubMed Central

    Ge, Quanbo; Wei, Zhongliang; Cheng, Tianfa; Chen, Shaodong; Wang, Xiangfeng

    2017-01-01

    Compared with the fixed fusion structure, the flexible fusion structure with mixed fusion methods has better adjustment performance for the complex air task network systems, and it can effectively help the system to achieve the goal under the given constraints. Because of the time-varying situation of the task network system induced by moving nodes and non-cooperative targets, and limitations such as communication bandwidth and measurement distance, it is necessary to dynamically adjust the system fusion structure including sensors and fusion methods in a given adjustment period. Aiming at this, this paper studies the design of a flexible fusion algorithm by using an optimization learning technology. The purpose is to dynamically determine the sensors' numbers and the associated sensors to take part in the centralized and distributed fusion processes, respectively, herein termed sensor subset selection. Firstly, two system performance indexes are introduced. Especially, the survivability index is presented and defined. Secondly, based on the two indexes and considering other conditions such as communication bandwidth and measurement distance, optimization models for both single target tracking and multi-target tracking are established. Correspondingly, solution steps are given for the two optimization models in detail. Simulation examples are demonstrated to validate the proposed algorithms. PMID:28481243

  19. Reproductive constraints, direct fitness and indirect fitness benefits explain helping behaviour in the primitively eusocial wasp, Polistes canadensis.

    PubMed

    Sumner, Seirian; Kelstrup, Hans; Fanelli, Daniele

    2010-06-07

    A key step in the evolution of sociality is the abandonment of independent breeding in favour of helping. In cooperatively breeding vertebrates and primitively eusocial insects, helpers are capable of leaving the group and reproducing independently, and yet many do not. A fundamental question therefore is why do helpers help? Helping behaviour may be explained by constraints on independent reproduction and/or benefits to individuals from helping. Here, we examine simultaneously the reproductive constraints and fitness benefits underlying helping behaviour in a primitively eusocial paper wasp. We gave 31 helpers the opportunity to become egg-layers on their natal nests by removing nestmates. This allowed us to determine whether helpers are reproductively constrained in any way. We found that age strongly influenced whether an ex-helper could become an egg-layer, such that young ex-helpers could become egg-layers while old ex-helpers were less able. These differential reproductive constraints enabled us to make predictions about the behaviours of ex-helpers, depending on the relative importance of direct and indirect fitness benefits. We found little evidence that indirect fitness benefits explain helping behaviour, as 71 per cent of ex-helpers left their nests before the end of the experiment. In the absence of reproductive constraints, however, young helpers value direct fitness opportunities over indirect fitness. We conclude that a combination of reproductive constraints and potential for future direct reproduction explain helping behaviour in this species. Testing several competing explanations for helping behaviour simultaneously promises to advance our understanding of social behaviour in animal groups.

  20. Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security

    NASA Astrophysics Data System (ADS)

    Salloum, Ahmed

    Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help reduce the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The investigation of system performance included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles resulting from the reactive power deficiency caused by constraint relaxations. Moreover, the impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models. Various scenarios were investigated to capture and compare the impacts of base-case and post-contingency relaxations on real-time system performance, including the presence of both relaxations simultaneously. The effect of penalty prices on the number and magnitude of relaxations was investigated as well.
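
    The penalty-price mechanism can be made concrete with a toy dispatch problem. In the sketch below, a relaxable transmission limit is modeled as a slack variable priced into the objective, so the constraint's shadow price is effectively capped at the penalty; the network, costs, and penalty value are hypothetical, and the formulation is a generic illustration rather than the market model studied here.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-generator dispatch with one relaxable line limit. Variables:
# x = [g1, g2, s], where s is the relaxation slack on the line serving g1.
cost = np.array([10.0, 30.0, 2000.0])   # $/MWh; 2000 is the penalty price on s
A_eq = np.array([[1.0, 1.0, 0.0]])      # g1 + g2 must meet the 100 MW load
b_eq = np.array([100.0])
A_ub = np.array([[1.0, 0.0, -1.0]])     # g1 <= 60 + s  (relaxable limit)
b_ub = np.array([60.0])
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 80), (0, 80), (0, None)], method="highs")
print(res.x)  # with a high penalty the limit binds: g1 = 60, g2 = 40, s = 0
```

    If the penalty price were set below the marginal cost spread between the two generators, the solver would buy slack instead, which is exactly the relaxation behavior whose security impact the work investigates.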

  1. Multiconstrained gene clustering based on generalized projections

    PubMed Central

    2010-01-01

    Background Gene clustering for annotating gene functions is one of the fundamental issues in bioinformatics. The best clustering solution is often regularized by multiple constraints such as gene expressions, Gene Ontology (GO) annotations and gene network structures. How to integrate multiple constraints for an optimal clustering solution still remains an unsolved problem. Results We propose a novel multiconstrained gene clustering (MGC) method within the generalized projection onto convex sets (POCS) framework used widely in image reconstruction. Each constraint is formulated as a corresponding set. The generalized projector iteratively projects the clustering solution onto these sets in order to find a consistent solution included in the intersection set that satisfies all constraints. Compared with previous MGC methods, POCS can integrate multiple constraints of different natures without distorting the original constraints. To evaluate the clustering solution, we also propose a new performance measure referred to as Gene Log Likelihood (GLL) that considers genes having more than one function and hence belonging to more than one cluster. Comparative experimental results show that our POCS-based gene clustering method outperforms current state-of-the-art MGC methods. Conclusions The POCS-based MGC method can successfully combine multiple constraints of different natures for gene clustering. Also, the proposed GLL is an effective performance measure for soft clustering solutions. PMID:20356386
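
    The core POCS iteration is simple to sketch. Below, a candidate membership matrix is alternately projected onto two convex sets — a box constraint (memberships in [0, 1]) and the hyperplane on which each gene's memberships sum to one — until the iterate settles in their intersection; the specific sets are illustrative stand-ins for the paper's expression, GO, and network constraint sets.

```python
import numpy as np

def project_box(x):
    """Set 1: cluster memberships must lie in [0, 1]."""
    return np.clip(x, 0.0, 1.0)

def project_sum_to_one(x):
    """Set 2: each gene's memberships lie on the hyperplane sum = 1."""
    return x + (1.0 - x.sum(axis=1, keepdims=True)) / x.shape[1]

def pocs(x0, projections, iters=1000, tol=1e-9):
    """Cycle through the projections until the iterate stabilizes,
    i.e., until it (approximately) lies in the intersection set."""
    x = x0.copy()
    for _ in range(iters):
        x_new = x
        for project in projections:
            x_new = project(x_new)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
memberships = pocs(rng.normal(size=(5, 3)), [project_box, project_sum_to_one])
print(memberships.sum(axis=1))  # each gene's memberships now sum to 1
```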

  2. Viterbi decoding for satellite and space communication.

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Jacobs, I. M.

    1971-01-01

    Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power-limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally, a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
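
    As a concrete illustration of the decoding principle, here is a minimal hard-decision Viterbi decoder for the classic rate-1/2, constraint-length-3 convolutional code with generators (7, 5) octal; the decoders discussed in the record use longer constraint lengths and soft quantization, so this is a pedagogical sketch rather than the implementation described.

```python
G = (0b111, 0b101)        # generator taps for the (7, 5) rate-1/2, K = 3 code
N_STATES = 4              # 2**(K-1) encoder states

def parity(v):
    return bin(v).count("1") & 1

def conv_encode(bits):
    """Encode a bit list; the register holds the current and two past bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi_decode(rx):
    """Hard-decision Viterbi decoding with a Hamming branch metric."""
    INF = 10 ** 9
    pm = [0] + [INF] * (N_STATES - 1)     # path metrics, start in state 0
    paths = [[] for _ in range(N_STATES)]
    for t in range(len(rx) // 2):
        r = rx[2 * t: 2 * t + 2]
        new_pm, new_paths = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            for b in (0, 1):              # hypothesize each input bit
                reg = (b << 2) | s
                ns = reg >> 1
                m = pm[s] + sum(parity(reg & g) != x for g, x in zip(G, r))
                if m < new_pm[ns]:        # keep the surviving path per state
                    new_pm[ns], new_paths[ns] = m, paths[s] + [b]
        pm, paths = new_pm, new_paths
    return paths[pm.index(min(pm))]

msg = [1, 0, 1, 1, 0, 0]
assert viterbi_decode(conv_encode(msg)) == msg
```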

  3. Ammonia oxidizer populations vary with nitrogen cycling across a tropical montane mean annual temperature gradient

    Treesearch

    S. Pierre; I. Hewson; J. P. Sparks; C. M. Litton; C. Giardina; P. M. Groffman; T. J. Fahey

    2017-01-01

    Functional gene approaches have been used to better understand the roles of microbes in driving forest soil nitrogen (N) cycling rates and bioavailability. Ammonia oxidation is a rate limiting step in nitrification, and is a key area for understanding environmental constraints on N availability in forests. We studied how increasing temperature affects the role of...

  4. Assessing Constraints on Soldier Cognitive and Perceptual Motor Performance During Vehicle Motion

    DTIC Science & Technology

    2008-05-01

    vehicle systems are biomechanical (Sirouspour & Salcudean, 2003; Sövényi & Gillespie, 2007), cognitive (Parasuraman & Riley, 1997), and psychomotor...vs. velocity), pedals for braking/acceleration Environmental constraints associated with the support surface (Seat): Damping, inclination...steering and secondarily, performance differences between a joystick and pedals for throttle and brake control. Eleven participants completed three

  5. The Application of the Monte Carlo Approach to Cognitive Diagnostic Computerized Adaptive Testing With Content Constraints

    ERIC Educational Resources Information Center

    Mao, Xiuzhen; Xin, Tao

    2013-01-01

    The Monte Carlo approach which has previously been implemented in traditional computerized adaptive testing (CAT) is applied here to cognitive diagnostic CAT to test the ability of this approach to address multiple content constraints. The performance of the Monte Carlo approach is compared with the performance of the modified maximum global…

  6. State estimation with incomplete nonlinear constraint

    NASA Astrophysics Data System (ADS)

    Huang, Yuan; Wang, Xueying; An, Wei

    2017-10-01

    A problem of state estimation with a new constraint, termed the incomplete nonlinear constraint, is considered. Targets often move along curved roads; if the width of the road is neglected, the road can be treated as a constraint, and since the positions of the sensors (e.g., radar) are known in advance, this information can be used to enhance the performance of the tracking filter. The problem of how to incorporate this prior knowledge is considered. In this paper, a second-order state constraint is considered. An ellipse-fitting algorithm is adopted to incorporate the prior knowledge by estimating the radius of the trajectory, transforming the fitting problem into a nonlinear estimation problem. The estimated ellipse function is used to approximate the nonlinear constraint. Then, the typical nonlinear constraint methods proposed in recent works can be used to constrain the target state. Monte Carlo simulation results are presented to illustrate the effectiveness of the proposed method in state estimation with an incomplete constraint.
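
    A minimal sketch of the constraint step, assuming the simplest special case of a second-order road constraint — a circular segment with fitted center and radius: the estimated position is radially projected onto the circle while the remaining state components are left untouched. The ellipse-fitting and nonlinear-constraint machinery in the record is more general; the function below is only illustrative.

```python
import numpy as np

def project_to_circle(x_est, center, radius):
    """Constrain a state estimate [px, py, vx, vy] to a circular road
    of known (fitted) center and radius by radial projection."""
    p = x_est[:2] - center
    p = center + radius * p / np.linalg.norm(p)
    return np.concatenate([p, x_est[2:]])

x = np.array([3.2, 0.1, 1.0, 0.5])             # estimate slightly off the road
print(project_to_circle(x, np.zeros(2), 3.0))  # position snapped to radius 3
```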

  7. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm is proposed in this paper. The cutting path optimization problem of sheet metal parts is taken as the research object, and the essence and optimization goal of the problem are presented. The traditional serial cutting constraint rule is improved, and a cutting constraint rule that permits cross cutting is proposed. The contour lines of the parts are discretized and a mathematical model of cutting path optimization is established, converting the problem into one of selecting among the contour lines of the parts. The ant colony algorithm is used to solve the problem, and the principle and steps of the algorithm are analyzed.
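
    To make the construction-and-pheromone mechanics concrete, the sketch below applies a bare-bones ant colony search to ordering a set of contour entry points (a travelling-salesman-style stand-in for the cutting path). The parameter values and the reduction of contours to single points are simplifying assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def aco_order(dist, n_ants=20, iters=200, alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    """Bare-bones ant colony search for a short visiting order of contours."""
    n = len(dist)
    tau = np.ones((n, n))                     # pheromone trails
    eta = 1.0 / (dist + np.eye(n))            # heuristic visibility
    best, best_len = None, np.inf
    for _ in range(iters):
        tau *= 1.0 - rho                      # evaporation
        for _ant in range(n_ants):
            tour = [int(rng.integers(n))]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, dtype=bool)
                mask[tour] = False            # visit each contour once
                p = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(int(rng.choice(n, p=p / p.sum())))
            length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best, best_len = tour, length
            for a, b in zip(tour, tour[1:]):
                tau[a, b] += Q / length       # pheromone deposit
    return best, best_len

pts = rng.random((7, 2))                      # contour entry points (toy)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(aco_order(dist))
```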

  8. Inferring segmented dense motion layers using 5D tensor voting.

    PubMed

    Min, Changki; Medioni, Gérard

    2008-09-01

    We present a novel local spatiotemporal approach to produce motion segmentation and dense temporal trajectories from an image sequence. A common representation of image sequences is a 3D spatiotemporal volume, (x,y,t), and its corresponding mathematical formalism is the fiber bundle. However, directly enforcing the spatiotemporal smoothness constraint is difficult in the fiber bundle representation. Thus, we convert the representation into a new 5D space (x,y,t,vx,vy) with an additional velocity domain, where each moving object produces a separate 3D smooth layer. The smoothness constraint is now enforced by extracting 3D layers using the tensor voting framework in a single step that solves both correspondence and segmentation simultaneously. Motion segmentation is achieved by identifying those layers, and the dense temporal trajectories are obtained by converting the layers back into the fiber bundle representation. We proceed to address three applications (tracking, mosaic, and 3D reconstruction) that are hard to solve from the video stream directly because of the segmentation and dense matching steps, but become straightforward with our framework. The approach does not make restrictive assumptions about the observed scene or camera motion and is therefore generally applicable. We present results on a number of data sets.

  9. Defining a genetic ideotype for crop improvement.

    PubMed

    Trethowan, Richard M

    2014-01-01

    While plant breeders traditionally base selection on phenotype, the development of genetic ideotypes can help focus the selection process. This chapter provides a road map for the establishment of a refined genetic ideotype. The first step is an accurate definition of the target environment, including the underlying constraints, their probability of occurrence, and their impact on phenotype. Once the environmental constraints are established, the wealth of information on plant physiological responses to stresses, known gene information, and knowledge of genotype × environment and gene × environment interaction help refine the target ideotype and form a basis for cross prediction. Once a genetic ideotype is defined, the challenge remains to build the ideotype in a plant breeding program. A number of strategies, including marker-assisted recurrent selection and genomic selection, can be used that also provide valuable information for the optimization of the genetic ideotype. However, the informatics required to underpin the realization of the genetic ideotype then becomes crucial. The reduced cost of genotyping and the need to combine pedigree, phenotypic, and genetic data in a structured way for analysis and interpretation often become the rate-limiting steps, thus reducing genetic gain. Systems for managing these data and an example of ideotype construction for a defined environment type are discussed.

  10. An OpenMI Implementation of a Water Resources System using Simple Script Wrappers

    NASA Astrophysics Data System (ADS)

    Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.

    2013-12-01

    This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant through organization within three modules that initialize, perform time steps, and finalize results. A configuration file is prepared that specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments in Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are collectively run within this interdisciplinary framework from initial conditions into the future. It will be shown that by applying model constraints to one model, the impact on changes to the water resources system may be assessed.
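
    A minimal sketch of the three-module organization, written in Python (one of the languages the wrappers support). The function names, the toy groundwater state, and the exchange items are illustrative assumptions; the actual Simple Script Wrapper API may differ.

```python
# Hypothetical Simple Script Wrapper: initialize / time-step / finalize.
state = {}

def initialize(config_file):
    """Read the configuration that declares input/output exchange items."""
    state["head"] = 100.0          # initial groundwater head (m), toy value
    state["t"] = 0

def perform_time_step(pumping_rate):
    """Advance one step using an input received from the coupled models."""
    state["head"] -= 0.001 * pumping_rate   # toy drawdown response
    state["t"] += 1
    return state["head"]           # exposed as an output exchange item

def finalize(output_file):
    """Write results and release resources."""
    print(f"t={state['t']}: final head {state['head']:.2f} m -> {output_file}")

initialize("model.cfg")
for rate in (50.0, 80.0, 60.0):    # e.g., pumping rates from an economic model
    perform_time_step(rate)
finalize("heads.out")
```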

  11. Forecast and analysis of the cosmological redshift drift.

    PubMed

    Lazkoz, Ruth; Leanizbarrutia, Iker; Salzano, Vincenzo

    2018-01-01

    The cosmological redshift drift could lead to the next step in high-precision cosmic geometric observations, becoming a direct and irrefutable test for cosmic acceleration. In order to test the viability and possible properties of this effect, also called the Sandage-Loeb (SL) test, we generate a model-independent mock data set in order to compare its constraining power with that of future mock data sets of Type Ia Supernovae (SNe) and Baryon Acoustic Oscillations (BAO). The performance of those data sets is analyzed by testing several cosmological models with the Markov chain Monte Carlo (MCMC) method, both independently as well as combining all data sets. Final results show that, in general, SL data sets allow for remarkable constraints on the matter density parameter today (Ω_m) in every tested model, showing also a great complementarity with SNe and BAO data regarding dark energy parameters.
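
    The size of the signal being mocked can be checked with the standard Sandage-Loeb expression dz/dt₀ = (1 + z)H₀ − H(z). The sketch below evaluates it for a flat ΛCDM model over a ten-year baseline; the cosmological parameter values are illustrative, not the paper's fits.

```python
import numpy as np

def redshift_drift(z, H0=70.0, Om=0.3, years=10.0):
    """Sandage-Loeb signal dz accumulated over an observing baseline for a
    flat LCDM model: dz/dt0 = (1 + z) * H0 - H(z)."""
    H0_si = H0 * 1000.0 / 3.0857e22                  # km/s/Mpc -> 1/s
    Hz = H0_si * np.sqrt(Om * (1 + z) ** 3 + (1 - Om))
    seconds = years * 3.156e7
    return ((1 + z) * H0_si - Hz) * seconds

z = np.array([1.0, 2.0, 3.0, 4.0])
print(redshift_drift(z))   # of order 1e-10; changes sign with z under LCDM
```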

  12. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.

  13. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full search vector quantizers (VQs) for a large range of rates.

  14. Toward Implementing Patient Flow in a Cancer Treatment Center to Reduce Patient Waiting Time and Improve Efficiency.

    PubMed

    Suss, Samuel; Bhuiyan, Nadia; Demirli, Kudret; Batist, Gerald

    2017-06-01

    Outpatient cancer treatment centers can be considered as complex systems in which several types of medical professionals and administrative staff must coordinate their work to achieve the overall goals of providing quality patient care within budgetary constraints. In this article, we use analytical methods that have been successfully employed for other complex systems to show how a clinic can simultaneously reduce patient waiting times and non-value added staff work in a process that has a series of steps, more than one of which involves a scarce resource. The article describes the system model and the key elements in the operation that lead to staff rework and patient queuing. We propose solutions to the problems and provide a framework to evaluate clinic performance. At the time of this report, the proposals are in the process of implementation at a cancer treatment clinic in a major metropolitan hospital in Montreal, Canada.

  15. Spectroscopic ellipsometry data inversion using constrained splines and application to characterization of ZnO with various morphologies

    NASA Astrophysics Data System (ADS)

    Gilliot, Mickaël; Hadjadj, Aomar; Stchakovsky, Michel

    2017-11-01

    An original method of ellipsometric data inversion is proposed based on the use of constrained splines. The imaginary part of the dielectric function is represented by a series of splines, constructed with particular constraints on the slopes at the node boundaries to avoid the well-known oscillations of natural splines. The nodes are used as fit parameters. The real part is calculated using Kramers-Kronig relations. The inversion can be performed in successive inversion steps with increasing resolution. This method is used to characterize thin zinc oxide layers obtained by a sol-gel and spin-coating process, with a particular recipe yielding very thin layers presenting nano-porosity. Such layers have particular optical properties correlated with their thickness, morphological, and structural properties. The constrained spline method is particularly efficient for such materials, which may not be easily represented by standard dielectric function models.

  16. Solving Assembly Sequence Planning using Angle Modulated Simulated Kalman Filter

    NASA Astrophysics Data System (ADS)

    Mustapa, Ainizar; Yusof, Zulkifli Md.; Adam, Asrul; Muhammad, Badaruddin; Ibrahim, Zuwairie

    2018-03-01

    This paper presents an implementation of the Simulated Kalman Filter (SKF) algorithm for optimizing an Assembly Sequence Planning (ASP) problem. The SKF search strategy contains three simple steps: predict, measure, and estimate. The main objective of ASP is to determine the sequence of component installation that shortens assembly time or saves assembly costs. Initially, a permutation sequence is generated to represent each agent. Each agent is then subjected to a precedence matrix constraint to produce a feasible assembly sequence. Next, the Angle Modulated SKF (AMSKF) is proposed for solving the ASP problem. The main idea of the angle-modulated approach to solving combinatorial optimization problems is to use a function, g(x), to create a continuous signal. The performance of the proposed AMSKF is compared against previous works that solved ASP by applying BGSA, BPSO, and MSPSO. Using a case study of ASP, the results show that AMSKF outperformed all the other algorithms in obtaining the best solution.
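
    A sketch of the angle-modulation idea, assuming the four-parameter trigonometric generating function used in the angle-modulated PSO literature (the exact g(x) of this paper is not given in the abstract): four continuous values are evolved by the optimizer, and sampling g(x) yields a bit string that can in turn be decoded into an assembly sequence.

```python
import numpy as np

def angle_modulate(params, n_bits):
    """Map four continuous parameters (a, b, c, d) to a bit string by
    sampling g(x) = sin(2*pi*(x - a) * b * cos(2*pi*(x - a) * c)) + d,
    the generating function assumed from the angle-modulation literature."""
    a, b, c, d = params
    x = np.arange(n_bits)
    g = np.sin(2 * np.pi * (x - a) * b * np.cos(2 * np.pi * (x - a) * c)) + d
    return (g > 0).astype(int)

bits = angle_modulate([0.1, 0.8, 0.3, -0.2], n_bits=12)
print(bits)   # would be decoded further (e.g., by ranking) into a permutation
```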

  17. Microbial synthesis gas utilization and ways to resolve kinetic and mass-transfer limitations.

    PubMed

    Yasin, Muhammad; Jeong, Yeseul; Park, Shinyoung; Jeong, Jiyeong; Lee, Eun Yeol; Lovitt, Robert W; Kim, Byung Hong; Lee, Jinwon; Chang, In Seop

    2015-02-01

    Microbial conversion of syngas to energy-dense biofuels and valuable chemicals is a potential technology for the efficient utilization of fossils (e.g., coal) and renewable resources (e.g., lignocellulosic biomass) in an environmentally friendly manner. However, gas-liquid mass transfer and kinetic limitations are still major constraints that limit the widespread adoption and successful commercialization of the technology. This review paper provides rationales for syngas bioconversion and summarizes the reaction limited conditions along with the possible strategies to overcome these challenges. Mass transfer and economic performances of various reactor configurations are compared, and an ideal case for optimum bioreactor operation is presented. Overall, the challenges with the bioprocessing steps are highlighted, and potential solutions are suggested. Future research directions are provided and a conceptual design for a membrane-based syngas biorefinery is proposed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Observation Planning Made Simple with Science Opportunity Analyzer (SOA)

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Polanskey, Carol A.

    2004-01-01

    As NASA undertakes the exploration of the Moon and Mars as well as the rest of the Solar System, while continuing to investigate Earth's oceans, winds, atmosphere, weather, etc., the need to allow operations users to easily define their observations keeps growing. Operations teams need to be able to determine the best time to perform an observation, as well as its duration and other parameters such as the observation target. In addition, operations teams need to be able to check the observation for validity against objectives and intent as well as spacecraft constraints such as turn rates and acceleration or pointing exclusion zones. Science Opportunity Analyzer (SOA), in development for the last six years, is a multi-mission toolset built to meet those needs. Operations team members can follow six simple steps to define an observation without having to know the complexities of orbital mechanics, coordinate transformations, or the spacecraft itself.

  19. Thermal sensors to control polymer forming. Challenge and solutions

    NASA Astrophysics Data System (ADS)

    Lemeunier, F.; Boyard, N.; Sarda, A.; Plot, C.; Lefèvre, N.; Petit, I.; Colomines, G.; Allanic, N.; Bailleul, J. L.

    2017-10-01

    Many thermal sensors have been used for many years to better understand and control material forming processes, especially polymer processing. Due to technical constraints (high pressure, sealing, sensor dimensions…), the thermal measurement is often performed in the tool or close to its surface; thus, it only gives partial and disturbed information. Having reliable information about the heat flux exchanged between the tool and the material during the process would be very helpful to improve the control of the process and to favor the development of new materials. In this work, we present several sensors developed in labs to study the molding steps in forming processes. The analysis of the obtained thermal measurements (temperature, heat flux) shows the sensitivity threshold that thermal sensors require to detect the rate of thermal reaction on-line. Based on these data, we present new sensor designs that have been patented.

  20. High performance techniques for space mission scheduling

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.

    1994-01-01

    In this paper, we summarize current research at Carnegie Mellon University aimed at development of high performance techniques and tools for space mission scheduling. Similar to prior research in opportunistic scheduling, our approach assumes the use of dynamic analysis of problem constraints as a basis for heuristic focusing of problem solving search. This methodology, however, is grounded in representational assumptions more akin to those adopted in recent temporal planning research, and in a problem solving framework which similarly emphasizes constraint posting in an explicitly maintained solution constraint network. These more general representational assumptions are necessitated by the predominance of state-dependent constraints in space mission planning domains, and the consequent need to integrate resource allocation and plan synthesis processes. First, we review the space mission problems we have considered to date and indicate the results obtained in these application domains. Next, we summarize recent work in constraint posting scheduling procedures, which offer the promise of better future solutions to this class of problems.

  1. Parametric Deformation of Discrete Geometry for Aerodynamic Shape Design

    NASA Technical Reports Server (NTRS)

    Anderson, George R.; Aftosmis, Michael J.; Nemec, Marian

    2012-01-01

    We present a versatile discrete geometry manipulation platform for aerospace vehicle shape optimization. The platform is based on the geometry kernel of an open-source modeling tool called Blender and offers access to four parametric deformation techniques: lattice, cage-based, skeletal, and direct manipulation. Custom deformation methods are implemented as plugins, and the kernel is controlled through a scripting interface. Surface sensitivities are provided to support gradient-based optimization. The platform architecture allows the use of geometry pipelines, where multiple modelers are used in sequence, enabling manipulation difficult or impossible to achieve with a constructive modeler or deformer alone. We implement an intuitive custom deformation method in which a set of surface points serve as the design variables and user-specified constraints are intrinsically satisfied. We test our geometry platform on several design examples using an aerodynamic design framework based on Cartesian grids. We examine inverse airfoil design and shape matching and perform lift-constrained drag minimization on an airfoil with thickness constraints. A transport wing-fuselage integration problem demonstrates the approach in 3D. In a final example, our platform is pipelined with a constructive modeler to parabolically sweep a wingtip while applying a 1-G loading deformation across the wingspan. This work is an important first step towards the larger goal of leveraging the investment of the graphics industry to improve the state-of-the-art in aerospace geometry tools.

  2. Advancing RF pulse design using an open-competition format: Report from the 2015 ISMRM challenge.

    PubMed

    Grissom, William A; Setsompop, Kawin; Hurley, Samuel A; Tsao, Jeffrey; Velikina, Julia V; Samsonov, Alexey A

    2017-10-01

    To advance the best solutions to two important RF pulse design problems with an open head-to-head competition. Two sub-challenges were formulated in which contestants competed to design the shortest simultaneous multislice (SMS) refocusing pulses and slice-selective parallel transmission (pTx) excitation pulses, subject to realistic hardware and safety constraints. Short refocusing pulses are needed for spin echo SMS imaging at high multiband factors, and short slice-selective pTx pulses are needed for multislice imaging in ultra-high field MRI. Each sub-challenge comprised two phases, in which the first phase posed problems with a low barrier of entry, and the second phase encouraged solutions that performed well in general. The Challenge ran from October 2015 to May 2016. The pTx Challenge winners developed a spokes pulse design method that combined variable-rate selective excitation with an efficient method to enforce SAR constraints, which achieved 10.6 times shorter pulse durations than conventional approaches. The SMS Challenge winners developed a time-optimal control multiband pulse design algorithm that achieved 5.1 times shorter pulse durations than conventional approaches. The Challenge led to rapid step improvements in solutions to significant problems in RF excitation for SMS imaging and ultra-high field MRI. Magn Reson Med 78:1352-1361, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  3. Ares I Flight Control System Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Alaniz, Abran; Hall, Robert; Bedrossian, Nazareth; Hall, Charles; Ryan, Stephen; Jackson, Mark

    2010-01-01

    The Ares I launch vehicle represents a challenging flex-body structural environment for flight control system design. This paper presents a design methodology for employing numerical optimization to develop the Ares I flight control system. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares I time-varying dynamics and control system can be frozen over a short period of time, the flight controllers are designed to stabilize all selected frozen-time launch control systems in the presence of parametric uncertainty. Flex filters in the flight control system are designed to minimize the flex components in the error signals before they are sent to the attitude controller. To ensure adequate response to guidance command, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the flex filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The flight control system designs provided here have been demonstrated to provide stable first and second stage control systems in both Draper Ares Stability Analysis Tool (ASAT) and the MSFC 6DOF nonlinear time domain simulation.

  4. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints, and costs for each resource, the algorithm generates, for each time period, a set of commands for controllable resources that guarantees achievement of the technical goals while minimizing the overall cost. Before the controller is integrated into the telecontrol system of real networks, a complete simulation phase has been started in order to validate the proper behaviour of the algorithm and to identify possible critical conditions. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and "costs" of resources that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results from the simulation activity on test networks (based on real MV grids) and actual battery characteristics are given, together with prospective performance in real applications.

  5. Genarris: Random generation of molecular crystal structures and fast screening with a Harris approximation

    NASA Astrophysics Data System (ADS)

    Li, Xiayue; Curtis, Farren S.; Rose, Timothy; Schober, Christoph; Vazquez-Mayagoitia, Alvaro; Reuter, Karsten; Oberhofer, Harald; Marom, Noa

    2018-06-01

    We present Genarris, a Python package that performs configuration space screening for molecular crystals of rigid molecules by random sampling with physical constraints. For fast energy evaluations, Genarris employs a Harris approximation, whereby the total density of a molecular crystal is constructed via superposition of single molecule densities. Dispersion-inclusive density functional theory is then used for the Harris density without performing a self-consistency cycle. Genarris uses machine learning for clustering, based on a relative coordinate descriptor developed specifically for molecular crystals, which is shown to be robust in identifying packing motif similarity. In addition to random structure generation, Genarris offers three workflows based on different sequences of successive clustering and selection steps: the "Rigorous" workflow is an exhaustive exploration of the potential energy landscape, the "Energy" workflow produces a set of low energy structures, and the "Diverse" workflow produces a maximally diverse set of structures. The latter is recommended for generating initial populations for genetic algorithms. Here, the implementation of Genarris is reported and its application is demonstrated for three test cases.

  6. Reducing sick leave of Dutch vocational school students: adaptation of a sick leave protocol using the intervention mapping process.

    PubMed

    de Kroon, Marlou L A; Bulthuis, Jozien; Mulder, Wico; Schaafsma, Frederieke G; Anema, Johannes R

    2016-12-01

    Since the extent of sick leave and the problems of vocational school students are relatively large, we aimed to tailor a sick leave protocol at Dutch lower secondary education schools to the particular context of vocational schools. Four steps of the iterative process of Intervention Mapping (IM) to adapt this protocol were carried out: (1) performing a needs assessment and defining a program objective, (2) determining the performance and change objectives, (3) identifying theory-based methods and practical strategies and (4) developing a program plan. Interviews with students using structured questionnaires, in-depth interviews with relevant stakeholders, a literature research and, finally, a pilot implementation were carried out. A sick leave protocol was developed that was feasible and acceptable for all stakeholders. The main barriers for widespread implementation are time constraints in both monitoring and acting upon sick leave by school and youth health care. The iterative process of IM has shown its merits in the adaptation of the manual 'A quick return to school is much better' to a sick leave protocol for vocational school students.

  7. Dynamic Model of the BIO-Plex Air Revitalization System

    NASA Technical Reports Server (NTRS)

    Finn, Cory; Meyers, Karen; Duffield, Bruce; Luna, Bernadette (Technical Monitor)

    2000-01-01

    The BIO-Plex facility will need to support a variety of life support system designs and operation strategies. These systems will be tested and evaluated in the BIO-Plex facility. An important goal of the life support program is to identify designs that best meet all size and performance constraints for a variety of possible future missions. Integrated human testing is a necessary step in reaching this goal. System modeling and analysis will also play an important role in this endeavor. Currently, simulation studies are being used to estimate air revitalization buffer and storage requirements in order to develop the infrastructure requirements of the BIO-Plex facility. Simulation studies are also being used to verify that the envisioned operation strategy will be able to meet all performance criteria. In this paper, a simulation study is presented for a nominal BIO-Plex scenario with a high-level of crop growth. A general description of the dynamic mass flow model is provided, along with some simulation results. The paper also discusses sizing and operations issues and describes plans for future simulation studies.

  8. High-throughput process development: II. Membrane chromatography.

    PubMed

    Rathore, Anurag S; Muthukumar, Sampath

    2014-01-01

    Membrane chromatography is gradually emerging as an alternative to conventional column chromatography. It alleviates some of the major disadvantages associated with the latter including high pressure drop across the column bed and dependence on intra-particle diffusion for the transport of solute molecules to their binding sites within the pores of separation media. In the last decade, it has emerged as a method of choice for final polishing of biopharmaceuticals, in particular monoclonal antibody products. The relevance of such a platform is high in view of the constraints with respect to time and resources that the biopharma industry faces today. This protocol describes the steps involved in performing HTPD of a membrane chromatography step. It describes operation of a commercially available device (AcroPrep™ Advance filter plate with Mustang S membrane from Pall Corporation). This device is available in 96-well format with 7 μL membrane in each well. We discuss the challenges that one faces when performing such experiments as well as possible solutions to alleviate them. Besides describing the operation of the device, the protocol also presents an approach for statistical analysis of the data that is gathered from such a platform. A case study involving use of the protocol for examining ion exchange chromatography of Granulocyte Colony Stimulating Factor (GCSF), a therapeutic product, is briefly discussed. This is intended to demonstrate the usefulness of this protocol in generating data that is representative of the data obtained at the traditional lab scale. The agreement in the data is indeed very significant (regression coefficient 0.99). We think that this protocol will be of significant value to those involved in performing high-throughput process development of membrane chromatography.

  9. A two-step initial mass function:. Consequences of clustered star formation for binary properties

    NASA Astrophysics Data System (ADS)

    Durisen, R. H.; Sterzik, M. F.; Pickett, B. K.

    2001-06-01

    If stars originate in transient bound clusters of moderate size, these clusters will decay due to dynamic interactions in which a hard binary forms and ejects most or all of the other stars. When the cluster members are chosen at random from a reasonable initial mass function (IMF), the resulting binary characteristics do not match current observations. We find a significant improvement in the trends of binary properties from this scenario when an additional constraint is taken into account, namely that there is a distribution of total cluster masses set by the masses of the cloud cores from which the clusters form. Two distinct steps then determine final stellar masses - the choice of a cluster mass and the formation of the individual stars. We refer to this as a "two-step" IMF. Simple statistical arguments are used in this paper to show that a two-step IMF, combined with typical results from dynamic few-body system decay, tends to give better agreement between computed binary characteristics and observations than a one-step mass selection process.
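
    The distinction between the two mass-selection schemes is easy to simulate. The sketch below draws stars either directly from a power-law IMF (one step) or by first fixing a cluster mass budget from a parent core and then drawing stars until that budget is exhausted (two steps); the Salpeter-like slope and mass limits are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_imf(n, m_lo=0.08, m_hi=10.0, slope=-2.35):
    """Inverse-CDF draws from a Salpeter-like power-law IMF (toy choice)."""
    a = slope + 1.0
    u = rng.random(n)
    return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

def two_step_cluster(core_mass):
    """Step 1: the core fixes the cluster mass budget. Step 2: draw stars
    from the IMF until the budget is spent."""
    stars = []
    while sum(stars) < core_mass:
        stars.append(float(sample_imf(1)[0]))
    if len(stars) > 1 and sum(stars) > core_mass:
        stars.pop()                  # drop the draw that overshot the budget
    return stars

one_step = sample_imf(5)             # members chosen with no cluster-mass tie
two_step = two_step_cluster(5.0)     # members constrained by a 5 M_sun core
print(len(two_step), sum(two_step))
```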

  10. Motionless phase stepping in X-ray phase contrast imaging with a compact source

    PubMed Central

    Miao, Houxun; Chen, Lei; Bennett, Eric E.; Adamo, Nick M.; Gomella, Andrew A.; DeLuca, Alexa M.; Patel, Ajay; Morgan, Nicole Y.; Wen, Han

    2013-01-01

    X-ray phase contrast imaging offers a way to visualize the internal structures of an object without the need to deposit significant radiation, and thereby alleviate the main concern in X-ray diagnostic imaging procedures today. Grating-based differential phase contrast imaging techniques are compatible with compact X-ray sources, which is a key requirement for the majority of clinical X-ray modalities. However, these methods are substantially limited by the need for mechanical phase stepping. We describe an electromagnetic phase-stepping method that eliminates mechanical motion, thus removing the constraints in speed, accuracy, and flexibility. The method is broadly applicable to both projection and tomography imaging modes. The transition from mechanical to electromagnetic scanning should greatly facilitate the translation of X-ray phase contrast techniques into mainstream applications. PMID:24218599

  11. A step forward in understanding step-overs: the case of the Dead Sea Fault in northern Israel

    NASA Astrophysics Data System (ADS)

    Dembo, Neta; Granot, Roi; Hamiel, Yariv

    2017-04-01

    The rotational deformation field around step-overs between segments of strike-slip faults is poorly resolved. Vertical-axis paleomagnetic rotations can be used to characterize the deformation field and, together with mechanical modeling, can provide constraints on the characteristics of the adjacent fault segments. The northern Dead Sea Fault, a major segmented sinistral transform fault that straddles the boundary between the Arabian Plate and Sinai Subplate, offers an appropriate tectonic setting for our detailed mechanical and paleomagnetic investigation. We examine the paleomagnetic vertical-axis rotations of Neogene-Pleistocene basalt outcrops surrounding a right step-over between two prominent segments of the fault: the Jordan Gorge section and the Hula East Boundary Fault. Results from 20 new paleomagnetic sites reveal significant (>20°) counterclockwise rotations within the step-over and small clockwise rotations in the vicinity. Sites located further (>2.5 km) away from the step-over generally experience negligible to minor rotations. Finally, we construct a mechanical model guided by the observed rotational field that allows us to characterize the structural, mechanical and kinematic behavior of the Dead Sea Fault in northern Israel.

  12. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical methodology based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a large range, an optimal fit cannot always be obtained automatically, so an interactive human-computer procedure was used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit a one-step spherical model, a two-step spherical model, and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs are shown. The results showed that the simulation based on the two-step spherical model was the best, and that the one-step spherical model was better than the linear function model.
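
    A minimal sketch of the ordinary Kriging step: the weights solve the usual linear system built from a fitted variogram, with a Lagrange multiplier enforcing the unbiasedness constraint that the weights sum to one. The one-step spherical model below and the toy data are illustrative; the paper's fitted parameters are not reproduced here.

```python
import numpy as np

def spherical(h, nugget, sill, a):
    """One-step spherical variogram model with range a."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, np.where(h == 0.0, 0.0, g))

def ordinary_kriging(xy, z, x0, vario):
    """BLUE at x0; a Lagrange multiplier enforces sum(weights) = 1."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = vario(d)                 # variogram between samples
    A[n, :n] = A[:n, n] = 1.0            # unbiasedness constraint row/column
    A[n, n] = 0.0
    b = np.append(vario(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)            # weights w[:n] and multiplier w[n]
    return w[:n] @ z

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 1.5])
est = ordinary_kriging(xy, z, np.array([0.5, 0.5]),
                       lambda h: spherical(h, nugget=0.1, sill=1.0, a=2.0))
print(est)
```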

  13. On the Design of Smart Parking Networks in the Smart Cities: An Optimal Sensor Placement Model

    PubMed Central

    Bagula, Antoine; Castelli, Lorenzo; Zennaro, Marco

    2015-01-01

    Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called “anchor” nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network. PMID:26134104
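
    The optimal anchor-placement idea can be miniaturized into a brute-force exact search: enumerate candidate subsets in order of size and return the first one that covers every parking spot. This toy replaces the paper's integer linear program (and its lifetime and cost objectives) with pure coverage, so it is a conceptual sketch only; all coordinates are hypothetical.

```python
import itertools

def covers(anchor, spot, radio_range):
    """True if a spot lies within an anchor's radio range (toy disk model)."""
    dx, dy = anchor[0] - spot[0], anchor[1] - spot[1]
    return dx * dx + dy * dy <= radio_range * radio_range

def exact_anchor_placement(candidates, spots, radio_range):
    """Smallest set of anchor positions covering all spots, by exhaustion."""
    for k in range(1, len(candidates) + 1):
        for combo in itertools.combinations(candidates, k):
            if all(any(covers(a, s, radio_range) for a in combo) for s in spots):
                return combo
    return None

spots = [(0, 0), (4, 0), (8, 0)]           # parking spots (toy coordinates)
candidates = [(2, 0), (6, 0), (4, 3)]      # possible anchor locations
print(exact_anchor_placement(candidates, spots, radio_range=2.5))
```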

  15. The effect of imposing 'fractional abundance constraints' onto the multilayer perceptron for sub-pixel land cover classification

    NASA Astrophysics Data System (ADS)

    Heremans, Stien; Suykens, Johan A. K.; Van Orshoven, Jos

    2016-02-01

    To be physically interpretable, sub-pixel land cover fractions or abundances should fulfill two constraints, the Abundance Non-negativity Constraint (ANC) and the Abundance Sum-to-one Constraint (ASC). This paper focuses on the effect of imposing these constraints onto the MultiLayer Perceptron (MLP) for a multi-class sub-pixel land cover classification of a time series of low resolution MODIS-images covering the northern part of Belgium. Two constraining modes were compared, (i) an in-training approach that uses 'softmax' as the transfer function in the MLP's output layer and (ii) a post-training approach that linearly rescales the outputs of the unconstrained MLP. Our results demonstrate that the pixel-level prediction accuracy is markedly increased by the explicit enforcement, both in-training and post-training, of the ANC and the ASC. For aggregations of pixels (municipalities), the constrained perceptrons perform at least as well as their unconstrained counterparts. Although the difference in performance between the in-training and post-training approach is small, we recommend the former for integrating the fractional abundance constraints into MLPs meant for sub-pixel land cover estimation, regardless of the targeted level of spatial aggregation.
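
    The two constraining modes compared here can be sketched in a few lines: an in-training softmax output layer makes every prediction satisfy the ANC and ASC by construction, while a post-training fix-up rescales the raw outputs of an unconstrained network. The clip-then-renormalize rule below is one simple post-hoc variant and may differ in detail from the paper's linear rescaling.

```python
import numpy as np

def softmax(z):
    """In-training mode: the output layer itself guarantees ANC and ASC."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def posthoc_rescale(raw):
    """Post-training mode: clip negative abundances (ANC), then rescale so
    each pixel's fractions sum to one (ASC)."""
    f = np.clip(raw, 0.0, None)
    s = f.sum(axis=-1, keepdims=True)
    s[s == 0.0] = 1.0      # guard: an all-negative pixel stays all zero
    return f / s

raw = np.array([[0.7, -0.1, 0.5],      # unconstrained MLP outputs per pixel
                [0.2, 0.3, 0.1]])
print(posthoc_rescale(raw))            # rows are non-negative and sum to 1
print(softmax(raw))                    # likewise, enforced in-network
```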

  16. Schools Performing against the Odds: Enablements and Constraints to School Leadership Practice

    ERIC Educational Resources Information Center

    Naicker, Inbanathan; Grant, Carolyn; Pillay, Sivanandani

    2016-01-01

    There are many schools in developing countries which, despite the challenges they face, defy the odds and continue to perform at exceptionally high levels. We cast our gaze on one of these resilient schools in South Africa, and sought to learn about the leadership practices prevalent in this school and the enablements and constraints to the school…

  17. Orbiter fuel cell performance constraints. STS/OPS Pratt Whitney fuel cells. Operating limits for mission planning

    NASA Technical Reports Server (NTRS)

    Kolkhorst, H. E.

    1980-01-01

    The orbiter fuel cell powerplant (FCP) performance constraints listed in the Shuttle Operational Data Book (SODB) were analyzed using the shuttle environmental control requirements evaluation tool. The effects of FCP lifetime, coolant loops, and FCP voltage output were considered. Results indicate that the FCP limits defined in the SODB are not valid.

  18. On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint

    PubMed Central

    Zhang, Chong; Liu, Yufeng; Wu, Yichao

    2015-01-01

    For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
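
    A minimal sketch of kernel quantile regression with a data sparsity constraint, assuming a Gaussian kernel, plain subgradient descent on the pinball (check) loss with the squared RKHS norm penalty, and hard thresholding of small kernel coefficients to zero; the paper's estimator and tuning are more careful, so this is illustrative only.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=10.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_kernel_quantile(X, y, tau=0.9, lam=1e-3, lr=0.05,
                           iters=3000, thresh=1e-3):
    """Subgradient descent on the pinball loss plus lam * alpha' K alpha;
    small kernel coefficients are thresholded to zero (data sparsity)."""
    K = rbf_kernel(X, X)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(iters):
        r = y - K @ alpha                       # residuals
        psi = np.where(r > 0, tau, tau - 1.0)   # pinball subgradient in r
        grad = -K @ psi / n + 2.0 * lam * (K @ alpha)
        alpha -= lr * grad
    alpha[np.abs(alpha) < thresh] = 0.0         # data sparsity constraint
    return alpha

rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(4.0 * X[:, 0]) + 0.1 * rng.normal(size=40)
alpha = sparse_kernel_quantile(X, y)
print(np.count_nonzero(alpha), "of", len(alpha), "training points kept")
```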

  19. MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models

    DOE PAGES

    Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines

    2016-08-03

    Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.

  20. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be NP-hard, with uncertainty and complexity that cannot be handled by linear methods. Thus, current studies on the JSSP concentrate mainly on applying different methods to improve the heuristics used to optimize it. However, many obstacles to efficient optimization of the JSSP remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. Therefore, to address this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints by means of consistency technology and a constraint-spreading algorithm, improving the performance of the ACO algorithm; on this basis, the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.

  1. A Brownian dynamics study on ferrofluid colloidal dispersions using an iterative constraint method to satisfy Maxwell’s equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu

    2016-07-15

    Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell's equations. An iterative constraint method was developed to satisfy Maxwell's equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell's equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial when compared to homogeneous simulations that assume the material's magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present, but less so than typically observed.

  2. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2016-01-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089
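
    The sketch below illustrates PathTracer-style path finding as a breadth-first enumeration of simple carbon-transferring paths over a carbon transfer map, represented here as a hypothetical adjacency dictionary; the actual tools operate on genome-scale models and also handle cycles.

```python
# Sketch of carbon path finding: breadth-first search over a carbon transfer
# map (a hypothetical adjacency dict mapping metabolite -> metabolites that
# receive carbon from it via some reaction).
from collections import deque

def carbon_paths(ctm, source, target, max_len=6):
    """Enumerate simple carbon-transferring paths from source to target."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        if len(path) >= max_len:
            continue
        for nxt in ctm.get(path[-1], ()):
            if nxt not in path:          # keep paths simple (no revisits)
                queue.append(path + [nxt])
    return paths

ctm = {"glc": ["g6p"], "g6p": ["f6p", "6pg"], "f6p": ["fbp"], "fbp": ["pyr"]}
print(carbon_paths(ctm, "glc", "pyr"))  # [['glc', 'g6p', 'f6p', 'fbp', 'pyr']]
```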

  3. Step-by-step guideline for disease-specific costing studies in low- and middle-income countries: a mixed methodology

    PubMed Central

    Hendriks, Marleen E.; Kundu, Piyali; Boers, Alexander C.; Bolarinwa, Oladimeji A.; te Pas, Mark J.; Akande, Tanimola M.; Agbede, Kayode; Gomez, Gabriella B.; Redekop, William K.; Schultsz, Constance; Tan, Siok Swan

    2014-01-01

    Background Disease-specific costing studies can be used as input into cost-effectiveness analyses and provide important information for efficient resource allocation. However, limited data availability and limited expertise constrain such studies in low- and middle-income countries (LMICs). Objective To describe a step-by-step guideline for conducting disease-specific costing studies in LMICs where data availability is limited and to illustrate how the guideline was applied in a costing study of cardiovascular disease prevention care in rural Nigeria. Design The step-by-step guideline provides practical recommendations on methods and data requirements for six sequential steps: 1) definition of the study perspective, 2) characterization of the unit of analysis, 3) identification of cost items, 4) measurement of cost items, 5) valuation of cost items, and 6) uncertainty analyses. Results We discuss the necessary tradeoffs between the accuracy of estimates and data availability constraints at each step and illustrate how a mixed methodology of accurate bottom-up micro-costing and more feasible approaches can be used to make optimal use of all available data. An illustrative example from Nigeria is provided. Conclusions An innovative, user-friendly guideline for disease-specific costing in LMICs is presented, using a mixed methodology to account for limited data availability. The illustrative example showed that the step-by-step guideline can be used by healthcare professionals in LMICs to conduct feasible and accurate disease-specific cost analyses. PMID:24685170

  4. Effect of leading-edge load constraints on the design and performance of supersonic wings

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1985-01-01

    A theoretical and experimental investigation was conducted to assess the effect of leading-edge load constraints on supersonic wing design and performance. In the effort to delay flow separation and the formation of leading-edge vortices, two constrained, linear-theory optimization approaches were used to limit the loadings on the leading edge of a variable-sweep planform design. Experimental force and moment tests were made on two constrained camber wings, a flat uncambered wing, and an optimum design with no constraints. Results indicate that vortex strength and separation regions were mildest on the severely and moderately constrained wings.

  5. UCMS - A new signal parameter measurement system using digital signal processing techniques. [User Constraint Measurement System

    NASA Technical Reports Server (NTRS)

    Choi, H. J.; Su, Y. T.

    1986-01-01

    The User Constraint Measurement System (UCMS) is a hardware/software package developed by NASA Goddard to measure the signal parameter constraints of the user transponder in the TDRSS environment by means of an all-digital signal sampling technique. An account is presently given of the features of UCMS design and of its performance capabilities and applications; attention is given to such important aspects of the system as RF interface parameter definitions, hardware minimization, the emphasis on offline software signal processing, and end-to-end link performance. Applications to the measurement of other signal parameters are also discussed.

  6. SU-F-J-97: A Joint Registration and Segmentation Approach for Large Bladder Deformations in Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derksen, A; Koenig, L; Heldmann, S

    Purpose: To improve results of deformable image registration (DIR) in adaptive radiotherapy for large bladder deformations in CT/CBCT pelvis imaging. Methods: A variational multi-modal DIR algorithm is incorporated in a joint iterative scheme, alternating between segmentation-based bladder matching and registration. Using an initial DIR to propagate the bladder contour to the CBCT, in a segmentation step the contour is improved by discrete image gradient sampling along all surface normals and adapting the delineation to match the location of each maximum (with a search range of ±5/2 mm at the superior/inferior bladder side and a step size of 0.5 mm). An additional graph-cut-based constraint limits the maximum difference between neighboring points. This improved contour is utilized in a subsequent DIR with a surface matching constraint. By calculating a Euclidean distance map of the improved contour surface, the new constraint enforces the DIR to map each point of the original contour onto the improved contour. The resulting deformation is then used as a starting guess to compute a deformation update, which can again be used for the next segmentation step. The result is a dense deformation, able to capture much larger bladder deformations. The new method is evaluated on ten CT/CBCT male pelvis datasets, calculating Dice similarity coefficients (DSC) between the final propagated bladder contour and a manually delineated gold standard on the CBCT image. Results: Over all ten cases, an average DSC of 0.93±0.03 is achieved on the bladder. Compared with the initial DIR (0.88±0.05), the DSC is equal (2 cases) or improved (8 cases). Additionally, DSC accuracy of femoral bones (0.94±0.02) was not affected. Conclusion: The new approach shows that using the presented alternating segmentation/registration approach, the results of bladder DIR in the pelvis region can be greatly improved, especially for cases with large variations in bladder volume. Fraunhofer MEVIS received funding from a research grant by Varian Medical Systems.

  7. Energy Performance Monitoring and Optimization System for DoD Campuses

    DTIC Science & Technology

    2014-02-01

    estimated that, on average, the EPMO system exceeded the energy consumption reduction target of 20% and improved occupant thermal comfort by reducing the...dynamic models, operational and thermal comfort constraints, and plant efficiency in the same framework (Borrelli and Keviczky, 2008; Borrelli, Pekar...optimization modeling language uses the models described above in conjunction with information such as: thermal comfort constraints, equipment constraints, and

  8. Generalized gradient algorithm for trajectory optimization

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Bryson, A. E.; Slattery, R.

    1990-01-01

    The generalized gradient algorithm, presented and verified as a basis for the solution of trajectory optimization problems, improves the performance index while reducing violations of path equality constraints and terminal equality constraints. The algorithm is conveniently divided into two phases, of which the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.
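
    The two-phase structure can be illustrated with a toy problem: a 'feasibility' phase that drives the constraint violation to zero, followed by an 'optimization' phase warm-started at the feasible point. The quadratic objective and circle constraint below are invented for illustration.

```python
# Toy illustration of the two-phase idea: feasibility first, then optimization
# from the feasible start. Objective and constraint are made up.
import numpy as np
from scipy.optimize import minimize

obj = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2   # performance index
con = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0         # equality constraint c(x)=0

# Phase 1: minimize squared constraint violation from a poor initial guess
feas = minimize(lambda x: con(x) ** 2, x0=np.array([3.0, 3.0]))

# Phase 2: optimize the performance index subject to the constraint,
# warm-started at the feasible point found above
opt = minimize(obj, feas.x, constraints={"type": "eq", "fun": con})
print(opt.x)  # approx. the closest point on the unit circle to (2, 1)
```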

  9. A heuristic constraint programmed planner for deep space exploration problems

    NASA Astrophysics Data System (ADS)

    Jiang, Xiao; Xu, Rui; Cui, Pingyuan

    2017-10-01

    In recent years, the increasing number of scientific payloads and growing constraints on the probe have made constraint processing technology a hotspot in the deep space planning field. In the planning procedure, the ordering of variables and values plays a vital role. In this paper we present two heuristic ordering methods, one for variables and one for values, and on this basis propose a graphplan-like constraint-programmed planner. In the planner we convert the traditional constraint satisfaction problem to a time-tagged form with different levels. Inspired by the most-constrained-first principle in constraint satisfaction problems (CSPs), the variable heuristic is based on the number of unassigned variables in each constraint, and the value heuristic is based on the completion degree of the support set. Simulation experiments show that the proposed planner is effective and that its performance is competitive with other kinds of planners.
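
    The 'most constrained first' idea mentioned above corresponds to the classical minimum-remaining-values (MRV) ordering; a generic backtracking sketch with this ordering is given below, with the CSP encoding and the tiny coloring demo both hypothetical.

```python
# 'Most constrained first' (MRV) variable ordering inside a generic
# backtracking CSP solver; the encoding and demo below are hypothetical.
def select_variable(variables, domains, assignment, consistent):
    """Return the unassigned variable with the fewest consistent values."""
    unassigned = [v for v in variables if v not in assignment]
    return min(unassigned, key=lambda v: sum(
        consistent(v, val, assignment) for val in domains[v]))

def backtrack(variables, domains, assignment, consistent):
    if len(assignment) == len(variables):
        return assignment
    var = select_variable(variables, domains, assignment, consistent)
    for val in domains[var]:
        if consistent(var, val, assignment):
            assignment[var] = val
            result = backtrack(variables, domains, assignment, consistent)
            if result is not None:
                return result
            del assignment[var]
    return None  # triggers backtracking one level up

# Tiny demo: 2-coloring of the chain A-B-C
variables = ["A", "B", "C"]
domains = {v: [0, 1] for v in variables}
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
consistent = lambda v, val, asg: all(asg.get(u) != val for u in adj[v])
print(backtrack(variables, domains, {}, consistent))  # {'A': 0, 'B': 1, 'C': 0}
```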

  10. Six-minute stepper test: a valid clinical exercise tolerance test for COPD patients

    PubMed Central

    Grosbois, JM; Riquier, C; Chehere, B; Coquart, J; Béhal, H; Bart, F; Wallaert, B; Chenivesse, C

    2016-01-01

    Introduction Exercise tolerance testing is an integral part of the pulmonary rehabilitation (PR) management of patients with chronic obstructive pulmonary disease (COPD). The 6-minute stepper test (6MST) is a new, well-tolerated, reproducible exercise test, which can be performed without any spatial constraints. Objective The aim of this study was to compare the results of the 6MST to those obtained during a 6-minute walk test (6MWT) and cardiopulmonary exercise testing (CPET) in a cohort of COPD patients. Methods Ninety-one COPD patients managed by outpatient PR and assessed by 6MST, 6MWT, and CPET were retrospectively included in this study. Correlations between the number of steps on the 6MST, the distance covered on the 6MWT, oxygen consumption, and power at the ventilatory threshold and at maximum effort during CPET were analyzed before starting PR, and the improvements on the 6MST and 6MWT were compared after PR. Results The number of steps on the 6MST was significantly correlated with the distance covered on the 6MWT (r=0.56; P<0.0001), the power at maximum effort (r=0.46; P<0.0001), and oxygen consumption at maximum effort (r=0.39; P<0.005). Performances on the 6MST and 6MWT were significantly improved after PR (570 vs 488 steps, P=0.001, and 448 vs 406 m, P<0.0001, respectively). Improvements on the 6MST and 6MWT after PR were significantly correlated (r=0.34; P=0.03). Conclusion The results of this study show that the 6MST is a valid test to evaluate exercise tolerance in COPD patients. The use of this test in clinical practice appears to be particularly relevant for the assessment of patients managed by home PR. PMID:27099483

  11. Performance of convolutional codes on fading channels typical of planetary entry missions

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.; Reale, T. J.

    1974-01-01

    The performance of convolutional codes in fading channels typical of the planetary entry channel is examined in detail. The signal fading is due primarily to turbulent atmospheric scattering of the RF signal transmitted from an entry probe through a planetary atmosphere. Short constraint length convolutional codes are considered in conjunction with binary phase-shift keyed modulation and Viterbi maximum likelihood decoding, and for longer constraint length codes sequential decoding utilizing both the Fano and Zigangirov-Jelinek (ZJ) algorithms is considered. Careful consideration is given to the modeling of the channel in terms of a few meaningful parameters which can be correlated closely with theoretical propagation studies. For short constraint length codes the bit error probability performance was investigated as a function of E_b/N_0, parameterized by the fading channel parameters. For longer constraint length codes the effect of the fading channel parameters on the computational requirements of both the Fano and ZJ algorithms was examined. The effects of simple block interleaving in combatting the memory of the channel are explored, using an analytic approach or digital computer simulation.

  12. XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.

    2018-01-01

    Quantum Approximate Optimization Algorithm, further generalized as Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is conjectured because of the much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph coloring as an example, we compare the performance of using the XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term in the cost Hamiltonian.
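
    The feasibility-preserving property that motivates the XY mixer can be checked numerically: the sketch below builds a 3-qubit ring XY mixer and the Hamming-weight operator with numpy and verifies that they commute, so evolution under the mixer never leaves a constant-weight subspace.

```python
# Verify that the ring XY mixer commutes with the Hamming-weight operator,
# the property that keeps QAOA evolution in the feasible subspace.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit register."""
    out = np.array([[1.0 + 0j]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

n = 3
# XY mixer on a ring: sum over edges of (X_i X_j + Y_i Y_j) / 2
H_xy = sum((op(X, i, n) @ op(X, (i + 1) % n, n)
            + op(Y, i, n) @ op(Y, (i + 1) % n, n)) / 2 for i in range(n))
# Hamming-weight operator: sum of (I - Z_i) / 2
N_hw = sum((np.eye(2 ** n) - op(Z, i, n)) / 2 for i in range(n))

print(np.allclose(H_xy @ N_hw, N_hw @ H_xy))  # True: weight is conserved
```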

  13. Maximizing and minimizing investment concentration with constraints of budget and investment risk

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-01-01

    In this paper, as a first step in examining the properties of a feasible portfolio subset that is characterized by budget and risk constraints, we assess the maximum and minimum of the investment concentration using replica analysis. To do this, we apply an analytical approach of statistical mechanics. We note that the optimization problem considered in this paper is the dual problem of the portfolio optimization problem discussed in the literature, and we verify that these optimal solutions are also dual. We also present numerical experiments, in which we use the method of steepest descent that is based on Lagrange's method of undetermined multipliers, and we compare the numerical results to those obtained by replica analysis in order to assess the effectiveness of our proposed approach.
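
    A toy numerical counterpart of this dual problem, with synthetic return data and an off-the-shelf constrained optimizer standing in for the replica and steepest-descent machinery, might look as follows.

```python
# Toy version of the dual problem: extremize concentration q(w) = sum_i w_i^2
# under the budget sum_i w_i = N and a quadratic risk equality. Data are
# synthetic; SLSQP returns local extrema, standing in for the replica results.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, p = 10, 40
X = rng.normal(size=(p, N)) / np.sqrt(N)   # synthetic return matrix
kappa = 1.5                                # prescribed risk level

budget = {"type": "eq", "fun": lambda w: w.sum() - N}
risk = {"type": "eq", "fun": lambda w: np.sum((X @ w) ** 2) / N - kappa}
q = lambda w: np.sum(w ** 2)

w0 = np.ones(N)
q_min = minimize(q, w0, constraints=[budget, risk]).fun                 # minimum
q_max = -minimize(lambda w: -q(w), w0, constraints=[budget, risk]).fun  # maximum
print(q_min, q_max)
```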

  14. Constraints and Opportunities with Interview Transcription: Towards Reflection in Qualitative Research

    PubMed Central

    Oliver, Daniel G.; Serovich, Julianne M.; Mason, Tina L.

    2006-01-01

    In this paper we discuss the complexities of interview transcription. While often seen as a behind-the-scenes task, we suggest that transcription is a powerful act of representation. Transcription is practiced in multiple ways, often using naturalism, in which every utterance is captured in as much detail as possible, and/or denaturalism, in which grammar is corrected, interview noise (e.g., stutters, pauses, etc.) is removed and nonstandard accents (i.e., non-majority) are standardized. In this article, we discuss the constraints and opportunities of our transcription decisions and point to an intermediate, reflective step. We suggest that researchers incorporate reflection into their research design by interrogating their transcription decisions and the possible impact these decisions may have on participants and research outcomes. PMID:16534533

  15. Planetary quarantine: Space research and technology. [satellite quarantine constraints on outer planet mission

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The impact of satisfying satellite quarantine constraints on current outer planet mission and spacecraft designs is considered. Tools required to perform trajectory and navigation analyses for determining satellite impact probabilities are developed.

  16. A mathematical formulation for interface-based modular product design with geometric and weight constraints

    NASA Astrophysics Data System (ADS)

    Jung-Woon Yoo, John

    2016-06-01

    Since customer preferences change rapidly, there is a need for design processes with shorter product development cycles. Modularization plays a key role in achieving mass customization, which is crucial in today's competitive global market environments. Standardized interfaces among modularized parts have facilitated computational product design. To incorporate product size and weight constraints during computational design procedures, a mixed integer programming formulation is presented in this article. Product size and weight are two of the most important design parameters, as evidenced by recent smart-phone products. This article focuses on the integration of geometric, weight and interface constraints into the proposed mathematical formulation. The formulation generates the optimal selection of components for a target product, which satisfies geometric, weight and interface constraints. The formulation is verified through a case study and experiments are performed to demonstrate the performance of the formulation.
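
    A hedged sketch of such a selection problem as a small mixed integer program is shown below; the costs, weights, sizes, and budgets are invented, and interface compatibility constraints are omitted for brevity.

```python
# Component selection as a small MILP: pick one candidate per module to
# minimize cost under weight and size budgets. All numbers are invented.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost   = np.array([3.0, 4.0, 2.5, 3.5])   # two candidates each for 2 modules
weight = np.array([1.2, 0.8, 0.9, 1.1])
size   = np.array([5.0, 6.0, 4.0, 5.5])

constraints = [
    LinearConstraint([[1, 1, 0, 0]], 1, 1),    # exactly one for module A
    LinearConstraint([[0, 0, 1, 1]], 1, 1),    # exactly one for module B
    LinearConstraint([weight], -np.inf, 2.1),  # total weight budget
    LinearConstraint([size],   -np.inf, 11.0), # total size (geometric) budget
]
res = milp(c=cost, constraints=constraints,
           integrality=np.ones(4), bounds=Bounds(0, 1))
print(res.x, res.fun)  # binary selection vector and minimum cost
```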

  17. Exploratory Research on Bearing Characteristics of Confined Stabilized Soil

    NASA Astrophysics Data System (ADS)

    Wu, Shuai Shuai; Gao, Zheng Guo; Li, Shi Yang; Cui, Wen Bo; Huang, Xin

    2018-06-01

    The performance of a new kind of confined stabilized soil (CSS) was investigated. The CSS was constructed by filling stabilized soil, made by mixing soil with a binder containing a high content of expansive component, into an engineering plastic pipe. The cube compressive strength of the stabilized soil formed under constraint and the axial compression performance of stabilized soil cylinders confined by the constraint pipe were measured. The results indicated that combining the constraint pipe with the binder containing an expansive component achieved the following effects: a higher production of expansive hydrates could be adopted so as to fill more voids in the stabilized soil and improve its strength; at the same time, a compressive prestress was built up in the core stabilized soil, because the hoop constraint provided an effective radial compressive force on it. These effects gave the CSS a plastic failure mode and more than twice the bearing capacity of ordinary stabilized soil with the same binder content.

  18. Canonical and symplectic analysis for three dimensional gravity without dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escalante, Alberto, E-mail: aescalan@ifuap.buap.mx; Osmart Ochoa-Gutiérrez, H.

    2017-03-15

    In this paper a detailed Hamiltonian analysis of three-dimensional gravity without dynamics proposed by V. Hussain is performed. We report the complete structure of the constraints, and the Dirac brackets are explicitly computed. In addition, the Faddeev–Jackiw symplectic approach is developed; we report the complete set of Faddeev–Jackiw constraints and the generalized brackets, then we show that the Dirac and the generalized Faddeev–Jackiw brackets coincide with each other. Finally, the similarities and advantages between the Faddeev–Jackiw and Dirac formalisms are briefly discussed. - Highlights: • We report the symplectic analysis for three dimensional gravity without dynamics. • We report the Faddeev–Jackiw constraints. • A pure Dirac's analysis is performed. • The complete structure of Dirac's constraints is reported. • We show that the symplectic and Dirac brackets coincide with each other.
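
    For reference, the Dirac bracket whose structure such an analysis reports is the standard construction: for second-class constraints phi_a with invertible matrix C_ab = {phi_a, phi_b},

```latex
\{A, B\}_{D} \;=\; \{A, B\} \;-\; \{A, \phi_a\}\,\big(C^{-1}\big)^{ab}\,\{\phi_b, B\},
\qquad C_{ab} = \{\phi_a, \phi_b\}.
```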

  19. Generalizing Backtrack-Free Search: A Framework for Search-Free Constraint Satisfaction

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari K.; Frank, Jeremy

    2000-01-01

    Tractable classes of constraint satisfaction problems are of great importance in artificial intelligence. Identifying and taking advantage of such classes can significantly speed up constraint problem solving. In addition, tractable classes are utilized in applications where strict worst-case performance guarantees are required, such as constraint-based plan execution. In this work, we present a formal framework for search-free (backtrack-free) constraint satisfaction. The framework is based on general procedures, rather than specific propagation techniques, and thus generalizes existing techniques in this area. We also relate search-free problem solving to the notion of decision sets and use the result to provide a constructive criterion that is sufficient to guarantee search-free problem solving.

  20. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokrantz, Rasmus, E-mail: bokrantz@kth.se, E-mail: rasmus.bokrantz@raysearchlabs.com; Miettinen, Kaisa

    2015-10-15

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose–volume histogram constraints are used to prevent that the projection causes a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose–volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
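
    One hedged way to formalize the projection described above (not necessarily the authors' exact formulation): starting from the navigated plan x-bar, seek a Pareto optimal plan that is no worse in every objective f_k, e.g.,

```latex
\min_{x \in X} \; \sum_{k} w_k f_k(x)
\qquad \text{subject to} \qquad f_k(x) \le f_k(\bar{x}) \quad \text{for all } k,
```

    with dose–volume histogram constraints added to the feasible set X in the augmented form of the projection.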

  1. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    NASA Astrophysics Data System (ADS)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method, which consists of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) then the generalized fast radial symmetry transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology-preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin stained and fluorescent stained images; qualitative and quantitative analyses show that the method outperforms thresholding and watershed segmentation approaches.

  2. A normal mode-based geometric simulation approach for exploring biologically relevant conformational transitions in proteins.

    PubMed

    Ahmed, Aqeel; Rippmann, Friedrich; Barnickel, Gerhard; Gohlke, Holger

    2011-07-25

    A three-step approach for multiscale modeling of protein conformational changes is presented that incorporates information about preferred directions of protein motions into a geometric simulation algorithm. The first two steps are based on a rigid cluster normal-mode analysis (RCNMA). Low-frequency normal modes are used in the third step (NMSim) to extend the recently introduced idea of constrained geometric simulations of diffusive motions in proteins by biasing backbone motions of the protein, whereas side-chain motions are biased toward favorable rotamer states. The generated structures are iteratively corrected regarding steric clashes and stereochemical constraint violations. The approach allows performing three simulation types: unbiased exploration of conformational space; pathway generation by a targeted simulation; and radius of gyration-guided simulation. When applied to a data set of proteins with experimentally observed conformational changes, conformational variabilities are reproduced very well for 4 out of 5 proteins that show domain motions, with correlation coefficients r > 0.70 and as high as r = 0.92 in the case of adenylate kinase. In 7 out of 8 cases, NMSim simulations starting from unbound structures are able to sample conformations that are similar (root-mean-square deviation = 1.0-3.1 Å) to ligand bound conformations. An NMSim generated pathway of conformational change of adenylate kinase correctly describes the sequence of domain closing. The NMSim approach is a computationally efficient alternative to molecular dynamics simulations for conformational sampling of proteins. The generated conformations and pathways of conformational transitions can serve as input to docking approaches or as starting points for more sophisticated sampling techniques.

  3. Computer-based planning of optimal donor sites for autologous osseous grafts

    NASA Astrophysics Data System (ADS)

    Krol, Zdzislaw; Chlebiej, Michal; Zerfass, Peter; Zeilhofer, Hans-Florian U.; Sader, Robert; Mikolajczak, Pawel; Keeve, Erwin

    2002-05-01

    Bone graft surgery is often necessary for the reconstruction of craniofacial defects after trauma, tumor, infection, or congenital malformation. In this operative technique the removed or missing bone segment is filled with a bone graft. The mainstay of craniofacial reconstruction rests with the replacement of the defective bone by autogenous bone grafts. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention are required. The major problem is to determine as accurately as possible the donor site where the graft should be dissected from and to define the shape of the desired transplant. A computer-aided method for semi-automatic selection of optimal donor sites for autografts in craniofacial reconstructive surgery has been developed. The non-automatic step of graft design and constraint setting is followed by a fully automatic procedure to find the best fitting position. Extending preceding work, a new optimization approach based on the Levenberg-Marquardt method has been implemented and embedded into our computer-based surgical planning system. Once the pre-processing step has been performed, this new technique enables selection of the optimal donor site in less than one minute. The method has been applied during the surgical planning step in more than 20 cases. The postoperative observations have shown that functional results, such as speech and chewing ability as well as restoration of bony continuity, were clearly better compared to conventionally planned operations. Moreover, in most cases the duration of the surgical intervention was distinctly reduced.

  4. Determination of helix orientations in a flexible DNA by multi-frequency EPR spectroscopy.

    PubMed

    Grytz, C M; Kazemi, S; Marko, A; Cekan, P; Güntert, P; Sigurdsson, S Th; Prisner, T F

    2017-11-15

    Distance measurements are performed between a pair of spin labels attached to nucleic acids using Pulsed Electron-Electron Double Resonance (PELDOR, also called DEER) spectroscopy which is a complementary tool to other structure determination methods in structural biology. The rigid spin label Ç, when incorporated pairwise into two helical parts of a nucleic acid molecule, allows the determination of both the mutual orientation and the distance between those labels, since Ç moves rigidly with the helix to which it is attached. We have developed a two-step protocol to investigate the conformational flexibility of flexible nucleic acid molecules by multi-frequency PELDOR. In the first step, a library with a broad collection of conformers, which are in agreement with topological constraints, NMR restraints and distances derived from PELDOR, was created. In the second step, a weighted structural ensemble of these conformers was chosen, such that it fits the multi-frequency PELDOR time traces of all doubly Ç-labelled samples simultaneously. This ensemble reflects the global structure and the conformational flexibility of the two-way DNA junction. We demonstrate this approach on a flexible bent DNA molecule, consisting of two short helical parts with a five adenine bulge at the center. The kink and twist motions between both helical parts were quantitatively determined and showed high flexibility, in agreement with a Förster Resonance Energy Transfer (FRET) study on a similar bent DNA motif. The approach presented here should be useful to describe the relative orientation of helical motifs and the conformational flexibility of nucleic acid structures, both alone and in complexes with proteins and other molecules.
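
    The second, ensemble-weighting step can be sketched as a non-negative least-squares fit: choose conformer weights so that the weighted sum of simulated time traces matches the measured trace. The data below are synthetic stand-ins for per-conformer PELDOR traces.

```python
# Ensemble weighting as non-negative least squares: recover sparse conformer
# weights from a measured trace. All data here are synthetic illustrations.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_time, n_conf = 200, 30
traces = rng.random((n_time, n_conf))          # simulated trace per conformer
true_w = np.zeros(n_conf)
true_w[[3, 17]] = [0.7, 0.3]                   # two populated conformers
measured = traces @ true_w + 0.01 * rng.normal(size=n_time)

weights, residual = nnls(traces, measured)     # non-negative least squares
weights /= weights.sum()                       # normalize to an ensemble
print(np.argsort(weights)[-2:])                # should recover conformers 3, 17
```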

  5. On the exact solvability of the anisotropic central spin model: An operator approach

    NASA Astrophysics Data System (ADS)

    Wu, Ning

    2018-07-01

    Using an operator approach based on a commutator scheme that has been previously applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the integrability of the model. We outline the basic steps of the operator approach and pedagogically summarize them into two Lemmas and two Constraints. Through a step-by-step construction of the eigenproblem, we show that the condition g'_j^2 - g_j^2 = c naturally arises for the model to be exactly solvable, where c is a constant independent of the bath-spin index j, and {g_j} and {g'_j} are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with those in the previous literature.

  6. Turning a cylindrical treadmill with feet: an MR-compatible device for assessment of the neural correlates of lower-limb movement.

    PubMed

    Toyomura, Akira; Yokosawa, Koichi; Shimojo, Atsushi; Fujii, Tetsunoshin; Kuriki, Shinya

    2018-06-17

    Locomotion, which is one of the most basic motor functions, is critical for performing various daily-life activities. Despite its essential function, assessment of brain activity during lower-limb movement is still limited because of the constraints of existing brain imaging methods. Here, we describe an MR-compatible, cylindrical treadmill device that allows participants to perform stepping movements on an MRI scanner table. The device was constructed from wood and all of the parts were handmade by the authors. We confirmed the MR-compatibility of the device by evaluating the temporal signal-to-noise ratio of 64 voxels of a phantom during scanning. Brain activity was measured while twenty participants turned the treadmill with their feet in sync with metronome sounds. The rotary speed of the cylinder was encoded by optical fibers. The post/pre-central gyrus and cerebellum showed significant activity during the movements, which was comparable to the activity patterns reported in previous studies. Head movement on the y- and z-axes was influenced more by lower-limb movement than was head movement on the x-axis. Among the 60 runs (3 runs × 20 participants), head movement during two of the runs (3.3%) was excessive due to the lower-limb movement. Compared to the MR-compatible devices proposed in previous studies, the advantages of this device are its simple structure and the ease with which stepping movements can be performed in a supine position. Collectively, our results suggest that the treadmill device is useful for evaluating lower-limb-related neural activity. Copyright © 2018. Published by Elsevier B.V.

  7. Novel adaptive neural control design for a constrained flexible air-breathing hypersonic vehicle based on actuator compensation

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; He, Guangjun; Huang, Jiaqi

    2016-03-01

    This paper investigates the design of a novel adaptive neural controller for the longitudinal dynamics of a flexible air-breathing hypersonic vehicle with control input constraints. To reduce the complexity of controller design, the vehicle dynamics is decomposed into a velocity subsystem and an altitude subsystem. For each subsystem, only one neural network is utilized to approximate the lumped unknown function. By employing a minimal-learning parameter method to estimate the norm of the ideal weight vectors rather than their elements, only two adaptive parameters are required for neural approximation. Thus, the computational burden is lower than that of neural back-stepping schemes. In particular, to deal with the control input constraints, additional systems are exploited to compensate the actuators. Lyapunov synthesis proves that all the closed-loop signals involved are uniformly ultimately bounded. Finally, simulation results show that the adopted compensation scheme can handle the actuator constraints effectively, and that velocity and altitude can stably track their reference trajectories even when the physical limitations on the control inputs are in effect.

  8. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability and implementation: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
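
    The random walk at the heart of CHRR is easy to sketch: from the current point, pick a coordinate direction, intersect it with the polytope to get a feasible chord, and sample the next point uniformly on that chord. The sketch below omits the rounding preprocessing that CHRR adds to tame anisotropy.

```python
# Coordinate hit-and-run on a polytope {x : A x <= b}; rounding omitted.
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_samples, seed=0):
    rng = np.random.default_rng(seed)
    x, dim, samples = x0.copy(), len(x0), []
    for _ in range(n_samples):
        i = rng.integers(dim)                 # random coordinate direction e_i
        slack = b - A @ x                     # current slack, all >= 0
        col = A[:, i]
        # A(x + t e_i) <= b  <=>  t * col <= slack; bound t from both sides
        upper = np.min(slack[col > 0] / col[col > 0]) if np.any(col > 0) else np.inf
        lower = np.max(slack[col < 0] / col[col < 0]) if np.any(col < 0) else -np.inf
        x[i] += rng.uniform(lower, upper)     # uniform step along the chord
        samples.append(x.copy())
    return np.array(samples)

# Unit box example: A x <= b encodes 0 <= x_k <= 1 in three dimensions
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
print(coordinate_hit_and_run(A, b, x0=np.full(3, 0.5), n_samples=5))
```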

  9. High-order tracking differentiator based adaptive neural control of a flexible air-breathing hypersonic vehicle subject to actuators constraints.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen

    2015-09-01

    In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). By utilizing a functional decomposition methodology, the dynamic model is reasonably decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller that is much simpler than those derived from back-stepping is developed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is explored to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  10. A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem.

    PubMed

    Dang, C; Xu, L

    2001-03-01

    In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.

  11. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. In our decoding method, advances are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, which are the intersections of pairs of orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network technique is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a mass of pattern elements with various blurring and distortions. Third, an error correction mechanism based on epipolar, coplanarity, and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but also exhibits strong robustness to surface color and complex textures.

  12. Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations

    PubMed Central

    König, Gerhard; Brooks, Bernard R.

    2014-01-01

    Background Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. Methods The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. Results We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. Conclusions The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. General Significance The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. PMID:25218695

  13. Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations.

    PubMed

    König, Gerhard; Brooks, Bernard R

    2015-05-01

    Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Published by Elsevier B.V.

  14. Mechanistic Explanations for Restricted Evolutionary Paths That Emerge from Gene Regulatory Networks

    PubMed Central

    Cotterell, James; Sharpe, James

    2013-01-01

    The extent and the nature of the constraints to evolutionary trajectories are central issues in biology. Constraints can be the result of systems dynamics causing a non-linear mapping between genotype and phenotype. How prevalent are these developmental constraints and what is their mechanistic basis? Although this has been extensively explored at the level of epistatic interactions between nucleotides within a gene, or amino acids within a protein, selection acts at the level of the whole organism, and therefore epistasis between disparate genes in the genome is expected due to their functional interactions within gene regulatory networks (GRNs) which are responsible for many aspects of organismal phenotype. Here we explore epistasis within GRNs capable of performing a common developmental function – converting a continuous morphogen input into discrete spatial domains. By exploring the full complement of GRN wiring designs that are able to perform this function, we analyzed all possible mutational routes between functional GRNs. Through this study we demonstrate that mechanistic constraints are common for GRNs that perform even a simple function. We demonstrate a common mechanistic cause for such a constraint involving complementation between counter-balanced gene-gene interactions. Furthermore we show how such constraints can be bypassed by means of “permissive” mutations that buffer changes in a direct route between two GRN topologies that would normally be unviable. We show that such bypasses are common and thus we suggest that unlike what was observed in protein sequence-function relationships, the “tape of life” is less reproducible when one considers higher levels of biological organization. PMID:23613807

  15. Marketers Understanding Engineers and Engineers Understanding Marketers: The Opportunities and Constraints of a Cross-Discipline Course Using 3D Printing to Develop Marketable Innovations

    ERIC Educational Resources Information Center

    Reifschneider, Louis; Kaufman, Peter; Langrehr, Frederick W.; Kaufman, Kristina

    2015-01-01

    Marketers are criticized for not understanding the steps in the engineering research and development process and the challenges of manufacturing a new product at a profit. Engineers are criticized for not considering the marketability of and customer interest in such a product during the planning stages. With the development of 3D printing, rapid…

  16. Analytical and Experimental Characterization of Thick-Section Fiber-Metal Laminates

    DTIC Science & Technology

    2013-06-01

    individual metal layers as loading increases. The off-axis deformation properties of the prepreg layers were modeled by using equivalent constraint models...the degraded stiffness of the prepreg layer is found. At each loading step the stiffness properties of individual layers are calculated. These...predicts stress-strain curves on-axis, additional work is needed to study the local interactions between metal and prepreg layers as damage occurs in each

  17. LMI-Based Fuzzy Optimal Variance Control of Airfoil Model Subject to Input Constraints

    NASA Technical Reports Server (NTRS)

    Swei, Sean S.M.; Ayoubi, Mohammad A.

    2017-01-01

    This paper presents a study of the fuzzy optimal variance control problem for dynamical systems subject to actuator amplitude and rate constraints. Using Takagi-Sugeno fuzzy modeling and the dynamic Parallel Distributed Compensation technique, the stability and the constraints can be cast as a multi-objective optimization problem in the form of Linear Matrix Inequalities. By utilizing the formulations and solutions for the input and output variance constraint problems, we develop a fuzzy full-state feedback controller. The stability and performance of the proposed controller are demonstrated through its application to airfoil flutter suppression.
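
    As a hedged illustration of the variance side of this problem (not the paper's fuzzy LMI synthesis), the sketch below evaluates the steady-state output variance of a candidate state feedback via the Lyapunov equation, which is the quantity that output variance constraints bound; the model, gain, and bound are invented.

```python
# For a candidate feedback u = K x, the steady-state state covariance P under
# unit white-noise excitation solves A_cl P + P A_cl^T + W = 0 with
# A_cl = A + B K, and the output variance C P C^T is checked against a bound.
# The double-integrator data, gain, and bound are illustrative only.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy stand-in for the airfoil model
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])               # measured output
K = np.array([[-2.0, -3.0]])             # candidate feedback gain (assumed)
W = np.eye(2)                            # unit process-noise intensity

A_cl = A + B @ K
P = solve_continuous_lyapunov(A_cl, -W)  # solves A_cl P + P A_cl^T = -W
output_var = float(C @ P @ C.T)
print(output_var, output_var <= 1.5)     # compare against an assumed bound
```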

  18. Analysis of the sweeped actuator line method

    DOE PAGES

    Nathan, Jörn; Masson, Christian; Dufresne, Louis; ...

    2015-10-16

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than with the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the blade tips, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. But with the actuator line also come temporal and spatial constraints, such as the need for a much smaller time step than with the actuator disk. While the latter only has to obey the Courant-Friedrichs-Lewy condition, the former is also restricted by the grid resolution and the rotor tip-speed. Additionally, the spatial resolution has to be finer for the actuator line than for the actuator disk, in order to resolve the tip vortices well. This work is therefore dedicated to examining a method in between the actuator line and the actuator disk, which is able to model transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. To this end a larger time step is used and the blade forces are swept over a certain area. As a result, the main focus of this article is on the aspect of blade tip vortex generation in comparison with the standard actuator line and actuator disk.

  20. Regulation of dynamic postural control to attend manual steadiness constraints.

    PubMed

    Teixeira, Luis Augusto; Coutinho, Joane Figueiredo Serpa; Coelho, Daniel Boari

    2018-05-02

    In daily living activities, performance of spatially accurate manual movements in upright stance depends on postural stability. In the present investigation, we aimed to evaluate the effect of the required manual steadiness (task constraint) on the regulation of dynamic postural control. A single group of young participants (n=20) was evaluated in the performance of a dual posturo-manual task of balancing on a platform oscillating in sinusoidal translations at 0.4 Hz (low) or 1 Hz (high) frequencies while stabilizing a cylinder on a handheld tray. The manual task constraint was manipulated by comparing the conditions of keeping the cylinder stationary on its flat or round side, corresponding to low and high manual task constraints, respectively. Results showed that at the low oscillation frequency the high manual task constraint led to lower oscillation amplitudes of the head, center of mass, and tray, in addition to higher relative phase values between ankle/hip-shoulder oscillatory rotations and between center of mass/center of pressure-feet oscillations, as compared to values observed under the low manual task constraint. Further analyses showed that the high manual task constraint also affected variables related to both the postural (increased amplitudes of center of pressure oscillation) and manual (increased amplitude of shoulder rotations) task components at the high oscillation frequency. These results suggest that control of a dynamic posturo-manual task is modulated in distinct parameters to attend to the required manual steadiness in a complex and flexible way.

  1. Constrained spacecraft reorientation using mixed integer convex programming

    NASA Astrophysics Data System (ADS)

    Tam, Margaret; Glenn Lightsey, E.

    2016-10-01

    A constrained attitude guidance (CAG) system is developed using convex optimization to autonomously achieve spacecraft pointing objectives while meeting the constraints imposed by on-board hardware. These constraints include bounds on the control input and slew rate, as well as pointing constraints imposed by the sensors. The pointing constraints consist of inclusion and exclusion cones that dictate permissible orientations of the spacecraft in order to keep objects in or out of the field of view of the sensors. The optimization scheme drives a body vector towards a target inertial vector along a trajectory that consists solely of permissible orientations in order to achieve the desired attitude for a given mission mode. The non-convex rotational kinematics are handled by discretization, which also ensures that the quaternion retains unit norm. In order to guarantee an admissible path, the pointing constraints are relaxed. Depending on how strict the pointing constraints are, the degree of relaxation is tunable. The use of binary variables permits the inclusion of logical expressions in the pointing constraints in the case that a set of sensors has redundancies. The resulting mixed integer convex programming (MICP) formulation generates a steering law that can be easily integrated into an attitude determination and control (ADC) system. A sample simulation of the system is performed for the Bevo-2 satellite, including disturbance torques and actuator dynamics which are not modeled by the controller. Simulation results demonstrate the robustness of the system to disturbances while meeting the mission requirements with desirable performance characteristics.

  2. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
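
    The skeleton of such a procedure is compact; the sketch below abstracts the listener's paired comparison as a black-box preference function and expands the step after a winning move while shrinking it otherwise. Parameter values and the synthetic "listener" are illustrative, not the authors' protocol.

    ```python
    import numpy as np

    def adaptive_pattern_search(prefers, x0, step=1.0, expand=2.0, shrink=0.5,
                                min_step=0.05, max_iter=200):
        """Multidirectional compass search driven by paired comparisons.

        prefers(a, b) -> True if setting `a` sounds better than setting `b`
        (in the study this judgment comes from a listener, not a function)."""
        x = np.asarray(x0, dtype=float)
        n = len(x)
        directions = np.vstack([np.eye(n), -np.eye(n)])  # +/- each parameter axis
        for _ in range(max_iter):
            moved = False
            for d in directions:
                candidate = x + step * d
                if prefers(candidate, x):      # one paired comparison per candidate
                    x, moved = candidate, True
                    break
            if moved:
                step *= expand                 # larger steps give clearer perceptual contrasts
            else:
                step *= shrink                 # smaller steps refine the optimum
                if step < min_step:            # stop criterion
                    break
        return x

    # Illustrative use with a synthetic "listener" preferring settings near (2, -1, 0):
    ideal = np.array([2.0, -1.0, 0.0])
    result = adaptive_pattern_search(
        lambda a, b: np.linalg.norm(a - ideal) < np.linalg.norm(b - ideal),
        x0=[0.0, 0.0, 0.0])
    print(result)
    ```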

  3. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
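
    As a rough illustration of coupling the two corrections in one objective, the sketch below applies zero- and first-order phase correction to a complex spectrum, estimates the baseline inside the objective with a Whittaker smoother, and penalizes negative intensities. The penalty and regularization terms are simplified stand-ins for the paper's objective function, not a reproduction of it.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import spsolve

    def whittaker(y, lam=1e5):
        """Whittaker smoother: argmin_z ||z - y||^2 + lam * ||D2 z||^2."""
        n = len(y)
        D = diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # second differences
        return spsolve((identity(n) + lam * (D.T @ D)).tocsc(), y)

    def phased(spectrum, phi0, phi1):
        """Zero- and first-order phase correction of a complex spectrum."""
        n = len(spectrum)
        return (spectrum * np.exp(1j * (phi0 + phi1 * np.arange(n) / n))).real

    def objective(params, spectrum, lam=1e5, penalty=100.0):
        real = phased(spectrum, *params)
        baseline = whittaker(real, lam)      # baseline estimated inside the objective
        resid = real - baseline
        # Strong penalty on negative corrected intensities plus a weak roughness
        # regularizer: simplified stand-ins for the paper's objective terms.
        return penalty * np.sum(np.minimum(resid, 0.0) ** 2) + np.sum(np.diff(baseline) ** 2)

    # Both corrections then come from a single derivative-free minimization, e.g.:
    # res = minimize(objective, x0=[0.0, 0.0], args=(spec,), method="Nelder-Mead")
    # corrected = phased(spec, *res.x) - whittaker(phased(spec, *res.x))
    ```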

  4. Reliability Assessment of a Robust Design Under Uncertainty for a 3-D Flexible Wing

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Hou, Gene J. -W.; Newman, Perry A.

    2003-01-01

    The paper presents reliability assessment results for the robust designs under uncertainty of a 3-D flexible wing previously reported by the authors. Reliability assessments (additional optimization problems) of the active constraints at the various probabilistic robust design points are obtained and compared with the constraint values or target constraint probabilities specified in the robust design. In addition, reliability-based sensitivity derivatives with respect to design variable mean values are also obtained and shown to agree with finite difference values. These derivatives allow one to perform reliability-based design without having to obtain second-order sensitivity derivatives. However, an inner-loop optimization problem must be solved for each active constraint to find the most probable point on that constraint failure surface.

  5. Knowledge-based design of generate-and-patch problem solvers that solve global resource assignment problems

    NASA Technical Reports Server (NTRS)

    Voigt, Kerstin

    1992-01-01

    We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers for global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of '2D-floorplanning'. For a second constraint in '2D-floorplanning' we show that even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of 'multiprocessor scheduling'.

  6. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  7. TOTAL ORE PROCESSING INTEGRATION AND MANAGEMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leslie Gertsch; Richard Gertsch

    2005-05-16

    The lessons learned from ore segregation test No. 3 were presented to Minntac Mine personnel during the reporting period. Ore was segregated by A-Factor, with low values going to Step 1&2 and high values going to Step 3. During the test, the mine maintained the best split possible for the given production and location constraints. During the test, Step 1&2 A-Factor was lowered more than Step 3 was raised. All other ore quality changes were not manipulated, but the segregation by A-Factor affected most of the other qualities. Magnetic iron, coarse tails, fine tails, silica, and grind changed in response to the split. Segregation was achieved by adding ore from HIS to the Step 3 blend and lowering the amount of LC 1&2 and somewhat lowering the amount of LC 3&4. Conversely, Step 1&2 received less HIS with a corresponding increase in LC 1&2. The amount of IBC was increased to both Steps about one-third of the way into the test. For about the center half of the test, LC 3&4 was reduced to both Steps. The most noticeable layer changes were, then: an increase in the HIS split; a decrease in the LC 1&2 split; adding IBC to both Steps; and lowering LC 3&4 to both Steps. Statistical analysis of the dataset collected during ordinary, non-segregated operation of the mine and mill is continuing. Graphical analysis of blast patterns according to drill monitor data was slowed by student classwork. It is expected to resume after the semester ends in May.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard

    While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5× speedup with respect to the initial visualization pipeline and is able to meet performance constraints.

  9. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
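
    The flavor of the LP formulation can be shown with scipy.optimize.linprog: fractional variables assign each phase of computation to a configuration, the objective is total runtime, and the power bound is folded into an energy budget (a simplification of the dissertation's instantaneous power constraint). All timings, power draws, and caps below are made up.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # runtime[p, c] = seconds phase p takes in configuration c (DVFS state / threads);
    # watts[p, c] = power drawn in that configuration. Numbers are illustrative.
    runtime = np.array([[4.0, 3.0, 2.5],
                        [6.0, 4.5, 4.0]])
    watts = np.array([[40.0, 60.0, 90.0],
                      [35.0, 55.0, 85.0]])
    P, C = runtime.shape
    power_cap, allocation = 65.0, 8.0          # watts and seconds of job allocation

    c = runtime.ravel()                        # objective: total runtime
    A_ub = (watts * runtime).ravel()[None, :]  # energy contributed per unit fraction
    b_ub = [power_cap * allocation]            # energy budget = cap * allocation
    A_eq = np.zeros((P, P * C))
    for p in range(P):
        A_eq[p, p * C:(p + 1) * C] = 1.0       # each phase's fractions sum to one
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.ones(P), bounds=(0, 1))
    print(res.x.reshape(P, C), res.fun)
    ```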

  10. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    PubMed

    Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda

    2011-03-15

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.

  11. A Petaflops Era Computing Analysis

    NASA Technical Reports Server (NTRS)

    Preston, Frank S.

    1998-01-01

    This report covers a study of the potential for petaflops (10(exp 15) floating point operations per second) computing. This study was performed within the year 1996 and should be considered as the first step in an on-going effort. The analysis concludes that a petaflop system is technically feasible but not feasible with today's state-of-the-art. Since the computer arena is now a commodity business, most experts expect that a petaflops system will evolve from current technology in an evolutionary fashion. To meet the price expectations of users waiting for petaflop performance, great improvements in lowering component costs will be required. Lower power consumption is also a must. The present rate of progress in improved performance places the date of introduction of petaflop systems at about 2010. Several years before that date, chip feature sizes are projected to reach the known resolution limit. Aside from the economic problems and constraints, software is identified as the major problem. The tone of this initial study is more pessimistic than most of the published material available on petaflop systems. Workers in the field are expected to generate more data which could serve to provide a basis for a more informed projection. This report includes an annotated bibliography.

  12. Modeling Global Ocean Biogeochemistry With Physical Data Assimilation: A Pragmatic Solution to the Equatorial Instability

    NASA Astrophysics Data System (ADS)

    Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing

    2018-03-01

    Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.

  13. Calculation of Costs of Pregnancy- and Puerperium-related Care: Experience from a Hospital in a Low-income Country

    PubMed Central

    Medin, E.; Gazi, R.; Koehlmoos, T.P.; Rehnberg, C.; Saifi, R.; Bhuiya, A.; Khan, J.

    2010-01-01

    Calculation of costs of different medical and surgical services has numerous uses, which include monitoring the performance of service-delivery, setting the efficiency target, benchmarking of services across all sectors, considering investment decisions, commissioning to meet health needs, and negotiating revised levels of funding. The role of private-sector healthcare facilities has been increasing rapidly over the last decade. Despite the overall improvement in the public and private healthcare sectors in Bangladesh, lack of price benchmarking leads to patients facing unexplained price discrimination when receiving healthcare services. The aim of the study was to calculate the hospital-care cost of disease-specific cases, specifically pregnancy- and puerperium-related cases, and to identify the practical challenges of conducting costing studies in the hospital setting in Bangladesh. A combination of micro-costing and step-down cost allocation was used for collecting information on the cost items and, ultimately, for calculating the unit cost for each diagnostic case. Data were collected from the hospital records of 162 patients having 11 different clinical diagnoses. Caesarean section due to maternal and foetal complications was the most expensive type of case whereas the length of stay due to complications was the major driver of cost. Some constraints in keeping hospital medical records and accounting practices were observed. Despite these constraints, the findings of the study indicate that it is feasible to carry out a large-scale study to further explore the costs of different hospital-care services. PMID:20635637
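
    For readers unfamiliar with the costing method, the sketch below shows the mechanics of step-down allocation: support departments are closed out one at a time, in a fixed order, onto the departments that follow them. Department names, costs, and allocation weights are purely illustrative.

    ```python
    # Toy step-down cost allocation (illustrative departments and weights).
    def step_down(direct_costs, order, weights):
        """direct_costs: dept -> cost; order: support depts first, clinical last;
        weights[d][r]: share of support dept d's cost received by downstream
        dept r (shares sum to 1 over the departments after d in the order)."""
        costs = dict(direct_costs)
        for i, d in enumerate(order):
            if d not in weights:          # clinical departments are not allocated out
                continue
            for r in order[i + 1:]:
                costs[r] += costs[d] * weights[d].get(r, 0.0)
            costs[d] = 0.0                # support department fully allocated
        return costs

    direct = {"admin": 100.0, "laundry": 50.0, "maternity": 300.0, "surgery": 400.0}
    order = ["admin", "laundry", "maternity", "surgery"]
    weights = {"admin": {"laundry": 0.2, "maternity": 0.4, "surgery": 0.4},
               "laundry": {"maternity": 0.5, "surgery": 0.5}}
    print(step_down(direct, order, weights))
    # Unit cost per case then follows by dividing each clinical total by its caseload.
    ```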

  14. Dynamic boundary layer based neural network quasi-sliding mode control for soft touching down on asteroid

    NASA Astrophysics Data System (ADS)

    Liu, Xiaosong; Shan, Zebiao; Li, Yuanchun

    2017-04-01

    Pinpoint landing is a critical step in some asteroid exploring missions. This paper is concerned with the descent trajectory control for soft touching down on a small irregularly-shaped asteroid. A dynamic boundary layer based neural network quasi-sliding mode control law is proposed to track a desired descending path. The asteroid's gravitational acceleration acting on the spacecraft is described by the polyhedron method. Considering the presence of input constraint and unmodeled acceleration, the dynamic equation of relative motion is presented first. The desired descending path is planned using the cubic polynomial method, and a collision detection algorithm is designed. To perform trajectory tracking, a neural network sliding mode control law is given first, where the sliding mode control is used to ensure the convergence of system states. Two radial basis function neural networks (RBFNNs) are respectively used as an approximator for the unmodeled term and a compensator for the difference between the actual control input with magnitude constraint and the nominal control. To reduce the chattering induced by traditional sliding mode control and guarantee the reachability of the system, a specific saturation function with dynamic boundary layer is proposed to replace the sign function in the preceding control law. Through the Lyapunov approach, the reachability condition of the control system is given. The improved control law can guarantee that the system state moves within a gradually shrinking quasi-sliding mode band. Numerical simulation results demonstrate the effectiveness of the proposed control strategy.
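
    The key substitution is easy to show in isolation: the discontinuous sign term is replaced by a saturation whose boundary layer narrows over time, yielding a quasi-sliding band instead of chattering. The shrink law and gains below are illustrative, not the paper's design.

    ```python
    import numpy as np

    def sat(s, phi):
        """Boundary-layer saturation: linear inside |s| <= phi, sign outside.
        Replacing sign(s) with sat(s, phi) trades a thin quasi-sliding band
        for the chattering of a pure sliding mode switching term."""
        return np.clip(s / phi, -1.0, 1.0)

    def quasi_smc_term(s, t, k=2.0, phi0=0.5, phi_min=0.05, decay=0.1):
        """Switching part of the control with a dynamic (shrinking) boundary layer.
        s: sliding variable; t: elapsed time. The exponential shrink law is an
        illustrative stand-in for the paper's dynamic boundary layer."""
        phi = max(phi_min, phi0 * np.exp(-decay * t))  # band narrows as descent proceeds
        return -k * sat(s, phi)

    # e.g., u = u_nominal + quasi_smc_term(s=0.2, t=10.0) inside the guidance loop,
    # where u_nominal is the model-based (equivalent-control) term.
    ```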

  15. The structure of poly(carbonsuboxide) on the atomic scale: a solid-state NMR study.

    PubMed

    Schmedt auf der Günne, Jörn; Beck, Johannes; Hoffbauer, Wilfried; Krieger-Beck, Petra

    2005-07-18

    In this contribution we present a study of the structure of amorphous poly(carbonsuboxide) (C3O2)x by 13C solid-state NMR spectroscopy supported by infrared spectroscopy and chemical analysis. Poly(carbonsuboxide) was obtained by polymerization of carbonsuboxide C3O2, which in turn was synthesized from malonic acid bis(trimethylsilylester). Two different 13C labeling schemes were applied to probe inter- and intramonomeric bonds in the polymer by dipolar solid-state NMR methods and also to allow quantitative 13C MAS NMR spectra. Four types of carbon environments can be distinguished in the NMR spectra. Double-quantum and triple-quantum 2D correlation experiments were used to assign the observed peaks using the through-space and through-bond dipolar coupling. In order to obtain distance constraints for the intermonomeric bonds, double-quantum constant-time experiments were performed. In these experiments an additional filter step was applied to suppress contributions from not directly bonded 13C,13C spin pairs. The 13C NMR intensities, chemical shifts, connectivities and distances gave constraints for both the polymerization mechanism and the short-range order of the polymer. The experimental results were complemented by bond lengths predicted by density functional theory methods for several previously suggested models. Based on the presented evidence we can unambiguously exclude models based on gamma-pyronic units and support models based on alpha-pyronic units. The possibility of planar ladder- and bracelet-like alpha-pyronic structures is discussed.

  16. A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation, and it largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed issues. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images and the Brute Force (BF) matcher is used to find the correspondences between the two images for the Space Intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next epoch is matched against the current left image, the EDC and RANSAC steps are performed iteratively. In some cases mismatched points still remain after these steps, so RANSAC is applied a third time to eliminate the effect of those outliers on the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
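
    A compact OpenCV rendition of this pipeline is sketched below; the median-displacement gate stands in for the paper's EDC test (the exact constraint is not reproduced here), and the RANSAC pass uses the fundamental matrix. Thresholds are illustrative.

    ```python
    import cv2
    import numpy as np

    def match_and_filter(img_left, img_right, edc_factor=3.0):
        """ORB matching followed by a Euclidean-distance gate and RANSAC.
        The median-displacement gate is an illustrative stand-in for the
        paper's EDC test; RANSAC here fits the fundamental matrix."""
        orb = cv2.ORB_create(nfeatures=1000)
        kp1, des1 = orb.detectAndCompute(img_left, None)
        kp2, des2 = orb.detectAndCompute(img_right, None)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # brute force matcher
        matches = bf.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # EDC-like gate: discard matches whose displacement is far from the median.
        disp = np.linalg.norm(pts1 - pts2, axis=1)
        keep = np.abs(disp - np.median(disp)) < edc_factor * (np.std(disp) + 1e-6)
        pts1, pts2 = pts1[keep], pts2[keep]

        # RANSAC on the epipolar geometry removes the remaining outliers.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        inliers = mask.ravel().astype(bool)
        return pts1[inliers], pts2[inliers]
    ```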

  17. Best practice strategies to safeguard drug prescribing and drug administration: an anthology of expert views and opinions.

    PubMed

    Seidling, Hanna M; Stützle, Marion; Hoppe-Tichy, Torsten; Allenet, Benoît; Bedouch, Pierrick; Bonnabry, Pascal; Coleman, Jamie J; Fernandez-Llimos, Fernando; Lovis, Christian; Rei, Maria Jose; Störzinger, Dominic; Taylor, Lenka A; Pontefract, Sarah K; van den Bemt, Patricia M L A; van der Sijs, Heleen; Haefeli, Walter E

    2016-04-01

    While evidence on implementation of medication safety strategies is increasing, reasons for selecting and relinquishing distinct strategies and details on implementation are typically not shared in published literature. We aimed to collect and structure expert information resulting from implementing medication safety strategies to provide advice for decision-makers. Medication safety experts with clinical expertise from thirteen hospitals throughout twelve European and North American countries shared their experience in workshop meetings, on-site-visits and remote structured interviews. We performed an expert-based, in-depth assessment of implementation of best-practice strategies to improve drug prescribing and drug administration. Workflow, variability and recommended medication safety strategies in drug prescribing and drug administration processes. According to the experts, institutions chose strategies that targeted process steps known to be particularly error-prone in the respective setting. Often, the selection was channeled by local constraints such as the e-health equipment and critically modulated by national context factors. In our study, the experts favored electronic prescribing with clinical decision support and medication reconciliation as most promising interventions. They agreed that self-assessment and introduction of medication safety boards were crucial to satisfy the setting-specific differences and foster successful implementation. While general evidence for implementation of strategies to improve medication safety exists, successful selection and adaptation of a distinct strategy requires a thorough knowledge of the institute-specific constraints and an ongoing monitoring and adjustment of the implemented measures.

  18. Copy number variants calling for single cell sequencing data by multi-constrained optimization.

    PubMed

    Xu, Bo; Cai, Hongmin; Zhang, Changsheng; Yang, Xi; Han, Guoqiang

    2016-08-01

    Variations in DNA copy number carry important information on genome evolution and regulation of DNA replication in cancer cells. The rapid development of single-cell sequencing technology allows one to explore gene expression heterogeneity among single cells, thus providing important cancer cell evolution information. Single-cell DNA/RNA sequencing data usually have low genome coverage, which requires an extra amplification step to accumulate enough samples. However, such amplification introduces large bias and makes bioinformatics analysis challenging. Accurately modeling the distribution of sequencing data and effectively suppressing the influence of the bias is the key to successful variation analysis. Recent advances demonstrate that the technical noise introduced by amplification is more likely to follow a negative binomial distribution, an overdispersed generalization of the Poisson distribution. Thus, we tackle the problem of CNV detection by formulating it as a quadratic optimization problem involving two constraints, in which the underlying signals are corrupted by Poisson-distributed noise. By imposing the constraints of sparsity and smoothness, the reconstructed read depth signals from single-cell sequencing data are anticipated to fit the CNV patterns more accurately. An efficient numerical solution based on the classical alternating direction method of multipliers (ADMM) is tailored to solve the proposed model. We demonstrate the advantages of the proposed method using both synthetic and empirical single-cell sequencing data. Our experimental results demonstrate that the proposed method achieves excellent performance and high promise of success with single-cell sequencing data.
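
    As an illustration of the optimization machinery, the sketch below runs ADMM on a fused-lasso-style model: a quadratic data fit plus an l1 penalty on first differences, which yields piecewise-constant (sparse-breakpoint, smooth-segment) fits in the spirit of the paper's two constraints. It is not the authors' exact model, and the noise here is Gaussian for simplicity.

    ```python
    import numpy as np

    def soft(v, t):
        """Soft-thresholding (proximal operator of the l1 norm)."""
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fused_admm(y, lam=2.0, rho=1.0, n_iter=200):
        """ADMM for min_x 0.5||x - y||^2 + lam * ||D x||_1 with D the first
        difference operator: sparse breakpoints, smooth (constant) segments."""
        n = len(y)
        D = np.diff(np.eye(n), axis=0)                # (n-1) x n first differences
        A = np.linalg.inv(np.eye(n) + rho * D.T @ D)  # precomputed x-update system
        z = np.zeros(n - 1)
        u = np.zeros(n - 1)
        for _ in range(n_iter):
            x = A @ (y + rho * D.T @ (z - u))         # quadratic x-update
            z = soft(D @ x + u, lam / rho)            # sparsity on segment boundaries
            u += D @ x - z                            # dual ascent
        return x

    # Synthetic read-depth profile with two copy-number segments plus noise:
    rng = np.random.default_rng(0)
    y = np.concatenate([np.full(50, 2.0), np.full(50, 3.0)]) + 0.3 * rng.normal(size=100)
    print(np.round(fused_admm(y)[[0, 25, 75, 99]], 2))
    ```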

  19. Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.

    PubMed

    Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su

    2017-09-01

    Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, for preventing irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which will be used in the second network, when each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, for providing an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs by the use of Conditional Random Fields (CRF). Next, the second deep architecture is employed to refine the segmentation of each organ by using the maps obtained from the first deep architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results show superior performance on 30 CT scans, compared with other state-of-the-art methods.

  1. Working conditions and occupational risk exposure in employees driving for work.

    PubMed

    Fort, Emmanuel; Ndagire, Sheba; Gadegbeku, Blandine; Hours, Martine; Charbotel, Barbara

    2016-04-01

    An analysis of the occupational constraints and exposures to which employees facing road risk at work are subject was performed, with comparison against non-exposed employees. The objective was to improve knowledge of the characteristics of workers exposed to road risk in France and of the concomitant occupational constraints. The descriptive study was based on data from the 2010 SUMER survey (Medical Monitoring of Occupational Risk Exposure: Surveillance Médicale des Expositions aux Risques professionnels), which included data not only on road risk exposure at work but also on a range of socio-occupational factors and working conditions. The main variable of interest was "driving (car, truck, bus, coach, etc.) on public thoroughfares" for work (during the last week of work). This was a dichotomous "Yes/No" variable distinguishing employees who drove for work; it also comprised a 4-step weekly exposure duration: <2h, 2-10h, 10-20h and ≥20h. 75% of the employees with driving exposure were male. Certain socio-occupational categories were found significantly more frequently: professional drivers (INSEE occupations and socio-occupational categories (PCS) 64), skilled workers (PCS 61), intermediate professions in teaching, health, the civil service (functionaries) and assimilated (PCS 46), and company executives (PCS 36). Employees with driving exposure more often worked in small businesses or establishments. Constraints in terms of schedule and work-time were more frequent in employees with driving exposure. Constraints in terms of work rhythm were more frequent in non-exposed employees, with the exception of external demands requiring immediate response. On Karasek's Job Demand-Control Model, employees with driving exposure less often had low decision latitude. Prevalence of job-strain was also lower, as was prevalence of "iso-strain" (the combination of job-strain and social isolation). Employees with driving exposure were less often concerned by hostile behavior and, when they did report such psychological violence (assessed with items inspired by the Leymann questionnaire), it was significantly more frequently due to clients, users or patients. Employees with driving exposure at work showed several specificities. The present study, based on a representative nationwide survey of employees, confirmed the existence of differences in working conditions between employees with and without driving exposure at work. In employees with driving exposure, constraints in terms of work-time and rhythm increased with weekly exposure duration, as did tension at work and exposure to hostile behavior.

  2. Constraints on the Grueneisen Theory

    DTIC Science & Technology

    2007-02-01

    [Report documentation page residue; recoverable details: author Steven B. Segletes; performing organization U.S. Army Research Laboratory, ATTN: AMSRD-ARL-WM-TD, Aberdeen Proving Ground, MD 21005-5069; report number ARL-TR-4041.]

  3. Aircraft symmetric flight optimization. [gradient techniques for supersonic aircraft control]

    NASA Technical Reports Server (NTRS)

    Falco, M.; Kelley, H. J.

    1973-01-01

    Review of the development of gradient techniques and their application to aircraft optimal performance computations in the vertical plane of flight. Results obtained using the method of gradients are presented for attitude- and throttle-control programs which extremize the fuel, range, and time performance indices subject to various trajectory and control constraints, including boundedness of engine throttle control. A penalty function treatment of state inequality constraints which generally appear in aircraft performance problems is outlined. Numerical results for maximum-range, minimum-fuel, and minimum-time climb paths for a hypothetical supersonic turbojet interceptor are presented and discussed. In addition, minimum-fuel climb paths subject to various levels of ground overpressure intensity constraint are indicated for a representative supersonic transport. A variant of the Gel'fand-Tsetlin 'method of ravines' is reviewed, and two possibilities for further development of continuous gradient processes are cited - namely, a projection version of conjugate gradients and a curvilinear search.

  4. Scaffolding Online Argumentation during Problem Solving

    ERIC Educational Resources Information Center

    Oh, S.; Jonassen, D. H.

    2007-01-01

    In this study, constraint-based argumentation scaffolding was proposed to facilitate online argumentation performance and ill-structured problem solving during online discussions. In addition, epistemological beliefs were presumed to play a role in solving ill-structured diagnosis-solution problems. Constraint-based discussion boards were…

  5. Electrochemistry and Storage Panel Report

    NASA Technical Reports Server (NTRS)

    Stedman, J. K.; Halpert, G.

    1984-01-01

    Design and performance requirements for electrochemical power storage systems are discussed and some of the approaches towards satisfying these constraints are described. Geosynchronous and low Earth orbit applications, radar type load constraints, and high voltage systems requirements are addressed. In addition, flywheel energy storage is discussed.

  6. Martian stepped-delta formation by rapid water release.

    PubMed

    Kraal, Erin R; van Dijk, Maurits; Postma, George; Kleinhans, Maarten G

    2008-02-21

    Deltas and alluvial fans preserved on the surface of Mars provide an important record of surface water flow. Understanding how surface water flow could have produced the observed morphology is fundamental to understanding the history of water on Mars. To date, morphological studies have provided only minimum time estimates for the longevity of martian hydrologic events, which range from decades to millions of years. Here we use sand flume studies to show that the distinct morphology of martian stepped (terraced) deltas could only have originated from a single basin-filling event on a timescale of tens of years. Stepped deltas therefore provide a minimum and maximum constraint on the duration and magnitude of some surface flows on Mars. We estimate that the amount of water required to fill the basin and deposit the delta is comparable to the amount of water discharged by large terrestrial rivers, such as the Mississippi. The massive discharge, short timescale, and the associated short canyon lengths favour the hypothesis that stepped fans are terraced delta deposits draped over an alluvial fan and formed by water released suddenly from subsurface storage.

  7. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  8. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
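
    The aggregate-then-refine loop is easy to demonstrate on a plain LP (the paper's setting is a MILP, but the mechanics are the same): each group of inequality rows is replaced by its average, a convex combination implied by the originals, and any original constraints the coarse solution violates are added back until none remain. All data below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def coarse_solve(c, A, b, groups, tol=1e-9, max_rounds=20):
        """Solve min c@x s.t. A@x <= b, x >= 0 via a coarse model that replaces
        each group of rows by its average (a convex combination, hence implied
        by the originals), re-adding violated original rows until none remain."""
        active = []                                   # original rows restored so far
        res = None
        for _ in range(max_rounds):
            rows = [A[g].mean(axis=0) for g in groups] + [A[i] for i in active]
            rhs = [b[g].mean() for g in groups] + [b[i] for i in active]
            res = linprog(c, A_ub=np.vstack(rows), b_ub=np.array(rhs), bounds=(0, None))
            violated = [i for i in np.where(A @ res.x > b + tol)[0] if i not in active]
            if not violated:                          # coarse solution is fully feasible
                return res
            active.extend(violated)                   # refine the coarse model
        return res

    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, size=(40, 5))
    b = rng.uniform(2.0, 3.0, size=40)
    groups = [list(range(0, 20)), list(range(20, 40))]
    print(coarse_solve(-np.ones(5), A, b, groups).x)   # maximize sum(x)
    ```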

  9. A fast fully constrained geometric unmixing of hyperspectral images

    NASA Astrophysics Data System (ADS)

    Zhou, Xin; Li, Xiao-run; Cui, Jian-tao; Zhao, Liao-ying; Zheng, Jun-peng

    2014-11-01

    A great challenge in hyperspectral image analysis is decomposing a mixed pixel into a collection of endmembers and their corresponding abundance fractions. This paper presents an improved implementation of the Barycentric Coordinate approach to unmix hyperspectral images, integrated with the Most-Negative Remove Projection method to meet the abundance sum-to-one constraint (ASC) and the abundance non-negativity constraint (ANC). The original Barycentric Coordinate approach interprets the endmember unmixing problem as a simplex volume ratio problem, which is solved by calculating the determinants of two augmented matrices. One consists of all the endmembers, and the other consists of the to-be-unmixed pixel and all the endmembers except the one corresponding to the specific abundance being estimated. In this paper, we first modify the Barycentric Coordinate approach by bringing in the Matrix Determinant Lemma to simplify the unmixing process, so that the calculation contains only linear matrix and vector operations. The per-pixel matrix determinant calculation required by the original algorithm is thus avoided. By the end of this step, the estimated abundances meet the ASC. Then, the Most-Negative Remove Projection method is used to make the abundance fractions meet the full constraints. The algorithm is demonstrated both on synthetic and real images. The resulting algorithm yields abundance maps similar to those obtained by FCLS, while outperforming it in runtime owing to its computational simplicity.
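
    A determinant-free way to see the same quantities: with the sum-to-one row appended, barycentric abundances come from a single least-squares solve, and a most-negative-remove loop then enforces non-negativity by zeroing the worst offender and re-solving. This is an illustrative reading of the approach, not the paper's exact derivation; all data are synthetic.

    ```python
    import numpy as np

    def barycentric_abundances(E, p):
        """Abundances a solving E a ~= p with sum(a) = 1 (ASC), via least squares.
        Appending the all-ones row enforces the sum-to-one constraint softly; for
        a pixel inside the endmember simplex this reproduces the barycentric
        (volume-ratio) coordinates without any determinant evaluations."""
        A = np.vstack([E, np.ones(E.shape[1])])
        y = np.append(p, 1.0)
        return np.linalg.lstsq(A, y, rcond=None)[0]

    def most_negative_remove(E, p):
        """Project onto the full ASC+ANC constraints: repeatedly zero out the most
        negative abundance and re-solve on the remaining endmembers (an
        illustrative reading of the Most-Negative Remove Projection)."""
        idx = list(range(E.shape[1]))
        a = barycentric_abundances(E, p)
        while a.min() < 0 and len(idx) > 1:
            idx.pop(int(np.argmin(a)))                 # remove worst offender
            a = barycentric_abundances(E[:, idx], p)
        full = np.zeros(E.shape[1])
        full[idx] = a
        return full

    # Three endmembers in a 4-band toy image; p mixes them 0.6/0.3/0.1:
    E = np.array([[1.0, 0.0, 0.2], [0.0, 1.0, 0.2],
                  [0.5, 0.5, 0.9], [0.2, 0.8, 0.4]])
    p = E @ np.array([0.6, 0.3, 0.1])
    print(np.round(most_negative_remove(E, p), 3))
    ```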

  10. Consistent second-order boundary implementations for convection-diffusion lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chew, Jia Wei

    2018-02-01

    In this study, an alternative second-order boundary scheme is proposed under the framework of the convection-diffusion lattice Boltzmann (LB) method for both straight and curved geometries. With the proposed scheme, boundary implementations are developed for the Dirichlet, Neumann and linear Robin conditions in a consistent way. The Chapman-Enskog analysis and the Hermite polynomial expansion technique are first applied to derive the explicit expression for the general distribution function with second-order accuracy. Then, the macroscopic variables involved in the expression for the distribution function are determined by the prescribed macroscopic constraints and the known distribution functions after streaming [see the paragraph after Eq. (29) for the discussions of the "streaming step" in the LB method]. After that, the unknown distribution functions are obtained from the derived macroscopic information at the boundary nodes. For straight boundaries, boundary nodes are directly placed at the physical boundary surface, and the present scheme is applied directly. When extending the present scheme to curved geometries, a local curvilinear coordinate system and first-order Taylor expansion are introduced to relate the macroscopic variables at the boundary nodes to the physical constraints at the curved boundary surface. In essence, the unknown distribution functions at the boundary node are derived from the known distribution functions at the same node in accordance with the macroscopic boundary conditions at the surface. Therefore, the advantages of the present boundary implementations are (i) the locality, i.e., no information from neighboring fluid nodes is required; (ii) the consistency, i.e., the physical boundary constraints are directly applied when determining the macroscopic variables at the boundary nodes, thus the three kinds of conditions are realized in a consistent way. It should be noted that the present focus is on two-dimensional cases, and theoretical derivations as well as the numerical validations are performed in the framework of the two-dimensional five-velocity lattice model.

  11. Recent advances in stellarator optimization

    DOE PAGES

    Gates, D. A.; Boozer, A. H.; Brown, T.; ...

    2017-10-27

    Computational optimization has revolutionized the field of stellarator design. To date, optimizations have focused primarily on optimization of neoclassical confinement and ideal MHD stability, although limited optimization of other parameters has also been performed. Here, we outline a select set of new concepts for stellarator optimization that, when taken as a group, present a significant step forward in the stellarator concept. One of the criticisms that has been leveled at existing methods of design is the complexity of the resultant field coils. Recently, a new coil optimization code, COILOPT++, which uses a spline instead of a Fourier representation of the coils, was written and included in the STELLOPT suite of codes. The advantage of this method is that it allows the addition of real-space constraints on the locations of the coils. The code has been tested by generating coil designs for optimized quasi-axisymmetric stellarator plasma configurations of different aspect ratios. As an initial exercise, a constraint that the windings be vertical was placed on the large-major-radius half of the non-planar coils. Further constraints were also imposed that guaranteed that sector blanket modules could be removed from between the coils, enabling a sector maintenance scheme. Results of this exercise will be presented. New ideas on methods for the optimization of turbulent transport have garnered much attention since these methods have led to design concepts that are calculated to have reduced turbulent heat loss. We have explored possibilities for generating an experimental database to test whether the reduction in transport that is predicted is consistent with experimental observations. Thus, a series of equilibria that can be made in the now latent QUASAR experiment has been identified that will test the predicted transport scalings. Fast particle confinement studies aimed at developing a generalized optimization algorithm are also discussed. A new algorithm developed for the design of the scraper element on W7-X is presented along with ideas for automating the optimization approach.

  12. Conformal and covariant Z4 formulation of the Einstein equations: Strongly hyperbolic first-order reduction and solution with discontinuous Galerkin schemes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Guercilena, Federico; Köppel, Sven; Rezzolla, Luciano; Zanotti, Olindo

    2018-04-01

    We present a strongly hyperbolic first-order formulation of the Einstein equations based on the conformal and covariant Z4 system (CCZ4) with constraint-violation damping, which we refer to as FO-CCZ4. As CCZ4, this formulation combines the advantages of a conformal and traceless formulation, with the suppression of constraint violations given by the damping terms, but being first order in time and space, it is particularly suited for a discontinuous Galerkin (DG) implementation. The strongly hyperbolic first-order formulation has been obtained by making careful use of first and second-order ordering constraints. A proof of strong hyperbolicity is given for a selected choice of standard gauges via an analytical computation of the entire eigenstructure of the FO-CCZ4 system. The resulting governing partial differential equations system is written in nonconservative form and requires the evolution of 58 unknowns. A key feature of our formulation is that the first-order CCZ4 system decouples into a set of pure ordinary differential equations and a reduced hyperbolic system of partial differential equations that contains only linearly degenerate fields. We implement FO-CCZ4 in a high-order path-conservative arbitrary-high-order-method-using-derivatives (ADER)-DG scheme with adaptive mesh refinement and local time-stepping, supplemented with a third-order ADER-WENO subcell finite-volume limiter in order to deal with singularities arising with black holes. We validate the correctness of the formulation through a series of standard tests in vacuum, performed in one, two and three spatial dimensions, and also present preliminary results on the evolution of binary black-hole systems. To the best of our knowledge, these are the first successful three-dimensional simulations of moving punctures carried out with high-order DG schemes using a first-order formulation of the Einstein equations.

  15. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems.

    PubMed

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

    Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems.

  16. Experimental Matching of Instances to Heuristics for Constraint Satisfaction Problems

    PubMed Central

    Moreno-Scott, Jorge Humberto; Ortiz-Bayliss, José Carlos; Terashima-Marín, Hugo; Conant-Pablos, Santiago Enrique

    2016-01-01

    Constraint satisfaction problems are of special interest for the artificial intelligence and operations research community due to their many applications. Although heuristics involved in solving these problems have largely been studied in the past, little is known about the relation between instances and the respective performance of the heuristics used to solve them. This paper focuses on both the exploration of the instance space to identify relations between instances and good performing heuristics and how to use such relations to improve the search. Firstly, the document describes a methodology to explore the instance space of constraint satisfaction problems and evaluate the corresponding performance of six variable ordering heuristics for such instances in order to find regions on the instance space where some heuristics outperform the others. Analyzing such regions favors the understanding of how these heuristics work and contribute to their improvement. Secondly, we use the information gathered from the first stage to predict the most suitable heuristic to use according to the features of the instance currently being solved. This approach proved to be competitive when compared against the heuristics applied in isolation on both randomly generated and structured instances of constraint satisfaction problems. PMID:26949383
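
    A minimal sketch of the second stage described above, assuming (hypothetically) that each explored instance is summarized by two features, constraint density and tightness, and labeled with its best-performing variable-ordering heuristic; a nearest-neighbour rule then suggests a heuristic for a new instance:

```python
# Hypothetical feature space: (constraint density, constraint tightness),
# each point labeled with the heuristic that performed best in that region.
from math import dist

labeled_instances = [
    ((0.2, 0.3), "min-domain"),
    ((0.8, 0.4), "max-degree"),
    ((0.5, 0.7), "domain/degree"),
]

def suggest_heuristic(features):
    """Return the heuristic of the nearest explored instance (1-NN rule)."""
    return min(labeled_instances, key=lambda p: dist(p[0], features))[1]

print(suggest_heuristic((0.6, 0.6)))  # -> "domain/degree"
```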

  17. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixel) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under the constraint of real-time processing. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performance is also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC 90 nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
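
    As a software reference for the algorithm the cores implement (not the FPGA/ASIC designs themselves), current OpenCV versions expose a GMM background model directly; the file name and parameter values below are illustrative:

```python
import cv2

cap = cv2.VideoCapture("input_1080p.mp4")   # assumed input video
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = mog.apply(frame)              # per-pixel GMM update + segmentation
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```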

  18. The influence of the Re-Link Trainer on gait symmetry in healthy adults.

    PubMed

    Ward, Sarah; Wiedemann, Lukas; Stinear, Cathy; Stinear, James; McDaid, Andrew

    2017-07-01

    Walking function post-stroke is characterized by asymmetries in gait cycle parameters and joint kinematics. The Re-Link Trainer is designed to provide kinematic constraint to the paretic lower limb, to guide a physiologically normal and symmetrical gait pattern. The purpose of this pilot study was to assess the immediate influence of the Re-Link Trainer on measures of gait symmetry in healthy adults. Participants demonstrated a significantly lower cadence and a 62% reduction in walking speed in the Re-Link Trainer compared to normal walking. The step length ratio had a significant increase from 1.0 during normal walking to 2.5 when walking in the Re-Link Trainer. The results from this pilot study suggest in its current iteration the Re-Link Trainer imposes an asymmetrical constraint on lower limb kinematics.

  19. Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Harman, Richard

    2004-01-01

    There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.
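
    The reduced-dimension update described above follows the general error-state pattern; a minimal sketch of that pattern (not the Markley-variable equations themselves), assuming a 7x6 Jacobian S that maps the unconstrained 6-component error state onto the constrained 7-component state:

```python
import numpy as np

def error_state_update(x, P6, S, H7, R, z, h):
    """One Kalman update with a 6-D error state for a constrained 7-D state x.

    x  : (7,) current state estimate          P6 : (6,6) error-state covariance
    S  : (7,6) maps error state to state      H7 : (m,7) measurement Jacobian
    z  : (m,) measurement                     h  : callable, predicted measurement
    """
    H6 = H7 @ S                                # reduced measurement Jacobian
    K = P6 @ H6.T @ np.linalg.inv(H6 @ P6 @ H6.T + R)
    dx6 = K @ (z - h(x))                       # unconstrained error estimate
    x_new = x + S @ dx6                        # inject error into full state
    P6_new = (np.eye(6) - K @ H6) @ P6         # nonsingular by construction
    return x_new, P6_new
```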

  20. Analysis of Automated Aircraft Conflict Resolution and Weather Avoidance

    NASA Technical Reports Server (NTRS)

    Love, John F.; Chan, William N.; Lee, Chu Han

    2009-01-01

    This paper describes an analysis of using trajectory-based automation to resolve both aircraft and weather constraints for near-term air traffic management decision making. The auto resolution algorithm developed and tested at NASA-Ames to resolve aircraft to aircraft conflicts has been modified to mitigate convective weather constraints. Modifications include adding information about the size of a gap between weather constraints to the routing solution. Routes that traverse gaps that are smaller than a specific size are not used. An evaluation of the performance of the modified autoresolver to resolve both conflicts with aircraft and weather was performed. Integration with the Center-TRACON Traffic Management System was completed to evaluate the effect of weather routing on schedule delays.

  1. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding-window (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and the same dose-volume constraints were applied for all optimization methods. Two-step plans were produced by converting the ideal fluence, with or without a smoothing filter, into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10 or 12, producing a directly deliverable sequence. Moreover, plans were generated both with and without a split beam. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP), which are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over two-step IMRT planning.

  2. Prominent Constraints Faced by Government Managers.

    DTIC Science & Technology

    1983-06-01

    Master's thesis by Robert T. Niedermuller, Naval Postgraduate School, Monterey, California, June 1983.

  3. First record of entomopathogenic fungi on autumn leaf Caterpillar (Doleschallia bisaltide)

    NASA Astrophysics Data System (ADS)

    Dayanti, A. K.; Sholahuddin; Yunus, A.; Subositi, D.

    2018-03-01

    Caricature plant is one of the medicinal plants used in Indonesia to treat hemorrhoids, menstrual disorders, and other ailments. A major cultivation constraint of the caricature plant is the autumn leaf caterpillar (Doleschallia bisaltide). The use of synthetic insecticides is not allowed, to avoid bioaccumulation of chemical residues. Entomopathogenic fungi are an alternative way to control D. bisaltide. The objective of the research was to obtain isolates of entomopathogenic fungi from D. bisaltide. The research was conducted in two steps: the first was exploration of infected D. bisaltide, and the second was identification of the fungi. The exploration yielded 16 pupae of D. bisaltide infected by fungi. Identification was done by classifying the macroscopic and microscopic characteristics of the fungal isolates. One of five fungal isolates was an entomopathogenic fungus of the genus Verticillium.

  4. A new approach to mixed H2/H infinity controller synthesis using gradient-based parameter optimization methods

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi; Schoemig, Ewald

    1993-01-01

    In the past few years, the mixed H(sub 2)/H-infinity control problem has been the object of much research interest since it allows the incorporation of robust stability into the LQG framework. The general mixed H(sub 2)/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H(sub 2)-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H(sub 2)-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function to represent the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions and hence multiple performance criteria and H-infinity constraints and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.
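
    In standard notation, the class of problems addressed can be sketched as the following constrained program (our paraphrase, not the authors' exact formulation), with the H-infinity constraint folded into the objective via the differentiable time-domain penalty:

```latex
\min_{K \in \mathcal{K}} \; \lVert T_{z_2 w_2}(K) \rVert_2^2
\qquad \text{subject to} \qquad
\lVert T_{z_\infty w_\infty}(K) \rVert_\infty \le \gamma
```

    where K ranges over the admissible (e.g. fixed-order or fixed-structure) controllers and the T terms denote the relevant closed-loop transfer functions.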

  5. Solving Constraint-Satisfaction Problems In Prolog Language

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1991-01-01

    Technique for solution of constraint-satisfaction problems uses definite-clause grammars of Prolog computer language. Exploits fact that grammar-rule notation viewed as "state-change notation". Facilitates development of dynamic representation performing informed as well as blind searches. Applicable to design, scheduling, and planning problems.

  6. Sports science needs more interdisciplinary, constraints-led research programmes: The case of water safety in New Zealand.

    PubMed

    Button, C; Croft, J L

    2017-12-01

    In the lead article of this special issue, Paul Glazier proposes that Newell's constraints model has the potential to contribute to a grand unified theory of sports performance in that it can help to integrate the disciplinary silos that have typically operated in isolation in sports and exercise science. With a few caveats discussed in this commentary, we agree with Glazier's proposal. However, his ideas suggest that there is a need to demonstrate explicitly how such an integration might occur within applied scientific research. To help fill this perceived 'gap' and thereby illustrate the value of adopting a constraints-led approach, we offer an example of our own interdisciplinary research programme. We believe our research on water safety is ideally suited to this task due to the diverse range of interacting constraints present and as such provides a tangible example of how this approach can unify different disciplinary perspectives examining an important aspect of sport performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would equate both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade initializing or fixing parameter values in a later optimization step from simpler models in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
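
    As an illustration of the winning strategy, a gradient-free Powell fit of a toy two-compartment signal model to a single voxel's data; the model, b-values and starting point are placeholders, not NODDI or CHARMED:

```python
import numpy as np
from scipy.optimize import minimize

b = np.linspace(0, 3000, 30)                       # s/mm^2, assumed protocol
true = (0.6, 2.0e-3, 0.4e-3)                       # f, D1, D2 used to simulate
signal = true[0]*np.exp(-b*true[1]) + (1 - true[0])*np.exp(-b*true[2])
data = signal + 0.01*np.random.default_rng(0).normal(size=b.size)

def sse(p):
    """Sum of squared errors of the two-compartment model for parameters p."""
    f, d1, d2 = p
    model = f*np.exp(-b*d1) + (1 - f)*np.exp(-b*d2)
    return np.sum((model - data)**2)

fit = minimize(sse, x0=(0.5, 1.5e-3, 0.5e-3), method="Powell")
print(fit.x)   # recovered (f, D1, D2)
```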

  8. A constraint logic programming approach to associate 1D and 3D structural components for large protein complexes.

    PubMed

    Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang

    2007-01-01

    The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.

  9. Open innovation in the European space sector: Existing practices, constraints and opportunities

    NASA Astrophysics Data System (ADS)

    van Burg, Elco; Giannopapa, Christina; Reymen, Isabelle M. M. J.

    2017-12-01

    To enhance innovative output and societal spillover of the European space sector, the open innovation approach is becoming popular. Yet, open innovation, referring to innovation practices that cross borders of individual firms, faces constraints. To explore these constraints and identify opportunities, this study performs interviews with government/agency officials and space technology entrepreneurs. The interviews highlight three topic areas with constraints and opportunities: 1) mainly one-directional knowledge flows (from outside the space sector to inside), 2) knowledge and property management, and 3) the role of small- and medium sized companies. These results bear important implications for innovation practices in the space sector.

  10. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2016-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25595953

  11. Postural adjustment errors during lateral step initiation in older and younger adults

    PubMed Central

    Sparto, Patrick J.; Fuhrman, Susan I.; Redfern, Mark S.; Perera, Subashan; Jennings, J. Richard; Furman, Joseph M.

    2014-01-01

    The purpose was to examine age differences and varying levels of step response inhibition on the performance of a voluntary lateral step initiation task. Seventy older adults (70 – 94 y) and twenty younger adults (21 – 58 y) performed visually-cued step initiation conditions based on direction and spatial location of arrows, ranging from a simple choice reaction time task to a perceptual inhibition task that included incongruous cues about which direction to step (e.g. a left pointing arrow appearing on the right side of a monitor). Evidence of postural adjustment errors and step latencies were recorded from vertical ground reaction forces exerted by the stepping leg. Compared with younger adults, older adults demonstrated greater variability in step behavior, generated more postural adjustment errors during conditions requiring inhibition, and had greater step initiation latencies that increased more than younger adults as the inhibition requirements of the condition became greater. Step task performance was related to clinical balance test performance more than executive function task performance. PMID:25183162

  12. A new adaptive multiple modelling approach for non-linear and non-stationary systems

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Gong, Yu; Hong, Xia

    2016-07-01

    This paper proposes a novel adaptive multiple modelling algorithm for non-linear and non-stationary systems. This simple modelling paradigm comprises K candidate sub-models which are all linear. With data available in an online fashion, the performance of all candidate sub-models is monitored based on the most recent data window, and the M best sub-models are selected from the K candidates. The weight coefficients of the selected sub-models are adapted via the recursive least squares (RLS) algorithm, while the coefficients of the remaining sub-models are unchanged. These M model predictions are then optimally combined to produce the multi-model output. We propose to minimise the mean square error based on a recent data window, and apply the sum-to-one constraint to the combination parameters, leading to a closed-form solution, so that maximal computational efficiency can be achieved. In addition, at each time step, the model prediction is chosen from either the resultant multiple model or the best sub-model, whichever is the best. Simulation results are given in comparison with some typical alternatives, including the linear RLS algorithm and a number of online non-linear approaches, in terms of modelling performance and time consumption.
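
    Minimizing the windowed mean square error of the combined prediction under the sum-to-one constraint is a small equality-constrained least-squares problem with the closed-form solution w = C^{-1}1 / (1^T C^{-1} 1); a minimal sketch (notation ours):

```python
import numpy as np

def combine_weights(E):
    """Optimal sum-to-one combination weights for M sub-models.

    E : (N, M) prediction errors of each selected sub-model over the recent
        data window. With C = E^T E, minimizing w^T C w subject to sum(w) = 1
        gives w = C^{-1} 1 / (1^T C^{-1} 1); a small ridge keeps C invertible.
    """
    M = E.shape[1]
    C = E.T @ E + 1e-8 * np.eye(M)
    u = np.linalg.solve(C, np.ones(M))
    return u / u.sum()
```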

  13. Filtered gradient reconstruction algorithm for compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Mejia, Yuri; Arguello, Henry

    2017-04-01

    Compressive sensing matrices are traditionally based on random Gaussian and Bernoulli entries. In practice, however, they are subject to physical constraints, and their structure usually does not follow a dense matrix distribution; such is the case of the matrix related to compressive spectral imaging (CSI). The CSI matrix represents the integration of coded and shifted versions of the spectral bands. A spectral image can be recovered from CSI measurements by using iterative algorithms for linear inverse problems that minimize an objective function including a quadratic error term combined with a sparsity regularization term. However, current algorithms are slow because they do not exploit the structure and sparse characteristics of the CSI matrices. A gradient-based CSI reconstruction algorithm, which introduces a filtering step in each iteration of a conventional CSI reconstruction algorithm and yields improved image quality, is proposed. Motivated by the structure of the CSI matrix, Φ, this algorithm modifies the iterative solution such that it is forced to converge to a filtered version of the residual Φ^T y, where y is the compressive measurement vector. We show that the filter-based algorithm converges to better quality results than the unfiltered version. Simulation results highlight the relative performance gain over the existing iterative algorithms.
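
    The core iteration can be sketched as a Landweber/ISTA-type step with an extra per-iteration filtering operator; the function names and the choice of smoothing are illustrative, not the paper's exact algorithm:

```python
import numpy as np

def filtered_gradient(Phi, y, alpha, lam, iters=100, smooth=None):
    """Iterate x <- smooth( soft( x + alpha * Phi^T (y - Phi x), lam ) )."""
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = Phi.T @ y                       # back-projected residual as start
    for _ in range(iters):
        x = soft(x + alpha * Phi.T @ (y - Phi @ x), lam)
        if smooth is not None:          # the extra per-iteration filtering step
            x = smooth(x)
    return x
```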

  14. Track classification within wireless sensor network

    NASA Astrophysics Data System (ADS)

    Doumerc, Robin; Pannetier, Benjamin; Moras, Julien; Dezert, Jean; Canevet, Loic

    2017-05-01

    In this paper, we present our study on track classification that takes into account environmental information and target estimated states. The tracker uses several motion models adapted to different target dynamics (pedestrian, ground vehicle and SUAV, i.e. small unmanned aerial vehicle) and works in a centralized architecture. The main idea is to explore both the classification given by heterogeneous sensors and the classification obtained with our fusion module. The fusion module, presented in this paper, provides a class for each track according to track location, velocity and associated uncertainty. To model the likelihood of each class, a fuzzy approach is used, considering constraints on target capability to move in the environment. Then the evidential reasoning approach based on Dempster-Shafer Theory (DST) is used to perform a time integration of this classifier output. The fusion rules are tested and compared on real data obtained with our wireless sensor network. In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deposited in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of this system is evaluated in a real exercise for intelligence operation ("hunter hunt" scenario).
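
    The time-integration step uses Dempster's rule of combination; a minimal sketch over a three-class frame of discernment (the mass values are illustrative):

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, pa), (b, pb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + pa * pb
        else:
            conflict += pa * pb            # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

P, V, S = frozenset("P"), frozenset("V"), frozenset("S")  # pedestrian/vehicle/SUAV
track = {V: 0.6, P | V | S: 0.4}            # accumulated track belief
scan  = {V: 0.5, P: 0.3, P | V | S: 0.2}    # new fuzzy classifier output
print(dempster(track, scan))                # belief concentrates on "vehicle"
```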

  15. Application of Particle Swarm Optimization in Computer Aided Setup Planning

    NASA Astrophysics Data System (ADS)

    Kafashi, Sajad; Shakeri, Mohsen; Abedini, Vahid

    2011-01-01

    Recent research seeks to integrate computer-aided design (CAD) and computer-aided manufacturing (CAM) environments. The role of process planning is to convert the design specification into manufacturing instructions. Setup planning has a basic role in computer-aided process planning (CAPP) and significantly affects the overall cost and quality of the machined part. This research focuses on the automatic generation of setups and on finding the best setup plan under feasible conditions. In order to computerize the setup planning process, three major steps are performed in the proposed system: a) extraction of the machining data of the part; b) analysis and generation of all possible setups; c) optimization to reach the best setup plan based on cost functions. Considering workshop resources such as machine tool, cutter and fixture, all feasible setups can be generated. The problem is then subjected to technological constraints such as TAD (tool approach direction), tolerance relationships and feature precedence relationships to keep the approach realistic and practical. The optimal setup plan is the result of applying the PSO (particle swarm optimization) algorithm to the system using cost functions. A real sample part is presented to demonstrate the performance and productivity of the system.
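
    A minimal PSO sketch of the optimization stage; the quadratic cost is a placeholder for the setup-plan cost functions, which are not reproduced in this record:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5            # swarm size, dims, PSO weights
cost = lambda x: np.sum(x**2, axis=-1)            # placeholder cost function

x = rng.uniform(-5, 5, (n, d))                    # particle positions
v = np.zeros((n, d))                              # particle velocities
pbest, pcost = x.copy(), cost(x)                  # personal bests
gbest = pbest[np.argmin(pcost)]                   # global best

for _ in range(200):
    r1, r2 = rng.random((2, n, d))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    c = cost(x)
    improved = c < pcost
    pbest[improved], pcost[improved] = x[improved], c[improved]
    gbest = pbest[np.argmin(pcost)]

print(gbest)        # approaches the minimizer of the placeholder cost
```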

  16. Coastal Acoustic Tomography Data Constraints Applied to a Coastal Ocean Circulation Model

    DTIC Science & Technology

    1994-04-01

    A direct insertion scheme for assimilating coastal acoustic tomographic (CAT) vertical...days of this control run were taken to represent "actuality." A series of assimilation experiments was carried out in which CAT temperature slices...synthesized from different CAT configurations based on the "true ocean" were inserted into the model at various time steps to examine the convergence of

  17. Planning with Continuous Resources in Stochastic Domains

    NASA Technical Reports Server (NTRS)

    Mausam, Mausau; Benazera, Emmanuel; Brafman, Roneu; Hansen, Eric

    2005-01-01

    We consider the problem of optimal planning in stochastic domains with metric resource constraints. Our goal is to generate a policy whose expected sum of rewards is maximized for a given initial state. We consider a general formulation motivated by our application domain--planetary exploration--in which the choice of an action at each step may depend on the current resource levels. We adapt the forward search algorithm AO* to handle our continuous state space efficiently.

  18. Automatic data partitioning on distributed memory multicomputers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gupta, Manish

    1992-01-01

    Distributed-memory parallel computers are increasingly being used to provide high levels of performance for scientific applications. Unfortunately, such machines are not very easy to program. A number of research efforts seek to alleviate this problem by developing compilers that take over the task of generating communication. The communication overheads and the extent of parallelism exploited in the resulting target program are determined largely by the manner in which data is partitioned across different processors of the machine. Most of the compilers provide no assistance to the programmer in the crucial task of determining a good data partitioning scheme. A novel approach is presented, the constraints-based approach, to the problem of automatic data partitioning for numeric programs. In this approach, the compiler identifies some desirable requirements on the distribution of various arrays being referenced in each statement, based on performance considerations. These desirable requirements are referred to as constraints. For each constraint, the compiler determines a quality measure that captures its importance with respect to the performance of the program. The quality measure is obtained through static performance estimation, without actually generating the target data-parallel program with explicit communication. Each data distribution decision is taken by combining all the relevant constraints. The compiler attempts to resolve any conflicts between constraints such that the overall execution time of the parallel program is minimized. This approach has been implemented as part of a compiler called Paradigm, that accepts Fortran 77 programs, and specifies the partitioning scheme to be used for each array in the program. We have obtained results on some programs taken from the Linpack and Eispack libraries, and the Perfect Benchmarks. These results are quite promising, and demonstrate the feasibility of automatic data partitioning for a significant class of scientific application programs with regular computations.

  19. Automatic 3D kidney segmentation based on shape constrained GC-OAAM

    NASA Astrophysics Data System (ADS)

    Chen, Xinjian; Summers, Ronald M.; Yao, Jianhua

    2011-03-01

    The kidney can be classified into three main tissue types: renal cortex, renal medulla and renal pelvis (or collecting system). Dysfunction of different renal tissue types may cause different kidney diseases. Therefore, accurate and efficient segmentation of kidney into different tissue types plays a very important role in clinical research. In this paper, we propose an automatic 3D kidney segmentation method which segments the kidney into the three different tissue types: renal cortex, medulla and pelvis. The proposed method synergistically combines active appearance model (AAM), live wire (LW) and graph cut (GC) methods, GC-OAAM for short. Our method consists of two main steps. First, a pseudo 3D segmentation method is employed for kidney initialization in which the segmentation is performed slice-by-slice via a multi-object oriented active appearance model (OAAM) method. An improved iterative model refinement algorithm is proposed for the AAM optimization, which synergistically combines the AAM and LW method. Multi-object strategy is applied to help the object initialization. The 3D model constraints are applied to the initialization result. Second, the object shape information generated from the initialization step is integrated into the GC cost computation. A multi-label GC method is used to segment the kidney into cortex, medulla and pelvis. The proposed method was tested on 19 clinical arterial phase CT data sets. The preliminary results showed the feasibility and efficiency of the proposed method.

  20. Single-Receiver GPS Phase Bias Resolution

    NASA Technical Reports Server (NTRS)

    Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.

    2010-01-01

    Existing software has been modified to yield the benefits of integer-fixed, double-differenced GPS phase ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.

  1. Green material selection for sustainability: A hybrid MCDM approach.

    PubMed

    Zhang, Honghao; Peng, Yong; Tian, Guangdong; Wang, Danqi; Xie, Pengpeng

    2017-01-01

    Green material selection is a crucial step for the material industry to comprehensively improve material properties and promote sustainable development. However, because of the subjectivity and conflicting evaluation criteria in its process, green material selection, as a multi-criteria decision making (MCDM) problem, has been a widespread concern to the relevant experts. Thus, this study proposes a hybrid MCDM approach that combines the decision-making trial and evaluation laboratory (DEMATEL), analytical network process (ANP), grey relational analysis (GRA) and technique for order preference by similarity to ideal solution (TOPSIS) to select the optimal green material for sustainability based on the product's needs. A nonlinear programming model with constraints was proposed to obtain the integrated closeness index. Subsequently, an empirical application to rubbish bins was used to illustrate the proposed method. In addition, a sensitivity analysis and a comparison with existing methods were employed to validate the accuracy and stability of the obtained final results. We found that this method provides a more accurate and effective decision support tool for alternative evaluation or strategy selection.

  2. Green material selection for sustainability: A hybrid MCDM approach

    PubMed Central

    Zhang, Honghao; Peng, Yong; Tian, Guangdong; Wang, Danqi; Xie, Pengpeng

    2017-01-01

    Green material selection is a crucial step for the material industry to comprehensively improve material properties and promote sustainable development. However, because of the subjectivity and conflicting evaluation criteria in its process, green material selection, as a multi-criteria decision making (MCDM) problem, has been a widespread concern to the relevant experts. Thus, this study proposes a hybrid MCDM approach that combines the decision-making trial and evaluation laboratory (DEMATEL), analytical network process (ANP), grey relational analysis (GRA) and technique for order preference by similarity to ideal solution (TOPSIS) to select the optimal green material for sustainability based on the product's needs. A nonlinear programming model with constraints was proposed to obtain the integrated closeness index. Subsequently, an empirical application to rubbish bins was used to illustrate the proposed method. In addition, a sensitivity analysis and a comparison with existing methods were employed to validate the accuracy and stability of the obtained final results. We found that this method provides a more accurate and effective decision support tool for alternative evaluation or strategy selection. PMID:28498864
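
    A minimal sketch of the final TOPSIS ranking step alone; the decision matrix, weights (which in the full approach would come from DEMATEL-ANP) and criterion directions are invented for illustration:

```python
import numpy as np

X = np.array([[250., 16., 12.],          # alternatives x criteria
              [200., 16., 8.],
              [300., 32., 16.]])
w = np.array([0.4, 0.35, 0.25])          # criterion weights (assumed)
benefit = np.array([False, True, True])  # False marks a cost criterion

V = w * X / np.linalg.norm(X, axis=0)    # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - worst, axis=1)
closeness = d_neg / (d_pos + d_neg)      # rank alternatives by this index
print(np.argsort(-closeness))            # best alternative first
```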

  3. Engineering Specification for Large-aperture UVO Space Telescopes Derived from Science Requirements

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Postman, Marc; Smith, W. Scott

    2013-01-01

    The Advanced Mirror Technology Development (AMTD) project is a three-year effort initiated in FY12 to mature, by at least a half TRL step, six critical technologies required to enable 4 to 8 meter UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND result in a high-performance low-cost low-risk system. To provide the science community with options, we are pursuing multiple technology paths. We have assembled an outstanding team from academia, industry, and government with extensive expertise in astrophysics and exoplanet characterization, and in the design/manufacture of monolithic and segmented space telescopes. A key accomplishment is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicles and their mass and volume constraints.

  4. AMTD: update of engineering specifications derived from science requirements for future UVOIR space telescopes

    NASA Astrophysics Data System (ADS)

    Stahl, H. Philip; Postman, Marc; Mosier, Gary; Smith, W. Scott; Blaurock, Carl; Ha, Kong; Stark, Christopher C.

    2014-08-01

    The Advanced Mirror Technology Development (AMTD) project is in Phase 2 of a multiyear effort, initiated in FY12, to mature, by at least a half TRL step, six critical technologies required to enable 4 meter or larger UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND provide a high-performance low-cost low-risk system. To give the science community options, we are pursuing multiple technology paths. A key task is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicles and their mass and volume constraints. A key finding of this effort is that the science requires an 8 meter or larger aperture telescope.

  5. NDEC: A NEA platform for nuclear data testing, verification and benchmarking

    NASA Astrophysics Data System (ADS)

    Díez, C. J.; Michel-Sendis, F.; Cabellos, O.; Bossant, M.; Soppera, N.

    2017-09-01

    The selection, testing, verification and benchmarking of evaluated nuclear data consists, in practice, of putting an evaluated file through a number of checking steps where different computational codes verify that the file and the data it contains comply with different requirements. These requirements range from format compliance to good performance in application cases, while at the same time physical constraints and the agreement with experimental data are verified. At NEA, the NDEC (Nuclear Data Evaluation Cycle) platform aims at providing, in a user-friendly interface, a thorough diagnosis of the quality of a submitted evaluated nuclear data file. Such a diagnosis is based on the results of different computational codes and routines which carry out the mentioned verifications, tests and checks. NDEC also seeks synergies with other existing NEA tools and databases, such as JANIS, DICE or NDaST, including them in its working scheme. Hence, this paper presents NDEC, its current development status and its usage in the JEFF nuclear data project.

  6. Framework to model neutral particle flux in convex high aspect ratio structures using one-dimensional radiosity

    NASA Astrophysics Data System (ADS)

    Manstetten, Paul; Filipovic, Lado; Hössinger, Andreas; Weinbub, Josef; Selberherr, Siegfried

    2017-02-01

    We present a computationally efficient framework to compute the neutral flux in high aspect ratio structures during three-dimensional plasma etching simulations. The framework is based on a one-dimensional radiosity approach and is applicable to simulations of convex rotationally symmetric holes and convex symmetric trenches with a constant cross-section. The framework is intended to replace the full three-dimensional simulation step required to calculate the neutral flux during plasma etching simulations. Especially for high aspect ratio structures, the computational effort, required to perform the full three-dimensional simulation of the neutral flux at the desired spatial resolution, conflicts with practical simulation time constraints. Our results are in agreement with those obtained by three-dimensional Monte Carlo based ray tracing simulations for various aspect ratios and convex geometries. With this framework we present a comprehensive analysis of the influence of the geometrical properties of high aspect ratio structures as well as of the particle sticking probability on the neutral particle flux.

  7. Reconnaissance and Autonomy for Small Robots (RASR) team: MAGIC 2010 challenge

    NASA Astrophysics Data System (ADS)

    Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark; Corley, Katrina

    2012-06-01

    The Reconnaissance and Autonomy for Small Robots (RASR) team developed a system for the coordination of groups of unmanned ground vehicles (UGVs) that can execute a variety of military relevant missions in dynamic urban environments. Historically, UGV operations have been primarily performed via tele-operation, requiring at least one dedicated operator per robot, and requiring substantial real-time bandwidth to accomplish those missions. Our team goal was to develop a system that can provide long-term value to the war-fighter, utilizing MAGIC-2010 as a stepping stone. To that end, we self-imposed a set of constraints that would force us to develop technology that could readily be used by the military in the near term: • Use a relevant (deployed) platform • Use low-cost, reliable sensors • Develop an expandable and modular control system with innovative software algorithms to minimize the computing footprint required • Minimize required communications bandwidth and handle communication losses • Minimize additional power requirements to maximize battery life and mission duration

  8. Mitochondrial DNA, restoring Beethoven's music.

    PubMed

    Merheb, Maxime; Vaiedelich, Stéphane; Maniguet, Thiérry; Hänni, Catherine

    2016-01-01

    Great ancient composers endured many obstacles and constraints that are very difficult to understand unless we undertake the restoration of ancient music. Species identification of the leather used during manufacturing is the key step in starting such a restoration process in order to produce a facsimile of a museum piano. Our study reveals the species identification of the leather covering the hammer head in a piano created by Erard in 1802. This is the last existing piano, similar to the piano that Beethoven used, with its leather preserved in its original state. The leather sample was not available as a homogeneous piece, but was combined with glue. Using a DNA extraction method that avoids PCR inhibitors, we discovered that sheep and cattle are the origins of the combination. To identify the species in the leather, we focused on the amounts of mitochondrial DNA in both leather and glue, and the results led us to the conclusion that the leather used to cover the hammer head in this piano was made of cattle hide.

  9. Analysis of hospital costs as a basis for pricing services in Mali.

    PubMed

    Audibert, Martine; Mathonnat, Jacky; Pareil, Delphine; Kabamba, Raymond

    2007-01-01

    In a move to achieve better equity in the funding of access to health care, particularly for the poor, better efficiency in hospital functioning and a better financial balance, this analysis of hospital costs in Mali brings several key elements to improving the pricing of medical services. The method utilized is the classical step-down process, which takes into consideration the entire set of direct and indirect costs borne by the hospital. Although this approach does not allow estimation of the economic cost of consultations, it is a useful contribution to assessing the financial activity of the hospital and improving its performance, financially speaking, through a more relevant user-fee policy. The study shows that there are possibilities of cross-subsidies within the hospital or within services which improve the recovery of some of the current costs. It also leads to several proposals for pricing care while taking into account the constraints, the level of the hospital, its specific conditions and equity. Copyright (c) 2007 John Wiley & Sons, Ltd.
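
    The step-down allocation itself is mechanical once the support departments are ordered; a minimal sketch with invented figures (departments, shares and costs are illustrative, not data from the Mali study):

```python
# Support departments are closed out in order; each allocates its accumulated
# cost to the departments that remain after it, never back up the chain.
costs = {"admin": 100.0, "laundry": 50.0, "surgery": 300.0, "medicine": 250.0}
order = ["admin", "laundry"]                 # support departments, in order
shares = {
    "admin":   {"laundry": 0.2, "surgery": 0.5, "medicine": 0.3},
    "laundry": {"surgery": 0.6, "medicine": 0.4},
}

for dept in order:
    pool = costs.pop(dept)                   # close out the support department
    for target, frac in shares[dept].items():
        costs[target] += pool * frac         # pass the cost down

print(costs)   # full costs of the final (revenue-generating) services
```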

  10. Reconciling the Differences between the Measurements of CO2 Isotopes by the Phoenix and MSL Landers

    NASA Technical Reports Server (NTRS)

    Niles, P. B.; Mahaffy, P. R.; Atreya, S.; Pavlov, A. A.; Trainer, M.; Webster, C. R.; Wong, M.

    2014-01-01

    Precise stable isotope measurements of the CO2 in the martian atmosphere have the potential to provide important constraints for our understanding of the history of volatiles, the carbon cycle, current atmospheric processes, and the degree of water/rock interaction on Mars. There have been several different measurements by landers and Earth based systems performed in recent years that have not been in agreement. In particular, measurements of the isotopic composition of martian atmospheric CO2 by the Thermal and Evolved Gas Analyzer (TEGA) instrument on the Mars Phoenix Lander and the Sample Analysis at Mars (SAM) instrument on the Mars Science Laboratory (MSL) are in stark disagreement. This work attempts to use measurements of mass 45 and mass 46 of martian atmospheric CO2 by the SAM and TEGA instruments to search for agreement as a first step towards reaching a consensus measurement that might be supported by data from both instruments.

  11. Disentangling the Roles of Approach, Activation and Valence in Instrumental and Pavlovian Responding

    PubMed Central

    Huys, Quentin J. M.; Cools, Roshan; Gölzer, Martin; Friedel, Eva; Heinz, Andreas; Dolan, Raymond J.; Dayan, Peter

    2011-01-01

    Hard-wired Pavlovian responses elicited by predictions of rewards and punishments exert significant benevolent and malevolent influences over instrumentally appropriate actions. These influences come in two main groups, defined along anatomical, pharmacological, behavioural and functional lines. Investigations of the influences have so far concentrated on the groups as a whole; here we take the critical step of looking inside each group, using a detailed reinforcement learning model to distinguish effects to do with value, specific actions, and general activation or inhibition. We show a high degree of sophistication in Pavlovian influences, with appetitive Pavlovian stimuli specifically promoting approach and inhibiting withdrawal, and aversive Pavlovian stimuli promoting withdrawal and inhibiting approach. These influences account for differences in the instrumental performance of approach and withdrawal behaviours. Finally, although losses are as informative as gains, we find that subjects neglect losses in their instrumental learning. Our findings argue for a view of the Pavlovian system as a constraint or prior, facilitating learning by alleviating computational costs that come with increased flexibility. PMID:21556131
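
    One common way to formalize the kind of influence reported here (our illustrative rendering, not the paper's exact model) is to add a Pavlovian bias proportional to the state value to the propensity of the approach action and subtract it from withdrawal:

```python
# Illustrative action-weight computation: an appetitive state value v(s) > 0
# boosts approach and suppresses withdrawal; an aversive v(s) < 0, the reverse.
def action_weights(q, v_s, pi=0.3):
    """q: instrumental action values; pi: strength of the Pavlovian bias."""
    return {
        "approach": q["approach"] + pi * v_s,
        "withdraw": q["withdraw"] - pi * v_s,
    }

print(action_weights({"approach": 0.1, "withdraw": 0.1}, v_s=0.5))
```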

  12. Multi-objective Optimization of a Solar Humidification Dehumidification Desalination Unit

    NASA Astrophysics Data System (ADS)

    Rafigh, M.; Mirzaeian, M.; Najafi, B.; Rinaldi, F.; Marchesi, R.

    2017-11-01

    In the present paper, a humidification-dehumidification desalination unit integrated with a solar system is considered. In the first step, a mathematical model of the whole plant is presented. Next, taking into account the logical constraints, the performance of the system is optimized. On one hand it is desired to have higher energetic efficiency, while on the other hand, higher efficiency results in an increase in the required area of each subsystem, which consequently leads to an increase in the total cost of the plant. In the present work, the optimum solution is achieved when the specific energy of the solar heater and the areas of the humidifier and dehumidifier are minimized. Because the considered objective functions are in conflict, conventional optimization methods are not applicable. Hence, multi-objective optimization using a genetic algorithm, an efficient tool for dealing with problems with conflicting objectives, has been utilized, and a set of optimal solutions called the Pareto front, each of which is a tradeoff between the mentioned objectives, is generated.
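
    A minimal sketch of the non-domination test used to assemble such a Pareto front, assuming all objectives are to be minimized (the two-objective data below are invented; in the unit above the columns would be quantities such as specific solar-heater energy and component areas):

```python
import numpy as np

def pareto_front(F):
    """Return indices of non-dominated rows of F (all objectives minimized)."""
    keep = []
    for i, f in enumerate(F):
        # dominated if some other row is <= f everywhere and < f somewhere
        dominated = np.any(np.all(F <= f, axis=1) & np.any(F < f, axis=1))
        if not dominated:
            keep.append(i)
    return keep

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
print(pareto_front(F))   # -> [0, 1, 3]; [3, 4] is dominated by [2, 3]
```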

  13. Skull removal in MR images using a modified artificial bee colony optimization algorithm.

    PubMed

    Taherdangkoo, Mohammad

    2014-01-01

    Removal of the skull from brain Magnetic Resonance (MR) images is an important preprocessing step required for other image analysis techniques such as brain tissue segmentation. In this paper, we propose a new algorithm based on the Artificial Bee Colony (ABC) optimization algorithm to remove the skull region from brain MR images. We modify the ABC algorithm using a different strategy for initializing the coordinates of scout bees and their direction of search. Moreover, we impose an additional constraint to the ABC algorithm to avoid the creation of discontinuous regions. We found that our algorithm successfully removed all bony skull from a sample of de-identified MR brain images acquired from different model scanners. The obtained results of the proposed algorithm compared with those of previously introduced well known optimization algorithms such as Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) demonstrate the superior results and computational performance of our algorithm, suggesting its potential for clinical applications.

  14. Sequential limiting in continuous and discontinuous Galerkin methods for the Euler equations

    NASA Astrophysics Data System (ADS)

    Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Rieben, R.; Tomov, V.

    2018-03-01

    We present a new predictor-corrector approach to enforcing local maximum principles in piecewise-linear finite element schemes for the compressible Euler equations. The new element-based limiting strategy is suitable for continuous and discontinuous Galerkin methods alike. In contrast to synchronized limiting techniques for systems of conservation laws, we constrain the density, momentum, and total energy in a sequential manner which guarantees positivity preservation for the pressure and internal energy. After the density limiting step, the total energy and momentum gradients are adjusted to incorporate the irreversible effect of density changes. Antidiffusive corrections to bounds-compatible low-order approximations are limited to satisfy inequality constraints for the specific total and kinetic energy. An accuracy-preserving smoothness indicator is introduced to gradually adjust lower bounds for the element-based correction factors. The employed smoothness criterion is based on a Hessian determinant test for the density. A numerical study is performed for test problems with smooth and discontinuous solutions.

  15. AMTD: Update of Engineering Specifications Derived from Science Requirements for Future UVOIR Space Telescopes

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip; Postman, Marc; Mosier, Gary; Smith, W. Scott; Blaurock, Carl; Ha, Kong; Stark, Christopher C.

    2014-01-01

    The Advanced Mirror Technology Development (AMTD) project is in Phase 2 of a multiyear effort, initiated in FY12, to mature, by at least a half TRL step, six critical technologies required to enable 4 meter or larger UVOIR space telescope primary mirror assemblies for both general astrophysics and ultra-high contrast observations of exoplanets. AMTD uses a science-driven systems engineering approach. We mature technologies required to enable the highest priority science AND provide a high-performance low-cost low-risk system. To give the science community options, we are pursuing multiple technology paths. A key task is deriving engineering specifications for advanced normal-incidence monolithic and segmented mirror systems needed to enable both general astrophysics and ultra-high contrast observations of exoplanets missions as a function of potential launch vehicles and their mass and volume constraints. A key finding of this effort is that the science requires an 8 meter or larger aperture telescope.

  16. Semilocal Exchange Energy Functional for Two-Dimensional Quantum Systems: A Step Beyond Generalized Gradient Approximations.

    PubMed

    Jana, Subrata; Samal, Prasanjit

    2017-06-29

    Semilocal density functionals for the exchange-correlation energy of electrons are extensively used as they produce realistic and accurate results for finite and extended systems. The choice of techniques plays a crucial role in constructing such functionals of improved accuracy and efficiency. An accurate and efficient semilocal exchange energy functional in two dimensions is constructed by making use of the corresponding hole, which is derived based on the density matrix expansion. The exchange hole involved is localized under the generalized coordinate transformation and satisfies all the relevant constraints. Comprehensive testing and excellent performance of the functional are demonstrated versus exact exchange results. The accuracy of results obtained by using the newly constructed functional is quite remarkable as it substantially reduces the errors present in the local and nonempirical exchange functionals proposed so far for two-dimensional quantum systems. The underlying principles involved in the functional construction are physically appealing and hold promise for developing range-separated and nonlocal exchange functionals in two dimensions.

  17. Parameter Estimation with Almost No Public Communication for Continuous-Variable Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Lupo, Cosmo; Ottaviani, Carlo; Papanastasiou, Panagiotis; Pirandola, Stefano

    2018-06-01

    One crucial step in any quantum key distribution (QKD) scheme is parameter estimation. In a typical QKD protocol the users have to sacrifice part of their raw data to estimate the parameters of the communication channel, such as the error rate. This introduces a trade-off between the secret key rate and the accuracy of parameter estimation in the finite-size regime. Here we show that continuous-variable QKD is not subject to this constraint, as the whole raw key can be used for both parameter estimation and secret key generation without compromising security. First, we show that this property holds for measurement-device-independent (MDI) protocols, as a consequence of the fact that in an MDI protocol the correlations between Alice and Bob are postselected by the measurement performed by an untrusted relay. This result is then extended beyond the MDI framework by exploiting the fact that MDI protocols can simulate device-dependent one-way QKD with arbitrarily high precision.
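
    As a concrete illustration of channel parameter estimation from quadrature data, the sketch below computes covariance-based estimates of the channel gain and excess noise; the variable names and the unit shot-noise normalization are assumptions for illustration, not the protocol's specification. In an MDI-type protocol, per the abstract, the same samples would serve both this estimation and key generation.

    ```python
    import numpy as np

    def estimate_channel(x_a, x_b, v_shot=1.0):
        """Hedged sketch: estimate the linear channel gain t and the excess
        noise from Alice's Gaussian modulation x_a and Bob's quadrature
        measurements x_b, assuming x_b = t * x_a + noise (names hypothetical)."""
        t_hat = np.dot(x_a, x_b) / np.dot(x_a, x_a)   # least-squares channel gain
        resid = x_b - t_hat * x_a
        var_noise = resid.var()                       # total noise variance at Bob
        xi_hat = var_noise - v_shot                   # noise in excess of shot noise
        return t_hat, xi_hat
    ```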

  18. SU-E-T-417: The Impact of Normal Tissue Constraints On PTV Dose Homogeneity for Intensity Modulated Radiotherapy (IMRT), Volume Modulated Arc Therapy (VMAT) and Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, J; McDonald, D; Ashenafi, M

    2014-06-01

    Purpose: Complex intensity modulated arc therapy tends to spread low dose to normal tissue (NT) regions in order to obtain improved target conformity, homogeneity, and OAR sparing. This work evaluates the trade-offs between PTV homogeneity and reduction of the maximum dose (Dmax) spread to NT when planning IMRT, VMAT, and Tomotherapy. Methods: Ten prostate patients, previously planned with step-and-shoot IMRT, were selected. To fairly evaluate how PTV homogeneity was affected by NT Dmax constraints, the original IMRT DVH objectives for PTV and OARs (femoral heads, and rectal and bladder wall) were applied to two VMAT plans in Pinnacle (V9.0) and to Tomotherapy (V4.2). The only constraint difference was the NT, defined as the body contour excluding targets, OARs, and dose rings. The NT Dmax constraint for the first VMAT plan was set to the prescription dose (Dp). For the second VMAT plan (VMAT-NT) and Tomotherapy, it was set to the Dmax achieved in IMRT (~70-80% of Dp). All NT constraints were set to the lowest priority. Three common homogeneity indices (HI) were calculated: RTOG-HI = Dmax/Dp, moderate-HI = D95%/D5%, and complex-HI = (D2% - D98%)/Dp * 100. Results: All modalities achieved similar dosimetric endpoints for PTV and OARs. The complex-HI showed the most variability, with average values of 5.9, 4.9, 9.3, and 6.1 for IMRT, VMAT, VMAT-NT, and Tomotherapy, respectively. VMAT provided the best PTV homogeneity without compromising any OAR/NT sparing. Both VMAT-NT and Tomotherapy, planned with more restrictive NT constraints, showed reduced homogeneity, with VMAT-NT showing the worst homogeneity (P < 0.0001) for all HI. Tomotherapy gave the lowest NT Dmax, with slightly decreased homogeneity compared to VMAT. Finally, there was no significant difference in NT Dmax or Dmean between VMAT and VMAT-NT. Conclusion: PTV HI is highly dependent on the permitted NT constraints. The results demonstrate that VMAT-NT with more restrictive NT constraints does not reduce NT Dmax, but instead yields higher Dmax and worse target homogeneity. Therefore, it is critical that planners do not use overly restrictive NT constraints during VMAT optimization. The Tomotherapy plan was not as sensitive to NT constraints; however, care should be taken to ensure NT is not pushed too hard. These results are relevant for clinical practice. The biological effect of higher Dmax and increased target heterogeneity needs further study.
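
    The three indices defined in the abstract can be computed directly from PTV voxel doses. The sketch below assumes Dx% denotes the minimum dose received by the hottest x% of the volume, i.e. the (100 - x)th percentile of the voxel-dose distribution, which is the usual convention.

    ```python
    import numpy as np

    def homogeneity_indices(dose, d_p):
        """Compute the three HIs named in the abstract.
        dose : 1D array of PTV voxel doses; d_p : prescription dose."""
        d2, d5, d95, d98 = np.percentile(dose, [98, 95, 5, 2])
        rtog_hi = dose.max() / d_p                 # RTOG-HI = Dmax / Dp
        moderate_hi = d95 / d5                     # moderate-HI = D95% / D5%
        complex_hi = (d2 - d98) / d_p * 100.0      # complex-HI = (D2% - D98%) / Dp * 100
        return rtog_hi, moderate_hi, complex_hi
    ```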

  19. Attentional Demands on Motor-Respiratory Coordination

    ERIC Educational Resources Information Center

    Hessler, Eric E.; Amazeen, Polemnia G.

    2009-01-01

    Athletic performance requires the pacing of breathing with exercise, known as motor-respiratory coordination (MRC). In this study, we added cognitive and physical constraints while participants intentionally controlled their breathing locations during rhythmic arm movement. This is the first study to examine a cognitive constraint on MRC.…

  20. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it retains the explicit computational cycle while conserving energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as the Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration, and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly, to round-off, for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of explicit PIC; only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are assessed. Highlights: • We present a new fully energy conserving semi-implicit PIC method, ECSIM, based on the implicit moment method (IMM). • Unlike any of its predecessors, it retains the explicit computational cycle while conserving energy exactly. • It is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • It eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.
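
    For context, the "explicit computational cycle" that ECSIM retains is the familiar gather-push-scatter loop, sketched below for a 1D electrostatic case. This is a schematic of explicit PIC only, with illustrative names and unit particle weights; it does not show the ECSIM mass-matrix field solver, which is where the methods differ.

    ```python
    import numpy as np

    def explicit_pic_step(x, v, qm, E_grid, dx, dt, L):
        """One explicit 1D electrostatic PIC cycle (schematic sketch).
        x, v : particle positions and velocities; qm : charge-to-mass ratio."""
        n = E_grid.size
        # 1) Gather: interpolate the grid field to particles (linear weighting).
        g = x / dx
        i = np.floor(g).astype(int)
        w = g - i
        E_p = (1 - w) * E_grid[i % n] + w * E_grid[(i + 1) % n]
        # 2) Push: leapfrog update of velocities and (periodic) positions.
        v = v + qm * E_p * dt
        x = (x + v * dt) % L
        # 3) Scatter: deposit charge back to the grid; a field solve would follow.
        rho = np.zeros(n)
        g = x / dx
        i = np.floor(g).astype(int)
        w = g - i
        np.add.at(rho, i % n, 1 - w)
        np.add.at(rho, (i + 1) % n, w)
        return x, v, rho
    ```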

  1. Iterative repair for scheduling and rescheduling

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Deale, Michael

    1991-01-01

    An iterative repair search method called constraint-based simulated annealing is described. Simulated annealing is a hill climbing search technique capable of escaping local minima. The utility of the constraint-based framework is shown by comparing search performance with and without the constraint framework on a suite of randomly generated problems. Results are also shown of applying the technique to the NASA Space Shuttle ground processing problem. These experiments show that the search method scales to complex, real-world problems and exhibits interesting anytime behavior.
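
    A minimal sketch of iterative repair via simulated annealing is shown below: the cost is the number (or weight) of violated constraints, each move repairs a randomly chosen violation, and worsening moves are accepted with Boltzmann probability. The function names and cooling schedule are illustrative, not those of the NASA system.

    ```python
    import math
    import random

    def anneal_repair(schedule, violations, repair_move, t0=10.0, cooling=0.95, steps=5000):
        """Iterative repair with simulated annealing (illustrative sketch).
        violations(s) -> constraint-violation cost of schedule s
        repair_move(s) -> modified copy of s with one violation locally repaired"""
        cost = violations(schedule)
        temp = t0
        for _ in range(steps):
            if cost == 0:
                break                                   # all constraints satisfied
            candidate = repair_move(schedule)
            delta = violations(candidate) - cost
            # Accept improving moves always; worsening moves with Boltzmann probability.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                schedule, cost = candidate, cost + delta
            temp *= cooling                             # anneal the temperature
        return schedule, cost
    ```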

  2. Development of an LSI maximum-likelihood convolutional decoder for advanced forward error correction capability on the NASA 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Clark, R. T.; Mccallister, R. D.

    1982-01-01

    The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
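
    A maximum-likelihood decoder of this class is conventionally realized with the Viterbi algorithm. The sketch below uses constraint length five and rate one-half with an illustrative generator pair (octal 23, 35) and hard-decision Hamming metrics; the paper's exact code polynomials and soft-decision details are not reproduced here.

    ```python
    G = (0o23, 0o35)            # illustrative K=5, rate-1/2 generators (not from the paper)
    K = 5
    N_STATES = 1 << (K - 1)

    def parity(x):
        return bin(x).count('1') & 1

    def conv_encode(bits, state=0):
        """Reference encoder matching the decoder's shift-register conventions."""
        out = []
        for b in bits:
            reg = (b << (K - 1)) | state          # newest bit enters at the top
            out += [parity(reg & G[0]), parity(reg & G[1])]
            state = reg >> 1
        return out

    def viterbi_decode(rx):
        """Hard-decision Viterbi decoding with a Hamming branch metric."""
        INF = 10**9
        metric = [0] + [INF] * (N_STATES - 1)     # start in the all-zero state
        paths = [[] for _ in range(N_STATES)]
        for t in range(len(rx) // 2):
            r0, r1 = rx[2 * t], rx[2 * t + 1]
            new_metric = [INF] * N_STATES
            new_paths = [None] * N_STATES
            for s in range(N_STATES):
                if metric[s] == INF:
                    continue
                for b in (0, 1):                   # hypothesized input bit
                    reg = (b << (K - 1)) | s
                    m = metric[s] + (parity(reg & G[0]) != r0) + (parity(reg & G[1]) != r1)
                    ns = reg >> 1                  # next shift-register state
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [b]
            metric, paths = new_metric, new_paths
        return paths[min(range(N_STATES), key=metric.__getitem__)]

    # Round trip: decoding the noiselessly encoded stream recovers the message.
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    assert viterbi_decode(conv_encode(msg)) == msg
    ```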

  3. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
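
    The decomposition idea can be made concrete for a single Gaussian chance constraint: allocating individual risk bounds delta_i whose sum stays below the overall risk bound (Boole's inequality) yields a sufficient condition, and each individual constraint tightens deterministically via the Gaussian quantile. The sketch below is a generic illustration of this standard tightening, not the paper's bounding method.

    ```python
    from math import sqrt

    import numpy as np
    from scipy.stats import norm

    def tighten(a, b, Sigma, delta_i):
        """Convert one Gaussian chance constraint P(a . x > b) <= delta_i,
        with x ~ N(mu, Sigma), into a deterministic bound on the mean:
        enforce a @ mu <= tightened bound (illustrative sketch)."""
        margin = norm.ppf(1.0 - delta_i) * sqrt(a @ Sigma @ a)
        return b - margin

    # Example: 2D state, keep P(x_1 > 5) below 1e-3 by constraining the mean.
    a = np.array([1.0, 0.0])
    Sigma = np.array([[0.04, 0.0], [0.0, 0.04]])
    b_tight = tighten(a, 5.0, Sigma, 1e-3)    # enforce a @ mu <= b_tight
    ```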

  4. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms

    PubMed Central

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    The power grid is becoming smarter along with technological development, and the benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies are made to reconfigure a conventional network into a smart grid. Amongst all renewable sources, solar power takes the prominent position due to its abundant availability. The methodology proposed in this paper aims at minimizing network power losses and improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. Combined nature-inspired algorithms are presented for optimal location and sizing of DGs via a two-step optimization technique. In the first step, the best DG size is determined through PSO metaheuristics, and the PSO results are tested for reverse power flow by a negative load approach to find possible bus locations. Optimal locations are then found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios varying the number of DGs (3, 4, and 5) and their PQ capacities (P alone, Q alone, and both P and Q) are also analyzed. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology. PMID:27057557
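
    A generic PSO loop of the kind used in the sizing step is sketched below; the `loss` callback stands in for a load-flow evaluation mapping candidate DG sizes to network power loss, and all hyperparameters are illustrative rather than the paper's settings.

    ```python
    import numpy as np

    def pso_minimize(loss, lb, ub, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """Generic particle swarm optimization sketch (not the paper's exact variant)."""
        rng = np.random.default_rng(0)
        dim = len(lb)
        x = rng.uniform(lb, ub, (n_particles, dim))     # positions: candidate DG sizes
        v = np.zeros_like(x)
        pbest = x.copy()
        pbest_f = np.array([loss(p) for p in x])
        g = pbest[np.argmin(pbest_f)]                   # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lb, ub)                  # respect sizing limits
            f = np.array([loss(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            g = pbest[np.argmin(pbest_f)]
        return g, pbest_f.min()
    ```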

  5. Optimal Siting and Sizing of Multiple DG Units for the Enhancement of Voltage Profile and Loss Minimization in Transmission Systems Using Nature Inspired Algorithms.

    PubMed

    Ramamoorthy, Ambika; Ramachandran, Rajeswari

    2016-01-01

    The power grid is becoming smarter along with technological development, and the benefits of a smart grid can be enhanced through the integration of renewable energy sources. In this paper, several studies are made to reconfigure a conventional network into a smart grid. Amongst all renewable sources, solar power takes the prominent position due to its abundant availability. The methodology proposed in this paper aims at minimizing network power losses and improving voltage stability within the framework of system operation and security constraints in a transmission system. The locations and capacities of DGs have a significant impact on system losses in a transmission system. Combined nature-inspired algorithms are presented for optimal location and sizing of DGs via a two-step optimization technique. In the first step, the best DG size is determined through PSO metaheuristics, and the PSO results are tested for reverse power flow by a negative load approach to find possible bus locations. Optimal locations are then found by the Loss Sensitivity Factor (LSF) and weak (WK) bus methods, and the results are compared. In the second step, optimal sizing of DGs is determined by PSO, GSA, and hybrid PSOGSA algorithms. Apart from optimal sizing and siting of DGs, different scenarios varying the number of DGs (3, 4, and 5) and their PQ capacities (P alone, Q alone, and both P and Q) are also analyzed. A detailed performance analysis is carried out on the IEEE 30-bus system to demonstrate the effectiveness of the proposed methodology.

  6. A qualitative evaluation of a physician-delivered pedometer-based step count prescription strategy with insight from participants and treating physicians.

    PubMed

    Cooke, Alexandra B; Pace, Romina; Chan, Deborah; Rosenberg, Ellen; Dasgupta, Kaberi; Daskalopoulou, Stella S

    2018-05-01

    The integration of pedometers into clinical practice has the potential to enhance physical activity levels in patients with chronic disease. Our SMARTER randomized controlled trial demonstrated that a physician-delivered step count prescription strategy has measurable effects on daily steps, glycemic control, and insulin resistance in patients with type 2 diabetes and/or hypertension. In this study, we aimed to understand perceived barriers and facilitators influencing successful uptake and sustainability of the strategy, from patient and physician perspectives. Qualitative in-depth interviews were conducted in a purposive sample of physicians (n = 10) and participants (n = 20), including successful and less successful cases in terms of pedometer-assessed step count improvements. Themes that achieved saturation in either group through thematic analysis are presented. All participants appreciated the pedometer-based monitoring combined with step count prescriptions. Accountability to physicians and support offered by the trial coordinator influenced participant motivation. Those who increased step counts adopted strategies to integrate more steps into their routines and were able to overcome weather-related barriers by finding indoor alternative options to outdoor steps. Those who decreased step counts reported difficulty in overcoming weather-related challenges, health limitations and work constraints. Physicians indicated the strategy provided a framework for discussing physical activity and motivating patients, but emphasized the need for support from allied professionals to help deliver the strategy in busy clinical settings. A physician-delivered step count prescription strategy was feasibly integrated into clinical practice and successful in engaging most patients; however, continual support is needed for maximal engagement and sustained use. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model.

    PubMed

    Yang, Liang; Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the availability of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode the must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and in effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link, and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model, to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but also, most importantly, reveal the roles of pairwise constraints: though must-link is more important than cannot-link when only one of them is available, both must-link and cannot-link are equally important when both are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection.
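
    The weighted nonnegative matrix factorization at the core of the formulation can be sketched with standard multiplicative updates, as below. Here the weight matrix W stands in for the confidence weighting of topology and pairwise-constraint terms; the exact objective of MMGG may differ from this generic form.

    ```python
    import numpy as np

    def weighted_nmf(A, W, k, iters=200, eps=1e-9):
        """Weighted NMF sketch: minimize ||W * (A - U V)||_F^2 with U, V >= 0,
        via multiplicative updates ('*' is elementwise; names illustrative)."""
        rng = np.random.default_rng(0)
        n, m = A.shape
        U = rng.random((n, k))
        V = rng.random((k, m))
        for _ in range(iters):
            WA = W * A
            U *= (WA @ V.T) / ((W * (U @ V)) @ V.T + eps)
            V *= (U.T @ WA) / (U.T @ (W * (U @ V)) + eps)
        return U, V
    ```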

  8. Exploring the roles of cannot-link constraint in community detection via Multi-variance Mixed Gaussian Generative Model

    PubMed Central

    Ge, Meng; Jin, Di; He, Dongxiao; Fu, Huazhu; Wang, Jing; Cao, Xiaochun

    2017-01-01

    Due to the demand for performance improvement and the availability of prior information, semi-supervised community detection with pairwise constraints has become a hot topic. Most existing methods successfully encode the must-link constraints but neglect the opposite ones, i.e., the cannot-link constraints, which can force exclusion between nodes. In this paper, we are interested in understanding the role of cannot-link constraints and in effectively encoding pairwise constraints. Towards these goals, we define an integral generative process jointly considering the network topology, must-link, and cannot-link constraints. We propose to characterize this process as a Multi-variance Mixed Gaussian Generative (MMGG) Model, to address the diverse degrees of confidence that exist in network topology and pairwise constraints, and formulate it as a weighted nonnegative matrix factorization problem. The experiments on artificial and real-world networks not only illustrate the superiority of our proposed MMGG but also, most importantly, reveal the roles of pairwise constraints: though must-link is more important than cannot-link when only one of them is available, both must-link and cannot-link are equally important when both are available. To the best of our knowledge, this is the first work on discovering and exploring the importance of cannot-link constraints in semi-supervised community detection. PMID:28678864

  9. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    PubMed

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
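
    For reference, one common convention for the Irving-Kirkwood local stress with a pairwise force decomposition is shown below; a central force decomposition takes each f_ij parallel to r_ij, which makes the potential part of the tensor symmetric by construction.

    ```latex
    % Irving-Kirkwood local stress (one common sign convention), with
    % r_ij = r_i - r_j and f_ij the pairwise contribution to the force on i:
    \begin{equation}
      \boldsymbol{\sigma}(\mathbf{r}) =
      -\sum_i m_i\, \mathbf{v}_i \otimes \mathbf{v}_i\, \delta(\mathbf{r}-\mathbf{r}_i)
      + \frac{1}{2}\sum_{i \neq j} \mathbf{f}_{ij} \otimes \mathbf{r}_{ij}
        \int_0^1 \delta\big(\mathbf{r} - \mathbf{r}_j - s\,\mathbf{r}_{ij}\big)\, ds .
    \end{equation}
    ```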

  10. Swarm robotics and minimalism

    NASA Astrophysics Data System (ADS)

    Sharkey, Amanda J. C.

    2007-09-01

    Swarm Robotics (SR) is closely related to Swarm Intelligence, and both were initially inspired by studies of social insects. Their guiding principles are based on their biological inspiration and take the form of an emphasis on decentralized local control and communication. Earlier studies went a step further in emphasizing the use of simple reactive robots that only communicate indirectly through the environment. More recently SR studies have moved beyond these constraints to explore the use of non-reactive robots that communicate directly, and that can learn and represent their environment. There is no clear agreement in the literature about how far such extensions of the original principles could go. Should there be any limitations on the individual abilities of the robots used in SR studies? Should knowledge of the capabilities of social insects lead to constraints on the capabilities of individual robots in SR studies? There is a lack of explicit discussion of such questions, and researchers have adopted a variety of constraints for a variety of reasons. A simple taxonomy of swarm robotics is presented here with the aim of addressing and clarifying these questions. The taxonomy distinguishes subareas of SR based on the emphases and justifications for minimalism and individual simplicity.

  11. Goldratt's thinking process applied to the budget constraints of a Texas MHMR facility.

    PubMed

    Taylor, Lloyd J; Churchwell, Lana

    2004-01-01

    Managers have known for years that the best way to run a business is to constantly look for ways to improve how they do business. The barrier has been the ability to identify and solve the right problems. Eliyahu Goldratt (1992c), in his book The Goal, uses a love story format to illustrate his "Theory of Constraints." In Goldratt's (1994) next book, It's Not Luck, he further illustrates this powerful technique, called "The Thinking Process," which is based on the Socratic method and uses "if ... then" reasoning. The first step is to identify UDEs, or undesirable effects, within the organization and then use these UDEs to create a Current Reality Tree (CRT), which helps to identify the core problem. Next, an Evaporating Cloud is used to come up with ideas and a way to break the constraint. Finally, the injections in the Evaporating Cloud are used to create a Future Reality Tree, further validating the idea and making sure it does not create any negative effects. In this article, the "Thinking Process" is used to identify and solve problems related to the General Medical Department of an MHMR State Hospital.

  12. Trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Mease, Kenneth D.; Vanburen, Mark A.

    1989-01-01

    The first step in the approach to developing guidance laws for a horizontal take-off, air breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum-fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general-purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96, was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an afterburning turbojet, a ramjet and a scramjet, are used in the air breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.

  13. Thermodynamic Analysis of Chemically Reacting Mixtures-Comparison of First and Second Order Models.

    PubMed

    Pekař, Miloslav

    2018-01-01

    Recently, a method based on non-equilibrium continuum thermodynamics, which derives thermodynamically consistent reaction rate models together with thermodynamic constraints on their parameters, was analyzed using a triangular reaction scheme. The scheme was kinetically of the first order. Here, the analysis is further developed for several first and second order schemes to gain a deeper insight into the thermodynamic consistency of rate equations and the relationships between chemical thermodynamics and kinetics. It is shown that the thermodynamic constraints on the so-called proper rate coefficients are usually simple sign restrictions consistent with the supposed reaction directions. Constraints on the so-called coupling rate coefficients are more complex and weaker. This means more freedom in kinetic coupling between reaction steps in a scheme, i.e., in the kinetic effects of other reactions on the rate of some reaction in a reacting system. When compared with traditional mass-action rate equations, the method allows a reduction in the number of traditional rate constants to be evaluated from data, i.e., a reduction in the dimensionality of the parameter estimation problem. This is due to identifying relationships between mass-action rate constants (relationships which also include thermodynamic equilibrium constants) which have so far been unknown.
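
    The distinction between proper and coupling coefficients can be illustrated in the linear near-equilibrium regime, where rates are written in the affinities A_k: the sign restriction sits on the diagonal, while the coupling terms obey only a weaker bound tied to the diagonal. This linear sketch is illustrative only; the paper's rate equations are more general than this form.

    ```latex
    % Linear force-flux sketch (not the paper's full rate model):
    \begin{equation}
      r_j = \sum_k L_{jk} A_k , \qquad L_{jj} \ge 0 , \qquad
      L_{jj} L_{kk} \ge \tfrac{1}{4}\left(L_{jk} + L_{kj}\right)^{2} .
    \end{equation}
    ```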

  14. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation from other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
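
    The queue-buffered pipeline structure described above can be sketched in a few lines. This is a generic threads-and-queues illustration of the text-processing example (strip markup, tokenize, tag), not the MIF API itself; the queues play the burst-buffering role that the MIF run-time provides automatically.

    ```python
    import queue
    import threading

    def stage(fn, q_in, q_out):
        """One pipeline element: consume items, process, forward downstream."""
        while True:
            item = q_in.get()
            if item is None:                  # shutdown sentinel propagates downstream
                if q_out is not None:
                    q_out.put(None)
                break
            result = fn(item)
            if q_out is not None:
                q_out.put(result)

    def strip_markup(doc):
        return doc.replace('<p>', ' ').replace('</p>', ' ')

    def tokenize(text):
        return text.split()

    def tag(tokens):
        print([(t, 'Name' if t.istitle() else 'other') for t in tokens])

    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threads = [threading.Thread(target=stage, args=args) for args in
               [(strip_markup, q1, q2), (tokenize, q2, q3), (tag, q3, None)]]
    for t in threads:
        t.start()
    q1.put('<p>Alice visited Paris</p>')
    q1.put(None)                              # signal end of stream
    for t in threads:
        t.join()
    ```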

  15. Constraints on Ho from Time-Delay Measurements of PG1115+080

    NASA Technical Reports Server (NTRS)

    Chartas, George

    2003-01-01

    The observations that were performed as part of the award titled "Constraints on Ho From Time-Delay Measurements of PG1115+080" resulted in several scientific publications and presentations. We list these publications and presentations and provide a brief description of the important science presented in them.

  16. Simulations of the heat exchange in thermoplastic injection molds manufactured by additive techniques

    NASA Astrophysics Data System (ADS)

    Daldoul, Wafa; Toulorge, Thomas; Vincent, Michel

    2017-10-01

    The cost and quality of complex parts manufactured by thermoplastic injection is traditionally limited by design constraints on the cooling system of the mold. A possible solution is to create the mold by additive manufacturing, which makes it possible to freely design the cooling channels. Such molds normally contain hollow parts (alveoli) in order to decrease their cost. However, the complex geometry of the cooling channels and the alveoli makes it difficult to predict the performance of the cooling system. This work aims to compute the heat exchanges between the polymer, the mold and cooling channels with complex geometries. An Immersed Volume approach is taken, where the different parts of the domain (i.e. the polymer, the cooling channels, the alveoli and the mold) are represented by level-sets and the thermo-mechanical properties of the materials vary smoothly at the interface between the parts. The energy and momentum equations are solved by a stabilized Finite Element method. In order to accurately resolve the large variations of material properties and the steep temperature gradients at interfaces, state-of-the-art anisotropic mesh refinement techniques are employed. The filling stage of the process is neglected. In a first step, only the heat equation is solved, so that the packing stage is also disregarded. In a second step, thermo-mechanical effects occurring in the polymer during the packing stage are taken into account, which results in the injection of an additional amount of polymer that significantly influences the temperature evolution. The method is validated on the simple geometry of a center-gated disk and compared with experimental measurements; the agreement is very good. Simulations are performed on an industrial case, which illustrates the ability of the method to deal with complex geometries.
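
    The Immersed Volume idea of smoothing material properties across a level-set interface can be sketched as below; the smeared-Heaviside blend and the property values are illustrative assumptions, not the paper's mixing law.

    ```python
    import numpy as np

    def smoothed_property(phi, val_in, val_out, eps):
        """Blend a material property smoothly across the interface phi = 0
        using a smeared Heaviside of width eps (illustrative sketch)."""
        H = np.clip(0.5 * (1.0 + phi / eps + np.sin(np.pi * phi / eps) / np.pi), 0.0, 1.0)
        return val_out * H + val_in * (1.0 - H)

    # Example: thermal conductivity on a 1D slice where phi < 0 is the polymer.
    x = np.linspace(0.0, 1.0, 101)
    phi = x - 0.4                      # signed distance to the polymer/mold interface
    k = smoothed_property(phi, val_in=0.2, val_out=30.0, eps=0.05)  # W/(m K), illustrative
    ```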

  17. Evaluation and inversion of a net ecosystem carbon exchange model for grasslands and croplands

    NASA Astrophysics Data System (ADS)

    Herbst, M.; Klosterhalfen, A.; Weihermueller, L.; Graf, A.; Schmidt, M.; Huisman, J. A.; Vereecken, H.

    2017-12-01

    A one-dimensional soil water, heat, and CO2 flux model (SOILCO2), a pool concept of soil carbon turnover (RothC), and a crop growth module (SUCROS) were coupled to predict the net ecosystem exchange (NEE) of carbon. This model, further referred to as AgroC, was extended with routines for managed grassland as well as for root exudation and root decay. In a first step, the coupled model was applied to two winter wheat sites and one upland grassland site in Germany. The model was calibrated based on soil water content, soil temperature, biometric, and soil respiration measurements for each site, and validated in terms of hourly NEE measured with the eddy covariance technique. The overall model performance of AgroC was acceptable, with a model efficiency >0.78 for NEE. In a second step, AgroC was optimized against the eddy covariance NEE measurements to examine the effect of various objective functions, constraints, and data transformations on estimated NEE, which showed a distinct sensitivity to the choice of objective function and the inclusion of soil respiration data in the optimization process. Both daytime and nighttime fluxes were found to be sensitive to the selected optimization strategy. Additional consideration of soil respiration measurements improved the simulation of small positive fluxes remarkably. Even though the model performance of the selected optimization strategies did not diverge substantially, the resulting annual NEE differed substantially. We conclude that data transformations, the definition of objective functions, and data sources have to be considered carefully when using a terrestrial ecosystem model to determine carbon balances by means of eddy covariance measurements.
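
    The point about including soil respiration in the optimization can be made concrete with a multi-source objective. The sketch below is a generic weighted sum of normalized residuals with hypothetical names, not the AgroC interface.

    ```python
    import numpy as np

    def combined_objective(params, model, nee_obs, rsoil_obs, w_nee=1.0, w_rsoil=1.0):
        """Weighted multi-source objective for model-data fusion (illustrative):
        normalized squared residuals of NEE and soil respiration, summed."""
        nee_sim, rsoil_sim = model(params)            # run the coupled model
        j_nee = np.nanmean(((nee_sim - nee_obs) / np.nanstd(nee_obs)) ** 2)
        j_rsoil = np.nanmean(((rsoil_sim - rsoil_obs) / np.nanstd(rsoil_obs)) ** 2)
        return w_nee * j_nee + w_rsoil * j_rsoil
    ```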

  18. Deformable registration of x-ray to MRI for post-implant dosimetry in prostate brachytherapy

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Song, Danny Y.; Lee, Junghoon

    2016-03-01

    Post-implant dosimetric assessment in prostate brachytherapy is typically performed using CT as the standard imaging modality. However, poor soft tissue contrast in CT causes significant variability in target contouring, resulting in incorrect dose calculations for organs of interest. A CT-MR fusion-based approach has been advocated, taking advantage of the complementary capabilities of CT (seed identification) and MRI (soft tissue visibility), and has proved to provide more accurate dosimetry calculations. However, seed segmentation in CT requires manual review, and the accuracy is limited by the reconstructed voxel resolution. In addition, CT deposits a considerable amount of radiation in the patient. In this paper, we propose an X-ray- and MRI-based post-implant dosimetry approach. Implanted seeds are localized using three X-ray images by solving a combinatorial optimization problem, and the identified seeds are registered to MR images by an intensity-based points-to-volume registration. We pre-process the MR images using geometric and Gaussian filtering. To accommodate potential soft tissue deformation, our registration is performed in two steps: an initial affine transformation followed by local deformable registration. An evolutionary optimizer in conjunction with a points-to-volume similarity metric is used for the affine registration. Local prostate deformation and seed migration are then adjusted by the deformable registration step with external and internal force constraints. We tested our algorithm on six patient data sets, achieving a registration error of 1.2 ± 0.8 mm in less than 30 s. Our proposed approach has the potential to be a fast and cost-effective solution for post-implant dosimetry with accuracy equivalent to the CT-MR fusion-based approach.
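
    A points-to-volume similarity metric of the kind described can be sketched as a sum of image intensities sampled at the transformed seed locations; the function below is an illustrative assumption (names and the choice of linear interpolation included), not the paper's implementation.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def points_to_volume_similarity(seed_pts, transform, mr_volume):
        """Sum the (pre-filtered) MR intensities at transformed seed locations;
        an optimizer would maximize this over the transform parameters.
        transform maps X-ray-reconstructed seed coordinates to MR voxel space."""
        pts = np.array([transform(p) for p in seed_pts])        # (n, 3) voxel coords
        vals = map_coordinates(mr_volume, pts.T, order=1, mode='nearest')
        return vals.sum()
    ```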

  19. Wind farm topology-finding algorithm considering performance, costs, and environmental impacts.

    PubMed

    Tazi, Nacef; Chatelet, Eric; Bouzidi, Youcef; Meziane, Rachid

    2017-06-05

    Determining the optimal installed power in wind farms has become a pressing problem for investors and decision makers; onshore wind farms are subject to performance, economic, and environmental constraints. The aim of this work is to define the best installed capacity (best topology) with maximum performance and profits while also considering environmental impacts. In this article, we continue recent work on a wind farm topology-finding algorithm. The proposed resolution technique is based on finding the topology of the system that maximizes wind farm performance (availability) under the constraints of costs and capital investments. The global warming potential of the wind farm is calculated and taken into account in the results. A case study is done using data and constraints similar to those collected from wind farm constructors, managers, and maintainers. Multi-state systems (MSS), the universal generating function (UGF), and wind and load charge functions are applied. An economic study was conducted to assess the wind farm investment; net present value (NPV) and levelized cost of energy (LCOE) were calculated for the best topologies found.
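
    The LCOE used in such economic studies follows the textbook definition of discounted lifetime costs over discounted lifetime energy. The sketch below implements that standard definition with illustrative numbers, not the paper's specific cost model.

    ```python
    def lcoe(invest, om_costs, energy, rate):
        """Levelized cost of energy: discounted costs over discounted energy.
        invest : upfront cost at t = 0
        om_costs[t], energy[t] : yearly O&M cost and energy yield
        rate : discount rate"""
        disc_cost = invest + sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(om_costs))
        disc_energy = sum(e / (1 + rate) ** (t + 1) for t, e in enumerate(energy))
        return disc_cost / disc_energy            # e.g. EUR per kWh

    # Example: 2 MEUR turbine, 20 years, 50 kEUR/yr O&M, 4 GWh/yr, 5% discount rate.
    print(lcoe(2e6, [5e4] * 20, [4e6] * 20, 0.05))
    ```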

  20. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
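
    The SCG approach rests on the cross-gradient function commonly attributed to Gallardo and Meju, which the joint inversion drives toward zero so that the resistivity and velocity models share spatial structure:

    ```latex
    \begin{equation}
      \mathbf{t}(\mathbf{r}) = \nabla m_1(\mathbf{r}) \times \nabla m_2(\mathbf{r}) ,
    \end{equation}
    % t = 0 wherever the two model gradients are parallel or one of them vanishes,
    % i.e. wherever the models exhibit structurally similar spatial variation.
    ```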
