Sample records for time step constraint

  1. A two steps solution approach to solving large nonlinear models: application to a problem of conjunctive use.

    PubMed

    Vieira, J; Cunha, M C

    2011-01-01

    This article describes a method for solving large nonlinear problems in two steps. The two-step solution approach takes advantage of handling smaller and simpler models and of having better starting points to improve solution efficiency. The set of nonlinear constraints (named the complicating constraints) that makes the solution of the model complex and time consuming is eliminated from step one. The complicating constraints are added only in the second step, so that a solution of the complete model is then found. The solution method is applied to a large-scale problem of conjunctive use of surface water and groundwater resources. The results are compared with solutions determined by solving the complete model directly in a single step. In all examples the two-step solution approach allowed a significant reduction of the computation time. This gain in efficiency can be extremely important for work in progress, and it can be particularly useful where computation time is a critical factor in obtaining an optimized solution in due time.
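
The two-step idea maps naturally onto off-the-shelf solvers. A minimal sketch with SciPy (a toy objective and a hypothetical "complicating" circle constraint, not the paper's conjunctive-use model): solve the relaxed model first, then warm-start the complete model from that solution.

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

def cost(x):
    # Toy convex objective standing in for the full nonlinear model.
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2

# Hypothetical "complicating" nonlinear constraint: x must lie on a circle.
complicating = NonlinearConstraint(lambda x: x[0] ** 2 + x[1] ** 2, 4.0, 4.0)

# Step one: solve the simplified model with the complicating constraint removed.
step1 = minimize(cost, x0=np.zeros(2))

# Step two: solve the complete model, warm-started at the step-one solution.
step2 = minimize(cost, x0=step1.x, constraints=[complicating],
                 method="trust-constr")
print(step1.x, step2.x)
```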

  2. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE PAGES

    Chen, Bo; Chen, Chen; Wang, Jianhui; ...

    2017-07-07

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.
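
As a concrete illustration of the multi-time-step structure, here is a toy restoration MILP in Python using the PuLP library (hypothetical loads and capacities; the paper's network-flow, DG/ESS, and CLPU constraints are omitted). The inter-temporal constraint keeps a load energized once it is picked up.

```python
import pulp

T = range(4)                            # time steps
loads = {"A": 40, "B": 25, "C": 15}     # load sizes (kW), hypothetical
cap = {0: 30, 1: 55, 2: 80, 3: 80}      # available generation per step (kW)

prob = pulp.LpProblem("restoration", pulp.LpMaximize)
x = pulp.LpVariable.dicts("on", (loads, T), cat="Binary")

# Maximize served energy over the restoration horizon.
prob += pulp.lpSum(loads[i] * x[i][t] for i in loads for t in T)

for t in T:
    # Generation capacity limit at each time step.
    prob += pulp.lpSum(loads[i] * x[i][t] for i in loads) <= cap[t]
    for i in loads:
        if t > 0:
            # Inter-temporal constraint: once energized, stay energized.
            prob += x[i][t] >= x[i][t - 1]

prob.solve(pulp.PULP_CBC_CMD(msg=0))
for t in T:
    print(f"step {t}:", [i for i in loads if x[i][t].value() == 1])
```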

  3. Multi-Time Step Service Restoration for Advanced Distribution Systems and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bo; Chen, Chen; Wang, Jianhui

    Modern power systems are facing increased risk of disasters that can cause extended outages. The presence of remote control switches (RCSs), distributed generators (DGs), and energy storage systems (ESS) provides both challenges and opportunities for developing post-fault service restoration methodologies. Inter-temporal constraints of DGs, ESS, and loads under cold load pickup (CLPU) conditions impose extra complexity on problem formulation and solution. In this paper, a multi-time step service restoration methodology is proposed to optimally generate a sequence of control actions for controllable switches, ESSs, and dispatchable DGs to assist the system operator with decision making. The restoration sequence is determined to minimize the unserved customers by energizing the system step by step without violating operational constraints at each time step. The proposed methodology is formulated as a mixed-integer linear programming (MILP) model and can adapt to various operation conditions. Furthermore, the proposed method is validated through several case studies that are performed on modified IEEE 13-node and IEEE 123-node test feeders.

  4. Contact-aware simulations of particulate Stokesian suspensions

    NASA Astrophysics Data System (ADS)

    Lu, Libin; Rahimian, Abtin; Zorin, Denis

    2017-10-01

    We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
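
The contact-constraint idea can be caricatured in a few lines: take an explicit step, then project positions back to an interference-free configuration. A toy numpy sketch with rigid 2-D discs and a made-up velocity field (the paper works with boundary-integral Stokes flow and complementarity-based constraints):

```python
import numpy as np

def project_contacts(pos, radius, iters=50):
    """Push apart overlapping disc pairs until the configuration is contact-free."""
    n = len(pos)
    for _ in range(iters):
        clean = True
        for i in range(n):
            for j in range(i + 1, n):
                d = pos[j] - pos[i]
                dist = np.linalg.norm(d)
                overlap = 2.0 * radius - dist
                if overlap > 0.0:
                    clean = False
                    shift = 0.5 * overlap * d / dist
                    pos[i] -= shift
                    pos[j] += shift
        if clean:
            break
    return pos

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(20, 2))
vel = rng.normal(0.0, 1.0, size=(20, 2))     # stand-in for the Stokes solve
dt, radius = 1e-2, 0.04
for _ in range(100):
    pos += dt * vel                          # explicit time step
    pos = project_contacts(pos, radius)      # enforce the contact constraint
```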

  5. Kinematic constraints associated with the acquisition of overarm throwing part I: step and trunk actions.

    PubMed

    Stodden, David F; Langendorfer, Stephen J; Fleisig, Glenn S; Andrews, James R

    2006-12-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine potential kinematic constraints associated with skilled throwing acquisition. Results indicated stride length (69.3%) and time from stride foot contact to ball release (39.7%) provided substantial contributions to ball velocity (p < .001). All trunk kinematic measures increased significantly with increasing component levels (p < .001). Results suggest that trunk linear and rotational velocities, degree of trunk tilt, time from stride foot contact to ball release, and ball velocity represented potential control parameters and, therefore, constraints on overarm throwing acquisition.

  6. Asynchronous collision integrators: Explicit treatment of unilateral contact with friction and nodal restraints

    PubMed Central

    Wolff, Sebastian; Bucher, Christian

    2013-01-01

    This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment of overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. Special care is needed for the contact of nodes that are subject to restraints. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step; hence, the time step can be chosen independently of the underlying time-stepping scheme, and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
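
The scheduling skeleton of an asynchronous integrator is simple to sketch: each region carries its own stable step, and an event queue decides which region advances next. A toy Python version (scalar per-region states; the actual method applies variational drift/kick operators and contact handling at each event):

```python
import heapq

region_dt = {"fine_mesh": 0.001, "coarse_mesh": 0.01}   # per-region stable steps
state = {r: 0.0 for r in region_dt}                     # toy scalar states

events = [(dt, r) for r, dt in region_dt.items()]       # (next event time, region)
heapq.heapify(events)

t_end = 0.05
while events:
    t, r = heapq.heappop(events)
    if t > t_end:
        break
    state[r] += region_dt[r] * 1.0          # stand-in for a drift/kick update
    heapq.heappush(events, (t + region_dt[r], r))

print(state)   # the fine region has taken ten times as many steps
```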

  7. Evaluation of atomic pressure in the multiple time-step integration algorithm.

    PubMed

    Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu

    2017-04-15

    In molecular dynamics (MD) calculations, reduction in calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), enables reductions in calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints. It is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value from the STS, whereas pressures calculated using the conventional ad hoc equations deviate from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
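
For readers unfamiliar with RESPA, the structure of the integrator is worth seeing: expensive slow forces are applied as outer half-kicks, while cheap fast forces are integrated with a smaller inner step. A minimal numpy sketch with toy 1-D forces (the pressure-evaluation subtleties discussed above concern which of these intermediate forces enter the virial):

```python
import numpy as np

def fast_force(x):    # stiff, cheap force (e.g., bonded terms), toy
    return -100.0 * x

def slow_force(x):    # soft, expensive force (e.g., long-range terms), toy
    return -1.0 * x

x, v = np.array([1.0]), np.array([0.0])
dt_outer, n_inner = 0.01, 5
dt_inner = dt_outer / n_inner

for _ in range(1000):
    v += 0.5 * dt_outer * slow_force(x)      # outer half-kick (slow force)
    for _ in range(n_inner):                 # inner velocity-Verlet (fast force)
        v += 0.5 * dt_inner * fast_force(x)
        x += dt_inner * v
        v += 0.5 * dt_inner * fast_force(x)
    v += 0.5 * dt_outer * slow_force(x)      # outer half-kick (slow force)
```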

  8. Self-Paced and Temporally Constrained Throwing Performance by Team-Handball Experts and Novices without Foreknowledge of Target Position

    PubMed Central

    Rousanoglou, Elissavet N.; Noutsos, Konstantinos S.; Bayios, Ioannis A.; Boudolos, Konstantinos D.

    2015-01-01

    The fixed duration of a team-handball game and its continuously changing situations incorporate an inherent temporal pressure. Also, the target's position is not foreknown but determined online by the player's interceptive processing of visual information. These ecological limitations do not favour throwing performance, particularly in novice players, and are not reflected in previous experimental settings of self-paced throws with foreknowledge of target position. The study investigated self-paced and temporally constrained throwing performance without foreknowledge of target position, in team-handball experts and novices, in three shot types (Standing Shot, 3Step Shot, Jump Shot). The target position was randomly illuminated on a tabloid surface before (self-paced condition) and after (temporally constrained condition) shot initiation. Response time, throwing velocity and throwing accuracy were measured. A mixed 2 (experience) × 2 (temporal constraint condition) ANOVA was applied. The novices performed with significantly lower throwing velocity and worse throwing accuracy in all shot types (p = 0.000) and longer response time only in the 3Step Shot (p = 0.013). The temporal constraint (significantly shorter response times in all shot types at p = 0.000) had a shot-specific effect, with lower throwing velocity only in the 3Step Shot (p = 0.001) and an unexpected greater throwing accuracy only in the Standing Shot (p = 0.002). The significant interaction between experience and temporal constraint condition in throwing accuracy (p = 0.003) revealed a significant temporal constraint effect in the novices (p = 0.002) but not in the experts (p = 0.798). The main findings of the study are the shot specificity of the temporal constraint effect and that, depending on the shot, the novices' throwing accuracy may benefit rather than worsen under temporal pressure. Key points: (1) The temporal constraint induced a shot-specific significant difference in throwing velocity in both the experts and the novices. (2) The temporal constraint induced a shot-specific significant difference in throwing accuracy only in the novices. (3) Depending on the shot demands, the throwing accuracy of the novices may benefit under temporally constrained situations. PMID:25729288

  9. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time

    PubMed Central

    Lu, Yuhua; Liu, Qian

    2018-01-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870

  10. Integrating viscoelastic mass spring dampers into position-based dynamics to simulate soft tissue deformation in real time.

    PubMed

    Xu, Lang; Lu, Yuhua; Liu, Qian

    2018-02-01

    We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
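
The integration scheme described in these two records combines force-based dynamics with position-based constraint projection. A heavily simplified 1-D chain sketch (hypothetical stiffness and damping; the paper's nonlinear MSD terms and tissue constraints are reduced here to linear spring-dampers and distance constraints):

```python
import numpy as np

n = 10
x = np.linspace(0.0, 1.0, n)          # node positions along a 1-D chain
v = np.zeros(n)
rest = x[1] - x[0]
k, c, dt = 50.0, 0.5, 1e-2            # stiffness, damping, time step (toy)

def msd_force(x, v):
    """Viscoelastic spring-damper forces between neighboring nodes."""
    f = np.zeros_like(x)
    f_sd = k * (np.diff(x) - rest) + c * np.diff(v)
    f[:-1] += f_sd
    f[1:] -= f_sd
    return f

for _ in range(100):
    v += dt * msd_force(x, v)         # Newton's second law (unit masses)
    p = x + dt * v                    # predicted positions
    for _ in range(10):               # PBD projection of distance constraints
        d = np.diff(p) - rest
        p[:-1] += 0.5 * d
        p[1:] -= 0.5 * d
    v = (p - x) / dt                  # PBD velocity update
    x = p
    x[0], v[0] = 0.0, 0.0             # one end pinned
```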

  11. Hardware design and implementation of fast DOA estimation method based on multicore DSP

    NASA Astrophysics Data System (ADS)

    Guo, Rui; Zhao, Yingxiao; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-10-01

    In this paper, we present a high-speed real-time signal processing hardware platform based on a multicore digital signal processor (DSP). The real-time signal processing platform shows several excellent characteristics, including high-performance computing, low power consumption, large-capacity data storage, and high-speed data transmission, which enable it to meet the constraint of real-time direction of arrival (DOA) estimation. To reduce the high computational complexity of the DOA estimation algorithm, a novel real-valued MUSIC estimator is used. The algorithm is decomposed into several independent steps and the time consumption of each step is measured. Based on these timing statistics, we present a new parallel processing strategy to distribute the task of DOA estimation across the cores of the real-time signal processing hardware platform. Experimental results demonstrate that the high processing capability of the signal processing platform meets the constraint of real-time DOA estimation.
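
The estimator being accelerated is eigenstructure-based. For orientation, a minimal complex-valued MUSIC spectrum for a uniform linear array (numpy; the paper's real-valued variant, which cuts the complexity further, is not reproduced here):

```python
import numpy as np

M, d = 8, 0.5                      # sensors; spacing in wavelengths
true_doa = np.deg2rad([-20.0, 35.0])
rng = np.random.default_rng(1)

def steering(theta):
    return np.exp(1j * 2 * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

A = steering(true_doa)                                   # array manifold
S = rng.normal(size=(2, 200)) + 1j * rng.normal(size=(2, 200))
N = 0.1 * (rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200)))
X = A @ S + N                                            # snapshots

R = X @ X.conj().T / X.shape[1]                          # sample covariance
_, V = np.linalg.eigh(R)
En = V[:, : M - 2]                                       # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
proj = En.conj().T @ steering(grid)
pseudo = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)         # MUSIC pseudo-spectrum

is_peak = (pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:])
peaks = grid[1:-1][is_peak]
best = peaks[np.argsort(pseudo[1:-1][is_peak])[-2:]]
print(np.rad2deg(np.sort(best)))                         # ~ [-20, 35]
```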

  12. Method and apparatus for automated assembly

    DOEpatents

    Jones, Rondall E.; Wilson, Randall H.; Calton, Terri L.

    1999-01-01

    A process and apparatus generates a sequence of steps for assembly or disassembly of a mechanical system. Each step in the sequence is geometrically feasible, i.e., the part motions required are physically possible. Each step in the sequence is also constraint feasible, i.e., the step satisfies user-definable constraints. Constraints allow process and other such limitations, not usually represented in models of the completed mechanical system, to affect the sequence.
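
The patent's two feasibility notions compose naturally as predicates in a backtracking search. A toy Python rendering (the blocking relations and the user constraint here are invented stand-ins for the real geometric reasoning):

```python
parts = {"bolt", "cover", "gear", "housing"}

def geometrically_feasible(part, remaining):
    # Stand-in geometry: the bolt blocks the cover; the cover blocks the gear.
    blockers = {"cover": {"bolt"}, "gear": {"cover"}}
    return blockers.get(part, set()).isdisjoint(remaining - {part})

def constraint_feasible(part, remaining):
    # Stand-in user-defined constraint: the housing must come off last.
    return part != "housing" or remaining == {"housing"}

def disassembly_sequence(remaining):
    if not remaining:
        return []
    for part in sorted(remaining):
        if geometrically_feasible(part, remaining) and \
           constraint_feasible(part, remaining):
            rest = disassembly_sequence(remaining - {part})
            if rest is not None:
                return [part] + rest
    return None   # dead end: backtrack

print(disassembly_sequence(parts))   # ['bolt', 'cover', 'gear', 'housing']
```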

  13. STEPS: lean thinking, theory of constraints and identifying bottlenecks in an emergency department.

    PubMed

    Ryan, A; Hunter, K; Cunningham, K; Williams, J; O'Shea, H; Rooney, P; Hickey, F

    2013-04-01

    This study aimed to identify the bottlenecks in patients' journeys through an emergency department (ED). For each stage of the patient journey, the average times were compared between two groups divided according to the four-hour time frame, and disproportionate delays were identified using a significance test. These bottlenecks were evaluated with reference to a lean thinking value-stream map and the five focusing steps of the theory of constraints. A total of 434 (72.5%) ED patients were tracked over one week. Logistic regression showed that patients who had radiological tests, blood tests, or who were admitted were 4.4, 4.1, and 7.7 times more likely, respectively, to stay over four hours in the ED than those who didn't. The stages that were significantly delayed were the time spent waiting for radiology (p = 0.001), waiting for the in-patient team (p = 0.004), waiting for a bed (p < 0.001), and ED doctor turnaround time (p < 0.001).

  14. Appointment Template Redesign in a Women's Health Clinic Using Clinical Constraints to Improve Service Quality and Efficiency.

    PubMed

    Huang, Y; Verduzco, S

    2015-01-01

    Patient wait time is a critical element of access to care that has long been recognized as a major problem in modern outpatient health care delivery systems. It impacts patient and medical staff productivity, stress, quality and efficiency of medical care, as well as health-care cost and availability. This study was conducted in a Women's Health Clinic. The objective was to improve clinic service quality by redesigning the patient appointment template using clinical constraints. The proposed scheduling template consisted of two key elements: the redesign of appointment types and the determination of the length of time slots using defined constraints. The re-classification technique was used for the redesign of appointment visit types to capture service variation for scheduling purposes. Then, the appointment length was determined by incorporating clinic constraints or goals, such as patient wait time, physician idle time, overtime, finish time, lunch hours, when the last appointment was scheduled, and the desired number of appointment slots, to converge on the optimal length of appointment slots for each visit type. The redesigned template was implemented, and the results indicated a 73% reduction in average patient wait time, from the reported 40 minutes to 11 minutes. The patient no-show rate was reduced by 4 percentage points, from 24% to 20%. The morning session on average finished about 11:50 am. The clinic day finished around 4:45 pm. Provider average idle time was estimated to be about 5 minutes, which can be used for charting/documenting patients. This study provided an alternative method of redesigning appointment scheduling templates using only the clinical constraints rather than the traditional way that requires an objective function. This paper also documented the employed methods step by step in a real clinic setting. The implementation results demonstrated a significant improvement in patient wait time and no-show rate.

  15. Appointment Template Redesign in a Women’s Health Clinic Using Clinical Constraints to Improve Service Quality and Efficiency

    PubMed Central

    Verduzco, S.

    2015-01-01

    Summary. Background: Patient wait time is a critical element of access to care that has long been recognized as a major problem in modern outpatient health care delivery systems. It impacts patient and medical staff productivity, stress, quality and efficiency of medical care, as well as health-care cost and availability. Objectives: This study was conducted in a Women's Health Clinic. The objective was to improve clinic service quality by redesigning the patient appointment template using clinical constraints. Methods: The proposed scheduling template consisted of two key elements: the redesign of appointment types and the determination of the length of time slots using defined constraints. The re-classification technique was used for the redesign of appointment visit types to capture service variation for scheduling purposes. Then, the appointment length was determined by incorporating clinic constraints or goals, such as patient wait time, physician idle time, overtime, finish time, lunch hours, when the last appointment was scheduled, and the desired number of appointment slots, to converge on the optimal length of appointment slots for each visit type. Results: The redesigned template was implemented, and the results indicated a 73% reduction in average patient wait time, from the reported 40 minutes to 11 minutes. The patient no-show rate was reduced by 4 percentage points, from 24% to 20%. The morning session on average finished about 11:50 am. The clinic day finished around 4:45 pm. Provider average idle time was estimated to be about 5 minutes, which can be used for charting/documenting patients. Conclusions: This study provided an alternative method of redesigning appointment scheduling templates using only the clinical constraints rather than the traditional way that requires an objective function. This paper also documented the employed methods step by step in a real clinic setting. The implementation results demonstrated a significant improvement in patient wait time and no-show rate. PMID:26171075

  16. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
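
The "one attempt per time step" rule makes RSA almost trivial to simulate. A 1-D numpy sketch (the reaction probability is a hypothetical stand-in; the paper derives how this probability and the step size follow from the bulk diffusion problem):

```python
import numpy as np

rng = np.random.default_rng(2)
L, sigma = 100.0, 1.0      # surface length and molecule diameter
p_react = 0.3              # P(reaction | molecule hits the surface), assumed
adsorbed = []

for step in range(20000):  # exactly one attachment attempt per RSA time step
    if rng.random() > p_react:
        continue           # the molecule hit the surface but bounced off
    x = rng.uniform(0.0, L)
    if all(abs(x - y) >= sigma for y in adsorbed):
        adsorbed.append(x) # geometric constraint satisfied: molecule sticks

print(f"coverage = {len(adsorbed) * sigma / L:.3f}")
# Long runs approach the 1-D jamming limit of roughly 0.747.
```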

  17. Solving the MHD equations by the space time conservation element and solution element method

    NASA Astrophysics Data System (ADS)

    Zhang, Moujin; John Yu, S.-T.; Henry Lin, S.-C.; Chang, Sin-Chung; Blankson, Isaiah

    2006-05-01

    We apply the space-time conservation element and solution element (CESE) method to solve the ideal MHD equations with special emphasis on satisfying the divergence free constraint of magnetic field, i.e., ∇ · B = 0. In the setting of the CESE method, four approaches are employed: (i) the original CESE method without any additional treatment, (ii) a simple corrector procedure to update the spatial derivatives of magnetic field B after each time marching step to enforce ∇ · B = 0 at all mesh nodes, (iii) a constraint-transport method by using a special staggered mesh to calculate magnetic field B, and (iv) the projection method by solving a Poisson solver after each time marching step. To demonstrate the capabilities of these methods, two benchmark MHD flows are calculated: (i) a rotated one-dimensional MHD shock tube problem and (ii) a MHD vortex problem. The results show no differences between different approaches and all results compare favorably with previously reported data.
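
Approach (iv), the projection method, is the easiest to sketch in isolation: after a time-marching step, solve a Poisson problem for a potential whose gradient removes the divergence of B. On a periodic 2-D grid this is a few FFTs (numpy sketch; the CESE discretization itself is not reproduced):

```python
import numpy as np

n, L = 64, 1.0
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx ** 2 + ky ** 2
k2[0, 0] = 1.0                            # avoid 0/0 for the mean mode

rng = np.random.default_rng(3)
Bx, By = rng.normal(size=(2, n, n))       # B after a time step (toy field)

# Solve laplacian(phi) = div(B), then subtract grad(phi) from B.
div_hat = 1j * kx * np.fft.fft2(Bx) + 1j * ky * np.fft.fft2(By)
phi_hat = -div_hat / k2
Bx -= np.real(np.fft.ifft2(1j * kx * phi_hat))
By -= np.real(np.fft.ifft2(1j * ky * phi_hat))

resid = 1j * kx * np.fft.fft2(Bx) + 1j * ky * np.fft.fft2(By)
print(np.max(np.abs(resid)))              # divergence now ~ machine precision
```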

  18. RSM 1.0 user's guide: A resupply scheduler using integer optimization

    NASA Technical Reports Server (NTRS)

    Viterna, Larry A.; Green, Robert D.; Reed, David M.

    1991-01-01

    The Resupply Scheduling Model (RSM) is a PC-based, fully menu-driven computer program. It uses integer programming techniques to determine an optimum schedule to replace components on or before a fixed replacement period, subject to user-defined constraints such as transportation mass and volume limits or available repair crew time. Principal input for RSM includes properties such as mass and volume and an assembly sequence. Resource constraints are entered for each period corresponding to the component properties. Though written to analyze the electrical power system on the Space Station Freedom, RSM is quite general and can be used to model the resupply of almost any system subject to user-defined resource constraints. Presented here is a step-by-step procedure for preparing the input, performing the analysis, and interpreting the results. Instructions for installing the program and information on the algorithms are given.

  19. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in the urban street, a monocular visual odometry based on Extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which forms the Kalman filter together with the state transition equation. An Extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu's 2-step EKF method, the algorithm is more accurate, meeting the needs of real-time accurate localization in cities.
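
The filter itself is standard; only the measurement model (here, derived from the trifocal tensor constraint) is specific to the paper. A generic EKF step in numpy, with f, h and their Jacobians as user-supplied stand-ins:

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    # Predict with the state-transition model.
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update with the (e.g., trifocal-tensor-derived) observation model.
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Tiny 1-D smoke test with linear stand-in models.
I = np.eye(1)
x, P = np.zeros(1), I.copy()
x, P = ekf_step(x, P, u=np.array([0.1]), z=np.array([0.12]),
                f=lambda x, u: x + u, F=lambda x, u: I,
                h=lambda x: x, H=lambda x: I,
                Q=0.01 * I, R=0.04 * I)
print(x, P)
```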

  20. Applying the theory of constraints in health care: Part 1--The philosophy.

    PubMed

    Breen, Anne M; Burton-Houle, Tracey; Aron, David C

    2002-01-01

    The imperative to improve both technical and service quality while simultaneously reducing costs is quite clear. The Theory of Constraints (TOC) is an emerging philosophy that rests on two assumptions: (1) systems thinking and (2) if a constraint "is anything that limits a system from achieving higher performance versus its goal," then every system must have at least one (and at most a few) constraints or limiting factors. A constraint is neither good nor bad in itself; rather, it just is. In fact, recognition of the existence of constraints represents an excellent opportunity for improvement, because it allows one to focus one's efforts in the most productive area: identifying and managing the constraints. This is accomplished by using the five focusing steps of TOC: (1) identify the system's constraint; (2) decide how to exploit it; (3) subordinate/synchronize everything else to the above decisions; (4) elevate the system's constraint; and (5) if the constraint has shifted in the above steps, go back to step 1. Do not allow inertia to become the system's constraint. TOC also refers to a series of tools termed "thinking processes" and the sequence in which they are used.

  1. Variable Step Integration Coupled with the Method of Characteristics Solution for Water-Hammer Analysis, A Case Study

    NASA Technical Reports Server (NTRS)

    Turpin, Jason B.

    2004-01-01

    One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time step constraint invoked by the MOC and the occasional need for extremely small time steps can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
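
The fixed-step constraint mentioned above comes from the characteristic grid: dt = dx / a. A compact numpy sketch of the interior MOC update for a reservoir-pipe-valve transient (toy pipe data and simplified friction; the paper's contribution, coupling this to variable-step ODE integration for dynamic components, is not shown):

```python
import numpy as np

n, L = 51, 600.0                      # nodes, pipe length (m)
a, D, f = 1200.0, 0.1, 0.02           # wave speed (m/s), diameter, friction
A = np.pi * D ** 2 / 4.0
g = 9.81
dx = L / (n - 1)
dt = dx / a                           # the MOC fixed time step constraint
B = a / (g * A)
Rf = f * dx / (2.0 * g * D * A ** 2)

H = np.full(n, 100.0)                 # head (m)
Q = np.full(n, 0.01)                  # flow (m^3/s); valve closes at t = 0

for _ in range(200):
    Cp = H[:-1] + Q[:-1] * (B - Rf * np.abs(Q[:-1]))   # C+ from left nodes
    Cm = H[1:] - Q[1:] * (B - Rf * np.abs(Q[1:]))      # C- from right nodes
    Hn, Qn = H.copy(), Q.copy()
    Hn[1:-1] = 0.5 * (Cp[:-1] + Cm[1:])
    Qn[1:-1] = (Cp[:-1] - Cm[1:]) / (2.0 * B)
    Hn[0] = 100.0                     # upstream reservoir holds the head
    Qn[0] = (Hn[0] - Cm[0]) / B
    Qn[-1] = 0.0                      # closed valve downstream
    Hn[-1] = Cp[-1]
    H, Q = Hn, Qn

print(f"head at the valve after 200 steps: {H[-1]:.1f} m")
```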

  2. Fast and Easy 3D Reconstruction with the Help of Geometric Constraints and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Annich, Afafe; El Abderrahmani, Abdellatif; Satori, Khalid

    2017-09-01

    The purpose of the work presented in this paper is to describe a new method of 3D reconstruction from one or more uncalibrated images. This method is based on two important concepts: geometric constraints and genetic algorithms (GAs). First, we discuss the proposed combination of bundle adjustment and GAs to improve 3D reconstruction efficiency and success. We use GAs to improve the fitness of the initial values used in the optimization problem, which reliably increases the convergence rate. Extracted geometric constraints are used first to obtain an estimated value of the focal length, which helps in the initialization step. Matching homologous points and constraints are used to estimate the 3D model. In fact, our new method brings several advantages: it reduces the number of estimated parameters in the optimization step, decreases the number of images required, saves time, and stabilizes the quality of the 3D results. In the end, without any prior information about the 3D scene, we obtain an accurate calibration of the cameras and a realistic 3D model that strictly respects the geometric constraints defined beforehand, in an easy way. Various data and examples are used to highlight the efficiency and competitiveness of our approach.
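
The GA-for-initialization idea is independent of the vision specifics: evolve a population toward low residual error, then hand the best individual to a local least-squares refinement. A toy sketch (the residual function is a made-up stand-in for bundle adjustment):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

def residuals(p):
    # Stand-in for reprojection residuals of a two-parameter model.
    return np.array([p[0] - 1.5, p[1] + 0.5, p[0] * p[1] + 0.75])

def fitness(p):
    return float(np.sum(residuals(p) ** 2))

# Tiny GA: truncation selection plus Gaussian mutation.
pop = rng.uniform(-5.0, 5.0, size=(40, 2))
for _ in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:20]]
    children = parents + rng.normal(0.0, 0.3, size=parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
refined = least_squares(residuals, x0=best)    # local "bundle adjustment" step
print(best, refined.x)                         # refined.x ~ [1.5, -0.5]
```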

  3. A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia.

    PubMed

    Chang, Sarah R; Nandor, Mark J; Li, Lu; Kobetic, Rudi; Foglyano, Kevin M; Schnellenberger, John R; Audu, Musa L; Pinault, Gilles; Quinn, Roger D; Triolo, Ronald J

    2017-05-30

    Functional neuromuscular stimulation, lower limb orthosis, powered lower limb exoskeleton, and hybrid neuroprosthesis (HNP) technologies can restore stepping in individuals with paraplegia due to spinal cord injury (SCI). However, a self-contained muscle-driven controllable exoskeleton approach based on an implanted neural stimulator to restore walking has not been previously demonstrated; such an approach could potentially allow system use outside the laboratory and make it viable for long-term use or clinical testing. In this work, we designed and evaluated an untethered muscle-driven controllable exoskeleton to restore stepping in three individuals with paralysis from SCI. The self-contained HNP combined neural stimulation, to activate the paralyzed muscles and generate joint torques for limb movements, with a controllable lower limb exoskeleton to stabilize and support the user. An onboard controller processed exoskeleton sensor signals, determined appropriate exoskeletal constraints and stimulation commands for a finite state machine (FSM), and transmitted data over Bluetooth to an off-board computer for real-time monitoring and data recording. The FSM coordinated stimulation and exoskeletal constraints to enable functions, selected with a wireless finger-switch user interface, for standing up, standing, stepping, or sitting down. In the stepping function, the FSM used a sensor-based gait event detector to determine transitions between the gait phases of double stance, early swing, late swing, and weight acceptance. The HNP restored stepping in three individuals with motor complete paralysis due to SCI. The controller appropriately coordinated stimulation and exoskeletal constraints using the sensor-based FSM for subjects with different stimulation systems. The average ranges of motion at the hip and knee joints during walking were 8.5°-20.8° and 14.0°-43.6°, respectively. Walking speeds varied from 0.03 to 0.06 m/s, and cadences from 10 to 20 steps/min. A self-contained muscle-driven exoskeleton was a feasible intervention to restore stepping in individuals with paraplegia due to SCI. The untethered hybrid system was capable of adjusting to different individuals' needs, appropriately coordinating exoskeletal constraints with muscle activation using a sensor-driven FSM for stepping. Further improvements for out-of-the-laboratory use should include implanted stimulation of the plantar flexor muscles to improve walking speed, and power assist as needed at the hips and knees to maintain walking as muscles fatigue.
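
The coordination logic is a finite state machine keyed to gait events. A toy rendering with the four phases named in the abstract (the event predicates and command tables are hypothetical stand-ins for the real sensor processing and stimulation patterns):

```python
TRANSITIONS = {
    ("double_stance", "swing_triggered"): "early_swing",
    ("early_swing", "knee_extending"): "late_swing",
    ("late_swing", "heel_strike"): "weight_acceptance",
    ("weight_acceptance", "load_transferred"): "double_stance",
}

COMMANDS = {   # per-phase stimulation pattern and exoskeletal constraint
    "double_stance": {"stim": "stance_pattern", "knee_locked": True},
    "early_swing": {"stim": "flexor_burst", "knee_locked": False},
    "late_swing": {"stim": "extensor_burst", "knee_locked": False},
    "weight_acceptance": {"stim": "stance_pattern", "knee_locked": True},
}

def advance(phase, event):
    """Move to the next gait phase if the sensor event matches."""
    return TRANSITIONS.get((phase, event), phase)

phase = "double_stance"
for event in ["swing_triggered", "knee_extending", "heel_strike"]:
    phase = advance(phase, event)
    print(phase, COMMANDS[phase])
```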

  4. Enforcing the Courant-Friedrichs-Lewy condition in explicitly conservative local time stepping schemes

    NASA Astrophysics Data System (ADS)

    Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.

    2018-04-01

    An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate the errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
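
The essence of the fix is that a patch's step is capped not only by its own CFL limit but also by its neighbors'. A toy relaxation enforcing a bounded step ratio between neighboring patches (the ratio and topology here are invented; the paper derives the actual constraint from wave propagation across patch boundaries):

```python
def enforce_neighbor_cfl(local_dt, neighbors, ratio=2.0):
    """Cap each patch's step at `ratio` times its slowest neighbor's step."""
    dt = dict(local_dt)
    changed = True
    while changed:
        changed = False
        for p in dt:
            cap = ratio * min(dt[q] for q in neighbors[p])
            if dt[p] > cap:
                dt[p] = cap
                changed = True
    return dt

local_dt = {"p0": 0.8, "p1": 0.1, "p2": 0.4}   # from each patch's own CFL limit
neighbors = {"p0": ["p1"], "p1": ["p0", "p2"], "p2": ["p1"]}
print(enforce_neighbor_cfl(local_dt, neighbors))
# p0 is capped at 0.2 by its fine neighbor even though its local limit is 0.8.
```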

  5. 78 FR 69079 - Midcontinent Independent System Operator, Inc.; Supplemental Notice of Technical Conference

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

    ... Schedule 46. a. For step one, define the terms ``Hourly Real-Time RSG MWP'' and ``Resource CMC Real-time... RSG credits and the difference between one and the Constraint Management Charge Allocation Factor... and Headroom Need is (1) less than or equal to zero, (2) greater than or equal to the Economic...

  6. Running DNA Mini-Gels in 20 Minutes or Less Using Sodium Boric Acid Buffer

    ERIC Educational Resources Information Center

    Jenkins, Kristin P.; Bielec, Barbara

    2006-01-01

    Providing a biotechnology experience for students can be challenging on several levels, and time is a real constraint for many experiments. Many DNA based methods require a gel electrophoresis step, and although some biotechnology procedures have convenient break points, gel electrophoresis does not. In addition to the time required for loading…

  7. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  8. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  9. Biomechanical influences on balance recovery by stepping.

    PubMed

    Hsiao, E T; Robinovitch, S N

    1999-10-01

    Stepping represents a common means for balance recovery after a perturbation to upright posture. Yet little is known regarding the biomechanical factors which determine whether a step succeeds in preventing a fall. In the present study, we developed a simple pendulum-spring model of balance recovery by stepping, and used this to assess how step length and step contact time influence the effort (leg contact force) and feasibility of balance recovery by stepping. We then compared model predictions of step characteristics which minimize leg contact force to experimentally observed values over a range of perturbation strengths. At all perturbation levels, experimentally observed step execution times were higher than optimal, and step lengths were smaller than optimal. However, the predicted increase in leg contact force associated with these deviations was substantial only for large perturbations. Furthermore, increases in the strength of the perturbation caused subjects to take larger, quicker steps, which reduced their predicted leg contact force. We interpret these data to reflect young subjects' desire to minimize recovery effort, subject to neuromuscular constraints on step execution time and step length. Finally, our model predicts that successful balance recovery by stepping is governed by a coupling between step length, step execution time, and leg strength, so that the feasibility of balance recovery decreases unless declines in one capacity are offset by enhancements in the others. This suggests that one's risk for falls may be affected more by small but diffuse neuromuscular impairments than by larger impairment in a single motor capacity.
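
The pendulum picture is easy to reproduce: the body falls as an inverted pendulum during step execution, and the required step length can be estimated from the state at foot contact. A rough numpy sketch using a linearized "extrapolated center of mass" estimate (all parameters hypothetical; the paper's model also includes a leg spring and contact-force limits):

```python
import numpy as np

g, l = 9.81, 1.0                        # gravity, effective pendulum length (m)
theta, omega = np.deg2rad(5.0), 0.6     # lean angle and angular rate at onset
dt, t_contact = 1e-3, 0.30              # integration step, step execution time (s)

t = 0.0
while t < t_contact:                    # fall phase: inverted pendulum dynamics
    omega += dt * (g / l) * np.sin(theta)
    theta += dt * omega
    t += dt

# Capture-point style estimate of the step length needed to arrest the fall.
x_com = l * np.sin(theta)
v_com = l * omega * np.cos(theta)
x_needed = x_com + v_com * np.sqrt(l / g)
print(f"step length needed at contact: {x_needed:.2f} m")
# Longer step execution times (larger t_contact) drive this value up rapidly.
```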

  10. Adaptive θ-methods for pricing American options

    NASA Astrophysics Data System (ADS)

    Khaliq, Abdul Q. M.; Voss, David A.; Kazmi, Kamran

    2008-12-01

    We develop adaptive θ-methods for solving the Black-Scholes PDE for American options. By adding a small, continuous term, the Black-Scholes PDE becomes an advection-diffusion-reaction equation on a fixed spatial domain. Standard implementation of θ-methods would require a Newton-type iterative procedure at each time step, thereby increasing the computational complexity of the methods. Our linearly implicit approach avoids such complications. We establish a general framework under which θ-methods satisfy a discrete version of the positivity constraint characteristic of American options, and numerically demonstrate the sensitivity of the constraint. The positivity results are established for the single-asset and independent two-asset models. In addition, we have incorporated and analyzed an adaptive time-step control strategy to increase the computational efficiency. Numerical experiments are presented for one- and two-asset American options, using adaptive exponential splitting for two-asset problems. The approach is compared with an iterative solution of the two-asset problem in terms of computational efficiency.
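
A minimal sketch of the linearly implicit idea for an American put (numpy): advance with a θ-scheme, then enforce the early-exercise constraint by projection onto the payoff. The projection step is a simplification; the paper instead adds a small continuous penalty term and analyzes when the discrete positivity constraint holds.

```python
import numpy as np

K, r, sigma, T = 100.0, 0.05, 0.2, 1.0
M, N, theta = 200, 200, 0.5                 # space nodes, time steps, CN weight
S = np.linspace(0.0, 300.0, M + 1)
dS, dt = S[1] - S[0], T / N
payoff = np.maximum(K - S, 0.0)
V = payoff.copy()

i = np.arange(1, M)
alpha = 0.5 * sigma ** 2 * S[i] ** 2 / dS ** 2
beta = r * S[i] / (2.0 * dS)
lo, di, up = alpha - beta, -2.0 * alpha - r, alpha + beta   # BS operator rows

A = np.zeros((M + 1, M + 1))
A[i, i - 1] = -theta * dt * lo
A[i, i] = 1.0 - theta * dt * di
A[i, i + 1] = -theta * dt * up
A[0, 0] = A[M, M] = 1.0

for _ in range(N):
    rhs = V.copy()
    rhs[i] = V[i] + (1.0 - theta) * dt * (lo * V[i - 1] + di * V[i] + up * V[i + 1])
    rhs[0], rhs[M] = K, 0.0                 # boundaries for an American put
    V = np.linalg.solve(A, rhs)
    V = np.maximum(V, payoff)               # positivity / early-exercise constraint

print(f"American put at S = 100: {np.interp(100.0, S, V):.2f}")
```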

  11. Exshall: A Turkel-Zwas explicit large time-step FORTRAN program for solving the shallow-water equations in spherical coordinates

    NASA Astrophysics Data System (ADS)

    Navon, I. M.; Yu, Jian

    A FORTRAN computer program is presented and documented applying the Turkel-Zwas explicit large time-step scheme to a hemispheric barotropic model with constraint restoration of integral invariants of the shallow-water equations. We then proceed to detail the algorithms embodied in the code EXSHALL in this paper, particularly algorithms related to the efficiency and stability of T-Z scheme and the quadratic constraint restoration method which is based on a variational approach. In particular we provide details about the high-latitude filtering, Shapiro filtering, and Robert filtering algorithms used in the code. We explain in detail the various subroutines in the EXSHALL code with emphasis on algorithms implemented in the code and present the flowcharts of some major subroutines. Finally, we provide a visual example illustrating a 4-day run using real initial data, along with a sample printout and graphic isoline contours of the height field and velocity fields.
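
Of the filters named above, the Robert (Asselin) filter is the simplest to show: it damps the computational mode that leapfrog time stepping introduces. A toy numpy example on an oscillation equation (the filter coefficient is a hypothetical value, not taken from EXSHALL):

```python
import numpy as np

nu, omega, dt = 0.06, 1.0, 0.1            # filter coefficient, frequency, step

def robert_filter(u_prev, u_now, u_next, nu=nu):
    """Filtered value of the middle time level."""
    return u_now + nu * (u_prev - 2.0 * u_now + u_next)

u_prev = np.exp(-1j * omega * dt)         # exact value one step back
u = 1.0 + 0.0j
for _ in range(200):
    u_next = u_prev + 2.0 * dt * 1j * omega * u   # leapfrog step of du/dt = i*omega*u
    u_prev = robert_filter(u_prev, u, u_next)     # filter the old middle level
    u = u_next

print(abs(u))   # slightly below 1.0: the filter weakly damps the physical mode
```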

  12. Age-related modifications in steering behaviour: effects of base-of-support constraints at the turn point.

    PubMed

    Paquette, Maxime R; Fuller, Jason R; Adkin, Allan L; Vallis, Lori Ann

    2008-09-01

    This study investigated the effects of altering the base of support (BOS) at the turn point on anticipatory locomotor adjustments during voluntary changes in travel direction in healthy young and older adults. Participants were required to walk at their preferred pace along a 3-m straight travel path and continue to walk straight ahead or turn 40 degrees to the left or right for an additional 2-m. The starting foot and occasionally the gait starting point were adjusted so that participants had to execute the turn using a cross-over step with a narrow BOS or a lead-out step with a wide BOS. Spatial and temporal gait variables, magnitudes of angular segmental movement, and timing and sequencing of body segment reorientation were similar despite executing the turn with a narrow or wide BOS. A narrow BOS during turning generated an increased step width in the step prior to the turn for both young and older adults. Age-related changes when turning included reduced step velocity and step length for older compared to young adults. Age-related changes in the timing and sequencing of body segment reorientation prior to the turn point were also observed. A reduction in walking speed and an increase in step width just prior to the turn, combined with a delay in motion of the center of mass suggests that older adults used a more cautious combined foot placement and hip strategy to execute changes in travel direction compared to young adults. The results of this study provide insight into mobility constraints during a common locomotor task in older adults.

  13. Constraint Preserving Schemes Using Potential-Based Fluxes. I. Multidimensional Transport Equations (PREPRINT)

    DTIC Science & Technology

    2010-01-01

    $u^*_{i,j} = u^n_{i,j} - \Delta t^n E^n_{i,j}$, $u^{**}_{i,j} = u^*_{i,j} - \Delta t^n E^*_{i,j}$, $u^{n+1}_{i,j} = \frac{1}{2}\left(u^n_{i,j} + u^{**}_{i,j}\right)$. (2.26) An alternative first-order accurate, genuinely multi-dimensional time stepping is the extended Lax-Friedrichs type time stepping, $u^{n+1}_{i,j} = \frac{1}{8}\left(4u^n_{i,j} + u^n_{i+1,j} + u^n_{i,j+1} + u^n_{i-1,j} + u^n_{i,j-1}\right) - \Delta t^n E^n_{i,j}$. (2.27)

  14. The synchronisation of lower limb responses with a variable metronome: the effect of biomechanical constraints on timing.

    PubMed

    Chen, Hui-Ya; Wing, Alan M; Pratt, David

    2006-04-01

    Stepping in time with a metronome has been reported to improve pathological gait. Although there have been many studies of finger tapping synchronisation tasks with a metronome, the specific details of the influences of metronome timing on walking remain unknown. As a preliminary to studying pathological control of gait timing, we designed an experiment with four synchronisation tasks, unilateral heel tapping in sitting, bilateral heel tapping in sitting, bilateral heel tapping in standing, and stepping on the spot, in order to examine the influence of biomechanical constraints on metronome timing. These four conditions allow study of the effects of bilateral co-ordination and maintenance of balance on timing. Eight neurologically normal participants made heel tapping and stepping responses in synchrony with a metronome producing 500 ms interpulse intervals. In each trial comprising 40 intervals, one interval, selected at random between intervals 15 and 30, was lengthened or shortened, which resulted in a shift in phase of all subsequent metronome pulses. Performance measures were the speed of compensation for the phase shift, in terms of the temporal difference between the response and the metronome pulse, i.e. asynchrony, and the standard deviation of the asynchronies and interresponse intervals of steady state synchronisation. The speed of compensation decreased with increase in the demands of maintaining balance. The standard deviation varied across conditions but was not related to the compensation speed. The implications of these findings for metronome assisted gait are discussed in terms of a first-order linear correction account of synchronisation.
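
The "first-order linear correction account" mentioned at the end has a one-line form: each response corrects a fixed fraction of the current asynchrony, so a metronome phase shift decays geometrically. A toy simulation (gain and noise values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, noise_sd = 0.35, 8.0        # correction gain, timing noise (ms), assumed
a = 0.0                            # asynchrony (ms)
series = []

for n in range(40):
    if n == 20:
        a -= 60.0                  # metronome interval lengthened: response early
    a = (1.0 - alpha) * a + rng.normal(0.0, noise_sd)
    series.append(a)

print(np.round(series[18:28], 1))  # geometric return toward zero after the shift
```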

  15. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field.

    PubMed

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-09-09

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors; in particular, heading error is not observable. Hence, the position estimates tend to drift even when cyclic ZUPTs are applied in the update step of the Extended Kalman Filter (EKF). This urges the use of other motion constraints for pedestrian gait and of any other valuable heading information that is available. In this paper, we exploit two more motion-constraint scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called "virtual sensors"), though considerably reducing drift in the PNS, still need an absolute heading reference. One common absolute heading sensor is the magnetometer, which senses the Earth's magnetic field so that the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm called magnetic anomaly detection (MAD) and compensation is designed, incorporating only healthy magnetometer data in the EKF update step, to reduce drift in the zero-velocity-updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms.
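
The MAD gate can be sketched as a plausibility test on each magnetometer sample: accept the heading update only when the measured field magnitude and dip angle match the local Earth field. A numpy sketch (reference values and tolerances are site-dependent assumptions, not the paper's calibrated values):

```python
import numpy as np

EARTH_FIELD_UT = 50.0                        # local field magnitude (microtesla)
MAG_TOL_UT = 5.0
DIP_REF, DIP_TOL = np.deg2rad(53.0), np.deg2rad(5.0)   # local dip angle

def magnetometer_is_healthy(mag_body, grav_body):
    """Gate for the EKF heading update: reject anomalous magnetic samples."""
    norm = np.linalg.norm(mag_body)
    if abs(norm - EARTH_FIELD_UT) > MAG_TOL_UT:
        return False
    # Dip angle from the angle between the field and the local vertical.
    cos_ang = np.dot(mag_body, grav_body) / (norm * np.linalg.norm(grav_body))
    dip = np.pi / 2.0 - np.arccos(np.clip(cos_ang, -1.0, 1.0))
    return abs(dip - DIP_REF) < DIP_TOL

# In the EKF loop, only healthy samples reach the update step, e.g.:
# if magnetometer_is_healthy(mag, grav): apply_heading_update(mag)
print(magnetometer_is_healthy(np.array([30.1, 0.0, 39.9]),
                              np.array([0.0, 0.0, 9.81])))   # True
```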

  16. Gas Chromatographic Determination of Fatty Acid Compositions.

    ERIC Educational Resources Information Center

    Heinzen, Horacio; And Others

    1985-01-01

    Describes an experiment that: (1) has a derivation step using readily available reagents; (2) requires limited manipulative skills, centering attention on methodology; (3) can be completed within the time constraints of a normal laboratory period; and (4) investigates materials that are easy to acquire and are of great technical/biological…

  17. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  18. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
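
The structural coupling term itself is compact: the cross-gradient t = grad(m1) x grad(m2) vanishes where the two models change in the same directions. A 2-D numpy sketch (the AJC reweighting would further scale this term by the normalized sensitivity ratio at each cell and iteration):

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """z-component of grad(m1) x grad(m2) on a common 2-D grid."""
    g1x, g1z = np.gradient(m1, dx, dz)
    g2x, g2z = np.gradient(m2, dx, dz)
    return g1x * g2z - g1z * g2x

rng = np.random.default_rng(4)
resistivity = rng.normal(size=(32, 32))                          # toy model 1
velocity = 2.0 * resistivity + 0.05 * rng.normal(size=(32, 32))  # shared structure
print(np.abs(cross_gradient(resistivity, velocity)).mean())
# Near zero when the models share structure; large values flag disagreement.
```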

  19. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges produced by existing methods are not sharp enough. We address these issues as follows. First, we propose a method to extract midfrequency features for dictionary learning. This method reduces memory and time complexity without sacrificing performance. Second, we propose a detail wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off details and artifacts and to sharpen step edges. Finally, the step edges produced by DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  20. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
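    To make the relaxed formulation concrete, here is a minimal sketch on a toy ODE: instead of marching forward from a fixed initial state, the whole trajectory is treated as the unknown and the discrete equations are enforced in a least-squares sense. The `step` function is a stand-in for the actual PDE solver; this is an illustration of the idea, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def step(u, dt=0.1, r=2.0):
    """One explicit step of a toy nonlinear ODE (stand-in for the solver)."""
    return u + dt * r * u * (1.0 - u)

def residual(traj, n_steps):
    # Relaxed constraint: every consecutive pair of states should satisfy
    # the discrete equations; no state is pinned to an initial condition.
    return np.array([traj[k + 1] - step(traj[k]) for k in range(n_steps)])

n = 20
sol = least_squares(residual, np.full(n + 1, 0.5), args=(n,))
trajectory = sol.x  # all time levels found simultaneously ("one-shot")
```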

  1. Autonomy for Constellation

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    The newer types of space systems, which are planned for the future, are placing challenging demands for newer autonomy concepts and techniques. Motivating these challenges are resource constraints. Even though onboard computing power will surely increase in the coming years, the resource constraints associated with space-based processes will continue to be a major factor that needs to be considered when dealing with, for example, agent-based spacecraft autonomy. To realize "economical intelligence", i.e., constrained computational intelligence that can reside within a process under severe resource constraints (time, power, space, etc.), is a major goal for such space systems as the Nanosat constellations. To begin to address the new challenges, we are developing approaches to constellation autonomy with constraints in mind. Within the Agent Concepts Testbed (ACT) at the Goddard Space Flight Center we are currently developing a Nanosat-related prototype for the first step of the two-step program.

  2. A universal constraint-based formulation for freely moving immersed bodies in fluids

    NASA Astrophysics Data System (ADS)

    Patankar, Neelesh A.

    2012-11-01

    Numerical simulation of moving immersed bodies in fluids is now practiced routinely. A variety of variants of these approaches have been published, most of which rely on using a background mesh for the fluid equations and tracking the body using Lagrangian points. In this talk, generalized constraint-based governing equations will be presented that provide a unified framework for various immersed body techniques. The key idea that is common to these methods is to assume that the entire fluid-body domain is a ``fluid'' and then to constrain the body domain to move in accordance with its governing equations. The immersed body can be rigid or deforming. The governing equations are developed so that they are independent of the nature of temporal or spatial discretization schemes. Specific choices of time stepping and spatial discretization then lead to techniques developed in prior literature ranging from freely moving rigid to elastic self-propelling bodies. To simulate Brownian systems, thermal fluctuations can be included in the fluid equations via additional random stress terms. Solving the fluctuating hydrodynamic equations coupled with the immersed body results in the Brownian motion of that body. The constraint-based formulation leads to fractional time stepping algorithms a la Chorin-type schemes that are suitable for fast computations of rigid or self-propelling bodies whose deformation kinematics are known. Support from NSF is gratefully acknowledged.
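    As a rough illustration of the constraint step, the 2-D sketch below projects the "everything is fluid" velocity inside the body domain onto the nearest rigid-body motion while conserving linear and angular momentum. This is one standard way the rigidity constraint is imposed in such fractional-step schemes, not necessarily the talk's exact formulation.

```python
import numpy as np

def project_rigid(x, u, mass, body):
    """One fractional step of a constraint-based immersed-body scheme (sketch):
    after the unconstrained fluid solve, replace the velocity on the
    Lagrangian points inside the body with the momentum-preserving
    rigid-body motion closest to it."""
    xb, ub, mb = x[body], u[body], mass[body]
    xc = np.average(xb, axis=0, weights=mb)   # center of mass
    U = np.average(ub, axis=0, weights=mb)    # translational velocity
    r = xb - xc
    # 2-D: scalar angular velocity from angular momentum L = I * omega
    L = np.sum(mb * (r[:, 0] * ub[:, 1] - r[:, 1] * ub[:, 0]))
    I = np.sum(mb * (r ** 2).sum(axis=1))
    omega = L / I
    u = u.copy()
    u[body] = U + omega * np.stack([-r[:, 1], r[:, 0]], axis=1)  # U + w x r
    return u
```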

  3. Calibrating Urgency: Triage Decision-Making in a Pediatric Emergency Department

    ERIC Educational Resources Information Center

    Patel, Vimla L.; Gutnik, Lily A.; Karlin, Daniel R.; Pusic, Martin

    2008-01-01

    Triage, the first step in the assessment of emergency department patients, occurs in a highly dynamic environment that functions under constraints of time, physical space, and patient needs that may exceed available resources. Through triage, patients are placed into one of a limited number of categories using a subset of diagnostic information.…

  4. Role of step size and max dwell time in anatomy based inverse optimization for prostate implants

    PubMed Central

    Manikandan, Arjunan; Sarkar, Biplab; Rajendran, Vivek Thirupathur; King, Paul R.; Sresty, N.V. Madhusudhana; Holla, Ragavendra; Kotur, Sachin; Nadendla, Sujatha

    2013-01-01

    In high dose rate (HDR) brachytherapy, the source dwell times and dwell positions are vital parameters in achieving a desirable implant dose distribution. Inverse treatment planning requires an optimal choice of these parameters to achieve the desired target coverage with the lowest achievable dose to the organs at risk (OAR). This study was designed to evaluate the optimum source step size and maximum source dwell time for prostate brachytherapy implants using an Ir-192 source. In total, one hundred inverse treatment plans were generated for the four patients included in this study. Twenty-five treatment plans were created for each patient by varying the step size and maximum source dwell time during anatomy-based, inverse-planned optimization. Other relevant treatment planning parameters were kept constant, including the dose constraints and source dwell positions. Each plan was evaluated for target coverage, urethral and rectal dose sparing, treatment time, relative target dose homogeneity, and nonuniformity ratio. The plans with 0.5 cm step size were seen to have clinically acceptable tumor coverage, minimal normal structure doses, and minimum treatment time as compared with the other step sizes. The target coverage for this step size is 87% of the prescription dose, while the urethral and maximum rectal doses were 107.3 and 68.7%, respectively. No appreciable difference in plan quality was observed with variation in maximum source dwell time. The step size plays a significant role in plan optimization for prostate implants. Our study supports use of a 0.5 cm step size for prostate implants. PMID:24049323

  5. Drift Reduction in Pedestrian Navigation System by Exploiting Motion Constraints and Magnetic Field

    PubMed Central

    Ilyas, Muhammad; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    Pedestrian navigation systems (PNS) using foot-mounted MEMS inertial sensors use zero-velocity updates (ZUPTs) to reduce drift in navigation solutions and estimate inertial sensor errors. However, it is well known that ZUPTs cannot reduce all errors, especially as heading error is not observable. Hence, the position estimates tend to drift and even cyclic ZUPTs are applied in updated steps of the Extended Kalman Filter (EKF). This urges the use of other motion constraints for pedestrian gait and any other valuable heading reduction information that is available. In this paper, we exploit two more motion constraints scenarios of pedestrian gait: (1) walking along straight paths; (2) standing still for a long time. It is observed that these motion constraints (called “virtual sensor”), though considerably reducing drift in PNS, still need an absolute heading reference. One common absolute heading estimation sensor is the magnetometer, which senses the Earth’s magnetic field and, hence, the true heading angle can be calculated. However, magnetometers are susceptible to magnetic distortions, especially in indoor environments. In this work, an algorithm, called magnetic anomaly detection (MAD) and compensation is designed by incorporating only healthy magnetometer data in the EKF updating step, to reduce drift in zero-velocity updated INS. Experiments are conducted in GPS-denied and magnetically distorted environments to validate the proposed algorithms. PMID:27618056
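    A minimal sketch of such a magnetic-anomaly gate is shown below, assuming the local field magnitude and dip angle are known references; the thresholds and helper names are illustrative, not the paper's.

```python
import numpy as np

EARTH_B = 50.0    # assumed local field magnitude, microtesla
EARTH_DIP = 60.0  # assumed local dip angle, degrees

def magnetometer_is_healthy(mag, acc, tol_b=5.0, tol_dip=5.0):
    """Sketch of a magnetic-anomaly-detection (MAD) gate: feed the EKF
    heading update only when the measured field looks like the undistorted
    Earth field (magnitude and dip angle near their local references)."""
    b = np.linalg.norm(mag)
    # Dip angle of the field relative to the horizontal plane, using the
    # accelerometer's gravity estimate as the vertical reference.
    g = acc / np.linalg.norm(acc)
    dip = np.degrees(np.arcsin(np.clip(np.dot(mag, g) / b, -1.0, 1.0)))
    return abs(b - EARTH_B) < tol_b and abs(abs(dip) - EARTH_DIP) < tol_dip
```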

  6. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values for water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short term objective function is a surrogate for achieving long term multi-objective results. The long term performance for any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is under way on a wrapper that will employ a genetic algorithm to improve the form of the rule (conditions, constraints, and short term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions and accounting for uncertainty, at least implicitly. The author asserts that real world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real world examples.

  7. Efficient QoS-aware Service Composition

    NASA Astrophysics Data System (ADS)

    Alrifai, Mohammad; Risse, Thomas

    Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and better scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
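    The second, local step is cheap precisely because each task only filters its own candidates against the decomposed limits. A minimal sketch, with illustrative QoS fields and a simple additive utility (not the paper's exact utility function), could look like this:

```python
def select_service(candidates, local_limits, weights):
    """Second stage (sketch): after the MILP has split the end-to-end QoS
    budget into per-task local limits, each task independently picks the
    best service that fits its limits."""
    feasible = [s for s in candidates
                if s["time"] <= local_limits["time"]
                and s["price"] <= local_limits["price"]]
    if not feasible:
        return None  # would trigger re-decomposition of the global budget
    # Reward slack under each local limit, weighted by user preference.
    return max(feasible,
               key=lambda s: weights["time"] * (local_limits["time"] - s["time"])
                           + weights["price"] * (local_limits["price"] - s["price"]))

best = select_service(
    [{"name": "A", "time": 80, "price": 3.0}, {"name": "B", "time": 40, "price": 5.0}],
    {"time": 60, "price": 6.0}, {"time": 1.0, "price": 10.0})
```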

  8. Real-time inextensible surgical thread simulation.

    PubMed

    Xu, Lang; Liu, Qian

    2018-03-27

    This paper discusses a real-time simulation method of inextensible surgical thread based on the Cosserat rod theory using position-based dynamics (PBD). The method realizes stable twining and knotting of surgical thread while including inextensibility, bending, twisting and coupling effects. The Cosserat rod theory is used to model the nonlinear elastic behavior of surgical thread. The surgical thread model is solved with PBD to achieve a real-time, extremely stable simulation. Due to the one-dimensional linear structure of surgical thread, the direct solution of the distance constraint based on the tridiagonal matrix algorithm is used to enhance stretching resistance in every constraint projection iteration. In addition, continuous collision detection and collision response guarantee a large time step and high performance. Furthermore, friction is integrated into the constraint projection process to stabilize the twining of multiple threads and complex contact situations. Comparisons with existing methods show that the surgical thread maintains constant length under large deformation once the direct distance constraint is applied. The twining and knotting of multiple threads correspond to stable solutions of the contact and friction forces. A surgical suture scene is also modeled to demonstrate the practicality and simplicity of our method. Our method achieves stable and fast simulation of inextensible surgical thread. Benefiting from the unified particle framework, rigid bodies, elastic rods, and soft bodies can be simulated simultaneously. The method is appropriate for applications in virtual surgery that require multiple dynamic bodies.
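    For reference, the standard iterative PBD distance projection that the paper's direct tridiagonal solve replaces looks roughly like the sketch below (Gauss-Seidel form, inverse masses `w`); the paper's contribution is solving all chain constraints at once instead of sweeping them.

```python
import numpy as np

def project_distance(p, w, rest, iters=20):
    """Standard PBD distance-constraint projection for a thread modelled
    as a chain of particles p with inverse masses w and segment rest
    length `rest` (sketch)."""
    for _ in range(iters):
        for i in range(len(p) - 1):
            d = p[i + 1] - p[i]
            dist = np.linalg.norm(d)
            if dist < 1e-12:
                continue
            # Correction shared between the two endpoints by inverse mass,
            # driving |p[i+1] - p[i]| back to the rest length.
            c = (dist - rest) * d / dist / (w[i] + w[i + 1])
            p[i] += w[i] * c
            p[i + 1] -= w[i + 1] * c
    return p
```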

  9. A new algorithm for stand table projection models.

    Treesearch

    Quang V. Cao; V. Clark Baldwin

    1999-01-01

    The constrained least squares method is proposed as an algorithm for projecting stand tables through time. This method consists of three steps: (1) predict survival in each diameter class, (2) predict diameter growth, and (3) use the least squares approach to adjust the stand table to satisfy the constraints of future survival, average diameter, and stand basal area....
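    Step (3) admits a closed form when the constraints are linear in the class frequencies. The sketch below shows the minimal-adjustment solution, with an illustrative one-row constraint matrix; in practice the constraints on future survival, average diameter, and basal area would each supply a row of A.

```python
import numpy as np

def adjust_stand_table(x0, A, b):
    """Sketch of the adjustment step: minimally perturb the projected
    stand table x0 so the linear constraints A x = b hold. Closed-form
    equality-constrained least squares:
        x = x0 + A^T (A A^T)^{-1} (b - A x0)."""
    correction = A.T @ np.linalg.solve(A @ A.T, b - A @ x0)
    return x0 + correction

# Toy example: 3 diameter classes, constrain total tree count to 100.
x = adjust_stand_table(np.array([40.0, 35.0, 30.0]),
                       np.array([[1.0, 1.0, 1.0]]), np.array([100.0]))
```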

  10. Engineering design constraints of the lunar surface environment

    NASA Technical Reports Server (NTRS)

    Morrison, D. A.

    1992-01-01

    Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.

  11. Engineering design constraints of the lunar surface environment

    NASA Astrophysics Data System (ADS)

    Morrison, D. A.

    1992-02-01

    Living and working on the lunar surface will be difficult. Design of habitats, machines, tools, and operational scenarios in order to allow maximum flexibility in human activity will require paying attention to certain constraints imposed by conditions at the surface and the characteristics of lunar material. Primary design drivers for habitat, crew health and safety, and crew equipment are: ionizing radiation, the meteoroid flux, and the thermal environment. Secondary constraints for engineering derive from: the physical and chemical properties of lunar surface materials, rock distributions and regolith thicknesses, topography, electromagnetic properties, and seismicity. Protection from ionizing radiation is essential for crew health and safety. The total dose acquired by a crew member will be the sum of the dose acquired during EVA time (when shielding will be least) plus the dose acquired during time spent in the habitat (when shielding will be maximum). Minimizing the dose acquired in the habitat extends the time allowable for EVA's before a dose limit is reached. Habitat shielding is enabling, and higher precision in predicting secondary fluxes produced in shielding material would be desirable. Means for minimizing dose during a solar flare event while on extended EVA will be essential. Early warning of the onset of flare activity (at least a half-hour is feasible) will dictate the time available to take mitigating steps. Warning capability affects design of rovers (or rover tools) and site layout. Uncertainty in solar flare timing is a design constraint that points to the need for quickly accessible or constructible safe havens.

  12. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
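    The outer dual iteration can be sketched as a one-dimensional search, with `solve_dp` standing in for the unconstrained dynamic program; bisection is used here for simplicity in place of the paper's exponentially convergent root-finder.

```python
def optimize_dual(solve_dp, risk_bound, lo=0.0, hi=1e4, iters=60):
    """Sketch of the dualized outer loop. For a fixed multiplier lam, the
    inner problem is an ordinary DP whose stage cost is
    cost + lam * indicator(failure); solve_dp(lam) returns the resulting
    (expected cost, failure probability). We search for the lam at which
    the chance constraint becomes active."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        _, risk = solve_dp(mid)
        if risk > risk_bound:
            lo = mid   # still too risky: penalize failure more heavily
        else:
            hi = mid   # feasible: try a smaller penalty
    return hi          # smallest multiplier found that meets the bound
```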

  13. Numerical calculations of velocity and pressure distribution around oscillating airfoils

    NASA Technical Reports Server (NTRS)

    Bratanow, T.; Ecer, A.; Kobiske, M.

    1974-01-01

    An analytical procedure based on the Navier-Stokes equations was developed for analyzing and representing properties of unsteady viscous flow around oscillating obstacles. A variational formulation of the vorticity transport equation was discretized in finite element form and integrated numerically. At each time step of the numerical integration, the velocity field around the obstacle was determined for the instantaneous vorticity distribution from the finite element solution of Poisson's equation. The time-dependent boundary conditions around the oscillating obstacle were introduced as external constraints, using the Lagrangian Multiplier Technique, at each time step of the numerical integration. The procedure was then applied for determining pressures around obstacles oscillating in unsteady flow. The obtained results for a cylinder and an airfoil were illustrated in the form of streamlines and vorticity and pressure distributions.

  14. A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling

    NASA Technical Reports Server (NTRS)

    Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne

    2003-01-01

    Ideal cloud-resolving models contain little accumulative error. When their domain is so large that synoptic large-scale circulations are accommodated, they can be used for the simulation of the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for the models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. Alternatively speaking, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.

  15. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving the fluence map optimization with dose-volume constraints, which is one of the most essential tasks for inverse planning in IMRT. The framework of the proposed method is basically an iterative process which begins with a simple linearly constrained quadratic optimization model without considering any dose-volume constraints, and then the dose constraints for the voxels violating the dose-volume constraints are gradually added into the quadratic optimization model step by step until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. For choosing the proper candidate voxels for the current dose constraint adding, a so-called geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding. It can be regarded as an upgrade of the traditional dose sorting technique. The geometric explanation for the proposed method is also given and a proposition is proved to support our heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable iteration convergence. The new algorithm is tested on four cases, including a head-and-neck, a prostate, a lung, and an oropharyngeal case, and compared with the algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes. To some extent it is a more efficient technique for choosing constraints than dose sorting. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving the fluence map optimization with dose-volume constraints.
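    The overall iteration can be sketched as below, with the paper's geometric-distance ranking abstracted into a user-supplied `dv_check` that returns the next voxels to cap; the quadratic model and the solver choice are illustrative, not the paper's interior point implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fmo_iterate(D, p, dv_check, max_rounds=10):
    """Skeleton of the constraint-adding scheme: solve the quadratic model,
    find voxels still violating dose-volume constraints, add hard dose
    caps for the best-ranked violators, and re-solve. D maps beamlet
    weights to voxel doses; p is the prescription."""
    caps = []                       # list of (voxel_index, dose_cap)
    x = np.zeros(D.shape[1])
    for _ in range(max_rounds):
        cons = [{"type": "ineq", "fun": (lambda x, i=i, c=c: c - D[i] @ x)}
                for i, c in caps]   # added per-voxel dose caps so far
        res = minimize(lambda x: np.sum((D @ x - p) ** 2), x,
                       bounds=[(0, None)] * D.shape[1], constraints=cons)
        x = res.x
        new_caps = dv_check(D @ x)  # ranking rule lives here (e.g. geometric
        if not new_caps:            # distance sorting); empty means done
            return x
        caps.extend(new_caps)
    return x
```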

  16. Renormalized Hamiltonian for a peptide chain: Digitalizing the protein folding problem

    NASA Astrophysics Data System (ADS)

    Fernández, Ariel; Colubri, Andrés

    2000-05-01

    A renormalized Hamiltonian for a flexible peptide chain is derived to generate the long-time limit dynamics compatible with a coarsening of torsional conformation space. The renormalization procedure is tailored taking into account the coarse graining imposed by the backbone torsional constraints due to the local steric hindrance and the local backbone-side-group interactions. Thus, the torsional degrees of freedom for each residue are resolved modulo basins of attraction in its so-called Ramachandran map. This Ramachandran renormalization (RR) procedure is implemented so that the chain is energetically driven to form contact patterns as their respective collective topological constraints are fulfilled within the coarse description. In this way, the torsional dynamics are digitalized and become codified as an evolving pattern in a binary matrix. Each accepted Monte Carlo step in a canonical ensemble simulation is correlated with the real mean first passage time it takes to reach the destination coarse topological state. This real-time correlation enables us to test the RR dynamics by comparison with experimentally probed kinetic bottlenecks along the dominant folding pathway. Such intermediates are scarcely populated at any given time, but they determine the kinetic funnel leading to the active structure. This landscape region is reached through kinetically controlled steps needed to overcome the conformational entropy of the random coil. The results are specialized for the bovine pancreatic trypsin inhibitor, corroborating the validity of our method.

  17. Stochastic online appointment scheduling of multi-step sequential procedures in nuclear medicine.

    PubMed

    Pérez, Eduardo; Ntaimo, Lewis; Malavé, César O; Bailey, Carla; McCormack, Peter

    2013-12-01

    The increased demand for medical diagnosis procedures has been recognized as one of the contributors to the rise of health care costs in the U.S. in the last few years. Nuclear medicine is a subspecialty of radiology that uses advanced technology and radiopharmaceuticals for the diagnosis and treatment of medical conditions. Procedures in nuclear medicine require the use of radiopharmaceuticals, are multi-step, and have to be performed under strict time window constraints. These characteristics make the scheduling of patients and resources in nuclear medicine challenging. In this work, we derive a stochastic online scheduling algorithm for patient and resource scheduling in nuclear medicine departments that takes into account the time constraints imposed by the decay of the radiopharmaceuticals and the stochastic nature of the system when scheduling patients. We report on a computational study of the new methodology applied to a real clinic. We use both patient and clinic performance measures in our study. The results show that the new method schedules about 600 more patients per year on average than a scheduling policy that was used in practice by improving the way limited resources are managed at the clinic. The new methodology finds the best start time and resources to be used for each appointment. Furthermore, the new method decreases patient waiting time for an appointment by about two days on average.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca

    Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.

  19. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs are noisy, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations in both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region with explicit constraints, however, may have no feasible solution. In order to remedy this problem the mathematical community has developed different versions of a composite steps approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential two-step algorithms. In this paper, the similarities are described and an expansion of the two-step algorithm to the case of approximations is presented.

  20. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
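    The first approach can be caricatured with a power-of-two subcycling loop: cells are binned by time-step level, finer levels subcycle, and a `flux_update` callback (a stand-in for the CESE space-time flux integration, which is where the conservative interface bookkeeping actually happens) advances each cell. A minimal sketch:

```python
def march_local(cells, dt_min, n_levels, flux_update):
    """Sketch of time-accurate local time stepping with power-of-two
    levels: cells[k] holds the cells at level k, advanced with
    dt = dt_min * 2**k, so a coarse cell takes one step while its finer
    neighbours subcycle over the same time window."""
    for sub in range(2 ** (n_levels - 1)):   # substeps of the finest level
        for level, group in enumerate(cells):
            if sub % (2 ** level) == 0:      # this level steps now
                for cell in group:
                    flux_update(cell, dt_min * 2 ** level)
```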

  1. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
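    A projection-based semismooth Newton step for a box-constrained variational inequality can be sketched as follows, with a generic residual `F` and Jacobian `J` standing in for the discretized two-phase flow equations: the nonsmooth residual vanishes exactly when the saturation bounds and the discrete equations are simultaneously satisfied.

```python
import numpy as np

def semismooth_newton(F, J, x, lo, hi, tol=1e-10, iters=50):
    """Sketch: solve the box-constrained VI via the projected residual
    Phi(x) = x - clip(x - F(x), lo, hi), which is zero iff x solves the
    complementarity conditions. The generalized Jacobian uses the F-row
    where the bound is inactive and an identity row where it is active."""
    n = len(x)
    for _ in range(iters):
        y = x - F(x)
        phi = x - np.clip(y, lo, hi)
        if np.linalg.norm(phi) < tol:
            break
        G = np.eye(n)                      # rows where a bound is active
        inactive = (y > lo) & (y < hi)
        G[inactive] = J(x)[inactive]       # rows where Phi_i = F_i(x)
        x = x - np.linalg.solve(G, phi)    # semismooth Newton update
    return x
```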

  2. Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors

    DTIC Science & Technology

    2008-03-13

    …the Doctoral Thesis Committee of the doctoral student. … A strong incentive exists to reduce airfoil count in aircraft engine … Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried … on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time…

  3. The Observing Time Distribution in Major Groundbased Observatories - a Complex Task

    NASA Astrophysics Data System (ADS)

    Breysacher, J.

    The aim of the present paper is to give, first, a brief description of the different steps related to the general procedure of telescope time allocation at the European Southern Observatory, and then, a detailed review of the various constraints one has to take into account when preparing the final observing schedule on the various telescopes installed at La Silla. A succinct discussion will be given of how, in the future, remote control observing may facilitate the coordination of multiwavelength investigations.

  4. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

  5. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models

    DOE PAGES

    Haraldsdóttir, Hulda S.; Cousins, Ben; Thiele, Ines; ...

    2017-01-31

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.
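    One CHRR step can be sketched as follows, assuming the flux polytope is written as {v : A v <= b} and a rounding transform T has already been computed in the preprocessing step (the sketch assumes a bounded polytope, so every coordinate chord is finite):

```python
import numpy as np

def chrr_step(u, T, A, b, rng):
    """One coordinate hit-and-run step in rounded coordinates (sketch).
    Polytope points are parametrized as v = T u; the rounding transform T
    makes the set well-conditioned so the walk does not stall along thin
    directions."""
    d = rng.integers(len(u))          # pick a random coordinate direction
    ray = A @ T[:, d]                 # how each facet sees that direction
    slack = b - A @ (T @ u)           # current distance to each facet
    t_hi = np.min(slack[ray > 0] / ray[ray > 0])   # furthest forward step
    t_lo = np.max(slack[ray < 0] / ray[ray < 0])   # furthest backward step
    u = u.copy()
    u[d] += rng.uniform(t_lo, t_hi)   # uniform point on the feasible chord
    return u
```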

  6. Web-based software tool for constraint-based design specification of synthetic biological systems.

    PubMed

    Oberortner, Ernst; Densmore, Douglas

    2015-06-19

    miniEugene provides computational support for solving combinatorial design problems, enabling users to specify and enumerate designs for novel biological systems based on sets of biological constraints. This technical note presents a brief tutorial for biologists and software engineers in the field of synthetic biology on how to use miniEugene. After reading this technical note, users should know which biological constraints are available in miniEugene, understand the syntax and semantics of these constraints, and be able to follow a step-by-step guide to specify the design of a classical synthetic biological system, the genetic toggle switch [1]. We also provide links and references to more information on the miniEugene web application and the integration of the miniEugene software library into sophisticated Computer-Aided Design (CAD) tools for synthetic biology ( www.eugenecad.org ).

  7. Incorporating Demand and Supply Constraints into Economic Evaluations in Low‐Income and Middle‐Income Countries

    PubMed Central

    Mangham‐Jefferies, Lindsay; Gomez, Gabriela B.; Pitt, Catherine; Foster, Nicola

    2016-01-01

    Abstract Global guidelines for new technologies are based on cost and efficacy data from a limited number of trial locations. Country‐level decision makers need to consider whether cost‐effectiveness analyses used to inform global guidelines are sufficient for their situation or whether to use models that adjust cost‐effectiveness results taking into account setting‐specific epidemiological and cost heterogeneity. However, demand and supply constraints will also impact cost‐effectiveness by influencing the standard of care and the use and implementation of any new technology. These constraints may also vary substantially by setting. We present two case studies of economic evaluations of the introduction of new diagnostics for malaria and tuberculosis control. These case studies are used to analyse how the scope of economic evaluations of each technology expanded to account for and then address demand and supply constraints over time. We use these case studies to inform a conceptual framework that can be used to explore the characteristics of intervention complexity and the influence of demand and supply constraints. Finally, we describe a number of feasible steps for researchers who wish to apply our framework in cost‐effectiveness analyses. PMID:26786617

  8. Analytical design of an industrial two-term controller for optimal regulatory control of open-loop unstable processes under operational constraints.

    PubMed

    Tchamna, Rodrigue; Lee, Moonyong

    2018-01-01

    This paper proposes a novel optimization-based approach for the design of an industrial two-term proportional-integral (PI) controller for the optimal regulatory control of unstable processes subjected to three common operational constraints related to the process variable, the manipulated variable and its rate of change. To derive analytical design relations, the constrained optimal control problem in the time domain was transformed into an unconstrained optimization problem in a new parameter space via an effective parameterization. The resulting optimal PI controller has been verified to yield optimal performance and stability of an open-loop unstable first-order process under operational constraints. The proposed analytical design method explicitly takes into account the operational constraints in the controller design stage and also provides useful insights into the optimal controller design. Practical procedures for designing optimal PI parameters and a feasible constraint set, without recourse to complex optimization steps, are also proposed. The proposed controller was compared with several other PI controllers to illustrate its performance. The robustness of the proposed controller against plant-model mismatch has also been investigated. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
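    The three operational constraints have simple runtime counterparts. As a minimal sketch (a generic clamped PI loop, not the paper's analytical design relations), a discrete controller that respects them might look like:

```python
def constrained_pi(kc, ti, dt, u_min, u_max, du_max):
    """Discrete PI controller enforcing level and rate limits on the
    manipulated variable, with simple conditional anti-windup (the
    integral is frozen whenever the output is clipped)."""
    integral, u_prev = 0.0, 0.0
    def control(error):
        nonlocal integral, u_prev
        u = kc * (error + integral / ti)                 # ideal PI output
        u = max(u_prev - du_max * dt,
                min(u_prev + du_max * dt, u))            # rate-of-change limit
        u_sat = max(u_min, min(u_max, u))                # level limit
        if u == u_sat:
            integral += error * dt                       # integrate only if unclipped
        u_prev = u_sat
        return u_sat
    return control

ctrl = constrained_pi(kc=2.0, ti=10.0, dt=0.1, u_min=0.0, u_max=100.0, du_max=5.0)
```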

  9. [Addictions: Motivated or forced care].

    PubMed

    Cottencin, Olivier; Bence, Camille

    2016-12-01

    Patients presenting with addictions are often obliged to consult. This constraint can be explicit (partner, children, parents, doctor, police, justice) or implicit (for their children, for their families, or for their health). Thus, beyond the paradox of caring for subjects who do not ask for treatment, the caregiver also faces a double bind, being regarded either as an enforcer of the social order or as a helper of patients. The transtheoretical model of change is complex, showing that change is neither fixed in time nor permanent for a given individual. This model accommodates ambivalence, resistance and even relapse, yet it still regards constraint more as a brake than as an effective tool. The therapist must have adequate communication tools to enable everyone (coerced or not) to understand that involvement in care will allow them to regain their free will, even if reaching that point required going through coercion. We propose in this article to detail the first steps with the patient presenting with addiction: looking for the constraint (implicit or explicit), working with the constraint, avoiding creating resistance ourselves, and making constraint a powerful motivator for change. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Simplicity constraints: A 3D toy model for loop quantum gravity

    NASA Astrophysics Data System (ADS)

    Charles, Christoph

    2018-05-01

    In loop quantum gravity, tremendous progress has been made using the Ashtekar-Barbero variables. These variables, defined in a gauge fixing of the theory, correspond to a parametrization of the solutions of the so-called simplicity constraints. Their geometrical interpretation is however unsatisfactory as they do not constitute a space-time connection. It would be possible to resolve this point by using a full Lorentz connection or, equivalently, by using the self-dual Ashtekar variables. This leads however to simplicity constraints or reality conditions which are notoriously difficult to implement in the quantum theory. We explore in this paper the possibility of using completely degenerate actions to impose such constraints at the quantum level in the context of canonical quantization. To do so, we define a simpler model, in 3D, with similar constraints by extending the phase space to include an independent vielbein. We define the classical model and show that a precise quantum theory by gauge unfixing can be defined out of it, completely equivalent to the standard 3D Euclidean quantum gravity. We discuss possible future explorations around this model as it could help as a stepping stone to define full-fledged covariant loop quantum gravity.

  11. Continuous bind-and-elute protein A capture chromatography: Optimization under process scale column constraints and comparison to batch operation.

    PubMed

    Kaltenbrunner, Oliver; Diaz, Luis; Hu, Xiaochun; Shearer, Michael

    2016-07-08

    Recently, continuous downstream processing has become a topic of discussion and analysis at conferences while no industrial applications of continuous downstream processing for biopharmaceutical manufacturing have been reported. There is significant potential to increase the productivity of a Protein A capture step by converting the operation to simulated moving bed (SMB) mode. In this mode, shorter columns are operated at higher process flow and corresponding short residence times. The ability to significantly shorten the product residence time during loading without appreciable capacity loss can dramatically increase productivity of the capture step and consequently reduce the amount of Protein A resin required in the process. Previous studies have not considered the physical limitations of how short columns can be packed and the flow rate limitations due to pressure drop of stacked columns. In this study, we are evaluating the process behavior of a continuous Protein A capture column cycling operation under the known pressure drop constraints of a compressible media. The results are compared to the same resin operated under traditional batch operating conditions. We analyze the optimum system design point for a range of feed concentrations, bed heights, and load residence times and determine achievable productivity for any feed concentration and any column bed height. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:938-948, 2016.

  12. Insight into the ten-penny problem: guiding search by constraints and maximization.

    PubMed

    Öllinger, Michael; Fedor, Anna; Brodt, Svenja; Szathmáry, Eörs

    2017-09-01

    For a long time, insight problem solving has been understood either as nothing special or as a particular class of problem solving. The first view implies the need to find efficient heuristics that restrict the search space; the second, the need to overcome self-imposed constraints. Recently, promising hybrid cognitive models attempt to merge both approaches. In this vein, we were interested in the interplay of constraints and heuristic search when problem solvers were asked to solve a difficult multi-step problem, the ten-penny problem. In three experimental groups and one control group (N = 4 × 30) we aimed to reveal what constraints drive problem difficulty in this problem, and how relaxing constraints and providing an efficient search criterion facilitate the solution. We also investigated how the search behavior of successful problem solvers and non-solvers differs. We found that relaxing constraints was necessary but not sufficient to solve the problem. Without efficient heuristics that facilitate the restriction of the search space and testing the progress of the problem solving process, the relaxation of constraints was not effective. Relaxing constraints and applying the search criterion are both necessary to effectively increase solution rates. We also found that successful solvers showed promising moves earlier and had a higher maximization and variation rate across solution attempts. We propose that this finding sheds light on how different strategies contribute to solving difficult problems. Finally, we speculate about the implications of our findings for insight problem solving.

  13. A discrete classical space-time could require 6 extra-dimensions

    NASA Astrophysics Data System (ADS)

    Guillemant, Philippe; Medale, Marc; Abid, Cherifa

    2018-01-01

    We consider a discrete space-time in which conservation laws are computed in such a way that the density of information is kept bounded. We use a 2D billiard as a toy model to compute the uncertainty propagation in ball positions after every shock and the corresponding loss of phase information. Our main result is the computation of a critical time step above which billiard calculations are no longer deterministic, meaning that a multiverse of distinct billiard histories begins to appear, caused by the lack of information. Then, we highlight unexpected properties of this critical time step and the subsequent exponential evolution of the number of histories with time, observing that after a certain duration all billiard states could become possible final states, independent of initial conditions. We conclude that if our space-time is really a discrete one, one would need to introduce extra-dimensions in order to provide supplementary constraints that specify which history should be played.
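    The critical-step argument rests on geometric growth of uncertainty. Under the common billiard assumption that each collision amplifies angular uncertainty by roughly the free-path-to-radius ratio (an assumption of this sketch, not a quantity taken from the paper), a back-of-the-envelope count of shocks until determinism is lost reads:

```python
import math

def shocks_until_indeterminate(delta0, free_path, radius, table):
    """Geometric uncertainty growth: delta_n = delta0 * gain**n, with
    gain ~ free_path / radius per collision. Determinism is lost once
    the position uncertainty reaches the table scale."""
    gain = free_path / radius   # amplification per collision (gain > 1)
    return math.ceil(math.log(table / delta0) / math.log(gain))

# e.g. 1e-15 m initial uncertainty, 0.5 m between shocks, 3 cm balls, 2 m table
n = shocks_until_indeterminate(1e-15, 0.5, 0.03, 2.0)
```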

  14. Assessment of power step performances of variable speed pump-turbine unit by means of hydro-electrical system simulation

    NASA Astrophysics Data System (ADS)

    Béguin, A.; Nicolet, C.; Hell, J.; Moreira, C.

    2017-04-01

    The paper explores the improvement in ancillary services that variable speed technologies can provide for the case of an existing pumped storage power plant of 2x210 MVA whose conversion from fixed speed to variable speed is investigated, with a focus on the power step performances of the units. First, two motor-generator variable speed technologies are introduced, namely the Doubly Fed Induction Machine (DFIM) and the Full Scale Frequency Converter (FSFC). Then a detailed numerical simulation model of the investigated power plant, used to simulate the power step response and comprising the waterways, the pump-turbine unit, the motor-generator, the grid connection and the control systems, is presented. Hydroelectric system time domain simulations are performed in order to determine the shortest response time achievable, taking into account the constraints from the maximum penstock pressure and from the rotational speed limits. It is shown that the maximum instantaneous power step response up and down depends on the hydro-mechanical characteristics of the pump-turbine unit and on the motor-generator speed limits. As a result, for the investigated test case, the FSFC solution offers the best power step response performances.

  15. Integrated payload and mission planning, phase 3. Volume 2: Logic/Methodology for preliminary grouping of spacelab and mixed cargo payloads

    NASA Technical Reports Server (NTRS)

    Rodgers, T. E.; Johnson, J. F.

    1977-01-01

    The logic and methodology for a preliminary grouping of Spacelab and mixed-cargo payloads is proposed in a form that can be readily coded into a computer program by NASA. The logic developed for this preliminary cargo grouping analysis is summarized. Principal input data include the NASA Payload Model, payload descriptive data, Orbiter and Spacelab capabilities, and NASA guidelines and constraints. The first step in the process is a launch interval selection in which the time interval for payload grouping is identified. Logic flow steps are then taken to group payloads and define flight configurations based on criteria that includes dedication, volume, area, orbital parameters, pointing, g-level, mass, center of gravity, energy, power, and crew time.

  16. Scheduling double round-robin tournaments with divisional play using constraint programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    We study a tournament format that extends a traditional double round-robin format with divisional single round-robin tournaments. Elitserien, the top Swedish handball league, uses such a format for its league schedule. We present a constraint programming model that characterizes the general double round-robin plus divisional single round-robin format. This integrated model allows scheduling to be performed in a single step, as opposed to common multistep approaches that decompose scheduling into smaller problems and possibly miss optimal solutions. In addition to general constraints, we introduce Elitserien-specific requirements for its tournament. These general and league-specific constraints allow us to identify implicit and symmetry-breaking properties that reduce the time to solution from hours to seconds. A scalability study of the number of teams shows that our approach is reasonably fast for even larger league sizes. The experimental evaluation of the integrated approach takes considerably less computational effort to schedule Elitserien than does the previous decomposed approach. © 2016 Elsevier B.V. All rights reserved.

  17. Dynamic ADMM for Real-Time Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearization of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of noncontrollable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.

  18. Dynamic ADMM for Real-Time Optimal Power Flow: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearizations of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of non-controllable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.
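    Schematically, one measurement-assisted ADMM iteration has the familiar three updates, with the dual step fed by network measurements rather than a full model evaluation. The sketch below is generic; the names, the projection operator, and the exact placement of the measurement are placeholders, not the paper's update equations.

```python
import numpy as np

def admm_opf_step(x, z, lam, grad_f, project_feasible, y_meas, rho=1.0, alpha=0.1):
    """One modified ADMM iteration in the spirit of the paper (sketch):
    a gradient step on the objective built from the linearized power-flow
    model, a projection enforcing operational limits, and a dual update
    driven by measured network quantities y_meas(x), which is what lets
    the loop run in real time without full-network metering."""
    x = x - alpha * (grad_f(x) + lam + rho * (x - z))  # primal: DER setpoints
    z = project_feasible(x + lam / rho)                # enforce network limits
    lam = lam + rho * (y_meas(x) - z)                  # dual: measurement-based
    return x, z, lam
```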

  19. Analysis of the sweeped actuator line method

    DOE PAGES

    Nathan, Jörn; Masson, Christian; Dufresne, Louis; ...

    2015-10-16

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the blade tips, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. The actuator line, however, also brings temporal and spatial constraints, such as the need for a much smaller time step than the actuator disk: while the latter only has to obey the Courant-Friedrichs-Lewy condition, the former is also restricted by the grid resolution and the rotor tip speed. Additionally, the spatial resolution has to be finer for the actuator line than for the actuator disk in order to resolve the tip vortices well. This work is therefore dedicated to examining a method in between the actuator line and the actuator disk, which is able to model transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. To this end, a larger time step is used and the blade forces are swept over a certain area. The main focus of this article is on the blade tip vortex generation in comparison with the standard actuator line and actuator disk.
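
    To make the two time-step limits concrete, a back-of-the-envelope sketch; all numbers are illustrative assumptions, not values from the paper:

```python
# Two time-step limits for rotor simulations (illustrative numbers only).
C_cfl = 0.5      # Courant number
dx    = 2.0      # grid spacing near the rotor [m]
u_inf = 10.0     # freestream velocity [m/s]
omega = 1.2      # rotor angular speed [rad/s]
R     = 60.0     # rotor radius [m]

dt_disk = C_cfl * dx / u_inf   # actuator disk: advective CFL condition only
dt_line = dx / (omega * R)     # actuator line: blade tip moves < one cell per step

print(f"actuator-disk dt ~ {dt_disk:.3f} s")
print(f"actuator-line dt ~ {dt_line:.3f} s (tip-speed limited)")
```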

  20. Analysis of the sweeped actuator line method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nathan, Jörn; Masson, Christian; Dufresne, Louis

    The actuator line method made it possible to describe the near wake of a wind turbine more accurately than the actuator disk method. Whereas the actuator line generates the helicoidal vortex system shed from the blade tips, the actuator disk method sheds a vortex sheet from the edge of the rotor plane. The actuator line, however, also brings temporal and spatial constraints, such as the need for a much smaller time step than the actuator disk: while the latter only has to obey the Courant-Friedrichs-Lewy condition, the former is also restricted by the grid resolution and the rotor tip speed. Additionally, the spatial resolution has to be finer for the actuator line than for the actuator disk in order to resolve the tip vortices well. This work is therefore dedicated to examining a method in between the actuator line and the actuator disk, which is able to model transient behavior, such as the rotating blades, but which also relaxes the temporal constraint. To this end, a larger time step is used and the blade forces are swept over a certain area. The main focus of this article is on the blade tip vortex generation in comparison with the standard actuator line and actuator disk.

  1. The impact of weight classification on safety: timing steps to adapt to external constraints

    PubMed Central

    Gill, S.V.

    2015-01-01

    Objectives: The purpose of the current study was to evaluate how weight classification influences safety by examining adults’ ability to meet a timing constraint: walking to the pace of an audio metronome. Methods: With a cross-sectional design, walking parameters were collected as 55 adults with normal (n=30) and overweight (n=25) body mass index scores walked to slow, normal, and fast audio metronome paces. Results: Between group comparisons showed that at the fast pace, those with overweight body mass index (BMI) had longer double limb support and stance times and slower cadences than the normal weight group (all ps<0.05). Examinations of participants’ ability to meet the metronome paces revealed that participants who were overweight had higher cadences at the slow and fast paces (all ps<0.05). Conclusions: Findings suggest that those with overweight BMI alter their gait to maintain biomechanical stability. Understanding how excess weight influences gait adaptation can inform interventions to improve safety for individuals with obesity. PMID:25730658

  2. Noisy image magnification with total variation regularization and order-changed dictionary learning

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-12-01

    Noisy low resolution (LR) images are what one typically obtains in real applications, but many existing image magnification algorithms cannot produce good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm takes advantage of both regularization-based and learning-based methods: the first step is based on total variation (TV) regularization and the second step on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while simultaneously suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe, and it also provides better visual quality on natural LR images.
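
    A hedged sketch of step one only, approximating the constrained TV model by upsampling followed by TV denoising with scikit-image; the paper couples magnification and denoising in a single model, and step two's order-changed dictionary learning is not reproduced here. The input image is a made-up stand-in.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.transform import resize

# Stand-in noisy LR image (random texture plus noise), not real data.
rng = np.random.default_rng(0)
lr = rng.random((32, 32)) + 0.1 * rng.standard_normal((32, 32))

hr = resize(lr, (64, 64), order=3)            # 2x bicubic magnification
hr_tv = denoise_tv_chambolle(hr, weight=0.1)  # TV step suppresses the noise
print(hr_tv.shape)
```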

  3. Automated region selection for analysis of dynamic cardiac SPECT data

    NASA Astrophysics Data System (ADS)

    Di Bella, E. V. R.; Gullberg, G. T.; Barclay, A. B.; Eisner, R. L.

    1997-06-01

    Dynamic cardiac SPECT using Tc-99m labeled teboroxime can provide kinetic parameters (washin, washout) indicative of myocardial blood flow. A time-consuming and subjective step of the data analysis is drawing regions of interest to delineate blood pool and myocardial tissue regions. The time-activity curves of the regions are then used to estimate local kinetic parameters. In this work, the appropriate regions are found automatically, in a manner similar to that used for calculating maximum count circumferential profiles in conventional static cardiac studies. The drawbacks to applying standard static circumferential profile methods are the high noise level and high liver uptake common in dynamic teboroxime studies. Searching along each ray for maxima to locate the myocardium does not typically provide useful information. Here we propose an iterative scheme in which constraints are imposed on the radii searched along each ray. The constraints are based on the shape of the time-activity curves of the circumferential profile members and on an assumption that the short axis slices are approximately circular. The constraints eliminate outliers and help to reduce the effects of noise and liver activity. Kinetic parameter estimates from the automatically generated regions were comparable to estimates from manually selected regions in dynamic canine teboroxime studies.

  4. Constraint Embedding for Multibody System Dynamics

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan

    2009-01-01

    This paper describes a constraint embedding approach for handling local closure constraints in multibody system dynamics. The approach uses spatial operator techniques to eliminate local-loop constraints from the system and effectively convert it into a tree-topology system. This allows the direct derivation of recursive O(N) techniques for solving the system dynamics, avoiding the expensive steps that would otherwise be required for handling closed-chain dynamics. The approach is very effective for systems where the constraints are confined to small subgraphs within the system topology. The paper provides background on the spatial operator O(N) algorithms and the extensions for handling embedded constraints, and concludes with some examples of such constraints.

  5. Animal Construction as a Free Boundary Problem: Evidence of Fractal Scaling Laws

    NASA Astrophysics Data System (ADS)

    Nicolis, S. C.

    2014-12-01

    We suggest that the main features of animal construction can be understood as the sum of locally independent actions of non-interacting individuals subjected to the global constraints imposed by the nascent structure. We first formulate an analytically tractable macroscopic description of construction which predicts a 1/3 power law for how the length of the structure grows with time. We further show how the power law is modified when biases in the random walk performed by the constructors, as well as halting times between consecutive construction steps, are included.

  6. Novel Fourier-domain constraint for fast phase retrieval in coherent diffraction imaging.

    PubMed

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Fink, Hans-Werner

    2011-09-26

    Coherent diffraction imaging (CDI) for visualizing objects at atomic resolution has been realized as a promising tool for imaging single molecules. Drawbacks of CDI are associated with the difficulty of the numerical phase retrieval from experimental diffraction patterns, a fact which stimulated the search for better numerical methods and alternative experimental techniques. Common phase retrieval methods are based on iterative procedures which propagate the complex-valued wave between the object and detector planes, applying constraints in both planes. While the detector-plane constraint employed in most phase retrieval methods requires the amplitude of the complex wave to equal the square root of the measured intensity, we propose a novel Fourier-domain constraint based on an analogy to holography. Our method achieves a low-resolution reconstruction already in the first step, followed by a high-resolution reconstruction after further steps. In comparison to conventional schemes, this Fourier-domain constraint results in fast and reliable convergence of the iterative reconstruction process. © 2011 Optical Society of America

  7. Coastal Acoustic Tomography Data Constraints Applied to a Coastal Ocean Circulation Model

    DTIC Science & Technology

    1994-04-01

    A direct insertion scheme for assimilating coastal acoustic tomographic (CAT) vertical... days of this control run were taken to represent "actuality." A series of assimilation experiments was carried out in which CAT temperature slices... synthesized from different CAT configurations based on the "true ocean" were inserted into the model at various time steps to examine the convergence of

  8. Incorporating Demand and Supply Constraints into Economic Evaluations in Low-Income and Middle-Income Countries.

    PubMed

    Vassall, Anna; Mangham-Jefferies, Lindsay; Gomez, Gabriela B; Pitt, Catherine; Foster, Nicola

    2016-02-01

    Global guidelines for new technologies are based on cost and efficacy data from a limited number of trial locations. Country-level decision makers need to consider whether cost-effectiveness analyses used to inform global guidelines are sufficient for their situation or whether to use models that adjust cost-effectiveness results taking into account setting-specific epidemiological and cost heterogeneity. However, demand and supply constraints will also impact cost-effectiveness by influencing the standard of care and the use and implementation of any new technology. These constraints may also vary substantially by setting. We present two case studies of economic evaluations of the introduction of new diagnostics for malaria and tuberculosis control. These case studies are used to analyse how the scope of economic evaluations of each technology expanded to account for and then address demand and supply constraints over time. We use these case studies to inform a conceptual framework that can be used to explore the characteristics of intervention complexity and the influence of demand and supply constraints. Finally, we describe a number of feasible steps for researchers who wish to apply our framework in cost-effectiveness analyses. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  9. Configuring Airspace Sectors with Approximate Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Bloem, Michael; Gupta, Pramod

    2010-01-01

    In response to changing traffic and staffing conditions, supervisors dynamically configure airspace sectors by assigning them to control positions. A finite horizon airspace sector configuration problem models this supervisor decision. The problem is to select an airspace configuration at each time step while considering a workload cost, a reconfiguration cost, and a constraint on the number of control positions at each time step. Three algorithms for this problem are proposed and evaluated: a myopic heuristic, an exact dynamic programming algorithm, and a rollouts approximate dynamic programming algorithm. On problem instances from current operations with only dozens of possible configurations, an exact dynamic programming solution gives the optimal cost value. The rollouts algorithm achieves costs within 2% of optimal for these instances, on average. For larger problem instances that are representative of future operations and have thousands of possible configurations, excessive computation time prohibits the use of exact dynamic programming. On such problem instances, the rollouts algorithm reduces the cost achieved by the heuristic by more than 15% on average with an acceptable computation time.
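
    As a toy illustration of the exact dynamic programming formulation (not NASA's actual cost model or configuration data), a backward recursion over hypothetical configurations with a per-step workload cost and a reconfiguration penalty:

```python
import numpy as np

# Toy finite-horizon configuration DP: choose one of K configurations at each
# of T time steps, paying a workload cost plus a penalty for switching.
rng = np.random.default_rng(1)
T, K = 8, 4
workload = rng.uniform(0, 10, size=(T, K))   # cost of config k at time t (made up)
switch = 3.0                                  # reconfiguration penalty

V = np.zeros((T + 1, K))                      # V[T] = 0: nothing beyond the horizon
for t in range(T - 1, -1, -1):
    for k in range(K):
        # cost-to-go: stay in k or switch to any k' at time t+1
        trans = V[t + 1] + switch * (np.arange(K) != k)
        V[t, k] = workload[t, k] + trans.min()

print("optimal total cost:", V[0].min())
```

    A rollout algorithm like the one in the paper would replace the exact inner minimization with simulation of a base heuristic when the configuration space is too large to enumerate.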

  10. Time limited field of regard search

    NASA Astrophysics Data System (ADS)

    Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho

    2005-05-01

    Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.

  11. γ parameter and Solar System constraint in chameleon-Brans-Dicke theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaidi, Kh.; Mohammadi, A.; Sheikhahmadi, H.

    2011-05-15

    The post-Newtonian parameter is considered in the chameleon-Brans-Dicke model. In the first step, the general form of this parameter and also of the effective gravitational constant is obtained. An arbitrary function f(Φ), which encodes the coupling between matter and the scalar field, is introduced to investigate the validity of the Solar System constraint. It is shown that the chameleon-Brans-Dicke model can satisfy the Solar System constraint and gives an ω parameter of order 10^4, comparable to the constraint indicated in [19].
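
    For orientation, the standard Brans-Dicke result (without the chameleon coupling f(Φ); the paper derives a modified version) relates the post-Newtonian parameter γ to ω, and the Cassini bound then forces ω to be large:

```latex
% Standard Brans-Dicke PPN relation, shown only as background to the abstract.
\gamma = \frac{1+\omega}{2+\omega},
\qquad
|\gamma - 1| = \frac{1}{2+\omega} \lesssim 2.3\times 10^{-5}
\;\Longrightarrow\; \omega \gtrsim 4\times 10^{4}.
```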

  12. Modified unified kinetic scheme for all flow regimes.

    PubMed

    Liu, Sha; Zhong, Chengwen

    2012-06-01

    A modified unified kinetic scheme for the prediction of fluid flow behaviors in all flow regimes is described. The time evolution of macrovariables at the cell interface is calculated with the idea that both free transport and collision mechanisms should be considered. The time evolution of macrovariables is obtained through the conservation constraints, and the time evolution of the local Maxwellian distribution is obtained directly through the one-to-one mapping from the evolution of macrovariables. These improvements capture more of the flow physics and yield more accurate numerical results in all flow regimes, especially in the complex transition regime. In addition, the improvements introduce no extra computational complexity.

  13. Constraint factor graph cut-based active contour method for automated cellular image segmentation in RNAi screening.

    PubMed

    Chen, C; Li, H; Zhou, X; Wong, S T C

    2008-05-01

    Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions on spiky cells. Then, the constraint factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared against the ground truth (manual labelling by experts on RNAi screening data), our method achieves higher accuracy than seeded watershed. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.

  14. Computer aided segmentation of kidneys using locally shape constrained deformable models on CT images

    NASA Astrophysics Data System (ADS)

    Erdt, Marius; Sakas, Georgios

    2010-03-01

    This work presents a novel approach for model-based segmentation of the kidney in images acquired by Computed Tomography (CT). The developed computer-aided segmentation system is expected to support computer-aided diagnosis and operation planning. We have developed a deformable-model approach based on local shape constraints that prevents the model from deforming into neighboring structures while allowing the global shape to adapt freely to the data. Those local constraints are derived from the anatomical structure of the kidney and the presence and appearance of neighboring organs. The adaptation process is guided by a rule-based deformation logic in order to improve the robustness of the segmentation in areas of diffuse organ boundaries. Our workflow consists of two steps: (1) user-guided positioning and (2) automatic model adaptation using affine and free-form deformation in order to robustly extract the kidney. In cases which show pronounced pathologies, the system also offers real-time mesh editing tools for a quick refinement of the segmentation result. Evaluation results based on 30 clinical cases using CT data sets show an average Dice coefficient of 93% compared to the ground truth. The results are therefore in most cases comparable to manual delineation. Computation times of the automatic adaptation step are lower than 6 seconds, which makes the proposed system suitable for an application in clinical practice.

  15. Using activity-based costing and theory of constraints to guide continuous improvement in managed care.

    PubMed

    Roybal, H; Baxendale, S J; Gupta, M

    1999-01-01

    Activity-based costing and the theory of constraints have been applied successfully in many manufacturing organizations. Recently, those concepts have been applied in service organizations. This article describes the application of activity-based costing and the theory of constraints in a managed care mental health and substance abuse organization. One of the unique aspects of this particular application was the integration of activity-based costing and the theory of constraints to guide process improvement efforts. This article describes the activity-based costing model and the application of the theory of constraints' focusing steps, with an emphasis on unused capacities of activities in the organization.

  16. Comparison of IMRT planning with two-step and one-step optimization: a strategy for improving therapeutic gain and reducing the integral dose

    NASA Astrophysics Data System (ADS)

    Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.

    2009-12-01

    The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.

  17. Implicit and semi-implicit schemes in the Versatile Advection Code: numerical tests

    NASA Astrophysics Data System (ADS)

    Toth, G.; Keppens, R.; Botchev, M. A.

    1998-04-01

    We describe and evaluate various implicit and semi-implicit time integration schemes applied to the numerical simulation of hydrodynamical and magnetohydrodynamical problems. The schemes were implemented recently in the software package Versatile Advection Code, which uses modern shock capturing methods to solve systems of conservation laws with optional source terms. The main advantage of implicit solution strategies over explicit time integration is that the restrictive constraint on the allowed time step can be (partially) eliminated, thus the computational cost is reduced. The test problems cover one and two dimensional, steady state and time accurate computations, and the solutions contain discontinuities. For each test, we confront explicit with implicit solution strategies.
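
    A minimal sketch, independent of the Versatile Advection Code, of why implicit integration pays off: for 1D diffusion the explicit step is capped at dt ≤ dx²/(2D), while one backward-Euler step can cover many explicit steps at once.

```python
import numpy as np

# 1D periodic diffusion u_t = D u_xx: explicit update vs. one implicit step.
N, D = 64, 1.0
dx = 1.0 / N
dt_max = dx**2 / (2 * D)        # explicit stability bound
dt_exp = 0.8 * dt_max           # safe explicit step
nsteps = 50
dt_imp = nsteps * dt_exp        # one implicit step covering the same time span

L = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
L[0, -1] = L[-1, 0] = 1.0       # periodic boundaries
L *= D / dx**2

xg = np.arange(N) * dx
u0 = np.exp(-((xg - 0.5) ** 2) / 0.01)   # Gaussian bump initial condition

u_exp = u0.copy()
for _ in range(nsteps):                   # many small explicit steps
    u_exp = u_exp + dt_exp * (L @ u_exp)

u_imp = np.linalg.solve(np.eye(N) - dt_imp * L, u0)   # one backward-Euler step

print("max difference (first-order in dt):", np.abs(u_imp - u_exp).max())
```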

  18. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation

    NASA Astrophysics Data System (ADS)

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-01

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
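
    A sketch of the half-step kinetic-energy idea on a 1D harmonic oscillator; averaging the kinetic energies at t - Δt/2 and t + Δt/2, as below, is one plausible reading of the paper's definition, not its exact formula.

```python
import numpy as np

# Velocity Verlet for a harmonic oscillator (m = k = 1) with a deliberately
# large time step, tracking KE from on-step and from half-step velocities.
dt, nsteps = 0.2, 2000
x, v = 1.0, 0.0          # v is the on-step velocity v(t)
f = -x
ke_on, ke_half = [], []
v_half_prev = None

for _ in range(nsteps):
    v_half = v + 0.5 * dt * f          # v(t + dt/2)
    x = x + dt * v_half                # x(t + dt)
    f = -x
    v = v_half + 0.5 * dt * f          # v(t + dt), on-step velocity
    ke_on.append(0.5 * v**2)
    if v_half_prev is not None:        # average KE at the two half steps
        ke_half.append(0.25 * (v_half_prev**2 + v_half**2))
    v_half_prev = v_half

print("mean KE, on-step velocities  :", np.mean(ke_on))
print("mean KE, half-step velocities:", np.mean(ke_half))
```

    The two averages differ at O(Δt²), which is the kind of time-step-dependent bias in pressure and temperature control that motivates choosing the definition carefully.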

  19. Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation.

    PubMed

    Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji

    2018-04-28

    In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dipalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.

  20. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem, and in geometric SIFT, area constraints help validate the candidate matches and decrease searching complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate frame of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
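
    A hedged sketch of the coarse step using generic OpenCV SIFT with a ratio test; the paper's SR-SIFT, block division, and geometric constraints are not reproduced, and the file names are placeholders.

```python
import cv2

# Coarse step of a coarse-to-fine registration: match heavily downsampled
# images first, then use those matches to constrain the fine search.
ref  = cv2.imread("reference.tif", cv2.IMREAD_GRAYSCALE)   # assumed inputs
sens = cv2.imread("sensing.tif", cv2.IMREAD_GRAYSCALE)

small_ref  = cv2.resize(ref,  None, fx=0.25, fy=0.25)
small_sens = cv2.resize(sens, None, fx=0.25, fy=0.25)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(small_ref, None)
k2, d2 = sift.detectAndCompute(small_sens, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
        if m.distance < 0.75 * n.distance]   # Lowe ratio test

# The coarse matches would then bound the search regions for the fine,
# full-resolution matching step.
print(f"coarse matches after ratio test: {len(good)}")
```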

  1. Stereoscopic filming for investigating evasive side-stepping and anterior cruciate ligament injury risk

    NASA Astrophysics Data System (ADS)

    Lee, Marcus J. C.; Bourke, Paul; Alderson, Jacqueline A.; Lloyd, David G.; Lay, Brendan

    2010-02-01

    Non-contact anterior cruciate ligament (ACL) injuries are serious and debilitating, often resulting from the performance of evasive side-stepping (Ssg) by team sport athletes. Previous laboratory-based investigations of evasive Ssg have used generic visual stimuli to simulate the realistic time and space constraints that athletes experience in the preparation and execution of the manoeuvre. However, the use of unrealistic visual stimuli to impose these constraints may not accurately identify the relationship between the perceptual demands and ACL loading during Ssg in actual game environments. We propose that stereoscopically filmed footage featuring sport-specific opposing defender/s simulating a tackle on the viewer, when used as visual stimuli, could improve the ecological validity of laboratory-based investigations of evasive Ssg. Because these scenarios demand precision, not merely the experience of viewing depth, a rigorous filming process had to be undertaken, built on key geometric considerations and on equipment development enabling a separation of 6.5 cm between two commodity cameras. Within safety limits, this could be an invaluable tool in enabling more accurate investigations of the associations between evasive Ssg and ACL injury risk.

  2. A Vertically Lagrangian Finite-Volume Dynamical Core for Global Models

    NASA Technical Reports Server (NTRS)

    Lin, Shian-Jiann

    2003-01-01

    A finite-volume dynamical core with a terrain-following Lagrangian control-volume discretization is described. The vertically Lagrangian discretization reduces the dimensionality of the physical problem from three to two, with the resulting dynamical system closely resembling that of the shallow water dynamical system. The 2D horizontal-to-Lagrangian-surface transport and dynamical processes are then discretized using the genuinely conservative flux-form semi-Lagrangian algorithm. Time marching is split-explicit, with a large time step for scalar transport and a small fractional time step for the Lagrangian dynamics, which permits the accurate propagation of fast waves. A mass, momentum, and total energy conserving algorithm is developed for mapping the state variables periodically from the floating Lagrangian control-volume to an Eulerian terrain-following coordinate for dealing with physical parameterizations and to prevent severe distortion of the Lagrangian surfaces. Deterministic baroclinic wave growth tests and long-term integrations using the Held-Suarez forcing are presented. Impact of the monotonicity constraint is discussed.

  3. New variables for classical and quantum gravity in all dimensions: I. Hamiltonian analysis

    NASA Astrophysics Data System (ADS)

    Bodendorfer, N.; Thiemann, T.; Thurn, A.

    2013-02-01

    Loop quantum gravity (LQG) relies heavily on a connection formulation of general relativity such that (1) the connection Poisson commutes with itself and (2) the corresponding gauge group is compact. This can be achieved starting from the Palatini or Holst action when imposing the time gauge. Unfortunately, this method is restricted to D + 1 = 4 spacetime dimensions. However, interesting string theories and supergravity theories require higher dimensions and it would therefore be desirable to have higher dimensional supergravity loop quantizations at one’s disposal in order to compare these approaches. In this series of papers we take first steps toward this goal. The present first paper develops a classical canonical platform for a higher dimensional connection formulation of the purely gravitational sector. The new ingredient is a different extension of the ADM phase space than the one used in LQG which does not require the time gauge and which generalizes to any dimension D > 1. The result is a Yang-Mills theory phase space subject to Gauß, spatial diffeomorphism and Hamiltonian constraint as well as one additional constraint, called the simplicity constraint. The structure group can be chosen to be SO(1, D) or SO(D + 1) and the latter choice is preferred for purposes of quantization.

  4. Information constraints in medical encounters.

    PubMed

    Hollander, R D

    1984-01-01

    This article describes three kinds of information constraints in medical encounters that have not been discussed at length in the medical ethics literature: constraints from the concept of a disease, from the diffusion of medical innovation, and from withholding information. It describes how these limit the reliance rational people can justifiably put in their doctors, and even the reliance doctors can have on their own advice. It notes the implications of these constraints for the value of informed consent, identifies several procedural steps that could increase the value of the latter and improve diffusion of innovation, and argues that recognition of these constraints should lead us to devise protections which intrude on but can improve these encounters.

  5. Development of the Modified Four Square Step Test and its reliability and validity in people with stroke.

    PubMed

    Roos, Margaret A; Reisman, Darcy S; Hicks, Gregory; Rose, William; Rudolph, Katherine S

    2016-01-01

    Adults with stroke have difficulty avoiding obstacles when walking, especially when a time constraint is imposed. The Four Square Step Test (FSST) evaluates dynamic balance by requiring individuals to step over canes in multiple directions while being timed, but many people with stroke are unable to complete it. The purposes of this study were to (1) modify the FSST by replacing the canes with tape so that more persons with stroke could successfully complete the test and (2) examine the reliability and validity of the modified version. Fifty-five subjects completed the Modified FSST (mFSST) by stepping over tape in all four directions while being timed. The mFSST resulted in significantly greater numbers of subjects completing the test than the FSST (39/55 [71%] and 33/55 [60%], respectively) (p < 0.04). The test-retest, intrarater, and interrater reliability of the mFSST were excellent (intraclass correlation coefficient ranges: 0.81-0.99). Construct and concurrent validity of the mFSST were also established. The minimal detectable change was 6.73 s. The mFSST, an ideal measure of dynamic balance, can identify progress in people with stroke in varied settings and can be completed by a wide range of people with stroke in approximately 5 min with the use of minimal equipment (tape, stop watch).

  6. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code, while ensuring numerical stability. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.

  7. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE PAGES

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; ...

    2018-04-17

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
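
    The simplest member of this family is first-order forward-backward Euler; the ARK schemes studied in the paper are higher-order, multi-stage versions of the same splitting. A minimal sketch on a scalar stiff ODE with made-up terms:

```python
import numpy as np

# IMEX Euler for u' = f_E(u) + f_I(u): treat the stiff linear part implicitly
# and the slow nonlinear part explicitly, so dt is not limited by |lam|.
lam = -1000.0                      # stiff linear term (acoustic-like stand-in)
f_E = lambda u: np.sin(u)          # nonstiff term, treated explicitly

dt, T = 0.05, 2.0                  # dt far above the explicit limit ~ 2/|lam|
u = 1.0
for _ in range(int(T / dt)):
    # (1 - dt*lam) u_new = u + dt * f_E(u)
    u = (u + dt * f_E(u)) / (1.0 - dt * lam)
print("u(T) ~", u)                 # decays stably toward the stiff equilibrium
```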

  8. Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models

    NASA Astrophysics Data System (ADS)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.

    2018-04-01

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  9. Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, David J.; Guerra, Jorge E.; Hamon, François P.

    The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.

  10. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  11. Kalman Filter Estimation of Spinning Spacecraft Attitude using Markley Variables

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph E.; Harman, Richard

    2004-01-01

    There are several different ways to represent spacecraft attitude and its time rate of change. For spinning or momentum-biased spacecraft, one particular representation has been put forward as a superior parameterization for numerical integration. Markley has demonstrated that these new variables have fewer rapidly varying elements for spinning spacecraft than other commonly used representations and provide advantages when integrating the equations of motion. The current work demonstrates how a Kalman filter can be devised to estimate the attitude using these new variables. The seven Markley variables are subject to one constraint condition, making the error covariance matrix singular. The filter design presented here explicitly accounts for this constraint by using a six-component error state in the filter update step. The reduced dimension error state is unconstrained and its covariance matrix is nonsingular.

  12. Cortical Specializations Underlying Fast Computations

    PubMed Central

    Volgushev, Maxim

    2016-01-01

    The time course of behaviorally relevant environmental events sets temporal constraints on neuronal processing. How does the mammalian brain make use of the increasingly complex networks of the neocortex, while making decisions and executing behavioral reactions within a reasonable time? The key parameter determining the speed of computations in neuronal networks is the time interval that neuronal ensembles need to process changes at their input and communicate results of this processing to downstream neurons. Theoretical analysis identified basic requirements for fast processing: use of neuronal populations for encoding, background activity, and fast onset dynamics of action potentials in neurons. Experimental evidence shows that populations of neocortical neurons fulfil these requirements. Indeed, they can change firing rate in response to input perturbations very quickly, within 1 to 3 ms, and encode high-frequency components of the input by phase-locking their spiking to frequencies up to 300 to 1000 Hz. This implies that the time unit of computations by cortical ensembles is only a few milliseconds (1 to 3 ms), which is considerably faster than the membrane time constant of individual neurons. The ability of cortical neuronal ensembles to communicate on a millisecond time scale allows for complex, multiple-step processing and precise coordination of neuronal activity in parallel processing streams, while keeping the speed of behavioral reactions within environmentally set temporal constraints. PMID:25689988

  13. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
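
    A small sketch of the median function and a median/minmod-style slope constraint for the reconstruction step; the limiter shown is illustrative, not Huynh's exact constraint.

```python
import numpy as np

# median(a, b, c) returns the middle of three values; it can be written
# compactly with minmod, which is why it simplifies the constraint's coding.
def minmod(x, y):
    return np.where(x * y > 0, np.sign(x) * np.minimum(np.abs(x), np.abs(y)), 0.0)

def median(a, b, c):
    return a + minmod(b - a, c - a)

# Limited linear reconstruction: keep the central slope unless it exceeds
# twice the one-sided slopes (an illustrative MUSCL-type limiter).
u = np.array([0.0, 1.0, 3.0, 4.0, 4.5])   # cell averages
dl = u[1:-1] - u[:-2]                      # left slope
dr = u[2:] - u[1:-1]                       # right slope
dc = 0.5 * (dl + dr)                       # central slope
slope = median(0.0, dc, 2.0 * minmod(dl, dr))
print(slope)                               # unlimited where data are smooth
```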

  14. Scalable explicit implementation of anisotropic diffusion with Runge-Kutta-Legendre super-time stepping

    NASA Astrophysics Data System (ADS)

    Vaidya, Bhargav; Prasad, Deovrat; Mignone, Andrea; Sharma, Prateek; Rickler, Luca

    2017-12-01

    An important ingredient in numerical modelling of high temperature magnetized astrophysical plasmas is the anisotropic transport of heat along magnetic field lines from higher to lower temperatures. Magnetohydrodynamics typically involves solving the hyperbolic set of conservation equations along with the induction equation. Incorporating anisotropic thermal conduction requires also treating the parabolic terms arising from the diffusion operator. An explicit treatment of parabolic terms will considerably reduce the simulation time step due to its dependence on the square of the grid resolution (Δx) for stability. Although an implicit scheme relaxes the constraint on stability, it is difficult to distribute efficiently on a parallel architecture. Treating parabolic terms with accelerated super-time-stepping (STS) methods has been discussed in the literature, but these methods suffer from poor accuracy (first order in time) and also have difficult-to-choose tuneable stability parameters. In this work, we highlight a second-order (in time) Runge-Kutta-Legendre (RKL) scheme (first described by Meyer, Balsara & Aslam 2012) that is robust, fast and accurate in treating parabolic terms alongside the hyperbolic conservation laws. We demonstrate its superiority over the first-order STS schemes with standard tests and astrophysical applications. We also show that explicit conduction is particularly robust in handling saturated thermal conduction. Parallel scaling of explicit conduction using the RKL scheme is demonstrated up to more than 10^4 processors.
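
    The practical gain can be read off the RKL2 stability bound reported by Meyer, Balsara & Aslam (2012), dt_RKL2 = dt_exp (s² + s − 2)/4 for an s-stage super step; the quick sketch below assumes that formula and made-up grid parameters.

```python
# Super-time-stepping gain for RKL2: an s-stage super step costs ~s stage
# evaluations but covers a time interval that grows ~s^2.
D, dx = 1.0, 1e-3
dt_exp = dx**2 / (2 * D)                    # explicit parabolic limit
for s in (5, 10, 20):
    dt_sts = dt_exp * (s**2 + s - 2) / 4.0  # RKL2 bound (Meyer et al. 2012)
    print(f"s = {s:2d}: dt_RKL2/dt_exp = {dt_sts / dt_exp:5.1f}, "
          f"speedup per stage ~ {(s**2 + s - 2) / (4 * s):4.1f}x")
```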

  15. CHRR: coordinate hit-and-run with rounding for uniform sampling of constraint-based models.

    PubMed

    Haraldsdóttir, Hulda S; Cousins, Ben; Thiele, Ines; Fleming, Ronan M T; Vempala, Santosh

    2017-06-01

    In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks. Availability: https://github.com/opencobra/cobratoolbox. Contact: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
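
    A minimal coordinate hit-and-run sketch on a polytope {x : Ax ≤ b}, omitting the rounding preprocessing that gives CHRR its name and its convergence guarantees; the unit box below is a made-up test set.

```python
import numpy as np

def coordinate_hit_and_run(A, b, x0, n_samples, rng):
    """Sample {x : A x <= b} by uniform moves along random coordinate chords."""
    x = x0.copy()
    samples = []
    for _ in range(n_samples):
        i = rng.integers(A.shape[1])      # pick a random coordinate direction
        a = A[:, i]
        slack = b - A @ x                 # nonnegative at a feasible point
        # feasible displacement t along e_i: a_j * t <= slack_j for all rows j
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = slack / a
        t_max = ratios[a > 0].min() if np.any(a > 0) else np.inf
        t_min = ratios[a < 0].max() if np.any(a < 0) else -np.inf
        x[i] += rng.uniform(t_min, t_max) # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)

# Unit box 0 <= x <= 1 in 3 dimensions, written as inequalities.
d = 3
A = np.vstack([np.eye(d), -np.eye(d)])
b = np.concatenate([np.ones(d), np.zeros(d)])
rng = np.random.default_rng(7)
S = coordinate_hit_and_run(A, b, np.full(d, 0.5), 5000, rng)
print("sample mean (should approach 0.5):", S.mean(axis=0).round(3))
```

    On anisotropic sets like genome-scale flux polytopes, the chords found this way are extremely uneven, which is exactly why CHRR's rounding step matters.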

  16. Space Shuttle capabilities, constraints, and cost

    NASA Technical Reports Server (NTRS)

    Lee, C. M.

    1980-01-01

    The capabilities, constraints, and costs of the Space Transportation System (STS), which combines reusable and expendable components, are reviewed, and an overview of the current planning activities for operating the STS in an efficient and cost-effective manner is presented. Traffic forecasts, performance constraints and enhancements, and potential new applications are discussed. Attention is given to operating costs, pricing policies, and the steps involved in 'getting on board', which include all the interfaces between NASA and the users necessary to reach launch service agreements.

  17. An algorithm for fast elastic wave simulation using a vectorized finite difference operator

    NASA Astrophysics Data System (ADS)

    Malkoti, Ajay; Vedanti, Nimisha; Tiwari, Ram Krishna

    2018-07-01

    Modern geophysical imaging techniques exploit full-wavefield information, which can be simulated numerically. These numerical simulations are computationally expensive due to several factors, such as a large number of time steps and nodes, a big derivative stencil and a huge model size. Besides these constraints, it is also important to reformulate the numerical derivative operator for improved efficiency. In this paper, we introduce a vectorized derivative operator over the staggered grid with shifted coordinate systems. The operator increases the efficiency of simulation by exploiting the fact that each variable can be represented in the form of a matrix. This operator allows updating all nodes of a variable defined on the staggered grid, in a manner similar to the collocated grid scheme, thereby reducing the computational run time considerably. Here we demonstrate an application of this operator to simulate seismic wave propagation in elastic media (the Marmousi model) by discretizing the equations on a staggered grid. We have compared the performance of this operator in three programming languages, which reveals that it can increase the execution speed by a factor of at least 2-3 for FORTRAN and MATLAB, and nearly 100 for Python. We have further carried out various tests in MATLAB to analyze the effect of model size and the number of time steps on total simulation run time. We find that there is an additional, though small, computational overhead for each step, and that it depends on the total number of time steps used in the simulation. A MATLAB code package, 'FDwave', for the proposed simulation scheme is available upon request.
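
    The core idea, sketched generically in NumPy rather than the 'FDwave' MATLAB package itself: a staggered forward difference written as one whole-array slice operation instead of nested loops.

```python
import numpy as np

# Staggered-grid forward difference in x, looped vs. vectorized.
def dx_loop(u, dx):
    out = np.zeros((u.shape[0] - 1, u.shape[1]))
    for i in range(u.shape[0] - 1):
        for j in range(u.shape[1]):
            out[i, j] = (u[i + 1, j] - u[i, j]) / dx
    return out

def dx_vec(u, dx):
    # One slice expression updates every node at once; this is the
    # "matrix form" of the derivative operator.
    return (u[1:, :] - u[:-1, :]) / dx

u = np.random.default_rng(3).standard_normal((500, 500))
assert np.allclose(dx_loop(u, 0.1), dx_vec(u, 0.1))
```

    In interpreted languages the vectorized form avoids per-element interpreter overhead entirely, which is consistent with the large Python speedups the authors report.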

  18. A Brownian dynamics study on ferrofluid colloidal dispersions using an iterative constraint method to satisfy Maxwell’s equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubina, Sean Hyun, E-mail: sdubin2@uic.edu; Wedgewood, Lewis Edward, E-mail: wedge@uic.edu

    2016-07-15

    Ferrofluids are often favored for their ability to be remotely positioned via external magnetic fields. The behavior of particles in ferromagnetic clusters under uniformly applied magnetic fields has been computationally simulated using the Brownian dynamics, Stokesian dynamics, and Monte Carlo methods. However, few methods have been established that effectively handle the basic principles of magnetic materials, namely, Maxwell’s equations. An iterative constraint method was developed to satisfy Maxwell’s equations when a uniform magnetic field is imposed on ferrofluids in a heterogeneous Brownian dynamics simulation that examines the impact of ferromagnetic clusters in a mesoscale particle collection. This was accomplished by allowing a particulate system in a simple shear flow to advance by a time step under a uniformly applied magnetic field, then adjusting the ferroparticles via an iterative constraint method applied over sub-volume length scales until Maxwell’s equations were satisfied. The resultant ferrofluid model with constraints demonstrates that the magnetoviscosity contribution is not as substantial when compared to homogeneous simulations that assume the material’s magnetism is a direct response to the external magnetic field. This was detected across varying intensities of particle-particle interaction, Brownian motion, and shear flow. Ferroparticle aggregation was still extensively present but less so than typically observed.

  19. The gauge transformations of the constrained q-deformed KP hierarchy

    NASA Astrophysics Data System (ADS)

    Geng, Lumin; Chen, Huizhan; Li, Na; Cheng, Jipeng

    2018-06-01

    In this paper, we mainly study the gauge transformations of the constrained q-deformed Kadomtsev-Petviashvili (q-KP) hierarchy. Unlike in the usual case, we have to consider additional constraints on the Lax operator of the constrained q-deformed KP hierarchy, since the form of the Lax operator must be preserved when constructing the gauge transformations. For this reason, the generating functions in the elementary gauge transformation operators TD and TI must be selected very carefully, guided by the constraints in the Lax operator. Finally, we consider the successive application of n steps of TD and k steps of TI gauge transformations.

  20. Bed crisis and elective surgery late cancellations: An approach using the theory of constraints.

    PubMed

    Sahraoui, Abderrazak; Elarref, Mohamed

    2014-01-01

    Late cancellations of scheduled elective surgery limit the ability of the surgical care service to achieve its goals. Attributes of these cancellations differ between hospitals and regions. The rate of late cancellations of elective surgery conducted in Hamad General Hospital, Doha, Qatar was found to be 13.14%, similar to rates reported in hospitals elsewhere in the world, even though elective surgery is performed six days a week from 7:00 am to 10:00 pm in our hospital. Simple and systematic analysis of these attributes typically provides limited solutions to the cancellation problem. Alternatively, the application of the theory of constraints with its five focusing steps, which analyzes the system in its totality, is more likely to provide a better solution to the cancellation problem. To find the constraint, as the first focusing step, we carried out a retrospective and descriptive study using a quantitative approach combined with the Pareto Principle to find the main causes of cancellations, followed by a qualitative approach to find the ultimate underlying cause, which pointed to the bed crisis. The remaining four focusing steps provided workable and effective solutions to reduce the cancellation rate of elective surgery.

  1. Bed crisis and elective surgery late cancellations: An approach using the theory of constraints

    PubMed Central

    Sahraoui, Abderrazak; Elarref, Mohamed

    2014-01-01

    Late cancellations of scheduled elective surgery limit the ability of the surgical care service to achieve its goals. Attributes of these cancellations differ between hospitals and regions. The rate of late cancellations of elective surgery conducted in Hamad General Hospital, Doha, Qatar was found to be 13.14%, which is similar to rates reported in hospitals elsewhere in the world, although elective surgery is performed six days a week from 7:00 am to 10:00 pm in our hospital. Simple and systematic analysis of these attributes typically provides limited solutions to the cancellation problem. Alternatively, the application of the theory of constraints with its five focusing steps, which analyze the system in its totality, is more likely to provide a better solution to the cancellation problem. To find the constraint, as a first focusing step, we carried out a retrospective and descriptive study using a quantitative approach combined with the Pareto Principle to find the main causes of cancellations, followed by a qualitative approach to find the main and ultimate underlying cause, which pointed to the bed crisis. The remaining four focusing steps provided workable and effective solutions to reduce the cancellation rate of elective surgery. PMID:25320686

  2. Multi-period equilibrium/near-equilibrium in electricity markets based on locational marginal prices

    NASA Astrophysics Data System (ADS)

    Garcia Bertrand, Raquel

    In this dissertation we propose an equilibrium procedure that coordinates the points of view of every market agent, resulting in an equilibrium that simultaneously maximizes the independent objective of every market agent and satisfies network constraints. Therefore, the activities of the generating companies, consumers and an independent system operator are modeled: (1) The generating companies seek to maximize profits by specifying hourly step functions of productions and minimum selling prices, and bounds on productions. (2) The goals of the consumers are to maximize their economic utilities by specifying hourly step functions of demands and maximum buying prices, and bounds on demands. (3) The independent system operator then clears the market taking into account consistency conditions as well as capacity limits and line losses so as to achieve maximum social welfare. Then, we approach this equilibrium problem using complementarity theory in order to be able to impose constraints on dual variables, i.e., on prices, such as minimum profit conditions for the generating units or maximum cost conditions for the consumers. In this way, given the form of the individual optimization problems, the Karush-Kuhn-Tucker conditions for the generating companies, the consumers and the independent system operator are both necessary and sufficient. The simultaneous solution to all these conditions constitutes a mixed linear complementarity problem. We include minimum profit constraints imposed by the units in the market equilibrium model. These constraints are added as additional constraints to the equivalent quadratic programming problem of the mixed linear complementarity problem previously described. For the sake of clarity, the proposed equilibrium or near-equilibrium is first developed for the particular case considering only one time period. Afterwards, we consider an equilibrium or near-equilibrium applied to a multi-period framework. This model embodies binary decisions, i.e., on/off status for the units, and therefore optimality conditions cannot be directly applied. To avoid the limitations imposed by binary variables, while retaining the advantages of using optimality conditions, we define the multi-period market equilibrium using Benders decomposition, which computes the binary variables in the master problem and the continuous variables in the subproblem. Finally, we illustrate these market equilibrium concepts through several case studies.

  3. Kinetic measures of restabilisation during volitional stepping reveal age-related alterations in the control of mediolateral dynamic stability.

    PubMed

    Singer, Jonathan C; McIlroy, William E; Prentice, Stephen D

    2014-11-07

    Research examining age-related changes in dynamic stability during stepping has recognised the importance of the restabilisation phase, subsequent to foot-contact. While regulation of the net ground reaction force (GRFnet) line of action is believed to influence dynamic stability during steady-state locomotion, such control during restabilisation remains unknown. This work explored the origins of age-related decline in mediolateral dynamic stability by examining the line of action of GRFnet relative to the centre of mass (COM) during restabilisation following voluntary stepping. Healthy younger and older adults (n=20 per group) performed three single-step tasks (varying speed and step placement), altering the challenge to stability control. Age-related differences in magnitude and intertrial variability of the angle of divergence of GRFnet line of action relative to the COM were quantified, along with the peak mediolateral and vertical GRFnet components. The angle of divergence was further examined at discrete points during restabilisation, to uncover events of potential importance to stability control. Older adults exhibited a reduced angle of divergence throughout restabilisation. Temporal and spatial constraints on stepping increased the magnitude and intertrial variability of the angle of divergence, although not differentially among the older adults. Analysis of the time-varying angle of divergence revealed age-related reductions in magnitude, with increases in timing and intertrial timing variability during the later phase of restabilisation. This work further supports the idea that age-related challenges in lateral stability control emerge during restabilisation. Age-related alterations during the later phase of restabilisation may signify challenges with reactive control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Martian stepped-delta formation by rapid water release.

    PubMed

    Kraal, Erin R; van Dijk, Maurits; Postma, George; Kleinhans, Maarten G

    2008-02-21

    Deltas and alluvial fans preserved on the surface of Mars provide an important record of surface water flow. Understanding how surface water flow could have produced the observed morphology is fundamental to understanding the history of water on Mars. To date, morphological studies have provided only minimum time estimates for the longevity of martian hydrologic events, which range from decades to millions of years. Here we use sand flume studies to show that the distinct morphology of martian stepped (terraced) deltas could only have originated from a single basin-filling event on a timescale of tens of years. Stepped deltas therefore provide a minimum and maximum constraint on the duration and magnitude of some surface flows on Mars. We estimate that the amount of water required to fill the basin and deposit the delta is comparable to the amount of water discharged by large terrestrial rivers, such as the Mississippi. The massive discharge, short timescale, and the associated short canyon lengths favour the hypothesis that stepped fans are terraced delta deposits draped over an alluvial fan and formed by water released suddenly from subsurface storage.

  5. Large time-step stability of explicit one-dimensional advection schemes

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.

    1993-01-01

    There is a widespread belief that most explicit one-dimensional advection schemes need to satisfy the so-called 'CFL condition' - that the Courant number, c = uΔt/Δx, must be less than or equal to one, for stability in the von Neumann sense. This puts severe limitations on the time-step in high-speed, fine-grid calculations and is an impetus for the development of implicit schemes, which often require less restrictive time-step conditions for stability, but are more expensive per time-step. However, it turns out that, at least in one dimension, if explicit schemes are formulated in a consistent flux-based conservative finite-volume form, von Neumann stability analysis does not place any restriction on the allowable Courant number. Any explicit scheme that is stable for c <= 1, with a complex amplitude ratio, G(c), can be easily extended to arbitrarily large c. The complex amplitude ratio is then given by exp(-iNθ)G(Δc), where N is the integer part of c, and Δc = c - N (< 1); this is clearly stable. The CFL condition is, in fact, not a stability condition at all, but, rather, a 'range restriction' on the 'pieces' in a piece-wise polynomial interpolation. When a global view is taken of the interpolation, the need for a CFL condition evaporates. A number of well-known explicit advection schemes are considered and thus extended to large Δt. The analysis also includes a simple interpretation of (large Δt) total-variation-diminishing (TVD) constraints.
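
    The extension described in this abstract is easy to check numerically. The sketch below, assuming first-order upwind as the base scheme (G(c) = 1 - c(1 - exp(-iθ)) for c <= 1), builds the large-Courant amplitude ratio exp(-iNθ)G(Δc) and confirms |G| <= 1 for Courant numbers well above one:

        import numpy as np

        def G_upwind(c, theta):
            # von Neumann amplification factor of first-order upwind, valid for c <= 1
            return 1.0 - c * (1.0 - np.exp(-1j * theta))

        def G_extended(c, theta):
            # Shift by the integer part N of c exactly (pure phase, unit modulus),
            # interpolate only the fractional remainder dc = c - N < 1.
            N = np.floor(c)
            dc = c - N
            return np.exp(-1j * N * theta) * G_upwind(dc, theta)

        theta = np.linspace(1e-6, np.pi, 400)
        for c in (0.4, 2.4, 7.4):
            print(c, np.max(np.abs(G_extended(c, theta))))   # stays <= 1 for every c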

  6. Active Semi-Supervised Community Detection Based on Must-Link and Cannot-Link Constraints

    PubMed Central

    Cheng, Jianjun; Leng, Mingwei; Li, Longjie; Zhou, Hanhai; Chen, Xiaoyun

    2014-01-01

    Community structure detection is of great importance because it can help in discovering the relationship between the function and the topology structure of a network. Many community detection algorithms have been proposed, but how to incorporate the prior knowledge in the detection process remains a challenging problem. In this paper, we propose a semi-supervised community detection algorithm, which makes full use of the must-link and cannot-link constraints to guide the process of community detection and thereby extracts high-quality community structures from networks. To acquire the high-quality must-link and cannot-link constraints, we also propose a semi-supervised component generation algorithm based on active learning, which actively selects nodes with maximum utility for the proposed semi-supervised community detection algorithm step by step, and then generates the must-link and cannot-link constraints by accessing a noiseless oracle. Extensive experiments were carried out, and the experimental results show that the introduction of active learning into the problem of community detection is a success. Our proposed method can extract high-quality community structures from networks, and significantly outperforms other comparison methods. PMID:25329660
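
    The constraint-generation step lends itself to a compact sketch. Assuming a noiseless oracle that returns each queried node's community label, pairs of labeled nodes translate directly into must-link (same label) and cannot-link (different label) constraints; the "maximum utility" node selection is omitted here:

        import itertools

        def generate_constraints(labeled):
            # labeled: {node: community label obtained from the oracle}
            must, cannot = [], []
            for u, v in itertools.combinations(labeled, 2):
                (must if labeled[u] == labeled[v] else cannot).append((u, v))
            return must, cannot

        print(generate_constraints({1: "a", 2: "a", 3: "b"}))
        # -> ([(1, 2)], [(1, 3), (2, 3)])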

  7. An attribute-driven statistics generator for use in a G.I.S. environment

    NASA Technical Reports Server (NTRS)

    Thomas, R. W.; Ritter, P. R.; Kaugars, A.

    1984-01-01

    When performing research using digital geographic information it is often useful to produce quantitative characterizations of the data, usually within some constraints. In the research environment the different combinations of required data and constraints can often become quite complex. This paper describes a technique that gives the researcher a powerful and flexible way to set up many possible combinations of data and constraints without having to perform numerous intermediate steps or create temporary data bands. This method provides an efficient way to produce descriptive statistics in such situations.

  8. Quantitative Susceptibility Mapping using Structural Feature based Collaborative Reconstruction (SFCR) in the Human Brain

    PubMed Central

    Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.

    2017-01-01

    The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can lead to erroneous estimates in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM including both magnitude and susceptibility based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1 norm constraint and a voxel fidelity based l2 norm constraint, which allows both the structure edges and tiny features to be recovered, while noise and artifacts are reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating magnitude prior. In the S-step, the susceptibility map is fitted in spatial domain using weighted constraints derived from the initial susceptibility map from the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position closest to the gold-standard COSMOS result. PMID:27019480
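
    Each SFCR step pairs an l1-norm structural constraint with an l2-norm fidelity constraint. As a generic, hypothetical stand-in (identity sparsifying transform, real-valued data, no dipole physics), the proximal-gradient sketch below solves a problem of that shape, min 0.5||Ax - b||^2 + alpha*||x||_1 + 0.5*beta*||x - x_prior||^2:

        import numpy as np

        def ista_two_term(A, b, x_prior, alpha=0.1, beta=0.1, n_iter=300):
            # Safe gradient step from the Lipschitz constant of the smooth part.
            step = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta)
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = A.T @ (A @ x - b) + beta * (x - x_prior)   # smooth-part gradient
                z = x - step * g
                # Soft-thresholding is the proximal operator of the l1 term.
                x = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((40, 60))
        x_true = np.zeros(60); x_true[:4] = 2.0
        print(np.round(ista_two_term(A, A @ x_true, np.zeros(60))[:6], 2))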

  9. Improved Shaping Approach to the Preliminary Design of Low-Thrust Trajectories

    NASA Astrophysics Data System (ADS)

    Novak, D. M.; Vasile, M.

    2011-01-01

    This paper presents a general framework for the development of shape-based approaches to low-thrust trajectory design. A novel shaping method, based on a three-dimensional description of the trajectory in spherical coordinates, is developed within this general framework. Both the exponential sinusoid and the inverse polynomial shaping are demonstrated to be particular two-dimensional cases of the spherical one. The pseudoequinoctial shaping is revisited within the new framework, and the nonosculating nature of the pseudoequinoctial elements is analyzed. A two-step approach is introduced to solve the time of flight constraint, related to the design of low-thrust arcs with boundary constraints for both spherical and pseudoequinoctial shaping. The solution derived from the shaping approach is improved with a feedback linear-quadratic controller and compared against a direct collocation method based on finite elements in time. The new shaping approach and the combination of shaping and linear-quadratic controller are tested on four case studies: a mission to Mars, a mission to asteroid 1989ML, a mission to comet Tempel-1, and a mission to Neptune.

  10. Kinematic Constraints Associated with the Acquisition of Overarm Throwing Part I: Step and Trunk Actions

    ERIC Educational Resources Information Center

    Stodden, David F.; Langendorfer, Stephen J.; Fleisig, Glenn S.; Andrews, James R.

    2006-01-01

    The purposes of this study were to: (a) examine differences within specific kinematic variables and ball velocity associated with developmental component levels of step and trunk action (Roberton & Halverson, 1984), and (b) if the differences in kinematic variables were significantly associated with the differences in component levels, determine…

  11. Aspect-object alignment with Integer Linear Programming in opinion mining.

    PubMed

    Zhao, Yanyan; Qin, Bing; Liu, Ting; Yang, Wei

    2015-01-01

    Target extraction is an important task in opinion mining. In this task, a complete target consists of an aspect and its corresponding object. However, previous work has always simply regarded the aspect as the target itself and has ignored the important "object" element. Thus, these studies have addressed incomplete targets, which are of limited use for practical applications. This paper proposes a novel and important sentiment analysis task, termed aspect-object alignment, to solve the "object neglect" problem. The objective of this task is to obtain the correct corresponding object for each aspect. We design a two-step framework for this task. We first provide an aspect-object alignment classifier that incorporates three sets of features, namely, the basic, relational, and special target features. However, the objects that are assigned to aspects in a sentence often contradict each other and possess many complicated features that are difficult to incorporate into a classifier. To resolve these conflicts, we impose two types of constraints in the second step: intra-sentence constraints and inter-sentence constraints. These constraints are encoded as linear formulations, and Integer Linear Programming (ILP) is used as an inference procedure to obtain a final global decision that is consistent with the constraints. Experiments on a corpus in the camera domain demonstrate that the three feature sets used in the aspect-object alignment classifier are effective in improving its performance. Moreover, the classifier with ILP inference performs better than the classifier without it, thereby illustrating that the two types of constraints that we impose are beneficial.
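
    The ILP inference stage can be sketched in a few lines. The toy below, assuming made-up classifier scores and only the one-object-per-aspect consistency constraint (the paper's richer intra- and inter-sentence constraints would enter as extra rows of the same form), uses scipy's mixed-integer solver:

        import numpy as np
        from scipy.optimize import milp, Bounds, LinearConstraint

        # Hypothetical classifier scores: 3 aspects x 2 candidate objects.
        score = np.array([[0.9, 0.2],
                          [0.4, 0.6],
                          [0.3, 0.8]])
        n_a, n_o = score.shape
        c = -score.ravel()   # milp minimizes, so negate to maximize total score

        # Each aspect must be aligned to exactly one object.
        A = np.zeros((n_a, n_a * n_o))
        for a in range(n_a):
            A[a, a * n_o:(a + 1) * n_o] = 1.0

        res = milp(c, constraints=LinearConstraint(A, 1, 1),
                   integrality=np.ones(n_a * n_o), bounds=Bounds(0, 1))
        print(res.x.reshape(n_a, n_o))   # one-hot alignment per aspect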

  12. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required to solve the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
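
    The constraint-aggregation idea admits a small self-contained sketch: partition the rows of an inequality system Ax <= b into groups and replace each group by one convex combination of its rows, which any point feasible for the original system still satisfies. The grouping and uniform weights below are illustrative choices, not the paper's:

        import numpy as np

        def aggregate_constraints(A, b, groups):
            # Replace each group of rows of (A x <= b) by their average row;
            # the result is a valid relaxation of the original system.
            rows, rhs = [], []
            for g in groups:
                w = np.full(len(g), 1.0 / len(g))   # convex-combination weights
                rows.append(w @ A[g])
                rhs.append(w @ b[g])
            return np.array(rows), np.array(rhs)

        A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        b = np.array([1.0, 2.0, 2.5])
        Ac, bc = aggregate_constraints(A, b, groups=[[0, 1], [2]])
        print(Ac, bc)   # first surrogate row: 0.5*x1 + 0.5*x2 <= 1.5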

  13. A Study of Interactions between Mixing and Chemical Reaction Using the Rate-Controlled Constrained-Equilibrium Method

    NASA Astrophysics Data System (ADS)

    Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed

    2016-10-01

    The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows mixing to be taken into account alongside chemical reaction without operator splitting. The RCCE is a dimension reduction method for chemical kinetics based on the laws of thermodynamics. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprised of 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
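
    The constrained-equilibrium state at the heart of RCCE is an entropy maximization subject to the instantaneous constraint values. A toy version for a four-"species" mixture with two hypothetical linear constraints (total moles and one conserved element) can be written directly with a general-purpose optimizer:

        import numpy as np
        from scipy.optimize import minimize

        C = np.array([[1.0, 1.0, 1.0, 1.0],    # total moles fixed
                      [0.0, 1.0, 2.0, 1.0]])   # one conserved "element"
        c = np.array([1.0, 0.8])

        # Maximize mixing entropy -sum(x ln x), i.e. minimize its negative.
        neg_entropy = lambda x: np.sum(x * np.log(np.maximum(x, 1e-300)))

        res = minimize(neg_entropy, np.full(4, 0.25),
                       bounds=[(1e-12, None)] * 4,
                       constraints=[{"type": "eq", "fun": lambda x: C @ x - c}])
        print(np.round(res.x, 4))   # constrained-equilibrium composition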

  14. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Fu; Leyffer, Sven; Munson, Todd

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence providesmore » an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.« less

  15. Clustering of financial time series with application to index and enhanced index tracking portfolio

    NASA Astrophysics Data System (ADS)

    Dose, Christian; Cincotti, Silvano

    2005-09-01

    A stochastic-optimization technique based on time series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as a result of an optimization process (asset allocation). The present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
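
    The two-step structure (select by clustering, then allocate by optimization) is easy to prototype. The sketch below uses synthetic returns, k-means over the stocks' return series, a crude first-member-of-cluster representative choice, and a long-only, fully invested tracking-error minimization; all numbers are illustrative, not the paper's setup:

        import numpy as np
        from sklearn.cluster import KMeans
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        R = rng.normal(0.0, 0.01, size=(250, 30))   # 250 days x 30 stocks
        index = R.mean(axis=1)                      # toy equal-weight index

        # Step 1: cluster return series, keep one representative per cluster.
        k = 5
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(R.T)
        reps = [int(np.where(labels == c)[0][0]) for c in range(k)]

        # Step 2: weights minimizing tracking error, with sum-to-one and caps.
        Rs = R[:, reps]
        obj = lambda w: np.mean((Rs @ w - index) ** 2)
        cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
        res = minimize(obj, np.full(k, 1.0 / k),
                       bounds=[(0.0, 0.4)] * k, constraints=cons)
        print(reps, np.round(res.x, 3))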

  16. Small Angle Neutron Scattering Observation of Chain Retraction after a Large Step Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanchard, A.; Heinrich, M.; Pyckhout-Hintzen, W.

    The process of retraction in entangled linear chains after a fast nonlinear stretch was detected from time-resolved but quenched small angle neutron scattering (SANS) experiments on long, well-entangled polyisoprene chains. The statically obtained SANS data cover the relevant time regime for retraction, and they provide a direct, microscopic verification of this nonlinear process as predicted by the tube model. Clear, quantitative agreement is found with recent theories of contour length fluctuations and convective constraint release, using parameters obtained mainly from linear rheology. The theory captures the full range of scattering vectors once the crossover to fluctuations on length scales below the tube diameter is accounted for.

  17. Report of the Science Working Group: Science with a lunar optical interferometer

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Resolution is the greatest constraint in observational astronomy. The Earth's atmosphere causes an optical image to blur to about 1 arcsec or greater. Interferometric techniques have been developed to overcome atmospheric limitations for both filled aperture conventional telescopes and for partially filled aperture telescopes, such as the Michelson or the radio interferometer. The Hubble Space Telescope (HST) represents the first step toward space-based optical astronomy and an immediate, short-term evolution of observational optical astronomy. This report focuses on a longer evolutionary time scale and considers the benefits to astronomy of placing an array of telescopes on the Moon at a time when a permanent base may exist there.

  18. Cascade Optimization Strategy for Aircraft and Air-Breathing Propulsion System Concepts

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Lavelle, Thomas M.; Hopkins, Dale A.; Coroneos, Rula M.

    1996-01-01

    Design optimization for subsonic and supersonic aircraft and for air-breathing propulsion engine concepts has been accomplished by soft-coupling the Flight Optimization System (FLOPS) and the NASA Engine Performance Program analyzer (NEPP), to the NASA Lewis multidisciplinary optimization tool COMETBOARDS. Aircraft and engine design problems, with their associated constraints and design variables, were cast as nonlinear optimization problems with aircraft weight and engine thrust as the respective merit functions. Because of the diversity of constraint types and the overall distortion of the design space, the most reliable single optimization algorithm available in COMETBOARDS could not produce a satisfactory feasible optimum solution. Some of COMETBOARDS' unique features, which include a cascade strategy, variable and constraint formulations, and scaling devised especially for difficult multidisciplinary applications, successfully optimized the performance of both aircraft and engines. The cascade method has two principal steps: in the first, the solution initiates from a user-specified design and optimizer; in the second, the optimum design obtained in the first step, with some random perturbation, is used to begin the next specified optimizer. The second step is repeated for a specified sequence of optimizers or until a successful solution of the problem is achieved. A successful solution should satisfy the specified convergence criteria and have several active constraints but no violated constraints. The cascade strategy available in the combined COMETBOARDS, FLOPS, and NEPP design tool converges to the same global optimum solution even when it starts from different design points. This reliable and robust design tool eliminates manual intervention in the design of aircraft and of air-breathing propulsion engines and eases the cycle analysis procedures. The combined code is also much easier to use, which is an added benefit. This paper describes COMETBOARDS and its cascade strategy and illustrates the capability of the combined design tool through the optimization of a subsonic aircraft and a high-bypass-turbofan wave-rotor-topped engine.

  19. A semi-implicit finite element method for viscous lipid membranes

    NASA Astrophysics Data System (ADS)

    Rodrigues, Diego S.; Ausas, Roberto F.; Mut, Fernando; Buscaglia, Gustavo C.

    2015-10-01

    A finite element formulation to approximate the behavior of lipid membranes is proposed. The mathematical model incorporates tangential viscous stresses and bending elastic forces, together with the inextensibility constraint and the enclosed volume constraint. The membrane is discretized by a surface mesh made up of planar triangles, over which a mixed formulation (velocity-curvature) is built based on the viscous bilinear form (Boussinesq-Scriven operator) and the Laplace-Beltrami identity relating position and curvature. A semi-implicit approach is then used to discretize in time, with piecewise linear interpolants for all variables. Two stabilization terms are needed: The first one stabilizes the inextensibility constraint by a pressure-gradient-projection scheme (Codina and Blasco (1997) [33]), the second couples curvature and velocity to improve temporal stability, as proposed by Bänsch (2001) [36]. The volume constraint is handled by a Lagrange multiplier (which turns out to be the internal pressure), and an analogous strategy is used to filter out rigid-body motions. The nodal positions are updated in a Lagrangian manner according to the velocity solution at each time step. An automatic remeshing strategy maintains suitable refinement and mesh quality throughout the simulation. Numerical experiments show the convergent and robust behavior of the proposed method. Stability limits are obtained from numerous relaxation tests, and convergence with mesh refinement is confirmed both in the relaxation transient and in the final equilibrium shape. Virtual tweezing experiments are also reported, computing the dependence of the deformed membrane shape with the tweezing velocity (a purely dynamical effect). For sufficiently high velocities, a tether develops which shows good agreement, both in its final radius and in its transient behavior, with available analytical solutions. Finally, simulation results of a membrane subject to the simultaneous action of six tweezers illustrate the robustness of the method.

  20. Analysis of stability for stochastic delay integro-differential equations.

    PubMed

    Zhang, Yu; Li, Longsuo

    2018-01-01

    In this paper, we are concerned with the stability of numerical methods applied to stochastic delay integro-differential equations. For linear stochastic delay integro-differential equations, it is shown that mean-square stability is obtained by the split-step backward Euler method without any restriction on the step size, while the Euler-Maruyama method reproduces mean-square stability only under a step-size constraint. We also confirm the mean-square stability of the split-step backward Euler method for nonlinear stochastic delay integro-differential equations. The numerical experiments further verify the theoretical results.
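
    The two schemes compared in this record can be sketched side by side on a linear test equation. The snippet below assumes the pure delay case (the integro term is omitted for brevity), dX = (a X(t) + b X(t - tau)) dt + sigma X(t) dW with constant pre-history, and shows that the split-step backward Euler drift stage reduces to a closed-form linear solve:

        import numpy as np

        def em_and_ssbe(a=-6.0, b=1.0, sigma=1.0, tau=1.0,
                        T=8.0, h=0.05, x0=1.0, seed=1):
            rng = np.random.default_rng(seed)
            n, d = int(T / h), int(round(tau / h))
            xe = np.full(n + 1, x0)   # Euler-Maruyama path
            xs = np.full(n + 1, x0)   # split-step backward Euler path
            for k in range(n):
                dW = rng.normal(0.0, np.sqrt(h))
                xe_del = x0 if k < d else xe[k - d]   # delayed states
                xs_del = x0 if k < d else xs[k - d]
                # Explicit Euler-Maruyama: stable only under a step-size constraint.
                xe[k + 1] = xe[k] + (a * xe[k] + b * xe_del) * h + sigma * xe[k] * dW
                # SSBE: implicit drift stage (closed form since the drift is linear),
                # then the diffusion increment evaluated at the stage value.
                y = (xs[k] + h * b * xs_del) / (1.0 - h * a)
                xs[k + 1] = y + sigma * y * dW
            return xe, xs

        xe, xs = em_and_ssbe()
        print(abs(xe[-1]), abs(xs[-1]))   # both decay for this stable test problem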

  1. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data.

    PubMed

    Perez, S Ivan; Tejedor, Marcelo F; Novo, Nelson M; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27-31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21-29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade.

  2. Divergence Times and the Evolutionary Radiation of New World Monkeys (Platyrrhini, Primates): An Analysis of Fossil and Molecular Data

    PubMed Central

    Perez, S. Ivan; Tejedor, Marcelo F.; Novo, Nelson M.; Aristide, Leandro

    2013-01-01

    The estimation of phylogenetic relationships and divergence times among a group of organisms is a fundamental first step toward understanding its biological diversification. The time of the most recent or last common ancestor (LCA) of extant platyrrhines is one of the most controversial among scholars of primate evolution. Here we use two molecular based approaches to date the initial divergence of the platyrrhine clade, Bayesian estimations under a relaxed-clock model and substitution rate plus generation time and body size, employing the fossil record and genome datasets. We also explore the robustness of our estimations with respect to changes in topology, fossil constraints and substitution rate, and discuss the implications of our findings for understanding the platyrrhine radiation. Our results suggest that fossil constraints, topology and substitution rate have an important influence on our divergence time estimates. Bayesian estimates using conservative but realistic fossil constraints suggest that the LCA of extant platyrrhines existed at ca. 29 Ma, with the 95% confidence limit for the node ranging from 27–31 Ma. The LCA of extant platyrrhine monkeys based on substitution rate corrected by generation time and body size was established between 21–29 Ma. The estimates based on the two approaches used in this study recalibrate the ages of the major platyrrhine clades and corroborate the hypothesis that they constitute very old lineages. These results can help reconcile several controversial points concerning the affinities of key early Miocene fossils that have arisen among paleontologists and molecular systematists. However, they cannot resolve the controversy of whether these fossil species truly belong to the extant lineages or to a stem platyrrhine clade. That question can only be resolved by morphology. Finally, we show that the use of different approaches and well supported fossil information gives a more robust divergence time estimate of a clade. PMID:23826358

  3. An Integrated Constraint Programming Approach to Scheduling Sports Leagues with Divisional and Round-robin Tournaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlsson, Mats; Johansson, Mikael; Larson, Jeffrey

    Previous approaches for scheduling a league with round-robin and divisional tournaments involved decomposing the problem into easier subproblems. This approach, used to schedule the top Swedish handball league Elitserien, reduces the problem complexity but can result in suboptimal schedules. This paper presents an integrated constraint programming model that performs the scheduling in a single step. Particular attention is given to identifying implied and symmetry-breaking constraints that reduce the computational complexity significantly. The experimental evaluation shows that the integrated approach takes considerably less computational effort than the previous approach.

  4. Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method

    DTIC Science & Technology

    2015-01-05

    rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis, legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes

  5. Modeling protein conformational changes by iterative fitting of distance constraints using reoriented normal modes.

    PubMed

    Zheng, Wenjun; Brooks, Bernard R

    2006-06-15

    Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints while simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of approximately 1-2 angstroms from the native end state structures.
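
    One iteration of the fit can be sketched as a linearized least-squares problem over the mode amplitudes. In the sketch below, each pairwise distance is linearized about the current structure, and a ridge penalty loosely stands in for the low-elastic-energy restraint; the helper is hypothetical and omits the mode reorientation and recomputation of the real protocol:

        import numpy as np

        def fit_mode_amplitudes(x, modes, pairs, d_targets, lam=1e-2):
            # x: (n_atoms, 3) coordinates; modes: (n_modes, n_atoms, 3) displacements
            rows, resid = [], []
            for (i, j), d_t in zip(pairs, d_targets):
                rij = x[i] - x[j]
                d = np.linalg.norm(rij)
                u = rij / d   # unit vector from atom j to atom i
                # d(d_ij)/d(amplitude_k) = u . (mode_k[i] - mode_k[j])
                rows.append([u @ (mode[i] - mode[j]) for mode in modes])
                resid.append(d_t - d)
            J, r = np.array(rows), np.array(resid)
            # Ridge-regularized normal equations; lam plays the role of the
            # low-elastic-energy restraint.
            amps = np.linalg.solve(J.T @ J + lam * np.eye(len(modes)), J.T @ r)
            x_new = x + np.tensordot(amps, np.asarray(modes), axes=1)
            return x_new, amps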

  6. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The use of a line search technique and modification of the Jacobian matrix was necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
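
    The safeguard described here (Newton-Raphson with a line search on the nonlinear residual) has a compact generic form. The toy system below merely stands in for the coupled flow/heat residuals; a production solver would assemble F and its Jacobian from the discretized PDEs and use PETSc's parallel linear algebra:

        import numpy as np

        def damped_newton(F, J, u0, tol=1e-10, max_iter=50):
            u = u0.copy()
            for _ in range(max_iter):
                r = F(u)
                if np.linalg.norm(r) < tol:
                    break
                du = np.linalg.solve(J(u), -r)   # Newton direction
                alpha = 1.0
                # Backtrack until the step actually reduces the residual norm.
                while alpha > 1e-4 and \
                        np.linalg.norm(F(u + alpha * du)) >= np.linalg.norm(r):
                    alpha *= 0.5
                u = u + alpha * du
            return u

        # Toy coupled system with solution (1, 1).
        F = lambda u: np.array([u[0] ** 2 + u[1] - 2.0, u[0] + u[1] ** 3 - 2.0])
        J = lambda u: np.array([[2.0 * u[0], 1.0], [1.0, 3.0 * u[1] ** 2]])
        print(damped_newton(F, J, np.array([2.0, 2.0])))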

  7. Event-Triggered Distributed Average Consensus Over Directed Digital Networks With Limited Communication Bandwidth.

    PubMed

    Li, Huaqing; Chen, Guo; Huang, Tingwen; Dong, Zhaoyang; Zhu, Wei; Gao, Lan

    2016-12-01

    In this paper, we consider the event-triggered distributed average-consensus of discrete-time first-order multiagent systems with limited communication data rate and general directed network topology. In the framework of a digital communication network, each agent has a real-valued state but can only exchange finite-bit binary symbolic data sequences with its neighborhood agents at each time step due to the digital communication channels with energy constraints. Novel event-triggered dynamic encoders and decoders for each agent are designed, based on which a distributed control algorithm is proposed. A scheme that selects the number of channel quantization levels (number of bits) at each time step is developed, under which no quantizer in the network ever saturates. The convergence rate of consensus is explicitly characterized and is related to the scale of the network, the maximum degree of nodes, the network structure, the scaling function, the quantization interval, the initial states of agents, the control gain and the event gain. It is also found that under the designed event-triggered protocol, by selecting suitable parameters, for any directed digital network containing a spanning tree, distributed average consensus can always be achieved with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. Two simulation examples are provided to illustrate the feasibility of the presented protocol and the correctness of the theoretical results.
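
    A toy version of quantized average consensus conveys the flavor of the scheme: agents exchange few-level quantized symbols while a shared scaling function shrinks geometrically so the quantizer "zooms in" as the states agree. This sketch is far simpler than the paper's event-triggered encoder/decoder design (no triggering, undirected toy graph, heuristic gains):

        import numpy as np

        def quantized_consensus(x0, A, levels=3, eps=0.2,
                                steps=200, scale=4.0, decay=0.98):
            x, s = x0.astype(float).copy(), scale
            for _ in range(steps):
                # Few-bit symbols: round to the nearest quantizer level.
                q = np.clip(np.round(x / s), -(levels // 2), levels // 2)
                for i in range(len(x)):
                    nbrs = np.nonzero(A[i])[0]
                    x[i] += eps * s * np.sum(q[nbrs] - q[i])
                s *= decay   # shrink the quantizer range over time
            return x

        A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
        x0 = np.array([4.0, -2.0, 1.0, 3.0])
        print(quantized_consensus(x0, A), x0.mean())   # states approach the average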

  8. Beyond space and time: advanced selection for seismological data

    NASA Astrophysics Data System (ADS)

    Trabant, C. M.; Van Fossen, M.; Ahern, T. K.; Casey, R. E.; Weertman, B.; Sharer, G.; Benson, R. B.

    2017-12-01

    Separating the available raw data from that useful for any given study is often a tedious step in a research project, particularly for first-order data quality problems such as broken sensors, incorrect response information, and non-continuous time series. With the ever increasing amounts of data available to researchers, this chore becomes more and more time consuming. To assist users in this pre-processing of data, the IRIS Data Management Center (DMC) has created a system called Research Ready Data Sets (RRDS). The RRDS system allows researchers to apply filters that constrain their data request using criteria related to signal quality, response correctness, and high resolution data availability. In addition to the traditional selection methods of stations at a geographic location for given time spans, RRDS will provide enhanced criteria for data selection based on many of the measurements available in the DMC's MUSTANG quality control system. This means that data may be selected based on background noise (tolerance relative to high and low noise Earth models), signal-to-noise ratio for earthquake arrivals, signal RMS, instrument response corrected signal correlation with Earth tides, time tear (gaps/overlaps) counts, timing quality (when reported in the raw data by the datalogger) and more. The new RRDS system is available as a web service designed to operate as a request filter. A request is submitted containing the traditional station and time constraints as well as data quality constraints. The request is then filtered and a report is returned that indicates 1) the request that would subsequently be submitted to a data access service, 2) a record of the quality criteria specified and 3) a record of the data rejected based on those criteria, including the relevant values. This service can be used to either filter a request prior to requesting the actual data or to explore which data match a set of enhanced criteria without downloading the data. We are optimistic this capability will reduce the initial data culling steps most researchers go through. Additionally, use of this service should reduce the amount of data transmitted from the DMC, easing the workload for our finite shared resources.

  9. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming to detail the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
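
    The coarse-then-fine structure maps naturally onto off-the-shelf sparse solvers. In the hypothetical sketch below, a random matrix stands in for the coarse leadfield, orthogonal matching pursuit plays the role of the sparse region-identification step, and a plain least-squares fit on the retained columns stands in for the fine-scale constrained estimator:

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        rng = np.random.default_rng(0)
        L = rng.standard_normal((32, 100))   # 32 electrodes x 100 coarse sources
        s = np.zeros(100); s[[17, 58]] = [1.0, -0.7]
        v = L @ s + 0.01 * rng.standard_normal(32)   # measured potentials

        # Step 1: coarse, sparse identification of the active regions.
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(L, v)
        active = np.nonzero(omp.coef_)[0]

        # Step 2: refine only within the retained regions.
        x_fine, *_ = np.linalg.lstsq(L[:, active], v, rcond=None)
        print(active, np.round(x_fine, 2))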

  10. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s^2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  11. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s^2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
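
    The RKL1 recursion is short enough to state in full. The sketch below implements one superstep for u' = L(u) using the first-order coefficients mu_j = (2j-1)/j, nu_j = (1-j)/j and mu~_j = mu_j * 2/(s^2+s), and applies it to the 1D periodic heat equation with a time step well beyond the forward-Euler limit (grid and parameters are illustrative):

        import numpy as np

        def rkl1_superstep(u, s, dt, rhs):
            # One s-stage RKL1 superstep; dt may be up to (s^2+s)/2 times
            # the explicit forward-Euler stability limit.
            w1 = 2.0 / (s * s + s)
            y_prev, y = u, u + w1 * dt * rhs(u)          # stages Y_0 and Y_1
            for j in range(2, s + 1):
                mu, nu = (2.0 * j - 1.0) / j, (1.0 - j) / j
                y, y_prev = mu * y + nu * y_prev + mu * w1 * dt * rhs(y), y
            return y

        # 1D heat equation u_t = D u_xx on a periodic grid.
        D, n = 1.0, 200
        dx = 1.0 / n
        lap = lambda v: D * (np.roll(v, 1) - 2.0 * v + np.roll(v, -1)) / dx ** 2
        dt_expl = dx * dx / (2.0 * D)                    # explicit stability limit
        s = 9
        dt = 0.5 * (s * s + s) / 2.0 * dt_expl           # ~22x explicit, with margin
        u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n, endpoint=False))
        for _ in range(20):
            u = rkl1_superstep(u, s, dt, lap)
        print(float(np.max(np.abs(u))))                  # smooth decay, no blow-up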

  12. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE PAGES

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    2017-04-17

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.
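
    Stripped of the filter acceptance test and the primal-dual regularization, the augmented Lagrangian core of such a method fits in a few lines. The sketch below, assuming a single equality constraint and a generic inner smooth solver, shows the multiplier update that drives the constraint violation to zero:

        import numpy as np
        from scipy.optimize import minimize

        def auglag(f, c, x0, lam0=0.0, rho=10.0, outer=20, tol=1e-8):
            # min f(x) s.t. c(x) = 0 via the augmented Lagrangian
            # L_A(x) = f(x) + lam*c(x) + (rho/2)*c(x)^2.
            x, lam = np.asarray(x0, dtype=float), lam0
            for _ in range(outer):
                LA = lambda z, lam=lam: f(z) + lam * c(z) + 0.5 * rho * c(z) ** 2
                x = minimize(LA, x).x        # inner smooth subproblem
                viol = c(x)
                lam += rho * viol            # first-order multiplier update
                if abs(viol) < tol:
                    break
            return x, lam

        # min x1^2 + x2^2  s.t.  x1 + x2 = 1   ->   x* = (0.5, 0.5)
        f = lambda z: z[0] ** 2 + z[1] ** 2
        c = lambda z: z[0] + z[1] - 1.0
        print(auglag(f, c, [3.0, -1.0])[0])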

  13. An Augmented Lagrangian Filter Method for Real-Time Embedded Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai -Yuan; Huang, Rui; Zavala, Victor M.

    We present a filter line-search algorithm for nonconvex continuous optimization that combines an augmented Lagrangian function and a constraint violation metric to accept and reject steps. The approach is motivated by real-time optimization applications that need to be executed on embedded computing platforms with limited memory and processor speeds. The proposed method enables primal–dual regularization of the linear algebra system that in turn permits the use of solution strategies with lower computing overheads. We prove that the proposed algorithm is globally convergent and we demonstrate the developments using a nonconvex real-time optimization application for a building heating, ventilation, and air conditioning system. Our numerical tests are performed on a standard processor and on an embedded platform. Lastly, we demonstrate that the approach reduces solution times by a factor of over 1000.

  14. Applications of Multi-Body Dynamical Environments: The ARTEMIS Transfer Trajectory Design

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Woodard, Mark; Howell, Kathleen; Patterson, Chris; Schlei, Wayne

    2010-01-01

    The application of forces in multi-body dynamical environments to permit the transfer of spacecraft from Earth orbit to Sun-Earth weak stability regions and then return to the Earth-Moon libration (L1 and L2) orbits has been successfully accomplished for the first time. This demonstrated transfer is a positive step in the realization of a design process that can be used to transfer spacecraft with minimal Delta-V expenditures. Initialized using gravity assists to overcome fuel constraints, the ARTEMIS trajectory design has successfully placed two spacecraft into Earth-Moon libration orbits by means of these applications.

  15. Two-agent cooperative search using game models with endurance-time constraints

    NASA Astrophysics Data System (ADS)

    Sujit, P. B.; Ghose, Debasish

    2010-07-01

    In this article, the problem of two Unmanned Aerial Vehicles (UAVs) cooperatively searching an unknown region is addressed. The search region is discretized into hexagonal cells and each cell is assumed to possess an uncertainty value. The UAVs have to cooperatively search these cells taking limited endurance, sensor and communication range constraints into account. Due to limited endurance, the UAVs need to return to the base station for refuelling and also need to select a base station when multiple base stations are present. This article proposes a route planning algorithm that takes endurance time constraints into account and uses game theoretical strategies to reduce the uncertainty. The route planning algorithm selects only those cells that ensure the agent will return to any one of the available bases. A set of paths is formed using these cells, from which the game theoretical strategies select a path that yields maximum uncertainty reduction. We explore non-cooperative Nash, cooperative and security strategies from game theory to enhance the search effectiveness. Monte-Carlo simulations are carried out which show the superiority of the game theoretical strategies over a greedy strategy for paths with different look-ahead step lengths. Within the game theoretical strategies, the non-cooperative Nash and cooperative strategies perform similarly in an ideal case, but the Nash strategy performs better than the cooperative strategy when the perceived information is different. We also propose a heuristic based on partitioning the search space into sectors to reduce computational overhead without performance degradation.

  16. Using Perturbed Physics Ensembles and Machine Learning to Select Parameters for Reducing Regional Biases in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.

    2017-12-01

    This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of the climate model, in combination with PPEs, in systematic model improvement.
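
    A hedged sketch of the emulator pre-filtering step; the regressor choice (a random forest), the single scalar metric, and the bound variables lo/hi are assumptions standing in for the study's actual emulators and sequentially introduced constraints.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def prefilter(params_train, metric_train, params_candidate, lo, hi):
            # Train a cheap emulator on the existing PPE, then keep only
            # candidate parameter vectors whose emulated output satisfies
            # the observational bounds (e.g., TOA energy flux).
            emulator = RandomForestRegressor(n_estimators=200, random_state=0)
            emulator.fit(params_train, metric_train)
            predicted = emulator.predict(params_candidate)
            keep = (predicted >= lo) & (predicted <= hi)
            return np.asarray(params_candidate)[keep]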

  17. Using gender-based analyses to understand physical inactivity among women in Yellowstone County, Montana.

    PubMed

    Duin, Diane K; Golbeck, Amanda L; Keippel, April Ennis; Ciemins, Elizabeth; Hanson, Hillary; Neary, Tracy; Fink, Heather

    2015-08-01

    Physical inactivity contributes to many health problems. Gender, the socially constructed roles and activities deemed appropriate for men and women, is an important factor in women's physical inactivity. To better understand how gender influences participation in leisure-time physical activity, a gender analysis was conducted using sex-disaggregated data from a county-wide health assessment phone survey and a qualitative analysis of focus group transcripts. From this gender analysis, several gender-based constraints emerged, including women's roles as caregivers, which left little time or energy for physical activity, women's leisure time activities and hobbies, which were less active than men's hobbies, and expectations for women's appearance that made them uncomfortable sweating in front of strangers. Gender-based opportunities included women's enjoyment of activity as a social connection, less rigid gender roles for younger women, and a sense of responsibility to set a good example for their families. The gender analysis was used to gain a deeper understanding of gender-based constraints and opportunities related to physical activity. This understanding is being used in the next step of our research to develop a gender-specific intervention to promote physical activity in women that addresses the underlying causes of physical inactivity through accommodation or transformation of those gender norms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Multi-objective four-dimensional vehicle motion planning in large dynamic environments.

    PubMed

    Wu, Paul P-Y; Campbell, Duncan; Merz, Torsten

    2011-06-01

    This paper presents Multi-Step A∗ (MSA∗), a search algorithm based on A∗ for multi-objective 4-D vehicle motion planning (three spatial and one time dimensions). The research is principally motivated by the need for offline and online motion planning for autonomous unmanned aerial vehicles (UAVs). For UAVs operating in large dynamic uncertain 4-D environments, the motion plan consists of a sequence of connected linear tracks (or trajectory segments). The track angle and velocity are important parameters that are often restricted by assumptions and a grid geometry in conventional motion planners. Many existing planners also fail to incorporate multiple decision criteria and constraints such as wind, fuel, dynamic obstacles, and the rules of the air. It is shown that MSA∗ finds a cost optimal solution using variable length, angle, and velocity trajectory segments. These segments are approximated with a grid-based cell sequence that provides an inherent tolerance to uncertainty. The computational efficiency is achieved by using variable successor operators to create a multiresolution memory-efficient lattice sampling structure. The simulation studies on the UAV flight planning problem show that MSA∗ meets the time constraints of online replanning and finds paths of equivalent cost but in a quarter of the time (on average) of a vector neighborhood-based A∗.

  19. A robot and control algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait training following neurologic injury.

    PubMed

    Aoyagi, Daisuke; Ichinose, Wade E; Harkema, Susan J; Reinkensmeyer, David J; Bobrow, James E

    2007-09-01

    Locomotor training using body weight support on a treadmill and manual assistance is a promising rehabilitation technique following neurological injuries, such as spinal cord injury (SCI) and stroke. Previous robots that automate this technique impose constraints on naturalistic walking due to their kinematic structure, and are typically operated in a stiff mode, limiting the ability of the patient or human trainer to influence the stepping pattern. We developed a pneumatic gait training robot that allows for a full range of natural motion of the legs and pelvis during treadmill walking, and provides compliant assistance. However, we observed an unexpected consequence of the device's compliance: unimpaired and SCI individuals invariably began walking out-of-phase with the device. Thus, the robot perturbed rather than assisted stepping. To address this problem, we developed a novel algorithm that synchronizes the device in real-time to the actual motion of the individual by sensing the state error and adjusting the replay timing to reduce this error. This paper describes data from experiments with individuals with SCI that demonstrate the effectiveness of the synchronization algorithm, and the potential of the device for relieving the trainers of strenuous work while maintaining naturalistic stepping.
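
    A conceptual sketch, under strong simplifying assumptions (scalar state, known reference trajectory), of a synchronization rule that advances the replay phase faster or slower in proportion to the sensed state error; the gain and names are illustrative, not the authors' controller.

        def step_replay_phase(phase, subject_state, trajectory, dt, k_sync=0.5):
            # Nominal phase advance plus a correction that reduces the error
            # between the subject's motion and the replayed trajectory.
            device_state = trajectory(phase)
            error = subject_state - device_state
            d_traj = (trajectory(phase + 1e-4) - trajectory(phase)) / 1e-4
            # Project the error onto the direction of motion to estimate a
            # timing (phase) error, then speed up or slow down the replay.
            timing_error = error * d_traj / (d_traj * d_traj + 1e-9)
            return phase + dt * (1.0 + k_sync * timing_error)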

  20. Facilitating telemedicine project sustainability in medically underserved areas: a healthcare provider participant perspective.

    PubMed

    Paul, David L; McDaniel, Reuben R

    2016-04-26

    Very few telemedicine projects in medically underserved areas have been sustained over time. This research furthers understanding of telemedicine service sustainability by examining teleconsultation projects from the perspective of healthcare providers. The drivers influencing healthcare providers' continued participation in teleconsultation projects, and how projects can be designed to effectively and efficiently address these drivers, are examined. Case studies of fourteen teleconsultation projects that were part of two health sciences center (HSC) based telemedicine networks were utilized. Semi-structured interviews of 60 key informants (clinicians, administrators, and IT professionals) involved in teleconsultation projects were the primary data collection method. Two key drivers influenced providers' continued participation. First was severe time constraints. Second was remote site healthcare providers' (RSHCPs) sense of professional isolation. Two design steps to address these were identified. One involved implementing relatively simple technology and process solutions to make participation convenient. The more critical and difficult design step focused on designing teleconsultation projects for collaborative, active learning. This learning empowered participating RSHCPs by leveraging HSC specialists' expertise. In order to increase sustainability, the fundamental purpose of teleconsultation projects needs to be re-conceptualized. Doing so requires HSC specialists and RSHCPs to assume new roles and highlights the importance of trust. By implementing these design steps, healthcare delivery in medically underserved areas can be positively impacted.

  1. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Blind beam-hardening correction from Poisson measurements

    NASA Astrophysics Data System (ADS)

    Gu, Renliang; Dogandžić, Aleksandar

    2016-02-01

    We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
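
    A structural sketch of the block coordinate-descent loop: a projected-gradient step on the density map (standing in for the full TV-regularized NPG step) alternated with a box-constrained L-BFGS-B solve for the spectrum coefficients. The objective nll, its partial gradient grad_x, and the step size are placeholders, not the paper's exact functions.

        import numpy as np
        from scipy.optimize import minimize

        def block_coordinate_descent(x, b, nll, grad_x, step, n_outer=20):
            # x: density-map image (flattened); b: spline coefficients
            for _ in range(n_outer):
                # (1) gradient step in x with nonnegativity projection
                x = np.maximum(x - step * grad_x(x, b), 0.0)
                # (2) box-constrained quasi-Newton step in b (b >= 0)
                res = minimize(lambda bb: nll(x, bb), b, method="L-BFGS-B",
                               bounds=[(0.0, None)] * b.size)
                b = res.x
            return x, b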

  3. Single-Receiver GPS Phase Bias Resolution

    NASA Technical Reports Server (NTRS)

    Bertiger, William I.; Haines, Bruce J.; Weiss, Jan P.; Harvey, Nathaniel E.

    2010-01-01

    Existing software has been modified to yield the benefits of integer fixed double-differenced GPS-phased ambiguities when processing data from a single GPS receiver with no access to any other GPS receiver data. When the double-differenced combination of phase biases can be fixed reliably, a significant improvement in solution accuracy is obtained. This innovation uses a large global set of GPS receivers (40 to 80 receivers) to solve for the GPS satellite orbits and clocks (along with any other parameters). In this process, integer ambiguities are fixed and information on the ambiguity constraints is saved. For each GPS transmitter/receiver pair, the process saves the arc start and stop times, the wide-lane average value for the arc, the standard deviation of the wide lane, and the dual-frequency phase bias after bias fixing for the arc. The second step of the process uses the orbit and clock information, the bias information from the global solution, and only data from the single receiver to resolve double-differenced phase combinations. It is called "resolved" instead of "fixed" because constraints are introduced into the problem with a finite data weight to better account for possible errors. A receiver in orbit has much shorter continuous passes of data than a receiver fixed to the Earth. The method has parameters to account for this. In particular, differences in drifting wide-lane values must be handled differently. The first step of the process is automated, using two JPL software sets, Longarc and Gipsy-Oasis. The resulting orbit/clock and bias information files are posted on anonymous ftp for use by any licensed Gipsy-Oasis user. The second step is implemented in the Gipsy-Oasis executable, gd2p.pl, which automates the entire process, including fetching the information from anonymous ftp.

  4. A distributed model predictive control scheme for leader-follower multi-agent systems

    NASA Astrophysics Data System (ADS)

    Franzè, Giuseppe; Lucia, Walter; Tedesco, Francesco

    2018-02-01

    In this paper, we present a novel receding horizon control scheme for solving the formation problem of leader-follower configurations. The algorithm is based on set-theoretic ideas and is tuned for agents described by linear time-invariant (LTI) systems subject to input and state constraints. The novelty of the proposed framework relies on the capability to jointly use sequences of one-step controllable sets and polyhedral piecewise state-space partitions in order to online apply the 'better' control action in a distributed receding horizon fashion. Moreover, we prove that the design of both robust positively invariant sets and one-step-ahead controllable regions is achieved in a distributed sense. Simulations and numerical comparisons with respect to centralised and local-based strategies are finally performed on a group of mobile robots to demonstrate the effectiveness of the proposed control strategy.

  5. Practice increases procedural errors after task interruption.

    PubMed

    Altmann, Erik M; Hambrick, David Z

    2017-05-01

    Positive effects of practice are ubiquitous in human performance, but a finding from memory research suggests that negative effects are possible also. The finding is that memory for items on a list depends on the time interval between item presentations. This finding predicts a negative effect of practice on procedural performance under conditions of task interruption. As steps of a procedure are performed more quickly, memory for past performance should become less accurate, increasing the rate of skipped or repeated steps after an interruption. We found this effect, with practice generally improving speed and accuracy, but impairing accuracy after interruptions. The results show that positive effects of practice can interact with architectural constraints on episodic memory to have negative effects on performance. In practical terms, the results suggest that practice can be a risk factor for procedural errors in task environments with a high incidence of task interruption. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Analysis of two-equation turbulence models for recirculating flows

    NASA Technical Reports Server (NTRS)

    Thangam, S.

    1991-01-01

    The two-equation kappa-epsilon model is used to analyze turbulent separated flow past a backward-facing step. It is shown that if the model constants are modified to be consistent with the accepted energy decay rate for isotropic turbulence, the dominant features of the flow field, namely the size of the separation bubble and the streamwise component of the mean velocity, can be accurately predicted. In addition, except in the vicinity of the step, very good predictions for the turbulent shear stress, the wall pressure, and the wall shear stress are obtained. The model is also shown to provide good predictions for the turbulence intensity in the region downstream of the reattachment point. Estimated long-time growth rates for the turbulent kinetic energy and dissipation rate of homogeneous shear flow are utilized to develop an optimal set of constants for the two-equation kappa-epsilon model. The physical implications of the model performance are also discussed.

  7. Establishing a successful clinical research program.

    PubMed

    Scoglio, Daniele; Fichera, Alessandro

    2014-06-01

    Clinical research (CR) is a natural corollary to clinical surgery. It gives an investigator the opportunity to critically review their results and develop new strategies. This article covers the critical factors and the important components of a successful CR program. The first and most important step is to build a dedicated research team to overcome time constraints and enable a surgical practice to make CR a priority. With the research team in place, the next step is to create a program on the basis of an original idea and new clinical hypotheses. This often comes from personal experience supported by a review of the available evidence. Randomized controlled (clinical) trials are the most stringent way of determining whether a cause-effect relationship exists between the intervention and the outcome. In the proper setting, translational research may offer additional avenues allowing clinical application of basic science discoveries.

  8. Ares I Flight Control System Design

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Alaniz, Abran; Hall, Robert; Bedrossian, Nazareth; Hall, Charles; Ryan, Stephen; Jackson, Mark

    2010-01-01

    The Ares I launch vehicle represents a challenging flex-body structural environment for flight control system design. This paper presents a design methodology for employing numerical optimization to develop the Ares I flight control system. The design objectives include attitude tracking accuracy and robust stability with respect to rigid body dynamics, propellant slosh, and flex. Under the assumption that the Ares I time-varying dynamics and control system can be frozen over a short period of time, the flight controllers are designed to stabilize all selected frozen-time launch control systems in the presence of parametric uncertainty. Flex filters in the flight control system are designed to minimize the flex components in the error signals before they are sent to the attitude controller. To ensure adequate response to guidance command, step response specifications are introduced as constraints in the optimization problem. Imposing these constraints minimizes performance degradation caused by the addition of the flex filters. The first stage bending filter design achieves stability by adding lag to the first structural frequency to phase stabilize the first flex mode while gain stabilizing the higher modes. The upper stage bending filter design gain stabilizes all the flex bending modes. The flight control system designs provided here have been demonstrated to provide stable first and second stage control systems in both Draper Ares Stability Analysis Tool (ASAT) and the MSFC 6DOF nonlinear time domain simulation.

  9. Nonlinear robust controller design for multi-robot systems with unknown payloads

    NASA Technical Reports Server (NTRS)

    Song, Y. D.; Anderson, J. N.; Homaifar, A.; Lai, H. Y.

    1992-01-01

    This work is concerned with the control problem of a multi-robot system handling a payload with unknown mass properties. Force constraints at the grasp points are considered. Robust control schemes are proposed that cope with the model uncertainty and achieve asymptotic path tracking. To deal with the force constraints, a strategy for optimally sharing the task is suggested. This strategy basically consists of two steps. The first detects the robots that need help and the second arranges that help. It is shown that the overall system is not only robust to uncertain payload parameters, but also satisfies the force constraints.

  10. A design study for the use of a multiple aperture deployable antenna for soil moisture remote sensing satellite applications

    NASA Technical Reports Server (NTRS)

    Foldes, P.

    1986-01-01

    The instrumentation problems associated with the measurement of soil moisture with a meaningful spatial and temperature resolution at a global scale are addressed. For this goal, only affordable technology available in the medium term will be considered. The study, while limited in scope, will utilize a large-scale antenna structure which is presently being developed as an experimental model. The interface constraints presented by a single Space Transportation System (STS) flight will be assumed. The methodology consists of the following steps: review of science requirements; analysis of the effects of these requirements; basic system engineering considerations and trade-offs related to orbit parameters, number of spacecraft and their lifetime, observation angles, beamwidth, crossover and swath, coverage percentage, beam quality and resolution, instrument quantities, and integration time; bracketing of the key system characteristics and development of an electromagnetic design of the antenna-passive radiometer system. Several aperture division combinations and feed array concepts are investigated to achieve maximum feasible performance within the stated STS constraints.

  11. Tensor network method for reversible classical computation

    NASA Astrophysics Data System (ADS)

    Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.

    2018-03-01

    We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.

  12. Efficiency versus speed in quantum heat engines: Rigorous constraint from Lieb-Robinson bound

    NASA Astrophysics Data System (ADS)

    Shiraishi, Naoto; Tajima, Hiroyasu

    2017-08-01

    A long-standing open problem, whether a heat engine with finite power can achieve the Carnot efficiency, is investigated. We rigorously prove a general trade-off inequality on thermodynamic efficiency and time interval of a cyclic process with quantum heat engines. In a first step, employing the Lieb-Robinson bound we establish an inequality on the change in a local observable caused by an operation far from the support of the local observable. This inequality provides a rigorous characterization of the following intuitive picture: most of the energy emitted from the engine to the cold bath remains near the engine when the cyclic process is finished. Using this description, we prove an upper bound on efficiency with the aid of quantum information geometry. Our result generally excludes the possibility of a process with finite speed at the Carnot efficiency in quantum heat engines. In particular, the obtained constraint covers engines evolving with non-Markovian dynamics, which almost all previous studies on this topic fail to address.

  13. Efficiency versus speed in quantum heat engines: Rigorous constraint from Lieb-Robinson bound.

    PubMed

    Shiraishi, Naoto; Tajima, Hiroyasu

    2017-08-01

    A long-standing open problem, whether a heat engine with finite power can achieve the Carnot efficiency, is investigated. We rigorously prove a general trade-off inequality on thermodynamic efficiency and time interval of a cyclic process with quantum heat engines. In a first step, employing the Lieb-Robinson bound we establish an inequality on the change in a local observable caused by an operation far from the support of the local observable. This inequality provides a rigorous characterization of the following intuitive picture: most of the energy emitted from the engine to the cold bath remains near the engine when the cyclic process is finished. Using this description, we prove an upper bound on efficiency with the aid of quantum information geometry. Our result generally excludes the possibility of a process with finite speed at the Carnot efficiency in quantum heat engines. In particular, the obtained constraint covers engines evolving with non-Markovian dynamics, which almost all previous studies on this topic fail to address.

  14. Dynamic visualization techniques for high consequence software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pollock, G.M.

    1998-02-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification. The prototype tool is described along with the requirements constraint language after a brief literature review is presented. Examples of how the tool can be used are also presented. In conclusion, the most significant advantage of this tool is to provide a first step in evaluating specification completeness, and to provide a more productive method for program comprehension and debugging. The expected payoff is increased software surety confidence, increased program comprehension, and reduced development and debugging time.

  15. An adaptive model for vanadium redox flow battery and its application for online peak power estimation

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria

    2017-03-01

    An accurate battery model is the prerequisite for reliable state estimation of a vanadium redox battery (VRB). As the battery model parameters are time varying with operating condition variation and battery aging, common methods where model parameters are empirical or prescribed offline lack accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are identified online with both recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that RLS is superior with respect to modeling accuracy, convergence properties, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of voltage limit, SOC limit and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
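
    A minimal recursive least squares update of the kind used for online identification; phi is the regressor vector, y the measured terminal voltage, and the forgetting factor lam is an illustrative choice, not the paper's tuned value.

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.995):
            # theta: parameter estimate (n, 1); P: covariance matrix (n, n)
            phi = phi.reshape(-1, 1)
            k = P @ phi / (lam + phi.T @ P @ phi)     # gain vector
            err = y - (phi.T @ theta).item()          # prediction error
            theta = theta + k * err                   # parameter update
            P = (P - k @ phi.T @ P) / lam             # covariance update
            return theta, P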

  16. Exactly energy conserving semi-implicit particle in cell formulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be

    We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that, unlike any of its semi-implicit predecessors, it retains the explicit computational cycle and at the same time conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC, only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that, unlike any of its predecessors, it retains the explicit computational cycle and at the same time conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. • These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementations of PIC.

  17. Real-time automated failure analysis for on-orbit operations

    NASA Technical Reports Server (NTRS)

    Kirby, Sarah; Lauritsen, Janet; Pack, Ginger; Ha, Anhhoang; Jowers, Steven; Mcnenny, Robert; Truong, The; Dell, James

    1993-01-01

    A system which is to provide real-time failure analysis support to controllers at the NASA Johnson Space Center Control Center Complex (CCC) for both Space Station and Space Shuttle on-orbit operations is described. The system employs monitored systems' models of failure behavior and model evaluation algorithms which are domain-independent. These failure models are viewed as a stepping stone to more robust algorithms operating over models of intended function. The described system is designed to meet two sets of requirements. It must provide a useful failure analysis capability enhancement to the mission controller. It must satisfy CCC operational environment constraints such as cost, computer resource requirements, verification, and validation. The underlying technology and how it may be used to support operations is also discussed.

  18. Effect of resource constraints on intersimilar coupled networks.

    PubMed

    Shai, S; Dobson, S

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We obtain that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.
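
    A toy version of the constrained SIR step on two coupled layers, where an infected node may contact at most c neighbors per time step; the adjacency representation and the rates are assumptions for the sketch, not the paper's exact model.

        import random

        def constrained_sir_step(adj_a, adj_b, state, beta, gamma, c):
            # adj_a, adj_b: dict node -> list of neighbors in each layer;
            # state: dict node -> "S", "I" or "R"; c: contact cap per step.
            new_state = dict(state)
            for node, s in state.items():
                if s != "I":
                    continue
                neighbors = list(set(adj_a[node]) | set(adj_b[node]))
                random.shuffle(neighbors)
                for nb in neighbors[:c]:          # resource constraint
                    if state[nb] == "S" and random.random() < beta:
                        new_state[nb] = "I"
                if random.random() < gamma:       # recovery
                    new_state[node] = "R"
            return new_state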

  19. Effect of resource constraints on intersimilar coupled networks

    NASA Astrophysics Data System (ADS)

    Shai, S.; Dobson, S.

    2012-12-01

    Most real-world networks do not live in isolation but are often coupled together within a larger system. Recent studies have shown that intersimilarity between coupled networks increases the connectivity of the overall system. However, unlike connected nodes in a single network, coupled nodes often share resources, like time, energy, and memory, which can impede flow processes through contention when intersimilarly coupled. We study a model of a constrained susceptible-infected-recovered (SIR) process on a system consisting of two random networks sharing the same set of nodes, where nodes are limited to interact with (and therefore infect) a maximum number of neighbors at each epidemic time step. We obtain that, in agreement with previous studies, when no limit exists (regular SIR model), positively correlated (intersimilar) coupling results in a lower epidemic threshold than negatively correlated (interdissimilar) coupling. However, in the case of the constrained SIR model, the obtained epidemic threshold is lower with negatively correlated coupling. The latter finding differentiates our work from previous studies and provides another step towards revealing the qualitative differences between single and coupled networks.

  20. High-throughput screening of chromatographic separations: IV. Ion-exchange.

    PubMed

    Kelley, Brian D; Switzer, Mary; Bastek, Patrick; Kramarczyk, Jack F; Molnar, Kathleen; Yu, Tianning; Coffman, Jon

    2008-08-01

    Ion-exchange (IEX) chromatography steps are widely applied in protein purification processes because of their high capacity, selectivity, robust operation, and well-understood principles. Optimization of IEX steps typically involves resin screening and selection of the pH and counterion concentrations of the load, wash, and elution steps. Time and material constraints associated with operating laboratory columns often preclude evaluating more than 20-50 conditions during early stages of process development. To overcome this limitation, a high-throughput screening (HTS) system employing a robotic liquid handling system and 96-well filterplates was used to evaluate various operating conditions for IEX steps for monoclonal antibody (mAb) purification. A screening study for an adsorptive cation-exchange step evaluated eight different resins. Sodium chloride concentrations defining the operating boundaries of product binding and elution were established at four different pH levels for each resin. Adsorption isotherms were measured for 24 different pH and salt combinations for a single resin. An anion-exchange flowthrough step was then examined, generating data on mAb adsorption for 48 different combinations of pH and counterion concentration for three different resins. The mAb partition coefficients were calculated and used to estimate the characteristic charge of the resin-protein interaction. Host cell protein and residual Protein A impurity levels were also measured, providing information on selectivity within this operating window. The HTS system shows promise for accelerating process development of IEX steps, enabling rapid acquisition of large datasets addressing the performance of the chromatography step under many different operating conditions. (c) 2008 Wiley Periodicals, Inc.
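
    For illustration, one common way to turn such well data into a characteristic charge is the stoichiometric displacement model, in which the characteristic charge is the negative slope of log Kp versus log counterion concentration; the abstract does not give the exact formula used, so this is a hedged sketch with placeholder arrays.

        import numpy as np

        def characteristic_charge(q_bound, c_free, counterion):
            # Partition coefficient in each well: bound over free protein.
            kp = q_bound / c_free
            # Fit log Kp against log counterion concentration; the negative
            # slope estimates the characteristic charge of the interaction.
            slope, _ = np.polyfit(np.log(counterion), np.log(kp), 1)
            return -slope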

  1. Balanced sections and the propagation of décollement: A Jura perspective

    NASA Astrophysics Data System (ADS)

    Laubscher, Hans

    2003-12-01

    The propagation of thrusting is an important problem in tectonics that is usually approached by forward (kinematical) modeling of balanced sections. Although modeling techniques are similar in most foreland fold-thrust belts, it turns out that in the Jura, there are modeling problems that require modifications of widely used techniques. In particular, attention is called to the role of model constraints that complement the set of observational constraints in order to fully define the model. In the eastern Jura, such model constraints may be inferred from the regional geology, which shows a peculiar noncoaxial relation between thrusts and subsequent folds. This relation implies changes in the direction of translation and the mode of deformation in the course of the propagation of décollement. These changes are conjectured to be the result of a change in partial decoupling between the thin-skinned fold-thrust system (nappe) and the obliquely subducted foreland. As a particularly instructive case in point, a cross section through the Weissenstein range is discussed. A two-step forward (kinematical) model is proposed that uses both local observational constraints and model constraints inferred from regional data. As a first step, a fault bend fold is generated in the hanging wall of a thrust of 1500 m shortening. As a second step, this structure is transferred by flexural slip into the actual fold observed at the surface. This requires an additional 1600 m of shortening and leads to folding of the original thrust. Thereafter, the footwall is deformed so as to respect the constraint that this deformation must fit into the space defined by the folded thrust as the upper boundary and the décollement surface as the lower boundary, and that, in addition, it should be confined to the area immediately below the fold. In modeling the footwall deformation a mix of balancing methods is used: fault propagation folds for the competent intervals of the stratigraphic column and area balancing for the incompetent ones. Further propagation of décollement into the foreland is made possible by the folding process, which is dominated by a sort of kinking and which is the main contribution to structural elevation and hence to producing a sort of critical taper of the moving thin-skinned wedge.

  2. Computationally optimized ECoG stimulation with local safety constraints.

    PubMed

    Guler, Seyhmus; Dannhauer, Moritz; Roig-Solvas, Biel; Gkogkidis, Alexis; Macleod, Rob; Ball, Tonio; Ojemann, Jeffrey G; Brooks, Dana H

    2018-06-01

    Direct stimulation of the cortical surface is used clinically for cortical mapping and modulation of local activity. Future applications of cortical modulation and brain-computer interfaces may also use cortical stimulation methods. One common method to deliver current is through electrocorticography (ECoG) stimulation in which a dense array of electrodes are placed subdurally or epidurally to stimulate the cortex. However, proximity to cortical tissue limits the amount of current that can be delivered safely. It may be desirable to deliver higher current to a specific local region of interest (ROI) while limiting current to other local areas more stringently than is guaranteed by global safety limits. Two commonly used global safety constraints bound the total injected current and individual electrode currents. However, these two sets of constraints may not be sufficient to prevent high current density locally (hot-spots). In this work, we propose an efficient approach that prevents current density hot-spots in the entire brain while optimizing ECoG stimulus patterns for targeted stimulation. Specifically, we maximize the current along a particular desired directional field in the ROI while respecting three safety constraints: one on the total injected current, one on individual electrode currents, and the third on the local current density magnitude in the brain. This third set of constraints creates a computational barrier due to the huge number of constraints needed to bound the current density at every point in the entire brain. We overcome this barrier by adopting an efficient two-step approach. In the first step, the proposed method identifies the safe brain region, which cannot contain any hot-spots solely based on the global bounds on total injected current and individual electrode currents. In the second step, the proposed algorithm iteratively adjusts the stimulus pattern to arrive at a solution that exhibits no hot-spots in the remaining brain. We report on simulations on a realistic finite element (FE) head model with five anatomical ROIs and two desired directional fields. We also report on the effect of ROI depth and desired directional field on the focality of the stimulation. Finally, we provide an analysis of optimization runtime as a function of different safety and modeling parameters. Our results suggest that optimized stimulus patterns tend to differ from those used in clinical practice. Copyright © 2018 Elsevier Inc. All rights reserved.

  3. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

    This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. Therefore, the rendezvous process naturally includes two steps: the first step is to transfer the chaser into the LOS cone and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework, named Mixed MPC (M-MPC), is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than separated artificially, and its computation workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the above M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints can be satisfied with a prescribed probability. It improves on the robust technique proposed by Gavilan et al. by eliminating unnecessary conservativeness: known statistical properties of the navigation uncertainty are incorporated explicitly. The efficacy of the robust M-MPC approach is shown in a simulation study.
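
    A minimal sketch of the chance-constrained ingredient under a Gaussian disturbance assumption: a linear constraint required to hold with probability 1 - eps is replaced by a deterministically tightened bound. The covariance Sigma and all names are placeholders.

        import numpy as np
        from scipy.stats import norm

        def tightened_bound(g, h, Sigma, eps):
            # Constraint g' x <= h must hold with probability 1 - eps when x
            # is perturbed by zero-mean Gaussian noise with covariance Sigma.
            # Back-off: z-quantile times the constraint's standard deviation.
            backoff = norm.ppf(1.0 - eps) * np.sqrt(g @ Sigma @ g)
            return h - backoff      # use g' x <= h - backoff in the MPC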

  4. Two-Stage Path Planning Approach for Designing Multiple Spacecraft Reconfiguration Maneuvers

    NASA Technical Reports Server (NTRS)

    Aoude, Georges S.; How, Jonathan P.; Garcia, Ian M.

    2007-01-01

    The paper presents a two-stage approach for designing optimal reconfiguration maneuvers for multiple spacecraft. These maneuvers involve well-coordinated and highly-coupled motions of the entire fleet of spacecraft while satisfying an arbitrary number of constraints. This problem is particularly difficult because of the nonlinearity of the attitude dynamics, the non-convexity of some of the constraints, and the coupling between the positions and attitudes of all spacecraft. As a result, the trajectory design must be solved as a single 6N DOF problem instead of N separate 6 DOF problems. The first stage of the solution approach quickly provides a feasible initial solution by solving a simplified version without differential constraints using a bi-directional Rapidly-exploring Random Tree (RRT) planner. A transition algorithm then augments this guess with feasible dynamics that are propagated from the beginning to the end of the trajectory. The resulting output is a feasible initial guess to the complete optimal control problem that is discretized in the second stage using a Gauss pseudospectral method (GPM) and solved using an off-the-shelf nonlinear solver. This paper also places emphasis on the importance of the initialization step in pseudospectral methods in order to decrease their computation times and enable the solution of a more complex class of problems. Several examples are presented and discussed.

  5. Detached eddy simulation for turbulent fluid-structure interaction of moving bodies using the constraint-based immersed boundary method

    NASA Astrophysics Data System (ADS)

    Nangia, Nishant; Bhalla, Amneet P. S.; Griffith, Boyce E.; Patankar, Neelesh A.

    2016-11-01

    Flows over bodies of industrial importance often contain both an attached boundary layer region near the structure and a region of massively separated flow near its trailing edge. When simulating these flows with turbulence modeling, the Reynolds-averaged Navier-Stokes (RANS) approach is more efficient in the former, whereas large-eddy simulation (LES) is more accurate in the latter. Detached-eddy simulation (DES), based on the Spalart-Allmaras model, is a hybrid method that switches from RANS mode of solution in attached boundary layers to LES in detached flow regions. Simulations of turbulent flows over moving structures on a body-fitted mesh incur an enormous remeshing cost every time step. The constraint-based immersed boundary (cIB) method eliminates this operation by placing the structure on a Cartesian mesh and enforcing a rigidity constraint as an additional forcing in the Navier-Stokes momentum equation. We outline the formulation and development of a parallel DES-cIB method using adaptive mesh refinement. We show preliminary validation results for flows past stationary bodies with both attached and separated boundary layers along with results for turbulent flows past moving bodies. This work is supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1324585.

  6. Deterministic methods for multi-control fuel loading optimization

    NASA Astrophysics Data System (ADS)

    Rahman, Fariz B. Abdul

    We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.

  7. Fast Low-Rank Shared Dictionary Learning for Image Classification.

    PubMed

    Tiep Huu Vu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.

  8. Fast Low-Rank Shared Dictionary Learning for Image Classification

    NASA Astrophysics Data System (ADS)

    Vu, Tiep Huu; Monga, Vishal

    2017-11-01

    Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e. claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Further, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image datasets establish the advantages of our method over state-of-the-art dictionary learning methods.

  9. Geometry and Material Constraint Effects on Creep Crack Growth Behavior in Welded Joints

    NASA Astrophysics Data System (ADS)

    Li, Y.; Wang, G. Z.; Xuan, F. Z.; Tu, S. T.

    2017-02-01

    In this work, the geometry and material constraint effects on creep crack growth (CCG) behavior in welded joints were investigated. The CCG paths and rates of two kinds of specimen geometry (C(T) and M(T)) with initial cracks located in a soft HAZ (heat-affected zone with lower creep strength) and different material mismatches were simulated. The effect of constraint on creep crack initiation (CCI) time was discussed. The results show that there exists an interaction between geometry and material constraints in terms of their effects on the CCG rate and CCI time of welded joints. Under the condition of low geometry constraint, the effect of material constraint on CCG rate and CCI time becomes more obvious. Higher material constraint can promote CCG due to the formation of higher stress triaxiality around the crack tip. Higher geometry constraint can increase the CCG rate and reduce the CCI time of welded joints. Both geometry and material constraints should be considered in creep life assessment and design for high-temperature welded components.

  10. Probability-based constrained MPC for structured uncertain systems with state and random input delays

    NASA Astrophysics Data System (ADS)

    Lu, Jianbo; Li, Dewei; Xi, Yugeng

    2013-07-01

    This article is concerned with probability-based constrained model predictive control (MPC) for systems with both structured uncertainties and time delays, where a random input delay and multiple fixed state delays are included. The process of input delay is governed by a discrete-time finite-state Markov chain. By invoking an appropriate augmented state, the system is transformed into a standard structured uncertain time-delay Markov jump linear system (MJLS). For the resulting system, a multi-step feedback control law is utilised to minimise an upper bound on the expected value of performance objective. The proposed design has been proved to stabilise the closed-loop system in the mean square sense and to guarantee constraints on control inputs and system states. Finally, a numerical example is given to illustrate the proposed results.

  11. Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samal, A.; Henderson, T.

    Mackworth and Freuder have analyzed the time complexity of several constraint satisfaction algorithms. Mohr and Henderson have given new algorithms, AC-4 and PC-3, for arc and path consistency, respectively, and have shown that the arc consistency algorithm is optimal in time complexity and of the same order space complexity as the earlier algorithms. In this paper, they give parallel algorithms for solving node and arc consistency. They show that any parallel algorithm for enforcing arc consistency in the worst case must have O(na) sequential steps, where n is the number of nodes and a is the number of labels per node. They give several parallel algorithms to do arc consistency. It is also shown that they all have optimal time complexity. The results of running the parallel algorithms on a BBN Butterfly multiprocessor are also presented.
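
    For orientation, a compact sequential arc-consistency routine in the AC-3 style (simpler than the optimal AC-4 or the parallel algorithms of this record); domains maps variables to label sets and ok(x, a, y, b) is the binary constraint test.

        from collections import deque

        def ac3(domains, neighbors, ok):
            # Revise arcs until no domain changes; domains values are sets.
            queue = deque((x, y) for x in domains for y in neighbors[x])
            while queue:
                x, y = queue.popleft()
                removed = {a for a in domains[x]
                           if not any(ok(x, a, y, b) for b in domains[y])}
                if removed:
                    domains[x] -= removed
                    if not domains[x]:
                        return False          # domain wipe-out: inconsistent
                    # Re-examine arcs pointing at x, except the one revised.
                    queue.extend((z, x) for z in neighbors[x] if z != y)
            return True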

  12. Research on millisecond load recovery strategy in the late period of UHVDC fault dispose

    NASA Astrophysics Data System (ADS)

    Qiu, Chenguang; Qian, Tiantian; Cheng, Jinmin; Wang, Ke

    2018-06-01

    When a UHVDC system experiences a fault, load needs to be cut off quickly so that the entire system can stay in balance. In the late period of fault disposal, the load needs to be recovered step by step. The recovery strategy for millisecond-level load is studied in this paper. Aiming at the maximum load recovered in one step, and combined with grid security constraints, a recovery model for millisecond load is built and then solved by a genetic algorithm. A simulation example is established to verify the effectiveness of the proposed method.

  13. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modelling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modelling framework, the key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of the computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
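
    A minimal sketch of the per-time-step linearization idea: once nonlinear terms are frozen, the water/energy allocation of a single step can be posed as a small linear program and solved very quickly. Everything below (a single reservoir-turbine node, placeholder costs, bounds, and conversion factor) is an assumption for the sketch.

        import numpy as np
        from scipy.optimize import linprog

        # Decision variables for one time step: [release, spill, energy_sold]
        c = np.array([0.0, 0.1, -1.0])        # penalize spill, reward energy
        A_eq = np.array([[1.0, 1.0, 0.0]])    # water balance: release + spill
        b_eq = np.array([100.0])              # water available this step
        A_ub = np.array([[-0.9, 0.0, 1.0]])   # energy_sold <= 0.9 * release
        b_ub = np.array([0.0])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 80), (0, None), (0, None)])
        print(res.x)                          # optimal allocation for the step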

  14. Evolutionary stability for matrix games under time constraints.

    PubMed

    Garay, József; Csiszár, Villő; Móri, Tamás F

    2017-02-21

    Game theory focuses on payoffs and typically ignores time constraints that play an important role in evolutionary processes where the repetition of games can depend on the strategies, too. We introduce a matrix game under time constraints, where each pairwise interaction has two consequences: both players receive a payoff and they cannot play the next game for a specified time duration. Thus our model is defined by two matrices: a payoff matrix and an average time duration matrix. Maynard Smith's concept of evolutionary stability is extended to this class of games. We illustrate the effect of time constraints by the well-known prisoner's dilemma game, where additional time constraints can ensure the existence of unique evolutionarily stable strategies (ESSs), both pure and mixed, or the coexistence of two pure ESSs. Our general results may be useful in several fields of biology where evolutionary game theory is applied, principally in ecological games, where time constraints play an inevitable role. Copyright © 2016 Elsevier Ltd. All rights reserved.
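
    As a loose numerical illustration (not the authors' analysis), one can fold the time-duration matrix into an effective payoff rate a_ij / t_ij, the payoff earned per unit time locked into the interaction, and run standard replicator dynamics on it; the payoff and time values below are assumptions of this sketch:

    ```python
    import numpy as np

    # Prisoner's dilemma payoffs (rows: C, D) and hypothetical interaction times.
    A = np.array([[3.0, 0.0],
                  [5.0, 1.0]])
    T = np.array([[1.0, 1.0],
                  [3.0, 4.0]])   # defection ties players up for longer

    R = A / T                     # illustrative payoff-per-unit-time matrix

    x = np.array([0.5, 0.5])      # population shares of C and D
    dt = 0.01
    for _ in range(20000):        # replicator dynamics on the rate matrix
        f = R @ x                 # fitness of each pure strategy
        x = x + dt * x * (f - x @ f)
        x = np.clip(x, 0, None); x /= x.sum()

    print(x)   # -> approximately [1, 0] from this starting point
    ```

    With these illustrative times, lengthy defection makes cooperation evolutionarily stable alongside all-D, echoing the abstract's point that time constraints can create new pure ESSs.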

  15. Finite-element approach to Brownian dynamics of polymers.

    PubMed

    Cyron, Christian J; Wall, Wolfgang A

    2009-12-01

    In recent decades, simulation tools for Brownian dynamics of polymers have attracted increasing interest. Such simulation tools have been applied to a large variety of problems and have accelerated scientific progress significantly. However, the currently most frequently used explicit bead models exhibit severe limitations, especially with respect to time step size, the necessity of artificial constraints and the lack of a sound mathematical foundation. Here we present a framework for simulations of Brownian polymer dynamics based on the finite-element method. This approach allows simulating a wide range of physical phenomena at a highly attractive computational cost, on the basis of a well-developed mathematical foundation.

  16. Analysis of Preconditioning and Relaxation Operators for the Discontinuous Galerkin Method Applied to Diffusion

    NASA Technical Reports Server (NTRS)

    Atkins, H. L.; Shu, Chi-Wang

    2001-01-01

    The explicit stability constraint of the discontinuous Galerkin method applied to the diffusion operator becomes dramatically more restrictive as the order of the method is increased. Block Jacobi and block Gauss-Seidel preconditioner operators are examined for their effectiveness at accelerating convergence. A Fourier analysis for methods of order 2 through 6 reveals that both preconditioner operators bound the eigenvalues of the discrete spatial operator. Additionally, in one dimension, the eigenvalues are grouped into two or three regions that are invariant with the order of the method. Local relaxation methods are constructed that rapidly damp high frequencies for arbitrarily large time steps.

  17. An Automatic and Robust Algorithm of Reestablishment of Digital Dental Occlusion

    PubMed Central

    Chang, Yu-Bing; Xia, James J.; Gateno, Jaime; Xiong, Zixiang; Zhou, Xiaobo; Wong, Stephen T. C.

    2017-01-01

    In the field of craniomaxillofacial (CMF) surgery, surgical planning can be performed on composite 3-D models that are generated by merging a computerized tomography scan with digital dental models. Digital dental models can be generated by scanning the surfaces of plaster dental models or dental impressions with a high-resolution laser scanner. During the planning process, one of the essential steps is to reestablish the dental occlusion. Unfortunately, this task is time-consuming and often inaccurate. This paper presents a new approach to automatically and efficiently reestablish dental occlusion. It includes two steps. The first step is to initially position the models based on dental curves and a point matching technique. The second step is to reposition the models to the final desired occlusion based on iterative surface-based minimum distance mapping with collision constraints. By linearizing the rotation matrix, the alignment is posed as a quadratic programming problem. The simulation was completed on 12 sets of digital dental models. Two sets of dental models were partially edentulous, and another two sets had first premolar extractions for orthodontic treatment. Two validation methods were applied to the articulated models. The results show that, using our method, the dental models can be successfully articulated with a small degree of deviation from the occlusion achieved with the gold-standard method. PMID:20529735
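
    The second step's core computation, repositioning one model against the other with a linearized rotation, reduces (without the collision constraints, which are what turn it into a quadratic program) to a small linear least-squares problem. A minimal sketch under that simplification, with synthetic points:

    ```python
    import numpy as np

    def align_step(P, Q):
        """One linearized rigid-alignment step (small-angle assumption).

        Solves min over (theta, t) of sum ||p + theta x p + t - q||^2,
        i.e. R ~ I + [theta]_x. P, Q: (n, 3) corresponding points.
        """
        n = P.shape[0]
        A = np.zeros((3 * n, 6))
        b = (Q - P).reshape(-1)
        for i, p in enumerate(P):
            # theta x p = -[p]_x theta, so the Jacobian block is -[p]_x.
            A[3*i:3*i+3, :3] = -np.array([[0, -p[2], p[1]],
                                          [p[2], 0, -p[0]],
                                          [-p[1], p[0], 0]])
            A[3*i:3*i+3, 3:] = np.eye(3)
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        return sol[:3], sol[3:]   # rotation vector, translation

    # Toy check: recover a small rotation about z plus a translation.
    rng = np.random.default_rng(1)
    P = rng.normal(size=(50, 3))
    Rz = np.array([[np.cos(0.02), -np.sin(0.02), 0],
                   [np.sin(0.02),  np.cos(0.02), 0],
                   [0, 0, 1]])
    Q = P @ Rz.T + np.array([0.1, -0.2, 0.05])
    print(align_step(P, Q))   # ~ ([0, 0, 0.02], [0.1, -0.2, 0.05])
    ```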

  18. Pareto Tracer: a predictor-corrector method for multi-objective optimization problems

    NASA Astrophysics Data System (ADS)

    Martín, Adanay; Schütze, Oliver

    2018-03-01

    This article proposes a novel predictor-corrector (PC) method for the numerical treatment of multi-objective optimization problems (MOPs). The algorithm, Pareto Tracer (PT), is capable of performing a continuation along the set of (local) solutions of a given MOP with k objectives, and can cope with equality and box constraints. Additionally, the first steps towards a method that manages general inequality constraints are also introduced. The properties of PT are first discussed theoretically and later numerically on several examples.

  19. A new algorithm for real-time optimal dispatch of active and reactive power generation retaining nonlinearity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roy, L.; Rao, N.D.

    1983-04-01

    This paper presents a new method for optimal dispatch of real and reactive power generation which is based on a Cartesian coordinate formulation of the economic dispatch problem and a reclassification of the state and control variables associated with generator buses. The voltage and power at these buses are classified as parametric and functional inequality constraints, and are handled by a reduced gradient technique and a penalty factor approach, respectively. The advantage of this classification is the reduction in the size of the equality constraint model, leading to lower storage requirements. The rectangular coordinate formulation results in an exact equality constraint model in which the coefficient matrix is real, sparse, diagonally dominant, smaller in size, and need be computed and factorized only once in each gradient step. In addition, Lagrangian multipliers are calculated using a new efficient procedure. A natural outcome of these features is the solution of the economic dispatch problem faster than other methods available to date in the literature. Rapid and reliable convergence is an additional desirable characteristic of the method. Digital simulation results are presented on several IEEE test systems to illustrate the range of application of the method vis-à-vis the popular Dommel-Tinney (DT) procedure. It is found that the proposed method is more reliable, 3-4 times faster and requires 20-30 percent less storage compared to the DT algorithm, while being just as general. Thus, owing to its exactness, robust mathematical model and lower computational requirements, the method developed in the paper is shown to be a practically feasible algorithm for on-line optimal power dispatch.

  20. A family of compact high order coupled time-space unconditionally stable vertical advection schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, Florian; Debreu, Laurent

    2016-04-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers relative to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining robust, in terms of accuracy, to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution to the Courant number, large phase delay, and possible excess numerical damping with unphysical orientation). Most regional oceanic models have successfully used fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.
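
    The restriction at issue is the standard advective CFL condition; written for the vertical direction it reads

    \[
    C_z \;=\; \frac{|w|\,\Delta t}{\Delta z} \;\le\; C_{\max},
    \]

    where w is the vertical velocity, Δt the time step, Δz the local vertical grid spacing, and C_max an O(1) constant for explicit schemes. The one-step coupled time-space schemes discussed here remove this restriction (unconditional stability) while retaining accuracy at large C_z.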

  1. Synthetic Constraint of Ecosystem C Models Using Radiocarbon and Net Primary Production (NPP) in New Zealand Grazing Land

    NASA Astrophysics Data System (ADS)

    Baisden, W. T.

    2011-12-01

    Time-series radiocarbon measurements have substantial ability to constrain the size and residence time of the soil C pools commonly represented in ecosystem models. Radiocarbon remains unique in its ability to constrain the large stabilized C pool with decadal residence times. Radiocarbon also contributes usefully to constraining the size and turnover rate of the passive pool, but typically struggles to constrain pools with residence times of less than a few years. Overall, the number of pools and associated turnover rates that can be constrained depends upon the number of time-series samples available, the appropriateness of chemical or physical fractions to isolate unequivocal pools, and the utility of additional C flux data to provide additional constraints. In New Zealand pasture soils, we demonstrate the ability to constrain decadal turnover times to within a few years for the stabilized pool and to reasonably constrain the passive fraction. Good constraint is obtained with two time-series samples spaced 10 or more years apart after 1970. Three or more time-series samples further improve the level of constraint. Work within this context shows that a two-pool model does explain soil radiocarbon data for the most detailed profiles available (11 time-series samples), and identifies clear and consistent differences in rates of C turnover and passive fraction in Andisols vs. non-Andisols. Furthermore, samples from multiple horizons can commonly be combined, yielding consistent residence times and passive fraction estimates that are stable with, or increase with, depth at different sites. Radiocarbon generally fails to quantify rapid C turnover, however. Given that the strength of radiocarbon is estimating the size and turnover of the stabilized (decadal) and passive (millennial) pools, the magnitude of the fast-cycling pool(s) can be estimated by subtracting the radiocarbon-based estimates of turnover within the stabilized and passive pools from total estimates of NPP. In grazing land, these estimates can be derived primarily from measured aboveground NPP and calculated belowground NPP. Results suggest that only 19-36% of heterotrophic soil respiration is derived from soil C with rapid turnover times. A final logical step in synthesis is the analysis of temporal variation in NPP, primarily due to climate, as a driver of changes in plant inputs, resulting in dynamic changes in the rapid and decadal soil C pools. At sites with good time-series samples from 1959-1975, we examine the apparent impacts of measured or modelled (Biome-BGC) NPP on soil Δ14C. Ultimately, these approaches have the ability to empirically constrain, and provide limited verification of, the soil C cycle as commonly depicted in ecosystem biogeochemistry models.
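
    A minimal sketch of the underlying idea (pools with different turnover rates take up an atmospheric 14C signal at very different speeds), using a crude Gaussian stand-in for the bomb-spike curve and hypothetical pool fractions; real analyses fit the rates and fractions to measured time-series Δ14C:

    ```python
    import numpy as np

    # Minimal multi-pool soil-carbon tracer: each pool turns over at rate k,
    # replacing old carbon with atmosphere-tagged carbon. f is a
    # Delta-14C-like tracer; the atmospheric curve below is a crude
    # stand-in for the bomb spike, not real data.
    years = np.arange(1950, 2011)
    atm = 1000.0 * np.exp(-((years - 1964) / 12.0) ** 2)

    def pool_tracer(k, atm, f0=0.0):
        f = np.empty_like(atm); f[0] = f0
        for i in range(1, len(atm)):
            f[i] = f[i-1] + k * (atm[i-1] - f[i-1])   # annual time step
        return f

    fast = pool_tracer(1 / 5.0, atm)       # ~5-year residence time
    slow = pool_tracer(1 / 50.0, atm)      # ~50-year (decadal) pool
    mix = 0.3 * fast + 0.7 * slow          # bulk soil = pool mixture
    print(mix[years == 1975], mix[years == 2005])
    ```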

  2. Active traffic management : the next step in congestion management

    DOT National Transportation Integrated Search

    2007-07-01

    The combination of continued travel growth and budget constraints makes it difficult for transportation agencies to provide sufficient roadway capacity in major metropolitan areas. The Federal Highway Administration, American Association of State Hig...

  3. Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hively, Lee M; Sheldon, Frederick T

    The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.

  4. Optimal impulsive time-fixed orbital rendezvous and interception with path constraints

    NASA Technical Reports Server (NTRS)

    Taur, D.-R.; Prussing, J. E.; Coverstone-Carroll, V.

    1990-01-01

    Minimum-fuel, impulsive, time-fixed solutions are obtained for the problem of orbital rendezvous and interception with interior path constraints. Transfers between coplanar circular orbits in an inverse-square gravitational field are considered, subject to a circular path constraint representing a minimum or maximum permissible orbital radius. Primer vector theory is extended to incorporate path constraints. The optimal number of impulses, their times and positions, and the presence of initial or final coasting arcs are determined. The existence of constraint boundary arcs and boundary points is investigated as well as the optimality of a class of singular arc solutions. To illustrate the complexities introduced by path constraints, an analysis is made of optimal rendezvous in field-free space subject to a minimum radius constraint.

  5. Advancing RF pulse design using an open-competition format: Report from the 2015 ISMRM challenge.

    PubMed

    Grissom, William A; Setsompop, Kawin; Hurley, Samuel A; Tsao, Jeffrey; Velikina, Julia V; Samsonov, Alexey A

    2017-10-01

    To advance the best solutions to two important RF pulse design problems with an open head-to-head competition. Two sub-challenges were formulated in which contestants competed to design the shortest simultaneous multislice (SMS) refocusing pulses and slice-selective parallel transmission (pTx) excitation pulses, subject to realistic hardware and safety constraints. Short refocusing pulses are needed for spin echo SMS imaging at high multiband factors, and short slice-selective pTx pulses are needed for multislice imaging in ultra-high field MRI. Each sub-challenge comprised two phases, in which the first phase posed problems with a low barrier of entry, and the second phase encouraged solutions that performed well in general. The Challenge ran from October 2015 to May 2016. The pTx Challenge winners developed a spokes pulse design method that combined variable-rate selective excitation with an efficient method to enforce SAR constraints, which achieved 10.6 times shorter pulse durations than conventional approaches. The SMS Challenge winners developed a time-optimal control multiband pulse design algorithm that achieved 5.1 times shorter pulse durations than conventional approaches. The Challenge led to rapid step improvements in solutions to significant problems in RF excitation for SMS imaging and ultra-high field MRI. Magn Reson Med 78:1352-1361, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  6. Linking Land Surface Phenology and Growth Limiting Factor Shifts over the Past 30 Years

    NASA Astrophysics Data System (ADS)

    Garonna, I.; Schenkel, D.; de Jong, R.; Schaepman, M. E.

    2015-12-01

    The study of global vegetation dynamics contributes to a better understanding of global change drivers and how these affect ecosystems and ecological diversity. Land-surface phenology (LSP) is a key response and feedback of vegetation to the climate system, and hence a parameter that needs to be accurately represented in terrestrial biosphere models [1]. However, the effects of climatic changes on LSP depend on the relative importance of climatic constraints in specific regions, which are not well understood at global scale. In this study, we analyzed a Phenology Reanalysis dataset [2] to evaluate shifts in three climatic drivers of phenology at global scale and over the last 30 years (1982-2012): incoming radiation, evaporative demand and minimum temperature. As a first step, we compared LAI as modeled from these three factors (LAIre) to remotely sensed observations of LSP (LAI3g, [3]) over the same time period. As a second step, we examined temporal trends in the climatic constraints at Start- and End- of the Growing Season. There was good agreement between phenology metrics as derived from LAI3g and LAIre over the last 30 years, thus providing confidence in the climatic constraints underlying the modeled data. Our analysis reveals inter-annual variation in the relative importance of the three climatic factors in limiting vegetation growth at Start- and End- of the Growing Season over the last 30 years. High northern latitudes, as well as northern Europe and central Asia, appear to have undergone significant changes in dominance between the three controls. We also find that evaporative demand has become increasingly limiting for growth in many parts of the world, in particular in South America and eastern Asia. [1] Richardson, A.D. et al. Global Change Biology 18, 566-584 (2012). [2] Stöckli, R. et al. J. Geophys. Res 116, G03020 (2011). [3] Zhu, Z. et al. Remote Sensing 5, 927-948 (2013).

  7. An extraction method of mountainous area settlement place information from GF-1 high resolution optical remote sensing image under semantic constraints

    NASA Astrophysics Data System (ADS)

    Guo, H., II

    2016-12-01

    Spatial distribution information for settlement places in mountainous areas is of great significance to earthquake emergency work, because most of the key earthquake-hazard areas of China are located in mountainous regions. Remote sensing has the advantages of large coverage and low cost, making it an important way to obtain the spatial distribution of such settlement places. Fully considering geometric, spectral and texture information, most studies to date have applied object-oriented methods to extract settlement place information; in this article, semantic constraints are added on top of an object-oriented method. The experimental data are one scene from a domestic high-resolution satellite (GF-1), with a resolution of 2 meters. The processing consists of three steps: pretreatment (orthorectification and image fusion), object-oriented information extraction (image segmentation and information extraction), and removal of erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous settlement places must be analyzed and the spatial-logic relations between settlement places and other objects must be considered. The accuracy assessment shows that the extraction accuracy of the object-oriented method alone is 49%, rising to 86% after the use of semantic constraints. As can be seen from these figures, the extraction method under semantic constraints can effectively improve the accuracy of mountainous settlement place extraction. The results show that it is feasible to extract settlement place information from GF-1 imagery, so the article demonstrates a certain practicality in using domestic high-resolution optical remote sensing images for earthquake emergency preparedness.

  8. Method and System for Air Traffic Rerouting for Airspace Constraint Resolution

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz (Inventor); Morando, Alexander R. (Inventor); Sheth, Kapil S. (Inventor); McNally, B. David (Inventor); Clymer, Alexis A. (Inventor); Shih, Fu-tai (Inventor)

    2017-01-01

    A dynamic constraint avoidance route system automatically analyzes routes of aircraft flying, or to be flown, in or near constraint regions and attempts to find more time and fuel efficient reroutes around current and predicted constraints. The dynamic constraint avoidance route system continuously analyzes all flight routes and provides reroute advisories that are dynamically updated in real time. The dynamic constraint avoidance route system includes a graphical user interface that allows users to visualize, evaluate, modify if necessary, and implement proposed reroutes.

  9. A survey of methods of feasible directions for the solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1972-01-01

    Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a Zoutendijk method. The categories of continuous optimal control problems are: (1) fixed time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed time problems with inequality state space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.

  10. Cognitive Models for Learning to Control Dynamic Systems

    DTIC Science & Technology

    2008-05-30

    The MILP model includes KN + M equality constraints and 7NM + 2M inequality non-timing constraints; the rest are inequality timing constraints. The size of the MILP model grows rapidly with the increase of problem size, so it is a big challenge to deal with more… …task requirement, are studied in the section. An assumption is made in advance that the attack delay time and the flight time to the sink node are…

  11. Conformational Sampling in Template-Free Protein Loop Structure Modeling: An Overview

    PubMed Central

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a “mini protein folding problem” under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized. PMID:24688696

  12. Conformational sampling in template-free protein loop structure modeling: an overview.

    PubMed

    Li, Yaohang

    2013-01-01

    Accurately modeling protein loops is an important step to predict three-dimensional structures as well as to understand functions of many proteins. Because of their high flexibility, modeling the three-dimensional structures of loops is difficult and is usually treated as a "mini protein folding problem" under geometric constraints. In the past decade, there has been remarkable progress in template-free loop structure modeling due to advances of computational methods as well as stably increasing number of known structures available in PDB. This mini review provides an overview on the recent computational approaches for loop structure modeling. In particular, we focus on the approaches of sampling loop conformation space, which is a critical step to obtain high resolution models in template-free methods. We review the potential energy functions for loop modeling, loop buildup mechanisms to satisfy geometric constraints, and loop conformation sampling algorithms. The recent loop modeling results are also summarized.

  13. Toward Implementing Patient Flow in a Cancer Treatment Center to Reduce Patient Waiting Time and Improve Efficiency.

    PubMed

    Suss, Samuel; Bhuiyan, Nadia; Demirli, Kudret; Batist, Gerald

    2017-06-01

    Outpatient cancer treatment centers can be considered as complex systems in which several types of medical professionals and administrative staff must coordinate their work to achieve the overall goals of providing quality patient care within budgetary constraints. In this article, we use analytical methods that have been successfully employed for other complex systems to show how a clinic can simultaneously reduce patient waiting times and non-value added staff work in a process that has a series of steps, more than one of which involves a scarce resource. The article describes the system model and the key elements in the operation that lead to staff rework and patient queuing. We propose solutions to the problems and provide a framework to evaluate clinic performance. At the time of this report, the proposals are in the process of implementation at a cancer treatment clinic in a major metropolitan hospital in Montreal, Canada.

  14. The ticking time bomb: Using eye-tracking methodology to capture attentional processing during gradual time constraints.

    PubMed

    Franco-Watkins, Ana M; Davis, Matthew E; Johnson, Joseph G

    2016-11-01

    Many decisions are made under suboptimal circumstances, such as time constraints. We examined how different experiences of time constraints affected decision strategies on a probabilistic inference task and whether individual differences in working memory accounted for complex strategy use across different levels of time. To examine information search and attentional processing, we used an interactive eye-tracking paradigm where task information was occluded and only revealed by an eye fixation to a given cell. Our results indicate that although participants change search strategies during the most restricted times, the occurrence of the shift in strategies depends both on how the constraints are applied as well as individual differences in working memory. This suggests that, in situations that require making decisions under time constraints, one can influence performance by being sensitive to working memory and, potentially, by acclimating people to the task time gradually.

  15. Aiding USAF/UPT (Undergraduate Pilot Training) Aircrew Scheduling Using Network Flow Models.

    DTIC Science & Technology

    1986-06-01

    (Table-of-contents excerpt.) 3.4 Heuristic Modifications. Chapter 4, Student Scheduling Problem (Level 2): 4.0 Introduction; 4.01 Constraints; 4.1 "Set Covering" Complete Enumeration; 4.14 Heuristics; 4.2 Heuristic Method for the Level 2 Problem (4.21 Step 1; 4.22 Step 2; 4.23 Advantages of the Heuristic Method; 4.24 Problems with the Heuristic Method).

  16. Stabilization of computational procedures for constrained dynamical systems

    NASA Technical Reports Server (NTRS)

    Park, K. C.; Chiou, J. C.

    1988-01-01

    A new stabilization method of treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces; the constraint forces are then obtained by integrating this companion equation in time. A principal feature of the method is that the error committed in each constraint condition decays with the characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
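
    The paper's specific companion-equation construction is not reproduced here, but the generic idea of stabilizing a constraint with penalty feedback can be sketched on a planar pendulum whose rod-length constraint g = 0 is enforced by a force λ∇g, with λ driven by g and its rate (all gains and parameters below are hypothetical tuning, not the authors' values):

    ```python
    import math

    # Penalty-stabilized pendulum: mass on a rod of length L, constraint
    # g = (x^2 + y^2 - L^2)/2 = 0 enforced by a stiff spring-damper
    # "constraint force" lam * grad(g) instead of exact Lagrange multipliers.
    L, grav = 1.0, 9.81
    kp, kd = 1.0e4, 2.0e2        # penalty gains (critically damped here)
    dt, steps = 1.0e-4, 50000

    x, y, vx, vy = L, 0.0, 0.0, 0.0   # start horizontal, at rest
    for _ in range(steps):
        g = 0.5 * (x * x + y * y - L * L)
        gdot = x * vx + y * vy
        lam = -(kp * g + kd * gdot)   # drives g and gdot to zero
        ax, ay = lam * x, -grav + lam * y
        vx += dt * ax; vy += dt * ay  # symplectic Euler step
        x += dt * vx;  y += dt * vy

    print(math.hypot(x, y))  # stays ~1.0: constraint drift is controlled
    ```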

  17. Balancing healthy meals and busy lives: associations between work, school, and family responsibilities and perceived time constraints among young adults.

    PubMed

    Pelletier, Jennifer E; Laska, Melissa N

    2012-01-01

    To characterize associations between perceived time constraints for healthy eating and work, school, and family responsibilities among young adults. Cross-sectional survey. A large, Midwestern metropolitan region. A diverse sample of community college (n = 598) and public university (n = 603) students. Time constraints in general, as well as those specific to meal preparation/structure, and perceptions of a healthy life balance. Chi-square tests and multivariate logistic regression (α = .005). Women, 4-year students, and students with lower socioeconomic status perceived more time constraints (P < .001-.002); students with lower socioeconomic status were less likely to have a healthy balance (P ≤ .003). Having a heavy course load and working longer hours were important predictors of time constraints among men (P < .001-.004), whereas living situation and being in a relationship were more important among women (P = .002-.003). Most young adults perceive time constraints on healthy dietary behaviors, yet some young adults appear able to maintain a healthy life balance despite multiple time demands. Interventions focused on improved time management strategies and nutrition-related messaging to achieve healthy diets on a low time budget may be more successful if tailored to the factors that contribute to time constraints separately among men and women. Copyright © 2012 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  18. Numerical simulation of the kinetic effects in the solar wind

    NASA Astrophysics Data System (ADS)

    Sokolov, I.; Toth, G.; Gombosi, T. I.

    2017-12-01

    Global numerical simulations of the solar wind are usually based on the ideal or resistive magnetohydrodynamics (MHD) equations. Within the framework of MHD, the electric field is assumed to vanish in the co-moving frame of reference (ideal MHD) or to obey a simple and non-physical scalar Ohm's law (resistive MHD). Maxwellian distribution functions are assumed; the electron and ion temperatures may be different. Non-dispersive MHD waves can be present in this numerical model. The averaged equations for MHD turbulence may be included, as well as the energy and momentum exchange between the turbulent and regular motion. With the use of an explicit numerical scheme, the time step is controlled by the MHD wave propagation time across the numerical cell (the CFL condition). A more refined approach includes the Hall effect via the generalized Ohm's law. The Lorentz force acting on the light electrons is assumed to vanish, which gives an expression for the local electric field in terms of the total electric current, the ion current, the electron pressure gradient and the magnetic field. The waves (whistlers, ion-cyclotron waves, etc.) acquire dispersion, and the short-wavelength perturbations propagate with elevated speed, thus strengthening the CFL condition. If the grid size is sufficiently small to resolve the ion skin-depth scale, then the time step is much shorter than the ion gyration period. The next natural step is to use a hybrid code to resolve the ion kinetic effects. The hybrid numerical scheme employs the same generalized Ohm's law as Hall MHD and suffers from the same constraint on the time step while solving the evolution of the electromagnetic field. The important distinction, however, is that by solving particle motion for ions we can achieve a more detailed description of the kinetic effects without significant degradation in computational efficiency, because the time step is sufficient to resolve the particle gyration. We present the first numerical results from the coupled BATS-R-US+ALTOR code as applied to kinetic simulations of the solar wind.
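
    The strengthened restriction follows from the standard scaling argument for the whistler branch: since its dispersion is quadratic, the fastest resolved phase speed grows as the grid is refined, so the explicit time step shrinks quadratically with grid spacing,

    \[
    \omega \propto k^2 \;\Rightarrow\; v_\phi = \frac{\omega}{k} \propto k_{\max} \propto \frac{1}{\Delta x}
    \;\Rightarrow\; \Delta t \;\lesssim\; \frac{\Delta x}{v_\phi} \propto \Delta x^2 .
    \]

    Hall-MHD and hybrid field solves share this constraint; the record's point is that particle pushing for the ions is comfortable at such small time steps, since they already resolve the gyration.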

  19. Conformation and Dynamics of a Flexible Sheet in Solvent Media by Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Pandey, Ras; Anderson, Kelly; Heinz, Hendrik; Farmer, Barry

    2005-03-01

    Flexibility of the clay sheet is limited even in the exfoliated state in some solvent media. A coarse-grained model is used to investigate the dynamics and conformation of a flexible sheet, modeling such a clay platelet in an effective solvent medium on a cubic lattice of size L^3 with lattice constant a. The undeformed sheet is described by a square lattice of size Ls^2, where each node of the sheet is represented by a unit cube of the cubic lattice and 2a is the minimum distance between nearest-neighbor nodes, to incorporate the excluded volume constraints. Additionally, each node interacts with neighboring nodes and solvent (empty) sites within a range ri. Each node executes its stochastic motion with the Metropolis algorithm, subject to bond-length fluctuation and excluded volume constraints. Mean square displacements of the center node and of the sheet's center of mass are investigated as a function of time step for a set of these parameters. The radius of gyration (Rg) is examined concurrently to understand its relaxation. Multi-scale segmental dynamics of the sheet is studied by identifying power-law dependence in various time regimes. Relaxation of Rg and its dependence on temperature will also be discussed.
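
    A minimal sketch of the Metropolis move described here, for a single node with bond-length limits standing in for the bond-fluctuation and excluded-volume constraints (the energies, interaction range and full sheet bookkeeping are omitted; all values are illustrative):

    ```python
    import math, random

    random.seed(2)

    def metropolis_move(node, neighbors, energy, temperature):
        """Attempt one stochastic node move on a cubic lattice.

        node: (x, y, z); neighbors: bonded node positions; energy(pos)
        gives the interaction energy. Bond lengths are kept within
        [2, 3] lattice units as a stand-in for the model's constraints.
        """
        trial = tuple(c + random.choice((-1, 0, 1)) for c in node)
        if trial == node:
            return node
        for nb in neighbors:
            if not 2.0 <= math.dist(trial, nb) <= 3.0:
                return node                      # reject: constraint violated
        dE = energy(trial) - energy(node)
        if dE <= 0 or random.random() < math.exp(-dE / temperature):
            return trial                         # Metropolis acceptance
        return node

    # Toy energy: harmonic pull toward the origin.
    E = lambda p: 0.1 * sum(c * c for c in p)
    pos = (4, 0, 0)
    for _ in range(1000):
        pos = metropolis_move(pos, [(2, 0, 0), (6, 0, 0)], E, temperature=1.0)
    print(pos)
    ```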

  20. Computational electrodynamics in material media with constraint-preservation, multidimensional Riemann solvers and sub-cell resolution - Part I, second-order FVTD schemes

    NASA Astrophysics Data System (ADS)

    Balsara, Dinshaw S.; Taflove, Allen; Garain, Sudip; Montecinos, Gino

    2017-11-01

    While classic finite-difference time-domain (FDTD) solutions of Maxwell's equations have served the computational electrodynamics (CED) community very well, formulations based on Godunov methodology have begun to show advantages. We argue that the formulations presented so far are such that FDTD schemes and Godunov-based schemes each have their own unique advantages. However, there is currently not a single formulation that systematically integrates the strengths of both these major strains of development. While an early glimpse of such a formulation was offered in Balsara et al. [16], that paper focused on electrodynamics in plasma. Here, we present a synthesis that integrates the strengths of both FDTD and Godunov-based schemes into a robust single formulation for CED in material media. Three advances make this synthesis possible. First, from the FDTD method, we retain (but somewhat modify) a spatial staggering strategy for the primal variables. This provides a beneficial constraint preservation for the electric displacement and magnetic induction vector fields via reconstruction methods that were initially developed in some of the first author's papers for numerical magnetohydrodynamics (MHD). Second, from the Godunov method, we retain the idea of upwinding, except that this idea, too, has to be significantly modified to use the multi-dimensionally upwinded Riemann solvers developed by the first author. Third, we draw upon recent advances in arbitrary derivatives in space and time (ADER) time-stepping by the first author and his colleagues. We use the ADER predictor step to endow our method with sub-cell resolving capabilities so that the method can be stiffly stable and resolve significant sub-cell variation in the material properties within a zone. Overall, in this paper, we report a new scheme for numerically solving Maxwell's equations in material media, with special attention paid to a second-order-accurate formulation. Several numerical examples are presented to show that the proposed technique works. Because of its sub-cell resolving ability, the new method retains second-order accuracy even when material permeability and permittivity vary by an order-of-magnitude over just one or two zones. Furthermore, because the new method is also unconditionally stable in the presence of stiff source terms (i.e., in problems involving giant conductivity variations), it can handle several orders-of-magnitude variation in material conductivity over just one or two zones without any reduction of the time-step. Consequently, the CFL depends only on the propagation speed of light in the medium being studied.

  1. Dispersal constraints for stream invertebrates: setting realistic timescales for biodiversity restoration.

    PubMed

    Parkyn, Stephanie M; Smith, Brian J

    2011-09-01

    Biodiversity goals are becoming increasingly important in stream restoration. Typical models of stream restoration are based on the assumption that if habitat is restored then species will return and ecological processes will re-establish. However, a range of constraints at different scales can affect restoration success. Much of the research in stream restoration ecology has focused on habitat constraints, namely the in-stream and riparian conditions required to restore biota. Dispersal constraints are also integral to determining the timescales, trajectory and potential endpoints of a restored ecosystem. Dispersal is both a means of organism recolonization of restored sites and a vital ecological process that maintains viable populations. We review knowledge of dispersal pathways and explore the factors influencing stream invertebrate dispersal. From empirical and modeling studies of restoration in warm-temperate zones of New Zealand, we make predictions about the timescales of stream ecological restoration under differing levels of dispersal constraints. This process of constraints identification and timescale prediction is proposed as a practical step for resource managers to prioritize and appropriately monitor restoration sites and highlights that in some instances, natural recolonization and achievement of biodiversity goals may not occur.

  2. Dispersal Constraints for Stream Invertebrates: Setting Realistic Timescales for Biodiversity Restoration

    NASA Astrophysics Data System (ADS)

    Parkyn, Stephanie M.; Smith, Brian J.

    2011-09-01

    Biodiversity goals are becoming increasingly important in stream restoration. Typical models of stream restoration are based on the assumption that if habitat is restored then species will return and ecological processes will re-establish. However, a range of constraints at different scales can affect restoration success. Much of the research in stream restoration ecology has focused on habitat constraints, namely the in-stream and riparian conditions required to restore biota. Dispersal constraints are also integral to determining the timescales, trajectory and potential endpoints of a restored ecosystem. Dispersal is both a means of organism recolonization of restored sites and a vital ecological process that maintains viable populations. We review knowledge of dispersal pathways and explore the factors influencing stream invertebrate dispersal. From empirical and modeling studies of restoration in warm-temperate zones of New Zealand, we make predictions about the timescales of stream ecological restoration under differing levels of dispersal constraints. This process of constraints identification and timescale prediction is proposed as a practical step for resource managers to prioritize and appropriately monitor restoration sites and highlights that in some instances, natural recolonization and achievement of biodiversity goals may not occur.

  3. Limits of thermochemical and photochemical syntheses of gaseous fuels: a finite-time thermodynamic analysis. Annual report, September 1983-February, 1985

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, R.S.

    The objectives of this project are to develop methods for the evaluation of syntheses of gaseous fuels in terms of their optimum possible performance, particularly when they are required to supply those fuels at nonzero rates. The first objective is entirely in the tradition of classical thermodynamics: evaluating processes, given the characteristics and constraints that define them. The new element which this project introduces is the capability to set limits more realistic than those from classical thermodynamics, by including the influence of the rate or duration of a process on its performance. The development of these analyses is a natural step in the evolution represented by the evaluative papers of Appendix IV, e.g., by Funk et al., Abraham, Shinnar, Bilgen and Fletcher. A second objective is to determine how any given process should be carried out, within its constraints, in order to yield its optimum performance, and to use this information whenever possible to help guide the design of that process.

  4. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
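
    The report's graphical example can be written as a two-variable linear program. The sketch below uses hypothetical benefit, time and cost coefficients (not the report's numbers) with scipy.optimize.linprog:

    ```python
    from scipy.optimize import linprog

    # Choose how many "regular" (x1) and "severe" (x2) patients to treat
    # to maximize total health benefit under time and budget limits.
    benefit = [2.0, 5.0]       # benefit units per patient treated
    time_per = [1.0, 3.0]      # clinician-hours per patient
    cost_per = [100.0, 400.0]  # dollars per patient

    res = linprog(
        c=[-b for b in benefit],            # linprog minimizes, so negate
        A_ub=[time_per, cost_per],
        b_ub=[40.0, 6000.0],                # 40 h and $6000 available
        bounds=[(0, None), (0, None)],
    )
    print(res.x, -res.fun)  # optimal mix and total benefit (time binds here)
    ```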

  5. An Extended Lagrangian Method

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing

    1995-01-01

    A unique formulation for describing fluid motion is presented. The method, referred to as the 'extended Lagrangian method,' is interesting from both theoretical and numerical points of view. The formulation offers accuracy in numerical solution by avoiding the numerical diffusion resulting from mixing of fluxes in the Eulerian description. The present method and the Arbitrary Lagrangian-Eulerian (ALE) method share a similarity in spirit: eliminating cross-streamline numerical diffusion. For this purpose, we suggest a simple grid constraint condition and utilize an accurate discretization procedure. This grid constraint is only applied to the transverse cell face parallel to the local stream velocity, and hence our method for steady-state problems naturally reduces to the streamline-curvature method, without explicitly solving the steady stream-coordinate equations formulated a priori. Unlike the Lagrangian method proposed by Loh and Hui, which is valid only for steady supersonic flows, the present method is general and capable of treating subsonic and supersonic flows as well as unsteady flows, simply by invoking in the same code an appropriate grid constraint suggested in this paper. The approach is found to be robust and stable. It automatically adapts to flow features without resorting to clustering, thereby maintaining rather uniform grid spacing throughout and a large time step. Moreover, the method is shown to resolve multi-dimensional discontinuities with a high level of accuracy, similar to that found in one-dimensional problems.

  6. Nonholonomic Closed-loop Velocity Control of a Soft-tethered Magnetic Capsule Endoscope.

    PubMed

    Taddese, Addisu Z; Slawinski, Piotr R; Obstein, Keith L; Valdastri, Pietro

    2016-10-01

    In this paper, we demonstrate velocity-level closed-loop control of a tethered magnetic capsule endoscope that is actuated via a serial manipulator with a permanent magnet at its end-effector. Closed-loop control (2 degrees of freedom in position, and 2 in orientation) is made possible by a real-time magnetic localization algorithm that utilizes the actuating magnetic field and thus does not require additional hardware. Velocity control is implemented to create the smooth motion that is clinically necessary for colorectal cancer diagnostics. Our control algorithm generates a spline that passes through a set of input points that roughly defines the shape of the desired trajectory. The velocity controller acts in the direction tangential to the path, while a secondary position controller enforces a nonholonomic constraint on capsule motion. A soft nonholonomic constraint is naturally imposed by the lumen, while we enforce a strict constraint for both more accurate estimation of the tether disturbance and hypothesized intuitiveness for a clinician's teleoperation. An integrating disturbance-force estimation control term is introduced to predict the disturbance of the tether. This paper presents the theoretical formulations and experimental validation of our methodology. Results show the system's ability to achieve a repeatable velocity step response with low steady-state error, as well as the ability of the tethered capsule to maneuver around a bend.

  7. Optimal space-time attacks on system state estimation under a sparsity constraint

    NASA Astrophysics Data System (ADS)

    Lu, Jingyang; Niu, Ruixin; Han, Puxiao

    2016-05-01

    System state estimation in the presence of an adversary that injects false information into sensor readings has attracted much attention in wide application areas, such as target tracking with compromised sensors, secure monitoring of dynamic electric power systems, secure driverless cars, and radar tracking and detection in the presence of jammers. From a malicious adversary's perspective, the optimal strategy for attacking a multi-sensor dynamic system over sensors and over time is investigated. It is assumed that the system defender can perfectly detect the attacks and identify and remove sensor data once they are corrupted by false information injected by the adversary. With this in mind, the adversary's goal is to maximize the covariance matrix of the system state estimate by the end of the attack period, under a sparse attack constraint such that the adversary can only attack the system a few times over time and over sensors. The sparsity assumption is due to the adversary's limited resources and his/her intention to reduce the chance of being detected by the system defender. This becomes an integer programming problem, and its exact solution by exhaustive search is intractable, with prohibitive complexity, especially for a system with a large number of sensors and over a large number of time steps. Several suboptimal solutions, such as those based on greedy search and dynamic programming, are proposed to find the attack strategies. Examples and numerical results are provided in order to illustrate the effectiveness and the reduced computational complexities of the proposed attack strategies.
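
    Of the suboptimal strategies mentioned, greedy search is the simplest to sketch. Below, a scalar toy system (all parameters hypothetical): an attack at a (time, sensor) pair causes that measurement to be detected and discarded, and attacks are chosen greedily to maximize the final estimation variance under the sparsity budget k:

    ```python
    # Scalar system x_{t+1} = a x_t + w, two sensors y_i = x + v_i. The
    # defender discards any attacked measurement, so an attack at (t, i)
    # simply removes that sensor's update. Greedily pick k attacks that
    # maximize the final variance (scalar stand-in for the covariance).
    a, q = 0.95, 0.1           # state transition and process-noise variance
    r = [0.2, 0.5]             # measurement-noise variances of the sensors
    T, k = 6, 3                # horizon and attack budget (sparsity)

    def final_variance(removed, p0=1.0):
        p = p0
        for t in range(T):
            p = a * a * p + q                    # Kalman predict
            for i, ri in enumerate(r):
                if (t, i) not in removed:        # update with survivors
                    p = 1.0 / (1.0 / p + 1.0 / ri)
        return p

    attacks = set()
    for _ in range(k):                           # greedy attack selection
        candidates = [(t, i) for t in range(T) for i in range(len(r))
                      if (t, i) not in attacks]
        best = max(candidates, key=lambda c: final_variance(attacks | {c}))
        attacks.add(best)

    print(sorted(attacks), final_variance(attacks))
    ```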

  8. Deciding to Decide: How Decisions Are Made and How Some Forces Affect the Process.

    PubMed

    McConnell, Charles R

    There is a decision-making pattern that applies in all situations, large or small, although in small decisions, the steps are not especially evident. The steps are gathering information, analyzing information and creating alternatives, selecting and implementing an alternative, and following up on implementation. The amount of effort applied in any decision situation should be consistent with the potential consequences of the decision. Essentially, all decisions are subject to certain limitations or constraints, forces, or circumstances that limit one's range of choices. Follow-up on implementation is the phase of decision making most often neglected, yet it is frequently the phase that determines success or failure. Risk and uncertainty are always present in a decision situation, and the application of human judgment is always necessary. In addition, there are often emotional forces at work that can at times unwittingly steer one away from that which is best or most workable under the circumstances and toward a suboptimal result based largely on the desires of the decision maker.

  9. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    PubMed Central

    Zhang, Qianghui; Wu, Junjie; Li, Wenchao; Huang, Yulin; Yang, Jianyu; Yang, Haiguang

    2016-01-01

    Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications. PMID:27472341

  10. Capabilities and constraints of NASA's ground-based reduced gravity facilities

    NASA Technical Reports Server (NTRS)

    Lekan, Jack; Neumann, Eric S.; Sotos, Raymond G.

    1993-01-01

    The ground-based reduced gravity facilities of NASA have been utilized to support numerous investigations addressing various processes and phenomena in several disciplines for the past 30 years. These facilities, which include drop towers, drop tubes, aircraft, and sounding rockets, are able to provide a low gravity environment (gravitational levels that range from 10^-2 g to 10^-6 g) by creating a free fall or semi-free fall condition where the force of gravity on an experiment is offset by its linear acceleration during the 'fall' (drop or parabola). The low gravity condition obtained on the ground is the same as that of an orbiting spacecraft, which is in a state of perpetual free fall. The gravitational levels and associated duration times for the full spectrum of reduced gravity facilities, including space-based facilities, are summarized. Even though ground-based facilities offer a relatively short experiment time, this available test time has been found to be sufficient to advance the scientific understanding of many phenomena and to provide meaningful hardware tests during the flight experiment development process. Also, since experiments can be quickly repeated in these facilities, multistep phenomena that have longer characteristic times associated with them can sometimes be examined in a step-by-step process. There is a large body of literature reporting the study results achieved using reduced-gravity data obtained from these facilities.

  11. A coarse-grid projection method for accelerating incompressible flow computations

    NASA Astrophysics Data System (ADS)

    San, Omer; Staples, Anne

    2011-11-01

    We present a coarse-grid projection (CGP) algorithm for accelerating incompressible flow computations, which is applicable to methods involving Poisson equations as incompressibility constraints. CGP methodology is a modular approach that facilitates data transfer with simple interpolations and uses black-box solvers for the Poisson and advection-diffusion equations in the flow solver. Here, we investigate a particular CGP method for the vorticity-stream function formulation that uses the full weighting operation for mapping from fine to coarse grids, the third-order Runge-Kutta method for time stepping, and finite differences for the spatial discretization. After solving the Poisson equation on a coarsened grid, bilinear interpolation is used to obtain the fine data for consequent time stepping on the full grid. We compute several benchmark flows: the Taylor-Green vortex, a vortex pair merging, a double shear layer, decaying turbulence and the Taylor-Green vortex on a distorted grid. In all cases we use either FFT-based or V-cycle multigrid linear-cost Poisson solvers. Reducing the number of degrees of freedom of the Poisson solver by powers of two accelerates these computations while, for the first level of coarsening, retaining the same level of accuracy in the fine resolution vorticity field.
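
    The core CGP cycle (restrict the source by full weighting, solve Poisson on the coarse grid, interpolate the result back bilinearly) can be sketched compactly; the dense solver below stands in for the FFT-based or multigrid black-box solvers the record mentions, and the grid sizes are illustrative:

    ```python
    import numpy as np

    def full_weighting(f):
        """Restrict fine-grid values (size 2N+1) to the coarse grid (N+1)."""
        c = f[::2, ::2].copy()                      # injection at boundaries
        c[1:-1, 1:-1] = (4 * f[2:-2:2, 2:-2:2]
                         + 2 * (f[1:-3:2, 2:-2:2] + f[3:-1:2, 2:-2:2]
                                + f[2:-2:2, 1:-3:2] + f[2:-2:2, 3:-1:2])
                         + f[1:-3:2, 1:-3:2] + f[1:-3:2, 3:-1:2]
                         + f[3:-1:2, 1:-3:2] + f[3:-1:2, 3:-1:2]) / 16.0
        return c

    def bilinear(c):
        """Interpolate coarse-grid values back to the fine grid."""
        n = 2 * (c.shape[0] - 1) + 1
        f = np.zeros((n, n))
        f[::2, ::2] = c
        f[1::2, ::2] = 0.5 * (c[:-1, :] + c[1:, :])
        f[::2, 1::2] = 0.5 * (c[:, :-1] + c[:, 1:])
        f[1::2, 1::2] = 0.25 * (c[:-1, :-1] + c[1:, :-1]
                                + c[:-1, 1:] + c[1:, 1:])
        return f

    def poisson_dirichlet(rhs, h):
        """Dense 5-point Poisson solve with homogeneous Dirichlet walls."""
        m = rhs.shape[0] - 2                        # interior points per side
        A = np.zeros((m * m, m * m))
        for i in range(m):
            for j in range(m):
                kk = i * m + j
                A[kk, kk] = -4.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < m and 0 <= jj < m:
                        A[kk, ii * m + jj] = 1.0
        u = np.zeros_like(rhs)
        u[1:-1, 1:-1] = np.linalg.solve(
            A, (h * h * rhs[1:-1, 1:-1]).ravel()).reshape(m, m)
        return u

    # Fine source on a 33x33 grid; solve Poisson on 17x17 and lift back.
    N = 32
    x = np.linspace(0, 1, N + 1)
    X, Y = np.meshgrid(x, x, indexing="ij")
    src = np.sin(np.pi * X) * np.sin(np.pi * Y)
    u_fine = bilinear(poisson_dirichlet(full_weighting(src), 2.0 / N))
    print(u_fine.shape, u_fine.min())   # ~ -1/(2*pi^2) at the center
    ```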

  12. Configuration optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David

    1991-01-01

    The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.

  13. A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics

    NASA Astrophysics Data System (ADS)

    Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno

    2017-07-01

    In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem. In such cases, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts in order to reproduce much lower-frequency phenomena and to optimize the CPU time. As a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate: the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.

  14. An L1-norm phase constraint for half-Fourier compressed sensing in 3D MR imaging.

    PubMed

    Li, Guobin; Hennig, Jürgen; Raithel, Esther; Büchert, Martin; Paul, Dominik; Korvink, Jan G; Zaitsev, Maxim

    2015-10-01

    In most half-Fourier imaging methods, explicit phase replacement is used. In combination with parallel imaging or compressed sensing, half-Fourier reconstruction is usually performed in a separate step. The purpose of this paper is to report that integrating half-Fourier reconstruction into the iterative reconstruction minimizes reconstruction errors. The L1-norm phase constraint for half-Fourier imaging proposed in this work is compared with the L2-norm variant of the same algorithm and with several typical half-Fourier reconstruction methods. Half-Fourier imaging with the proposed phase constraint can be seamlessly combined with parallel imaging and compressed sensing to achieve high acceleration factors. In simulations and in in-vivo experiments, half-Fourier imaging with the proposed L1-norm phase constraint delivers superior performance, both in the reconstruction of image details and in robustness against phase estimation errors. The performance and feasibility of half-Fourier imaging with the proposed L1-norm phase constraint are reported. Its seamless combination with parallel imaging and compressed sensing enables the use of greater acceleration in 3D MR imaging.
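
    The abstract does not spell out the objective; a plausible form for such an integrated reconstruction, written here only as an illustration (the operators and weights are assumptions, not the paper's exact formulation), is

        \min_x \; \|Ex - y\|_2^2 \;+\; \lambda_1 \|\Psi x\|_1 \;+\; \lambda_2 \,\big\|\mathrm{Im}\big(e^{-i\hat{\varphi}} \odot x\big)\big\|_1

    where E is the undersampled (coil-weighted) Fourier encoding operator, y the acquired half-Fourier data, \Psi a sparsifying transform for compressed sensing, and \hat{\varphi} a low-resolution phase estimate. The last term is an L1-norm phase constraint; replacing it by its squared-L2 counterpart would give the L2-norm variant compared in the paper.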

  15. Balancing healthy meals and busy lives: Associations between work, school and family responsibilities and perceived time constraints among young adults

    PubMed Central

    Laska, Melissa N.

    2012-01-01

    Objective: To characterize associations between perceived time constraints for healthy eating and work, school, and family responsibilities among young adults. Design: Cross-sectional survey. Setting: A large, Midwestern metropolitan region. Participants: A diverse sample of community college (n=598) and public university (n=603) students. Main Outcome Measures: Time constraints in general, as well as those specific to meal preparation/structure, and perceptions of a healthy life balance. Analysis: Chi-square tests and multivariate logistic regression (α=0.005). Results: Women, four-year students, and students with lower socio-economic status perceived more time constraints (P<0.001–0.002); students with lower socio-economic status were less likely to have a healthy balance (P<0.001–0.003). Having a heavy course load and working longer hours were important predictors of time constraints among men (P<0.001–0.004), whereas living situation and being in a relationship were more important among women (P=0.002–0.003). Conclusions and Implications: Most young adults perceive time constraints on healthy dietary behaviors, yet some young adults appear able to maintain a healthy life balance despite multiple time demands. Interventions focused on improved time management strategies and nutrition-related messaging to achieve healthy diets on a low time budget may be more successful if tailored to the factors that contribute to time constraints among men and women separately. PMID:23017891

  16. Motor Cortex Activity During Functional Motor Skills: An fNIRS Study.

    PubMed

    Nishiyori, Ryota; Bisconti, Silvia; Ulrich, Beverly

    2016-01-01

    Assessments of brain activity during motor task performance have been limited to fine motor movements due to technological constraints presented by traditional neuroimaging techniques, such as functional magnetic resonance imaging. Functional near-infrared spectroscopy (fNIRS) offers a promising method by which to overcome these constraints and investigate motor performance of functional motor tasks. The current study used fNIRS to quantify hemodynamic responses within the primary motor cortex in twelve healthy adults as they performed unimanual right, unimanual left, and bimanual reaching, and stepping in place. Results revealed that during both unimanual reaching tasks, the contralateral hemisphere showed significant activation in channels located approximately 3 cm medial to the C3 (for right-hand reach) and C4 (for left-hand reach) landmarks. Bimanual reaching and stepping showed activation in similar channels, which were located bilaterally across the primary motor cortex. The medial channels, surrounding Cz, showed significantly higher activations during stepping when compared to bimanual reaching. Our results extend the viability of fNIRS to study motor function and build a foundation for future investigation of motor development in infants during nascent functional behaviors and monitor how they may change with age or practice.

  17. Joint L2,1 Norm and Fisher Discrimination Constrained Feature Selection for Rational Synthesis of Microporous Aluminophosphates.

    PubMed

    Qi, Miao; Wang, Ting; Yi, Yugen; Gao, Na; Kong, Jun; Wang, Jianzhong

    2017-04-01

    Feature selection has been regarded as an effective tool to help researchers understand the generating process of data. For mining the synthesis mechanism of microporous AlPOs, this paper proposes a novel feature selection method using joint l2,1-norm and Fisher discrimination constraints (JNFDC). To obtain a more effective feature subset, the proposed method proceeds in two steps. The first step ranks the features according to sparse and discriminative constraints. The second step establishes a predictive model with the ranked features and selects the most significant features according to their contribution to improving predictive accuracy. To the best of our knowledge, JNFDC is the first work to employ sparse representation theory to explore the synthesis mechanism of six kinds of pore rings. Numerical simulations demonstrate that the proposed method can select significant features affecting the specified structural property and improve predictive accuracy. Moreover, comparison results show that JNFDC obtains better predictive performance than several other state-of-the-art feature selection methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
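
    For readers unfamiliar with the l2,1 norm, the row-sparsity score it induces is easy to state in code. A minimal numpy sketch, assuming a learned weight matrix W that maps features to outputs (the joint optimization with the Fisher term, which the paper actually solves, is omitted here):

        import numpy as np

        def l21_norm(W):
            # ||W||_{2,1} = sum of the l2 norms of the rows of W
            return np.linalg.norm(W, axis=1).sum()

        rng = np.random.default_rng(0)
        W = rng.normal(size=(6, 3))            # toy: 6 features, 3 outputs
        scores = np.linalg.norm(W, axis=1)     # per-feature importance
        ranking = np.argsort(scores)[::-1]     # most important features first
        print(l21_norm(W), ranking)

    Penalizing this norm during training drives entire rows of W to zero, which is what makes it a feature selector rather than a generic regularizer.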

  18. Methodology for the specification of communication activities within the framework of a multi-layered architecture: Toward the definition of a knowledge base

    NASA Astrophysics Data System (ADS)

    Amyay, Omar

    A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue in the design of computer networks. The aim is to obtain an operational specification of the protocol-service couple of a given layer. Planning the synthesis and verification steps constitutes a specification trajectory. The latter is based on the progressive integration of the 'initial data' constraints and on the verification of the specification originating from each synthesis step, through validity constraints that characterize an admissible solution. Two types of trajectories are proposed, according to the style of the initial specification of the protocol-service couple: an operational type, from the service-supplier viewpoint; and a knowledge-property-oriented type, from the service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic, and epistemic logic. The originality of the second specification trajectory and the use of epistemic logic are shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge-base system implementing the proposed method. It is structured in three levels of representation of knowledge: the domain, the reasoning characterizing synthesis and verification activities, and the planning of the steps of a specification trajectory.

  19. Physics for Occupational Therapy Majors Program

    NASA Astrophysics Data System (ADS)

    Singh Aurora, Tarlok

    1998-03-01

    In Spring 1996, a one-semester course - "Survey of Physics" - was taught for students majoring in Occupational Therapy (O.T.), in contrast to the two-semester physics sequence for all other health science majors. The course was designed to expose the students to the concepts of physics, develop problem-solving skills, and emphasize the importance of physics to O.T. In developing the course content, students' preparedness in mathematics and the perceived future applications of physics in O.T. were taken into consideration, and steps were taken to remedy deficiencies in students' backgrounds. Owing to time constraints, the course comprised lecture, laboratory, and considerable self-study, and these will be described.

  20. Successes and failures of sixty years of vector control in French Guiana: what is the next step?

    PubMed

    Epelboin, Yanouk; Chaney, Sarah C; Guidez, Amandine; Habchi-Hanriot, Nausicaa; Talaga, Stanislas; Wang, Lanjiao; Dusfour, Isabelle

    2018-03-12

    Since the 1940s, French Guiana has implemented vector control to contain or eliminate malaria, yellow fever, and, recently, dengue, chikungunya, and Zika. Over time, strategies have evolved depending on the location, efficacy of the methods, development of insecticide resistance, and advances in vector control techniques. This review summarises the history of vector control in French Guiana by reporting the records found in the private archives of the Institute Pasteur in French Guiana and those accessible in libraries worldwide. This publication highlights successes and failures in vector control and identifies the constraints and expectations for vector control in this French overseas territory in the Americas.

  1. Speedup for quantum optimal control from automatic differentiation based on graphics processing units

    NASA Astrophysics Data System (ADS)

    Leung, Nelson; Abdelhafez, Mohamed; Koch, Jens; Schuster, David

    2017-04-01

    We implement a quantum optimal control algorithm based on automatic differentiation and harness the acceleration afforded by graphics processing units (GPUs). Automatic differentiation allows us to specify advanced optimization criteria and incorporate them in the optimization process with ease. We show that the use of GPUs can speed up calculations by more than an order of magnitude. Our strategy facilitates efficient numerical simulations on affordable desktop computers and exploration of a host of optimization constraints and system parameters relevant to real-life experiments. We demonstrate optimization of quantum evolution based on fine-grained evaluation of performance at each intermediate time step, thus enabling more intricate control of the evolution path, suppression of departures from the truncated model subspace, and minimization of the physical time needed to perform high-fidelity state preparation and unitary gates.
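
    The core idea, differentiating a time-stepped quantum evolution with respect to the control amplitudes, can be sketched in a few lines. A toy single-qubit state-transfer example using jax for the automatic differentiation (the model, names, and step counts are illustrative assumptions, not the authors' GPU implementation):

        import jax
        import jax.numpy as jnp
        from jax.scipy.linalg import expm

        sx = jnp.array([[0.0, 1.0], [1.0, 0.0]], dtype=jnp.complex64)   # control axis
        sz = jnp.array([[1.0, 0.0], [0.0, -1.0]], dtype=jnp.complex64)  # drift
        dt, n_steps = 0.1, 50
        psi0 = jnp.array([1.0, 0.0], dtype=jnp.complex64)    # start in |0>
        target = jnp.array([0.0, 1.0], dtype=jnp.complex64)  # aim for |1>

        def infidelity(controls):
            # 1 - |<target|psi(T)>|^2 after piecewise-constant control segments
            psi = psi0
            for u in controls:
                H = sz + u * sx                  # drift plus control Hamiltonian
                psi = expm(-1j * H * dt) @ psi   # one intermediate time step
            return 1.0 - jnp.abs(jnp.vdot(target, psi)) ** 2

        grad_fn = jax.grad(infidelity)           # gradient w.r.t. all amplitudes
        controls = 0.1 * jax.random.normal(jax.random.PRNGKey(0), (n_steps,))
        for _ in range(200):                     # plain gradient descent
            controls = controls - 0.5 * grad_fn(controls)
        print(infidelity(controls))              # should be close to 0

    Because the loss is evaluated from the full stepped evolution, per-time-step criteria (state penalties, subspace-leakage terms) can be added to the objective and differentiated for free, which is the flexibility the abstract highlights.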

  2. Variable speed wind turbine control by discrete-time sliding mode approach.

    PubMed

    Torchani, Borhen; Sellami, Anis; Garcia, Germain

    2016-05-01

    The aim of this paper is to propose a new design for variable-speed wind turbine control using a discrete-time sliding mode approach. The methodology is designed for linear saturated systems, with the saturation constraint imposed on the input vector. To this end, a backstepping design procedure is followed to construct a suitable sliding manifold that guarantees attainment of the stabilization control objective. The mechanism is modeled under commonly adopted assumptions for the damping, shaft stiffness, and inertia effects of the gear. The objectives are to synthesize robust controllers that maximize the energy extracted from wind while reducing mechanical loads, combining rotor speed tracking with an electromagnetic torque. Simulation results of the proposed scheme are presented. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Time-fixed rendezvous by impulse factoring with an intermediate timing constraint. [for transfer orbits

    NASA Technical Reports Server (NTRS)

    Green, R. N.; Kibler, J. F.; Young, G. R.

    1974-01-01

    A method is presented for factoring a two-impulse orbital transfer into a three- or four-impulse transfer which solves the rendezvous problem and satisfies an intermediate timing constraint. Both the time of rendezvous and the intermediate time of alignment are formulated as any element of a finite sequence of times. These times are integer multiples of a constant plus an additive constant. The rendezvous condition is an equality constraint, whereas the intermediate alignment is an inequality constraint. The two timing constraints are satisfied by factoring the impulses into collinear parts that vectorially sum to the original impulse and by varying the resultant period differences and the number of revolutions in each orbit. Five different types of solutions arise from factoring either or both of the two impulses into two or three parts, with a limit of four total impulses. The impulse-factoring technique may be applied to any two-impulse transfer which has distinct orbital periods.

  4. Linking Bibliographic Data Bases: A Discussion of the Battelle Technical Report.

    ERIC Educational Resources Information Center

    Jones, C. Lee

    This document establishes the context, summarizes the contents, and discusses the Battelle technical report, noting certain constraints of the study. Further steps for the linking of bibliographic databases for use by academic and public libraries are suggested. (RAA)

  5. Time Relevance of Convective Weather Forecast for Air Traffic Automation

    NASA Technical Reports Server (NTRS)

    Chan, William N.

    2006-01-01

    The Federal Aviation Administration (FAA) is handling nearly 120,000 flights a day through its Air Traffic Management (ATM) system, and air traffic congestion is expected to increase substantially over the next 20 years. Weather-induced impacts to throughput and efficiency are the leading cause of flight delays, accounting for 70% of all delays, with convective weather accounting for 60% of all weather-related delays. To support the Next Generation Air Traffic System goal of operating at 3X current capacity in the NAS, ATC decision support tools are being developed to create advisories that assist controllers with weather constraints. Initial development of these decision support tools did not integrate information regarding weather constraints such as thunderstorms and relied on an additional system to provide that information. Future Decision Support Tools should move towards an integrated system where weather constraints are factored into the advisory of a Decision Support Tool (DST). Several groups, such as NASA-Ames, Lincoln Laboratory, and MITRE, are integrating convective weather data with DSTs. A survey of current convective weather forecast and observation data shows they span a wide range of temporal and spatial resolutions. Short-range convective observations can be obtained every 5 minutes, with longer-range forecasts out to several days updated every 6 hours. Today, short-range forecasts of less than 2 hours have a temporal resolution of 5 minutes. Beyond 2 hours, forecasts have a much lower temporal resolution, typically 1 hour. Spatial resolutions vary from 1 km for short-range to 40 km for longer-range forecasts. Improving the accuracy of long-range convective forecasts is a major challenge. A report published by the National Research Council states that improvements in convective forecasts for the 2 to 6 hour time frame will only be achieved for a limited set of convective phenomena in the next 5 to 10 years. Improved longer-range forecasts will be probabilistic, as opposed to the deterministic shorter-range forecasts. Despite the known low level of confidence in long-range convective forecasts, these data are still useful to a DST routing algorithm: it is better to develop an aircraft route using the best information available than no information. The temporally coarse long-range forecast data need to be interpolated to be useful to a DST. A DST uses aircraft trajectory predictions that need to be evaluated for impacts by convective storms. Each time step of a trajectory prediction needs to be checked against weather data, so for temporally coarse data there needs to be a method to fill in weather data where there is none. Simply using the coarse weather data without any interpolation can result in DST routes that are impacted by regions of strong convection. Increasing the temporal resolution of these data can be achieved, but it results in a large dataset that may prove to be an operational challenge in transmission and loading by a DST. Currently, it takes about 7 minutes to retrieve a 7 MB RUC2 forecast file from NOAA at NASA-Ames Research Center. A prototype NCWF6 1-hour forecast is about 3 MB in size. A six-hour NCWF6 forecast with a 1-hour forecast time step will be about 18 MB (6 x 3 MB), and with a 15-minute forecast time step about 72 MB (24 x 3 MB). Based on the time it takes to retrieve the 7 MB RUC2 forecast, it will take approximately 70 minutes to retrieve a 6-hour NCWF6 forecast with 15-minute time steps.
    Until those issues are addressed, there is a need for an algorithm that interpolates between these temporally coarse long-range forecasts. This paper describes a method for using low-temporal-resolution probabilistic weather forecasts in a DST. The paper begins with a description of some convective weather forecast and observation products, followed by an example of how weather data are used by a DST. The subsequent sections describe probabilistic forecasts, followed by a description of a method for using low-temporal-resolution probabilistic weather forecasts by assigning a relevance value to these data outside of their valid times.

  6. Physical constraints, fundamental limits, and optimal locus of operating points for an inverted pendulum based actuated dynamic walker.

    PubMed

    Patnaik, Lalit; Umanand, Loganathan

    2015-10-26

    The inverted pendulum is a popular model for describing bipedal dynamic walking. The operating point of the walker can be specified by the combination of initial mid-stance velocity (v0) and step angle (φm) chosen for a given walk. In this paper, using basic mechanics, a framework of physical constraints that limit the choice of operating points is proposed. The constraint lines thus obtained delimit the allowable region of operation of the walker in the v0-φm plane. A given average forward velocity vx,avg can be achieved by several combinations of v0 and φm. Only one of these combinations results in the minimum mechanical power consumption and can be considered the optimum operating point for the given vx,avg. This paper proposes a method for obtaining this optimal operating point based on tangency of the power and velocity contours. Putting together all such operating points for various vx,avg, a family of optimum operating points, called the optimal locus, is obtained. For the energy loss and internal energy models chosen, the optimal locus obtained has a largely constant step angle with increasing speed but tapers off at non-dimensional speeds close to unity.

  7. Effects of Social Constraints on Career Maturity: The Mediating Effect of the Time Perspective

    ERIC Educational Resources Information Center

    Kim, Kyung-Nyun; Oh, Se-Hee

    2013-01-01

    Previous studies have provided mixed results for the effects of social constraints on career maturity. However, there has been growing interest in these effects from the time perspective. Few studies have examined the effects of social constraints on the time perspective which in turn influences career maturity. This study examines the mediating…

  8. Technical Note: Improving the VMERGE treatment planning algorithm for rotational radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaddy, Melissa R., E-mail: mrgaddy@ncsu.edu; Papp,

    2016-07-15

    Purpose: The authors revisit the VMERGE treatment planning algorithm by Craft et al. ["Multicriteria VMAT optimization," Med. Phys. 39, 686–696 (2012)] for arc therapy planning and propose two changes to the method that are aimed at improving the achieved trade-off between treatment time and plan quality at little additional planning time cost, while retaining other desirable properties of the original algorithm. Methods: The original VMERGE algorithm first computes an "ideal," high-quality but also highly time-consuming treatment plan that irradiates the patient from all possible angles in a fine angular grid with a highly modulated beam, and then makes this plan deliverable within a practical treatment time by an iterative fluence map merging and sequencing algorithm. We propose two changes to this method. First, we regularize the ideal plan obtained in the first step by adding an explicit constraint on treatment time. Second, we propose a different merging criterion that consists of identifying and merging the adjacent maps whose merging results in the least degradation of radiation dose. Results: The effect of both suggested modifications is evaluated individually and jointly on clinical prostate and paraspinal cases. Details of the two cases are reported. Conclusions: In the authors' computational study, both proposed modifications, especially the regularization, yield noticeably improved treatment plans for the same treatment times compared with the original VMERGE method. The resulting plans match the quality of 20-beam step-and-shoot IMRT plans with a delivery time of approximately 2 min.
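
    The modified merging criterion lends itself to a simple greedy loop. A toy fluence-space proxy, assuming numpy arrays for the per-angle fluence maps (the actual criterion is evaluated in dose, and all names here are illustrative):

        import numpy as np

        def merge_cost(a, b):
            # replacing adjacent maps a and b by their average perturbs
            # each of them by |a - b| / 2, so similar maps merge cheaply
            return np.abs(a - b).sum()

        rng = np.random.default_rng(1)
        maps = [rng.random(64) for _ in range(16)]   # ideal plan: 16 fine-angle maps

        while len(maps) > 6:                         # stop at a delivery-time budget
            costs = [merge_cost(maps[i], maps[i + 1]) for i in range(len(maps) - 1)]
            i = int(np.argmin(costs))                # least-degrading adjacent pair
            maps[i:i + 2] = [0.5 * (maps[i] + maps[i + 1])]

    Each pass removes one beam angle, trading modulation (and hence plan quality) for delivery time, which is exactly the trade-off the paper tunes.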

  9. A New Family of Compact High Order Coupled Time-Space Unconditionally Stable Vertical Advection Schemes

    NASA Astrophysics Data System (ADS)

    Lemarié, F.; Debreu, L.

    2016-02-01

    Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e., most of the grid points are integrated with small Courant numbers compared to the Courant-Friedrichs-Lewy (CFL) condition, except for just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust, in terms of accuracy, to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e., mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e., large sensitivity of the solution to Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost. To our knowledge, no unconditionally stable scheme with such high-order accuracy in time and space has been presented so far in the literature. Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.

  10. SU-F-T-195: Systematic Constraining of Contralateral Parotid Gland Led to Improved Dosimetric Outcomes for Multi-Field Optimization with Scanning Beam Proton Therapy: Promising Results From a Pilot Study in Patients with Base of Tongue Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, R; Liu, A; Poenisch, F

    Purpose: Treatment planning for Intensity Modulated Proton Therapy (IMPT) for head and neck cancer is time-consuming due to the large number of organs-at-risk (OAR) to be considered. As there are many competing objectives and a wide range of acceptable OAR constraints, the final approved plan may not be the most optimal for the given structures. We evaluated the dose reduction to the contralateral parotid achieved by implementing standardized constraints during optimization for scanning beam proton therapy planning. Methods: Twenty-four (24) consecutive patients previously treated for base of tongue carcinoma were retrospectively selected. The doses were 70 Gy, 63 Gy, and 57 Gy (SIB in 33 fractions) for high-, intermediate-, and standard-risk clinical target volumes (CTV), respectively; the treatment included the bilateral neck. Scanning beams using MFO with standardized bilateral anterior oblique and PA fields were applied. New plans were then developed and optimized by employing additional contralateral parotid constraints at multiple defined dose levels. Using a step-wise iterative process, the volume-based constraints at each level were further reduced until known target coverages were compromised. The newly developed plans were then compared to the original clinically approved plans using paired Student t-tests. Results: All 24 newly optimized treatment plans maintained initial plan quality compared with the approved plans, and the 98% prescription dose coverage of the CTVs was not compromised. A representative DVH comparison is shown in FIGURE 1. The contralateral parotid doses were reduced at all levels of interest when systematic constraints were applied to V10, V20, V30, and V40Gy (all P<0.0001; TABLE 1). Overall, the mean contralateral parotid dose was reduced by 2.26 Gy on average, a ∼13% relative improvement. Conclusion: Applying systematic, volume-based contralateral parotid constraints for IMPT planning significantly reduced the dose at all dosimetric levels for patients with base of tongue cancer.

  11. On the high frequency transfer of mechanical stimuli from the surface of the head to the macular neuroepithelium of the mouse.

    PubMed

    Jones, Timothy A; Lee, Choongheon; Gaines, G Christopher; Grant, J W Wally

    2015-04-01

    Vestibular macular sensors are activated by a shearing motion between the otoconial membrane and the underlying receptor epithelium. Shearing motion and sensory activation in response to an externally induced head motion do not occur instantaneously. The mechanically reactive elastic and inertial properties of the intervening tissue introduce temporal constraints on the transfer of the stimulus to sensors. Treating the otoconial sensory apparatus as an overdamped second-order mechanical system, we measured the governing long time constant (τ_L) for stimulus transfer from the head surface to the epithelium. This provided the basis to estimate the corresponding upper cutoff of the frequency response curve for mouse otoconial organs. A velocity step excitation was used as the forcing function. Hypothetically, the onset of the mechanical response to a step excitation follows an exponential rise of the form V_shear(t) = U(1 − e^(−t/τ_L)), where U is the amplitude of the applied shearing velocity step. The response time of the otoconial apparatus was estimated based on the activation threshold of macular neural responses to step stimuli having durations between 0.1 and 2.0 ms. Twenty adult C57BL/6J mice were evaluated. Animals were anesthetized. The head was secured to a shaker platform using a non-invasive head clip or implanted skull screws. The shaker was driven to produce a theoretical forcing step velocity excitation at the otoconial organ. Vestibular sensory evoked potentials (VsEPs) were recorded to measure the threshold for macular neural activation. The duration of the applied step motion was reduced systematically from 2 to 0.1 ms, and the response threshold was determined for each of nine durations. Hypothetically, the threshold of activation will increase according to the decrease in velocity transfer occurring at shorter step durations. The relationship between neural threshold and stimulus step duration was characterized. Activation threshold increased exponentially as velocity step duration decreased below 1.0 ms. The time constants associated with the exponential curve were τ_L = 0.50 ms for the head clip coupling and τ_L = 0.79 ms for the skull screw preparation. These corresponded to upper -3 dB frequency cutoff points of approximately 318 and 201 Hz, respectively. τ_L ranged from 224 to 379 across individual animals using the head clip coupling. The findings were consistent with a second-order mass-spring mechanical system. Threshold data were also fitted to underdamped models post hoc. The underdamped fits suggested natural resonance frequencies on the order of 278 to 448 Hz, as well as the idea that macular systems in mammals are less damped than generally acknowledged. Although estimated indirectly, it is argued that these time constants reflect largely, if not entirely, the mechanics of transfer to the sensory apparatus. The estimated governing time constant of 0.50 ms for the composite data predicts a high-frequency cutoff of at least 318 Hz for the intact otoconial apparatus of the mouse.
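
    The quoted cutoffs follow directly from the first-order relation f_c = 1/(2π·τ_L). A quick check in plain Python arithmetic, reproducing the numbers above:

        import math

        for tau_L in (0.50e-3, 0.79e-3):     # head clip vs. skull screws, in seconds
            f_c = 1.0 / (2.0 * math.pi * tau_L)
            print(f"tau_L = {tau_L * 1e3:.2f} ms -> f_c = {f_c:.0f} Hz")
        # prints 318 Hz and 201 Hz, matching the reported -3 dB points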

  12. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability.

    PubMed

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-09-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf 'afterlife' integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first, slowing later? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. To test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: the Brownian motion model (BM), the Early burst model (EB), and the Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best-fitting model for leaf litter decomposability and for all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits have been able to diverge toward progressively extreme values through evolutionary time. These results reinforce our understanding of the relationships between leaf litter decomposability and leaf traits from an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence.
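
    The contrast between the two best-known models is easy to see numerically: under Brownian motion the trait variance grows without bound, while OU mean reversion caps it. A toy Euler-Maruyama simulation (parameters are illustrative, not fitted values from the paper):

        import numpy as np

        rng = np.random.default_rng(0)
        n_steps, dt, sigma = 5000, 0.01, 1.0
        theta, mu = 2.0, 0.0                  # OU reversion rate and trait optimum

        bm = np.zeros(n_steps)
        ou = np.zeros(n_steps)
        for t in range(1, n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            bm[t] = bm[t - 1] + sigma * dW    # BM: dX = sigma dW
            ou[t] = ou[t - 1] - theta * (ou[t - 1] - mu) * dt + sigma * dW
            # OU: dX = -theta (X - mu) dt + sigma dW

        # BM variance grows linearly in time; OU variance saturates near
        # its stationary value sigma^2 / (2 * theta)
        print(bm.var(), ou.var(), sigma**2 / (2 * theta))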

  13. "Wood already touched by fire is not hard to set alight": Comment on "Constraints to applying systems thinking concepts in health systems: A regional perspective from surveying stakeholders in Eastern Mediterranean countries".

    PubMed

    Agyepong, Irene Akua

    2015-03-01

    A major constraint on the application of any form of knowledge and principles is the awareness, understanding, and acceptance of that knowledge and those principles. Systems Thinking (ST) is a way of understanding and thinking about the nature of health systems and about how to make and implement decisions within health systems so as to maximize desired and minimize undesired effects. A major constraint on applying ST within health systems in Low- and Middle-Income Countries (LMICs) would appear to be awareness and understanding of ST and of how to apply it. This is a fundamental constraint; given the increasing desire to enable the application of ST concepts in health systems in LMICs and to understand and evaluate the effects, an essential first step is enabling a widespread as well as deeper understanding of ST and how to apply it.

  14. Understanding medication compliance and persistence from an economics perspective.

    PubMed

    Elliott, Rachel A; Shinogle, Judith A; Peele, Pamela; Bhosle, Monali; Hughes, Dyfrig A

    2008-01-01

    An increased understanding of the reasons for noncompliance and lack of persistence with prescribed medication is an important step to improve treatment effectiveness, and thus patient health. Explanations have been attempted from epidemiological, sociological, and psychological perspectives. Economic models (utility maximization, time preferences, health capital, bilateral bargaining, stated preference, and prospect theory) may contribute to the understanding of medication-taking behavior. Economic models are applied to medication noncompliance. Traditional consumer choice models under a budget constraint do apply to medication-taking behavior in that increased prices cause decreased utilization. Nevertheless, empiric evidence suggests that budget constraints are not the only factor affecting consumer choice around medicines. Examination of time preference models suggests that the intuitive association between time preference and medication compliance has not been investigated extensively, and has not been proven empirically. The health capital model has theoretical relevance, but has not been applied to compliance. Bilateral bargaining may present an alternative model to concordance of the patient-prescriber relationship, taking account of game-playing by either party. Nevertheless, there is limited empiric evidence to test its usefulness. Stated preference methods have been applied most extensively to medicines use. Evidence suggests that patients' preferences are consistently affected by side effects, and that preferences change over time, with age and experience. Prospect theory attempts to explain how new information changes risk perceptions and associated behavior but has not been applied empirically to medication use. Economic models of behavior may contribute to the understanding of medication use, but more empiric work is needed to assess their applicability.

  15. Systems and methods for energy cost optimization in a building system

    DOEpatents

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
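
    A toy version of the optimization the patent describes, with the demand charge handled as an epigraph variable and "masked" to its active pricing period, can be set up as a linear program. Everything here (thermal model, prices, bounds) is an illustrative assumption, not the patented system:

        import numpy as np
        from scipy.optimize import linprog

        T = 24                                    # hourly steps
        hours = np.arange(T)
        price = np.where((hours >= 12) & (hours < 18), 0.30, 0.10)   # $/kWh
        peak = ((hours >= 12) & (hours < 18)).astype(float)          # demand-charge mask
        c_demand = 5.0                            # $/kW on the peak-window maximum
        a, b = 0.9, -0.5                          # x_{t+1} = a*x_t + b*u_t + d (cooling)
        d = 3.0                                   # heat gain toward a 30 degC ambient
        x0, x_lo, x_hi, u_max = 24.0, 21.0, 26.0, 3.0

        # Temperature is linear in the control history:
        # x_t = a^t x0 + sum_k a^(t-1-k) (b u_k + d)
        G = np.zeros((T, T))
        for t in range(1, T + 1):
            for k in range(t):
                G[t - 1, k] = a ** (t - 1 - k) * b
        x_free = np.array([a ** t * x0 + sum(a ** (t - 1 - k) * d for k in range(t))
                           for t in range(1, T + 1)])

        # Variables z = [u_0 .. u_{T-1}, m], with m >= u_t wherever the mask is on
        c = np.concatenate([price, [c_demand]])
        A_ub = np.vstack([
            np.hstack([G, np.zeros((T, 1))]),                 # x <= x_hi
            np.hstack([-G, np.zeros((T, 1))]),                # x >= x_lo
            np.hstack([np.diag(peak), -np.ones((T, 1))]),     # u_t - m <= 0 (masked)
        ])
        b_ub = np.concatenate([x_hi - x_free, x_free - x_lo, np.zeros(T)])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, u_max)] * T + [(0, None)])
        print(res.x[:T].round(2), "peak demand:", round(res.x[T], 2))

    Zeroing the mask outside the peak window mirrors what the patent calls invalidating demand charge constraints for inactive pricing periods; the optimizer responds by precooling before the expensive hours.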

  16. Premature Mobility of Boulders in Constructed Step-pool River Structures in the Carmel River, CA: The Role of Fish-centric Design Constraints, and Flow on Structural Stability

    NASA Astrophysics Data System (ADS)

    Smith, D. P.; Chow, K.; Luna, L.

    2017-12-01

    The 32 m tall San Clemente Dam (Carmel River, CA) was removed in 2015 to eliminate seismic risk and to improve fish passage for all life stages of steelhead (O. mykiss). Reservoir sediment was sequestered in place, rather than released, and a new 1000 m long channel/floodplain system was constructed to circumvent the stored sediment. The channel comprised a 250 m long, meandering low-gradient reach and a 750 m reach with alternating step-pool sections, plane beds, and resting pools. The floodprone surfaces were compacted, wrapped in geotechnical fabric, and vegetated. This study analyzes the geomorphic evolution of the new channel system during its first two years of service based upon detailed field inspection, SfM photogrammetry, orthophoto analysis, and 2D hydraulic modeling. A significant proportion of the step-pool structures experienced premature mobility, and several reaches of engineered stream banks were eroded in the first year. Individual six-tonne boulders were mobilized despite experiencing less than the 3-yr flow. The channel and floodplain were fully repaired following the first year. Strong flows (two 10-yr floods and a 30-yr flood) during the second year catastrophically altered the constructed channel and floodplain. While the low-gradient reach remained intact, each of the original step-pool structures was either completely mobilized and destroyed, buried by gravel, or bypassed by the subsequent channel. Despite the overall structural failure of the constructed channel, the new channel does not block steelhead migration and can be serendipitously considered an ecological success. Step-pool design was constrained by a fish-centric requirement that steps be 1 ft tall or less. Some constructed "resting pools" filled with sediment rather than transporting it. Using fish-centric constraints in the design, rather than strictly fluvial geomorphic principles, may have contributed to the early failure of the step-pool structures and other parts of the system.

  17. Finite-time stabilisation of a class of switched nonlinear systems with state constraints

    NASA Astrophysics Data System (ADS)

    Huang, Shipei; Xiang, Zhengrong

    2018-06-01

    This paper investigates the finite-time stabilisation for a class of switched nonlinear systems with state constraints. Some power orders of the system are allowed to be ratios of positive even integers over odd integers. A Barrier Lyapunov function is introduced to guarantee that the state constraint is not violated at any time. Using the convex combination method and a recursive design approach, a state-dependent switching law and state feedback controllers of individual subsystems are constructed such that the closed-loop system is finite-time stable without violation of the state constraint. Two examples are provided to show the effectiveness of the proposed method.
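
    The abstract does not give the specific barrier function; a log-type barrier Lyapunov candidate commonly used in this literature for a state constraint |z| < k_b (stated here as background, not as the authors' construction) is

        V_b(z) \;=\; \frac{1}{2}\,\ln\!\frac{k_b^{2}}{k_b^{2}-z^{2}}, \qquad |z| < k_b,

    which is positive definite on the constrained region and grows without bound as z approaches ±k_b, so any controller that keeps V_b bounded along closed-loop trajectories automatically keeps the state inside the constraint.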

  18. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  19. Finite-time sliding surface constrained control for a robot manipulator with an unknown deadzone and disturbance.

    PubMed

    Ik Han, Seong; Lee, Jangmyung

    2016-11-01

    This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Walking-adaptability assessments with the Interactive Walkway: Between-systems agreement and sensitivity to task and subject variations.

    PubMed

    Geerse, Daphne J; Coolen, Bert H; Roerdink, Melvyn

    2017-05-01

    The ability to adapt walking to environmental circumstances is an important aspect of walking, yet difficult to assess. The Interactive Walkway was developed to assess walking adaptability by augmenting a multi-Kinect-v2 10-m walkway with gait-dependent visual context (stepping targets, obstacles) using real-time processed markerless full-body kinematics. In this study we determined Interactive Walkway's usability for walking-adaptability assessments in terms of between-systems agreement and sensitivity to task and subject variations. Under varying task constraints, 21 healthy subjects performed obstacle-avoidance, sudden-stops-and-starts and goal-directed-stepping tasks. Various continuous walking-adaptability outcome measures were concurrently determined with the Interactive Walkway and a gold-standard motion-registration system: available response time, obstacle-avoidance and sudden-stop margins, step length, stepping accuracy and walking speed. The same holds for dichotomous classifications of success and failure for obstacle-avoidance and sudden-stops tasks and performed short-stride versus long-stride obstacle-avoidance strategies. Continuous walking-adaptability outcome measures generally agreed well between systems (high intraclass correlation coefficients for absolute agreement, low biases and narrow limits of agreement) and were highly sensitive to task and subject variations. Success and failure ratings varied with available response times and obstacle types and agreed between systems for 85-96% of the trials while obstacle-avoidance strategies were always classified correctly. We conclude that Interactive Walkway walking-adaptability outcome measures are reliable and sensitive to task and subject variations, even in high-functioning subjects. We therefore deem Interactive Walkway walking-adaptability assessments usable for obtaining an objective and more task-specific examination of one's ability to walk, which may be feasible for both high-functioning and fragile populations since walking adaptability can be assessed at various levels of difficulty. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. A Random Walk Approach to Query Informative Constraints for Clustering.

    PubMed

    Abin, Ahmad Ali

    2017-08-09

    This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. The commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
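
    Commute times are cheap to compute on small graphs from the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal numpy sketch of the quantity the querying strategy is built on (illustrative, not the authors' code):

        import numpy as np

        A = np.array([[0, 1, 1, 0],           # small undirected adjacency graph
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        deg = A.sum(axis=1)
        L = np.diag(deg) - A                  # graph Laplacian
        L_pinv = np.linalg.pinv(L)            # Moore-Penrose pseudoinverse
        vol = deg.sum()                       # graph volume (sum of degrees)

        n = len(A)
        C = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                # commute time C(i,j) = vol * (L+_ii + L+_jj - 2 L+_ij)
                C[i, j] = vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])
        print(C.round(2))                     # well-connected pairs score low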

  2. A Hybrid Constraint Representation and Reasoning Framework

    NASA Technical Reports Server (NTRS)

    Golden, Keith; Pang, Wan-Lin

    2003-01-01

    This paper introduces JNET, a novel constraint representation and reasoning framework that supports procedural constraints and constraint attachments, providing a flexible way of integrating the constraint reasoner with a run-time software environment. Attachments in JNET are constraints over arbitrary Java objects, which are defined using Java code, at runtime, with no changes to the JNET source code.

  3. Separation-Compliant, Optimal Routing and Control of Scheduled Arrivals in a Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.

    2013-01-01

    We address the problem of navigating a set (fleet) of aircraft in an aerial route network so as to bring each aircraft to its destination at a specified time and with minimal distance separation assured between all aircraft at all times. The speed range, initial position, required destination, and required time of arrival at the destination for each aircraft are assumed to be provided. Each aircraft's movement is governed by a controlled differential equation (state equation). The problem consists in choosing for each aircraft a path in the route network and a control strategy so as to meet the constraints and reach the destination at the required time. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver. The proposed model is a first step toward increasing the fidelity of continuous-time control models of air traffic in a terminal airspace. The Pontryagin Maximum Principle implies the polygonal shape of those portions of the state trajectories away from states in which one or more aircraft pairs are at minimal separation. The model also confirms the intuition that the narrower the allowed speed ranges of the aircraft, the smaller the space of optimal solutions, and that an instance of the optimal control problem may not have a solution at all (i.e., there may be no control strategy that meets the separation requirement and the other constraints).

  4. Rocky or Not, Here We Come: Further Revealing the Internal Structures of K2-21b+c Through Transit Timing

    NASA Astrophysics Data System (ADS)

    Stevenson, Kevin; Bean, Jacob; Dragomir, Diana; Fabrycky, Daniel; Kreidberg, Laura; Mills, Sean; Petigura, Erik

    2016-08-01

    The provenance of planets 1.5 - 2 times the size of the Earth is one of the biggest unresolved mysteries from the Kepler mission. Determining the nature and origins of these exoplanets requires not only measuring their radii, but also knowledge of their masses, atmospheric compositions, and interior structures. With this information, we can more confidently estimate planet mass distributions from measured radii, distinguish between rocky and non-rocky compositions, and better constrain the occurrence rate of Earth-like planets. Last year, Co-I Petigura announced the discovery of a two-transiting-planet system, K2-21, with bodies of 1.6 and 1.9 Earth radii. The latter is expected to have a volatile-rich atmosphere, but the former lies squarely on the rocky/non-rocky composition boundary. These exoplanets orbit their relatively bright, nearby M dwarf parent star in a near 5:3 resonance and, based on our successful Spitzer observations, exhibit measurable transit timing variations (TTVs). Complete knowledge of their interactions will yield constraints on the planets' masses, which is important because significant stellar activity makes RV mass measurements impractical. We propose to continue measuring precise transit times of K2-21b and K2-21c with Spitzer and to combine that information with existing K2 timing constraints to determine their masses. Understanding the planets' masses is a critical first step toward ultimately determining their atmospheric compositions and internal structures. These planets will provide an excellent test of current statistical arguments suggesting a turning point in composition, from rocky, true-to-name super-Earths to volatile-rich sub-Neptunes, in the range of 1.5 - 2 Earth radii.

  5. A hybridized discontinuous Galerkin framework for high-order particle-mesh operator splitting of the incompressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Maljaars, Jakob M.; Labeur, Robert Jan; Möller, Matthias

    2018-04-01

    A generic particle-mesh method using a hybridized discontinuous Galerkin (HDG) framework is presented and validated for the solution of the incompressible Navier-Stokes equations. Building upon particle-in-cell concepts, the method is formulated in terms of an operator splitting technique in which Lagrangian particles are used to discretize an advection operator, and an Eulerian mesh-based HDG method is employed for the constitutive modeling to account for the inter-particle interactions. Key to the method is the variational framework provided by the HDG method. This allows to formulate the projections between the Lagrangian particle space and the Eulerian finite element space in terms of local (i.e. cellwise) ℓ2-projections efficiently. Furthermore, exploiting the HDG framework for solving the constitutive equations results in velocity fields which excellently approach the incompressibility constraint in a local sense. By advecting the particles through these velocity fields, the particle distribution remains uniform over time, obviating the need for additional quality control. The presented methodology allows for a straightforward extension to arbitrary-order spatial accuracy on general meshes. A range of numerical examples shows that optimal convergence rates are obtained in space and, given the particular time stepping strategy, second-order accuracy is obtained in time. The model capabilities are further demonstrated by presenting results for the flow over a backward facing step and for the flow around a cylinder.
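
    The particle-to-mesh transfer the abstract describes reduces, cell by cell, to a small least-squares problem. A 1D numpy sketch of such a cellwise l2-projection onto a linear basis (illustrative dimensions and names; the paper works on general meshes with higher-order bases):

        import numpy as np

        def project_cellwise(x_p, v_p, edges):
            # least-squares fit v(x) ~ c0 + c1*x on each cell from its particles
            coeffs = []
            for a, b in zip(edges[:-1], edges[1:]):
                in_cell = (x_p >= a) & (x_p < b)
                X = np.column_stack([np.ones(in_cell.sum()), x_p[in_cell]])
                c, *_ = np.linalg.lstsq(X, v_p[in_cell], rcond=None)
                coeffs.append(c)
            return np.array(coeffs)

        rng = np.random.default_rng(2)
        x_p = rng.uniform(0.0, 1.0, 400)                 # particle positions
        v_p = np.sin(2 * np.pi * x_p) + 0.05 * rng.normal(size=400)
        edges = np.linspace(0.0, 1.0, 11)                # 10 mesh cells
        print(project_cellwise(x_p, v_p, edges))         # one (c0, c1) per cell

    Because every cell's projection is independent, the transfer is local and embarrassingly parallel, which is the efficiency point the abstract makes about the HDG variational framework.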

  6. SciBox, an end-to-end automated science planning and commanding system

    NASA Astrophysics Data System (ADS)

    Choo, Teck H.; Murchie, Scott L.; Bedini, Peter D.; Steele, R. Josh; Skura, Joseph P.; Nguyen, Lillian; Nair, Hari; Lucks, Michael; Berman, Alice F.; McGovern, James A.; Turner, F. Scott

    2014-01-01

    SciBox is a new technology for planning and commanding science operations for Earth-orbital and planetary space missions. It has been incrementally developed since 2001 and demonstrated on several spaceflight projects. The technology has matured to the point that it is now being used to plan and command all orbital science operations for the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission to Mercury. SciBox encompasses the derivation of observing sequences from science objectives, the scheduling of those sequences, the generation of spacecraft and instrument commands, and the validation of those commands prior to uploading to the spacecraft. Although the process is automated, science and observing requirements are incorporated at each step by a series of rules and parameters to optimize observing opportunities, which are tested and validated through simulation and review. Except for limited special operations and tests, there is no manual scheduling of observations or construction of command sequences. SciBox reduces the lead time for operations planning by shortening the time-consuming coordination process, reduces cost by automating the labor-intensive processes of human-in-the-loop adjudication of observing priorities, reduces operations risk by systematically checking constraints, and maximizes science return by fully evaluating the trade space of observing opportunities to meet MESSENGER science priorities within spacecraft recorder, downlink, scheduling, and orbital-geometry constraints.

  7. Do not Lose Your Students in Large Lectures: A Five-Step Paper-Based Model to Foster Students’ Participation

    PubMed Central

    Aburahma, Mona Hassan

    2015-01-01

    Like most pharmacy colleges in developing countries with high population growth, public pharmacy colleges in Egypt are experiencing a significant annual increase in student enrollment due to the large youth population, accompanied by the keenness of students to join pharmacy colleges as a step to a better future career. In this context, large lectures represent a popular approach for teaching the students, as economic and logistic constraints prevent splitting them into smaller groups. Nevertheless, the impact of large lectures on student learning has been widely questioned due to their educational limitations, which are related to the passive role the students maintain in lectures. Despite the reported weaknesses underlying large lectures and lecturing in general, large lectures will likely continue to be taught in the same format in these countries. Accordingly, to soften the negative impacts of large lectures, this article describes a simple and feasible 5-step paper-based model to transform lectures from a passive information delivery space into an active learning environment. This model mainly suits educational establishments with financial constraints; nevertheless, it can be applied in lectures presented in any educational environment to improve the active participation of students. The components and the expected advantages of employing the 5-step paper-based model in large lectures, as well as its limitations and ways to overcome them, are presented briefly. The impact of applying this model on students' engagement and learning is currently being investigated. PMID:28975906

  8. Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes

    PubMed Central

    Dobos, László; Király, András; Abonyi, János

    2012-01-01

    Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
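
    A toy version of the framework, with a direct search (Nelder-Mead) wrapped around a Monte-Carlo cost evaluation that prices constraint violations, might look like this (all numbers and names are illustrative assumptions):

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        limit, violation_cost, n_mc = 1.0, 50.0, 2000
        noise = 0.1 * rng.normal(size=n_mc)     # common random numbers keep the
                                                # search deterministic
        def expected_cost(setpoint):
            s = float(setpoint[0])
            y = s + noise                       # simulated process output
            profit = 10.0 * s                   # running near the limit pays
            penalty = violation_cost * np.mean(y > limit)   # risk-weighted violations
            return -(profit - penalty)          # minimize negative profit

        res = minimize(expected_cost, x0=[0.5], method="Nelder-Mead")
        print(res.x)   # the optimum sits a safety margin below the hard limit

    The safety margin shrinks or grows with the assumed disturbance spread and the cost assigned to violations, which is exactly the risk-level trade-off the setpoint optimization is meant to expose.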

  9. Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Amity; Govind, Niranjan; Subramanian, Lalitha

    Periodic density functional theory calculations were performed to investigate the origin of the favorable para-selective nitration of toluene exhibited by zeolite H-beta with an acetyl nitrate nitration agent. Energy calculations were performed for each of the 32 crystallographically unique Bronsted acid sites of a beta polymorph B zeolite unit cell, yielding multiple Bronsted acid sites of comparable stability. One particular aluminum T-site, with three favorable Bronsted-site oxygens embedded in a straight 12-T channel wall, provides multiple favorable proton transfer sites. Transition state searches around this aluminum site were performed to determine the reaction barrier for both para and ortho nitration of toluene. A three-step process was assumed for the nitration of toluene, with two organic intermediates: the pi- and sigma-complexes. The rate-limiting step is the proton transfer from the sigma-complex to a zeolite Bronsted site. The barrier for this step in ortho nitration is shown to be nearly 2.5 times that in para nitration. This discrepancy appears to be due to steric constraints imposed by the curvature of the large 12-T pore channels of beta and by the toluene methyl group in the ortho approach, constraints that are not present in the para approach.

  10. Applying high-throughput methods to develop a purification process for a highly glycosylated protein.

    PubMed

    Sanaie, Nooshafarin; Cecchini, Douglas; Pieracci, John

    2012-10-01

    Micro-scale chromatography formats are becoming more routinely used in purification process development because of their ability to rapidly screen a large number of process conditions at a time with minimal material. Given the usual constraints that exist on development timelines and resources, these systems can provide a means to maximize process knowledge and process robustness compared to traditional packed column formats. In this work, a high-throughput, 96-well filter plate format was used in the development of the cation exchange and hydrophobic interaction chromatography steps of a purification process designed to alter the glycoform distribution of a small protein. The significant input parameters affecting process performance were rapidly identified for both steps, and preliminary operating conditions were identified. These ranges were verified in a packed chromatography column in order to assess the ability of the 96-well plate to predict packed column performance. In both steps, the 96-well plate format consistently led to underestimated glycoform-enrichment levels and to overestimated product recovery rates compared to the column-based approach. These studies demonstrate that the plate format can be used as a screening tool to narrow the operating ranges prior to further optimization on packed chromatography columns. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Effect of Profilin on Actin Critical Concentration: A Theoretical Analysis

    PubMed Central

    Yarmola, Elena G.; Dranishnikov, Dmitri A.; Bubb, Michael R.

    2008-01-01

    To explain the effect of profilin on actin critical concentration in a manner consistent with thermodynamic constraints and available experimental data, we built a thermodynamically rigorous model of actin steady-state dynamics in the presence of profilin. We analyzed previously published mechanisms theoretically and experimentally and, based on our analysis, suggest a new explanation for the effect of profilin. It is based on a general principle of indirect energy coupling. The fluctuation-based process of exchange diffusion indirectly couples the energy of ATP hydrolysis to actin polymerization. Profilin modulates this coupling, producing two basic effects. The first is based on the acceleration of exchange diffusion by profilin, which indicates, paradoxically, that a faster rate of actin depolymerization promotes net polymerization. The second is an affinity-based mechanism similar to the one suggested in 1993 by Pantaloni and Carlier although based on indirect rather than direct energy coupling. In the model by Pantaloni and Carlier, transformation of chemical energy of ATP hydrolysis into polymerization energy is regulated by direct association of each step in the hydrolysis reaction with a corresponding step in polymerization. Thus, hydrolysis becomes a time-limiting step in actin polymerization. In contrast, indirect coupling allows ATP hydrolysis to lag behind actin polymerization, consistent with experimental results. PMID:18835900

  12. Does a time constraint modify results from rating-based conjoint analysis? Case study with orange/pomegranate juice bottles.

    PubMed

    Reis, Felipe; Machín, Leandro; Rosenthal, Amauri; Deliza, Rosires; Ares, Gastón

    2016-12-01

    People do not usually process all the available information on packages when making their food choices and rely on heuristics for making their decisions, particularly when they have limited time. However, most consumer studies encourage participants to invest a lot of time in making their choices. Therefore, imposing a time constraint in consumer studies may increase their ecological validity. In this context, the aim of the present work was to evaluate the influence of a time constraint on consumer evaluation of pomegranate/orange juice bottles using a rating-based conjoint task. A consumer study with 100 participants was carried out, in which they had to evaluate 16 pomegranate/orange fruit juice bottles, differing in bottle design, front-of-pack nutritional information, nutrition claim and processing claim, and to rate their intention to purchase. Half of the participants evaluated the bottle images without a time constraint and the other half had a time constraint of 3 s for evaluating each image. Eye movements were recorded during the evaluation. Results showed that the time constraint when evaluating intention to purchase did not largely modify the way in which consumers visually processed the bottle images. Regardless of the experimental condition (with or without time constraint), they tended to evaluate the same product characteristics and to give them the same relative importance. However, a trend towards a more superficial evaluation of the bottles that skipped complex information was observed. Regarding the influence of product characteristics on consumer intention to purchase, bottle design was the variable with the largest relative importance in both conditions, overriding the influence of nutritional or processing characteristics, which stresses the importance of graphic design in shaping consumer perception. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Brief assessments and screening for geriatric conditions in older primary care patients: a pragmatic approach.

    PubMed

    Seematter-Bagnoud, Laurence; Büla, Christophe

    2018-01-01

    This paper discusses the rationale behind performing a brief geriatric assessment as a first step in the management of older patients in primary care practice. While geriatric conditions are considered by older patients and health professionals as particularly relevant for health and well-being, they remain too often overlooked due to many patient- and physician-related factors. These include time constraints and lack of specific training to undertake comprehensive geriatric assessment. This article discusses the epidemiologic rationale for screening functional, cognitive, affective, hearing and visual impairments, and nutritional status as well as fall risk and social status. It proposes using brief screening tests in primary care practice to identify patients who may need further comprehensive geriatric assessment or specific interventions.

  14. Earth Observatory Satellite system definition study. Report no. 2: Instrument constraints and interface specifications

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The instruments to be flown on the Earth Observatory Satellite (EOS) system are defined. The instruments will be used to support the Land Resources Management (LRM) mission of the EOS. Program planning information and suggested acquisition activities for obtaining the instruments are presented. The subjects considered are as follows: (1) the performance and interface of the Thematic Mapper (TM) and the High Resolution Pointing Imager (HRPI), (2) procedure for interfacing the TM and HRPI with the EOS satellite, (3) a space vehicle integration plan suggesting the steps and sequence of events required to carry out the interface activities, and (4) suggested agreements between the contractors for providing timely and equitable solution of problems at minimum cost.

  15. One-step volumetric additive manufacturing of complex polymer structures

    PubMed Central

    Shusteff, Maxim; Browar, Allison E. M.; Kelly, Brett E.; Henriksson, Johannes; Weisgraber, Todd H.; Panas, Robert M.; Fang, Nicholas X.; Spadaccini, Christopher M.

    2017-01-01

    Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s. PMID:29230437

  16. One-step volumetric additive manufacturing of complex polymer structures.

    PubMed

    Shusteff, Maxim; Browar, Allison E M; Kelly, Brett E; Henriksson, Johannes; Weisgraber, Todd H; Panas, Robert M; Fang, Nicholas X; Spadaccini, Christopher M

    2017-12-01

    Two limitations of additive manufacturing methods that arise from layer-based fabrication are slow speed and geometric constraints (which include poor surface quality). Both limitations are overcome in the work reported here, introducing a new volumetric additive fabrication paradigm that produces photopolymer structures with complex nonperiodic three-dimensional geometries on a time scale of seconds. We implement this approach using holographic patterning of light fields, demonstrate the fabrication of a variety of structures, and study the properties of the light patterns and photosensitive resins required for this fabrication approach. The results indicate that low-absorbing resins containing ~0.1% photoinitiator, illuminated at modest powers (~10 to 100 mW), may be successfully used to build full structures in ~1 to 10 s.

  17. The determination of the pulse pile-up reject (PUR) counting for X and gamma ray spectrometry

    NASA Astrophysics Data System (ADS)

    Karabıdak, S. M.; Kaya, S.

    2017-02-01

    The collection of the charged particles produced by the incident radiation on a detector requires a time interval. If this time interval is not sufficiently short compared with the peaking time of the amplifier, a loss in the recovered signal amplitude occurs. Another major constraint on the throughput of modern x- or gamma-ray spectrometers is the time required for the subsequent pulse processing by the electronics. The two above-mentioned limitations cause counting losses resulting from dead time and pile-up. Pulse pile-up is a common problem in x- and gamma-ray radiation detection systems, and piled-up pulses can cause significant errors in spectroscopic analysis. Therefore, inhibition of these pulses is a vital step. One way to reduce errors due to pulse pile-up is a pile-up reject (PUR) circuit. Such a circuit rejects some of the piled-up pulses and therefore leads to counting losses. The determination of these counting losses is an important problem. In this work, a new method is suggested for the determination of the pulse pile-up reject counting.
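
    For context, the classical dead-time relations between the true event rate n and the observed rate m are the usual starting point for this kind of counting-loss analysis; they are standard results, not equations quoted from the paper.

```latex
% Standard dead-time models (shown for context, not the paper's method):
% observed rate m versus true rate n for a dead time \tau per event.
\begin{align}
  m &= \frac{n}{1 + n\tau} && \text{(non-paralyzable detector)} \\
  m &= n\, e^{-n\tau}      && \text{(paralyzable detector)}
\end{align}
```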

  18. A discontinuous Galerkin method for nonlinear parabolic equations and gradient flow problems with interaction potentials

    NASA Astrophysics Data System (ADS)

    Sun, Zheng; Carrillo, José A.; Shu, Chi-Wang

    2018-01-01

    We consider a class of time-dependent second order partial differential equations governed by a decaying entropy. The solution usually corresponds to a density distribution, hence positivity (non-negativity) is expected. This class of problems covers important cases such as Fokker-Planck type equations and aggregation models, which have been studied intensively in the past decades. In this paper, we design a high order discontinuous Galerkin method for such problems. If the interaction potential is not involved, or the interaction is defined by a smooth kernel, our semi-discrete scheme admits an entropy inequality on the discrete level. Furthermore, by applying the positivity-preserving limiter, our fully discretized scheme produces non-negative solutions for all cases under a time step constraint. Our method also applies to two dimensional problems on Cartesian meshes. Numerical examples are given to confirm the high order accuracy for smooth test cases and to demonstrate the effectiveness for preserving long time asymptotics.
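
    A central ingredient of such positivity-preserving schemes is a scaling limiter in the spirit of Zhang and Shu: node values in each cell are compressed toward the (nonnegative) cell average just enough to remove negative values, leaving the average, and hence conservation, intact. The sketch below illustrates that general construction; it is not taken from the paper, and it assumes equal quadrature weights for simplicity.

```python
# Hedged sketch of a Zhang-Shu-type positivity-preserving scaling limiter,
# the standard construction behind limiters like the one the paper applies.
import numpy as np

def scale_to_nonnegative(node_values, cell_average):
    """Compress polynomial node values toward the (nonnegative) cell average
    just enough that every node value becomes nonnegative. The cell average,
    and hence conservation, is left unchanged."""
    m = node_values.min()
    if m >= 0.0:
        return node_values
    theta = cell_average / (cell_average - m)   # in [0, 1) since m < 0
    return theta * (node_values - cell_average) + cell_average

u = np.array([-0.02, 0.10, 0.40])   # quadrature-point values in one cell
ubar = u.mean()                      # equal weights assumed for simplicity
print(scale_to_nonnegative(u, ubar)) # min becomes 0.0, mean is preserved
```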

  19. The RNA-mediated, asymmetric ring regulatory mechanism of the transcription termination Rho helicase decrypted by time-resolved nucleotide analog interference probing (trNAIP).

    PubMed

    Soares, Emilie; Schwartz, Annie; Nollmann, Marcello; Margeat, Emmanuel; Boudvillain, Marc

    2014-08-01

    Rho is a ring-shaped, ATP-dependent RNA helicase/translocase that dissociates transcriptional complexes in bacteria. How RNA recognition is coupled to ATP hydrolysis and translocation in Rho is unclear. Here, we develop and use a new combinatorial approach, called time-resolved Nucleotide Analog Interference Probing (trNAIP), to unmask RNA molecular determinants of catalytic Rho function. We identify a regulatory step in the translocation cycle involving recruitment of the 2'-hydroxyl group of the incoming 3'-RNA nucleotide by a Rho subunit. We propose that this step arises from the intrinsic weakness of one of the subunit interfaces caused by asymmetric, split-ring arrangement of primary RNA tethers around the Rho hexamer. Translocation is at highest stake every seventh nucleotide when the weak interface engages the incoming 3'-RNA nucleotide or breaks, depending on RNA threading constraints in the Rho pore. This substrate-governed, 'test to run' iterative mechanism offers a new perspective on how a ring-translocase may function or be regulated. It also illustrates the interest and versatility of the new trNAIP methodology to unveil the molecular mechanisms of complex RNA-based systems. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Use of distributed water level and soil moisture data in the evaluation of the PUMMA periurban distributed hydrological model: application to the Mercier catchment, France

    NASA Astrophysics Data System (ADS)

    Braud, Isabelle; Fuamba, Musandji; Branger, Flora; Batchabani, Essoyéké; Sanzana, Pedro; Sarrazin, Benoit; Jankowfsky, Sonja

    2016-04-01

    Distributed hydrological models are best used when their outputs are compared not only to the outlet discharge but also to internal observed variables, so that they can serve as powerful hypothesis-testing tools. In this paper, the value of distributed networks of sensors for evaluating a distributed model and its underlying functioning hypotheses is explored. Two types of data are used: surface soil moisture and water level in streams. The model used in the study is the periurban PUMMA (Peri-Urban Model for landscape Management, Jankowfsky et al., 2014), which is applied to the Mercier catchment (6.7 km2), a semi-rural catchment with 14% imperviousness located close to Lyon, France, where distributed water level (13 locations) and surface soil moisture data (9 locations) are available. Model parameters are specified using in situ information or the results of previous studies, without any calibration, and the model is run for four years, from January 1st 2007 to December 31st 2010, with a variable time step for rainfall and an hourly time step for reference evapotranspiration. The model evaluation protocol was guided by the available data and how they can be interpreted in terms of hydrological processes and constraints for the model components and parameters. We followed a stepwise approach. The first step was a simple model water balance assessment, without comparison to observed data. It can be interpreted as a basic quality check for the model, ensuring that it conserves mass, distinguishes between dry and wet years, and reacts to rainfall events. The second step was an evaluation against observed discharge data at the outlet, using classical performance criteria. It gives a general picture of the model performance and allows comparing it to other studies found in the literature. In the next steps (steps 3 to 6), focus was placed on more specific hydrological processes. In step 3, distributed surface soil moisture data were used to assess the relevance of the simulated seasonal soil water storage dynamics. In step 4, we evaluated the base flow generation mechanisms in the model through comparison with continuous water level data transformed into stream intermittency statistics. In step 5, the water level data were used again, but at the event time scale, to evaluate the fast flow generation components through comparison of modelled and observed reaction and response times. Finally, in step 6, we studied the correlation between observed and simulated reaction and response times and various characteristics of the rainfall events (rain volume, intensity) and antecedent soil moisture, to see whether the model was able to reproduce the observed features as described in Sarrazin (2012). The results show that the model is able to represent satisfactorily the soil water storage dynamics and stream intermittency. On the other hand, the model does not reproduce the response times or the difference in response between forested and agricultural areas. References: Jankowfsky et al., 2014. Assessing anthropogenic influence on the hydrology of small peri-urban catchments: Development of the object-oriented PUMMA model by integrating urban and rural hydrological models. J. Hydrol., 517, 1056-1071. Sarrazin, B., 2012. MNT et observations multi-locales du réseau hydrographique d'un petit bassin versant rural dans une perspective d'aide à la modélisation hydrologique. Ecole doctorale Terre, Univers, Environnement. Institut National Polytechnique de Grenoble, 269 pp (in French).

  1. Methods for Estimating Environmental Effects and Constraints on NexGen: High Density Case Study

    NASA Technical Reports Server (NTRS)

    Augustine, S.; Ermatinger, C.; Graham, M.; Thompson, T.

    2010-01-01

    This document provides a summary of the current methods developed by Metron Aviation for the estimate of environmental effects and constraints on the Next Generation Air Transportation System (NextGen). This body of work incorporates many of the key elements necessary to achieve such an estimate. Each section contains the background and motivation for the technical elements of the work, a description of the methods used, and possible next steps. The current methods described in this document were selected in an attempt to provide a good balance between accuracy and fairly rapid turn around times to best advance Joint Planning and Development Office (JPDO) System Modeling and Analysis Division (SMAD) objectives while also supporting the needs of the JPDO Environmental Working Group (EWG). In particular this document describes methods applied to support the High Density (HD) Case Study performed during the spring of 2008. A reference day (in 2006) is modeled to describe current system capabilities while the future demand is applied to multiple alternatives to analyze system performance. The major variables in the alternatives are operational/procedural capabilities for airport, terminal, and en route airspace along with projected improvements to airframe, engine and navigational equipment.

  2. Software-Enabled Project Management Techniques and Their Relationship to the Triple Constraints

    ERIC Educational Resources Information Center

    Elleh, Festus U.

    2013-01-01

    This study investigated the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). There was the dearth of academic literature that focused on the relationship between software-enabled project management techniques and the triple constraints (time, cost, and scope). Based on the gap…

  3. Alternative Constraint Handling Technique for Four-Bar Linkage Path Generation

    NASA Astrophysics Data System (ADS)

    Sleesongsom, S.; Bureerat, S.

    2018-03-01

    This paper proposes an extension of a new concept for path generation from our previous work by adding a new constraint handling technique. The proposed technique was initially designed for problems without prescribed timing by avoiding the timing constraint, while the remaining constraints are handled with a new constraint handling technique, a kind of penalty technique. In the comparative study, path generation optimisation problems are solved using self-adaptive population size teaching-learning based optimization (SAP-TLBO) and the original TLBO. Two traditional path generation test problems are used to test the proposed technique. The results show that the new technique can be applied to path generation problems without prescribed timing and gives better results than the previous technique. Furthermore, SAP-TLBO outperforms the original algorithm.

  4. Optimal control of singularly perturbed nonlinear systems with state-variable inequality constraints

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Corban, J. E.

    1990-01-01

    The established necessary conditions for optimality in nonlinear control problems that involve state-variable inequality constraints are applied to a class of singularly perturbed systems. The distinguishing feature of this class of two-time-scale systems is a transformation of the state-variable inequality constraint, present in the full order problem, to a constraint involving states and controls in the reduced problem. It is shown that, when a state constraint is active in the reduced problem, the boundary layer problem can be of finite time in the stretched time variable. Thus, the usual requirement for asymptotic stability of the boundary layer system is not applicable, and cannot be used to construct approximate boundary layer solutions. Several alternative solution methods are explored and illustrated with simple examples.

  5. Efficient constraint handling in electromagnetism-like algorithm for traveling salesman problem with time windows.

    PubMed

    Yurtkuran, Alkın; Emel, Erdal

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of the customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms.
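
    As a concrete illustration of penalty-based constraint handling for TSPTW (this mirrors the general idea, not the authors' EMA implementation), a candidate tour can be scored as travel cost plus a weighted sum of time-window violations, with waiting allowed on early arrival:

```python
# Hedged sketch of penalty-based constraint handling for TSPTW: a tour's
# fitness is its travel cost plus a penalty proportional to the total
# time-window violation. Data and parameter values are invented.
import numpy as np

def tour_fitness(tour, travel, windows, service=0.0, penalty=1e3):
    """tour: visiting order of customers; travel[i][j]: travel time i->j;
    windows[i] = (earliest, latest). Arriving early means waiting."""
    t, cost, violation = 0.0, 0.0, 0.0
    for prev, nxt in zip(tour, tour[1:]):
        cost += travel[prev][nxt]
        t += travel[prev][nxt]
        earliest, latest = windows[nxt]
        t = max(t, earliest)               # wait if early
        violation += max(0.0, t - latest)  # penalize if late
        t += service
    return cost + penalty * violation

travel = np.array([[0, 4, 9], [4, 0, 5], [9, 5, 0]], dtype=float)
windows = {0: (0, 100), 1: (0, 6), 2: (8, 12)}
print(tour_fitness([0, 1, 2], travel, windows))  # feasible tour: cost only
```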

  6. Efficient Constraint Handling in Electromagnetism-Like Algorithm for Traveling Salesman Problem with Time Windows

    PubMed Central

    Yurtkuran, Alkın

    2014-01-01

    The traveling salesman problem with time windows (TSPTW) is a variant of the traveling salesman problem in which each customer should be visited within a given time window. In this paper, we propose an electromagnetism-like algorithm (EMA) that uses a new constraint handling technique to minimize the travel cost in TSPTW problems. The EMA utilizes the attraction-repulsion mechanism between charged particles in a multidimensional space for global optimization. This paper investigates the problem-specific constraint handling capability of the EMA framework using a new variable bounding strategy, in which real-coded particles' boundary constraints, associated with the corresponding time windows of the customers, are introduced and combined with the penalty approach to eliminate infeasibilities regarding time window violations. The performance of the proposed algorithm and the effectiveness of the constraint handling technique have been studied extensively, comparing it to that of state-of-the-art metaheuristics using several sets of benchmark problems reported in the literature. The results of the numerical experiments show that the EMA generates feasible and near-optimal results within shorter computational times compared to the test algorithms. PMID:24723834

  7. A classification procedure for the effective management of changes during the maintenance process

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.

    1992-01-01

    During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach is validated on a data set provided by NASA/GSFC which shows it was effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.

  8. Integrated system for single leg walking

    NASA Astrophysics Data System (ADS)

    Simmons, Reid; Krotkov, Eric; Roston, Gerry

    1990-07-01

    The Carnegie Mellon University Planetary Rover project is developing a six-legged walking robot capable of autonomously navigating, exploring, and acquiring samples in rugged, unknown environments. This report describes an integrated software system capable of navigating a single leg of the robot over rugged terrain. The leg, based on an early design of the Ambler Planetary Rover, is suspended below a carriage that slides along rails. To walk, the system creates an elevation map of the terrain from laser scanner images, plans an appropriate foothold based on terrain and geometric constraints, weaves the leg through the terrain to position it above the foothold, contacts the terrain with the foot, and applies force enough to advance the carriage along the rails. Walking both forward and backward, the system has traversed hundreds of meters of rugged terrain including obstacles too tall to step over, trenches too deep to step in, closely spaced obstacles, and sand hills. The implemented system consists of a number of task-specific processes (two for planning, two for perception, one for real-time control) and a central control process that directs the flow of communication between processes.

  9. Simulation of gait and gait initiation associated with body oscillating behavior in the gravity environment on the moon, mars and Phobos.

    PubMed

    Brenière, Y

    2001-04-01

    A double-inverted pendulum model of body oscillations in the frontal plane during stepping [Brenière and Ribreau (1998) Biol Cybern 79: 337-345] proposed an equivalent model for studying the body oscillating behavior induced by step frequency in the form of: (1) a kinetic body parameter, the natural body frequency (NBF), which contains gravity and which is invariable for humans, (2) a parametric function of frequency, whose parameter is the NBF, which explicates the amplitude ratio of center of mass to center of foot pressure oscillation, and (3) a function of frequency which simulates the equivalent torque necessary for the control of the head-arms-trunk segment oscillations. Here, this equivalent model is used to simulate the duration of gait initiation, i.e., the duration necessary to initiate and execute the first step of gait in subgravity, as well as to calculate the step frequencies that would impose the same minimum and maximum amplitudes of the oscillating responses of the body center of mass, whatever the gravity value. In particular, this simulation is tested under the subgravity conditions of the Moon, Mars, and Phobos, where gravity is 1/6, 3/8, and 1/1600 times that on the Earth, respectively. More generally, the simulation allows us to establish and discuss the conditions for gait adaptability that result from the biomechanical constraints particular to each gravity system.

  10. Opportunity Foregone: Education in Brazil.

    ERIC Educational Resources Information Center

    Birdsall, Nancy, Ed.; Sabot, Richard H., Ed.

    The studies presented in this volume help readers to understand the constraints faced in addressing the key problems within the Brazilian education system. Steps to address the issues and benefits to be gained by addressing those issues are discussed. Forty-two authors reiterate that the success of Brazil's education reform will have an important…

  11. Stoichiometric network constraints on xylose metabolism by recombinant Saccharomyces cerevisiae

    Treesearch

    Yong-Su Jin; Thomas W. Jeffries

    2004-01-01

    Metabolic pathway engineering is constrained by the thermodynamic and stoichiometric feasibility of enzymatic activities of introduced genes. Engineering of xylose metabolism in Saccharomyces cerevisiae has focused on introducing genes for the initial xylose assimilation steps from Pichia stipitis, a xylose-fermenting yeast, into S. cerevisiae, a yeast traditionally...

  12. Sulfur isotopic constraints from a single enzyme on the cellular to global sulfur cycles

    NASA Astrophysics Data System (ADS)

    Sim, M. S.; Adkins, J. F.; Sessions, A. L.; Orphan, V. J.; McGlynn, S.

    2017-12-01

    Since first reported more than a half century ago, sulfur isotope fractionation between sulfate and sulfide has been used as a diagnostic indicator of microbial sulfate reduction, giving added dimensions to the microbial ecological and geochemical studies of the sulfur cycle. A wide range of fractionation has attracted particular attention because it may serve as a potential indicator of environmental or physiological variables such as substrate concentrations or specific respiration rates. In theory, the magnitude of isotope fractionation depends upon the sulfur isotope effect imparted by the involved enzymes and the relative rate of each enzymatic reaction. The former defines the possible range of fractionation quantitatively, while the latter responds to environmental stimuli, providing an underlying rationale for the varying fractionations. The experimental efforts so far have concentrated largely on the latter, the factors affecting the size of fractionation. Recently, however, the direct assessment of intracellular processes emerges as a promising means for the quantitative analysis of microbial sulfur isotope fractionation as a function of environmental or physiological variables. Here, we experimentally determined for the first time the sulfur isotope fractionation during APS reduction, the first reductive step in the dissimilatory sulfate reduction pathway, using the enzyme purified from Desulfovibrio vulgaris Miyazaki. APS reductase carried out the one-step, two-electron reduction of APS to sulfite, without the production of other metabolic intermediates. Nearly identical isotope effects were obtained at two different temperatures, while the rate of APS reduction more than quadrupled with a temperature increase from 20 to 32°C. When placed in context of the linear network model for microbial sulfur isotope fractionation, our finding could provide a new, semi-quantitative constraint on the sulfur cycle at levels from cellular to global.

  13. Retuning Rieske-type Oxygenases to Expand Substrate Range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadi, Mahmood; Viger, Jean-François; Kumar, Pravindra

    2012-09-17

    Rieske-type oxygenases are promising biocatalysts for the destruction of persistent pollutants or for the synthesis of fine chemicals. In this work, we explored pathways through which Rieske-type oxygenases evolve to expand their substrate range. BphAE-p4, a variant biphenyl dioxygenase generated from Burkholderia xenovorans LB400 BphAE-LB400 by the double substitution T335A/F336M, and BphAE-RR41, obtained by changing Asn338, Ile341, and Leu409 of BphAE-p4 to Gln338, Val341, and Phe409, metabolize dibenzofuran two and three times faster than BphAE-LB400, respectively. Steady-state kinetic measurements of single- and multiple-substitution mutants of BphAE-LB400 showed that the single T335A and the double N338Q/L409F substitutions contribute significantly to enhanced catalytic activity toward dibenzofuran. Analysis of crystal structures showed that the T335A substitution relieves constraints on a segment lining the catalytic cavity, allowing a significant displacement in response to dibenzofuran binding. The combined N338Q/L409F substitutions alter substrate-induced conformational changes of protein groups involved in subunit assembly and in the chemical steps of the reaction. This suggests a responsive induced fit mechanism that retunes the alignment of protein atoms involved in the chemical steps of the reaction. These enzymes can thus expand their substrate range through mutations that alter the constraints or plasticity of the catalytic cavity to accommodate new substrates or that alter the induced fit mechanism required to achieve proper alignment of reaction-critical atoms or groups.

  14. A Vision and Roadmap for Increasing User Autonomy in Flight Operations in the National Airspace

    NASA Technical Reports Server (NTRS)

    Cotton, William B.; Hilb, Robert; Koczo, Stefan; Wing, David

    2016-01-01

    The purpose of Air Transportation is to move people and cargo safely, efficiently and swiftly to their destinations. The companies and individuals who use aircraft for this purpose, the airspace users, desire to operate their aircraft according to a dynamically optimized business trajectory for their specific mission and operational business model. In current operations, the dynamic optimization of business trajectories is limited by constraints built into operations in the National Airspace System (NAS) for reasons of safety and operational needs of the air navigation service providers. NASA has been developing and testing means to overcome many of these constraints and permit operations to be conducted closer to the airspace user's changing business trajectory as conditions unfold before and during the flight. A roadmap of logical steps progressing toward increased user autonomy is proposed, beginning with NASA's Traffic Aware Strategic Aircrew Requests (TASAR) concept that enables flight crews to make informed, deconflicted flight-optimization requests to air traffic control. These steps include the use of data communications for route change requests and approvals, integration with time-based arrival flow management processes under development by the Federal Aviation Administration (FAA), increased user authority for defining and modifying downstream, strategic portions of the trajectory, and ultimately application of self-separation. This progression takes advantage of existing FAA NextGen programs and RTCA standards development, and it is designed to minimize the number of hardware upgrades required of airspace users to take advantage of these advanced capabilities to achieve dynamically optimized business trajectories in NAS operations. The roadmap is designed to provide operational benefits to first adopters so that investment decisions do not depend upon a large segment of the user community becoming equipped before benefits can be realized. The issues of equipment certification and operational approval of new procedures are addressed in a way that minimizes their impact on the transition by deferring a change in the assignment of separation responsibility until a large body of operational data is available to support the safety case for this change in the last roadmap step. This paper will relate the roadmap steps to ongoing activities to clarify the economics-based transition to these technologies for operational use.

  15. Apollo experience report: Evolution of the attitude time line

    NASA Technical Reports Server (NTRS)

    Duncan, R. D.

    1973-01-01

    The evolution of the attitude time line is discussed. Emphasis is placed on the operational need for and constraints on the time line and on how these factors were involved in the time line generation procedure. Examples of constraints on and applications of the complete time line are given.

  16. Request-Driven Schedule Automation for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Tran, Daniel; Arroyo, Belinda; Call, Jared; Mercado, Marisol

    2010-01-01

    The DSN Scheduling Engine (DSE) has been developed to increase the level of automated scheduling support available to users of NASA s Deep Space Network (DSN). We have adopted a request-driven approach to DSN scheduling, in contrast to the activity-oriented approach used up to now. Scheduling requests allow users to declaratively specify patterns and conditions on their DSN service allocations, including timing, resource requirements, gaps, overlaps, time linkages among services, repetition, priorities, and a wide range of additional factors and preferences. The DSE incorporates a model of the key constraints and preferences of the DSN scheduling domain, along with algorithms to expand scheduling requests into valid resource allocations, to resolve schedule conflicts, and to repair unsatisfied requests. We use time-bounded systematic search with constraint relaxation to return nearby solutions if exact ones cannot be found, where the relaxation options and order are under user control. To explore the usability aspects of our approach we have developed a graphical user interface incorporating some crucial features to make it easier to work with complex scheduling requests. Among these are: progressive revelation of relevant detail, immediate propagation and visual feedback from a user s decisions, and a meeting calendar metaphor for repeated patterns of requests. Even as a prototype, the DSE has been deployed and adopted as the initial step in building the operational DSN schedule, thus representing an important initial validation of our overall approach. The DSE is a core element of the DSN Service Scheduling Software (S(sup 3)), a web-based collaborative scheduling system now under development for deployment to all DSN users.

  17. Allocating limited resources in a time of fiscal constraints: a priority setting case study from Dalhousie University Faculty of Medicine.

    PubMed

    Mitton, Craig; Levy, Adrian; Gorsky, Diane; MacNeil, Christina; Dionne, Francois; Marrie, Tom

    2013-07-01

    Facing a projected $1.4M deficit on a $35M operating budget for fiscal year 2011/2012, members of the Dalhousie University Faculty of Medicine developed and implemented an explicit, transparent, criteria-based priority setting process for resource reallocation. A task group that included representatives from across the Faculty of Medicine used a program budgeting and marginal analysis (PBMA) framework, which provided an alternative to the typical public-sector approaches to addressing a budget deficit of across-the-board spending cuts and political negotiation. Key steps to the PBMA process included training staff members and department heads on priority setting and resource reallocation, establishing process guidelines to meet immediate and longer-term fiscal needs, developing a reporting structure and forming key working groups, creating assessment criteria to guide resource reallocation decisions, assessing disinvestment proposals from all departments, and providing proposal implementation recommendations to the dean. All departments were required to submit proposals for consideration. The task group approved 27 service reduction proposals and 28 efficiency gains proposals, totaling approximately $2.7M in savings across two years. During this process, the task group faced a number of challenges, including a tight timeline for development and implementation (January to April 2011), a culture that historically supported decentralized planning, at times competing interests (e.g., research versus teaching objectives), and reductions in overall health care and postsecondary education government funding. Overall, faculty and staff preferred the PBMA approach to previous practices. Other institutions should use this example to set priorities in times of fiscal constraints.

  18. Experimental evidence that the Ornstein-Uhlenbeck model best describes the evolution of leaf litter decomposability

    PubMed Central

    Pan, Xu; Cornelissen, Johannes H C; Zhao, Wei-Wei; Liu, Guo-Fang; Hu, Yu-Kun; Prinzing, Andreas; Dong, Ming; Cornwell, William K

    2014-01-01

    Leaf litter decomposability is an important effect trait for ecosystem functioning. However, it is unknown how this effect trait evolved through plant history as a leaf ‘afterlife’ integrator of the evolution of multiple underlying traits upon which adaptive selection must have acted. Did decomposability evolve in a Brownian fashion without any constraints? Was evolution rapid at first and then slowed? Or was there an underlying mean-reverting process that makes the evolution of extreme trait values unlikely? Here, we test the hypothesis that the evolution of decomposability has undergone certain mean-reverting forces due to strong constraints and trade-offs in the leaf traits that have afterlife effects on litter quality to decomposers. In order to test this, we examined the leaf litter decomposability and seven key leaf traits of 48 tree species in the temperate area of China and fitted them to three evolutionary models: Brownian motion model (BM), Early burst model (EB), and Ornstein-Uhlenbeck model (OU). The OU model, which does not allow unlimited trait divergence through time, was the best fit model for leaf litter decomposability and all seven leaf traits. These results support the hypothesis that neither decomposability nor the underlying traits has been able to diverge toward progressively extreme values through evolutionary time. These results have reinforced our understanding of the relationships between leaf litter decomposability and leaf traits in an evolutionary perspective and may be a helpful step toward reconstructing deep-time carbon cycling based on taxonomic composition with more confidence. PMID:25535551
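
    For reference, the three candidate models can be written as stochastic differential equations (standard definitions, summarized here rather than quoted from the paper):

```latex
% X_t is the trait value (here, litter decomposability), W_t a Wiener process.
\begin{align}
  \text{BM:} \quad dX_t &= \sigma\, dW_t \\
  \text{EB:} \quad dX_t &= \sigma_0\, e^{-rt/2}\, dW_t,
      \qquad \sigma^2(t) = \sigma_0^2\, e^{-rt},\ r > 0 \\
  \text{OU:} \quad dX_t &= \alpha\,(\theta - X_t)\, dt + \sigma\, dW_t
\end{align}
% The OU pull toward the optimum \theta (with strength \alpha) is the
% mean-reverting force that prevents unlimited trait divergence, which is
% why its selection as best-fit model supports the authors' hypothesis.
```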

  19. Free energy from molecular dynamics with multiple constraints

    NASA Astrophysics Data System (ADS)

    den Otter, W. K.; Briels, W. J.

    In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively, the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
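
    The schematic relation underlying all mean-constraint-force methods is the thermodynamic-integration identity below; the paper's contribution lies in the correction terms (the metric-tensor effect) that this simplified form omits.

```latex
% Schematic thermodynamic-integration relation (simplified; the properly
% corrected expressions are the subject of the paper itself):
\begin{equation}
  A(\xi_1) - A(\xi_0) \;=\; \int_{\xi_0}^{\xi_1}
      \left\langle f_c \right\rangle_{\xi}\, d\xi ,
\end{equation}
% where \langle f_c \rangle_{\xi} is the average constraint force at fixed
% reaction coordinate \xi, suitably corrected for the bias the constraint
% imposes on the sampled phase-space distribution.
```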

  20. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
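
    As an illustration of the kind of depth weighting referred to in the second step, a widely used form (due to Li and Oldenburg) is shown below; the paper's self-constrained weights are built from the estimated source parameters, so this specific expression is context, not a quotation:

```latex
% A common depth-weighting function in potential-field inversion
% (Li & Oldenburg form; shown for context, not necessarily the paper's choice):
\begin{equation}
  w(z) \;=\; \frac{1}{\left(z + z_0\right)^{\beta/2}},
\end{equation}
% where z_0 and \beta are tuned so that w(z)^2 mimics the decay of the field
% kernel with source depth, counteracting the tendency of unconstrained
% inversions to concentrate reconstructed sources near the surface.
```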

  1. Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.

    PubMed

    Gao, Fei; Liu, Huafeng; Shi, Pengcheng

    2010-01-01

    Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. The reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proved to be a robust method for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical usage because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve the computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
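
    The decay constraint can be pictured as tying the expected activity of successive frames together through the isotope's half-life; the expression below is a schematic form consistent with the abstract, not an equation quoted from the paper:

```latex
% Schematic radioisotope decay constraint on expected photon counts across
% frames (an assumption consistent with the abstract, not a quoted equation):
\begin{equation}
  \lambda(t_k) \;=\; \lambda(t_0)\, e^{-\ln 2\,(t_k - t_0)/T_{1/2}},
\end{equation}
% where \lambda(t_k) is the expected activity at frame time t_k and
% T_{1/2} is the half-life of the radioisotope.
```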

  2. Minimizing conflicts: A heuristic repair method for constraint-satisfaction and scheduling problems

    NASA Technical Reports Server (NTRS)

    Minton, Steve; Johnston, Mark; Philips, Andrew; Laird, Phil

    1992-01-01

    This paper describes a simple heuristic approach to solving large-scale constraint satisfaction and scheduling problems. In this approach one starts with an inconsistent assignment for a set of variables and searches through the space of possible repairs. The search can be guided by a value-ordering heuristic, the min-conflicts heuristic, that attempts to minimize the number of constraint violations after each step. The heuristic can be used with a variety of different search strategies. We demonstrate empirically that on the n-queens problem, a technique based on this approach performs orders of magnitude better than traditional backtracking techniques. We also describe a scheduling application where the approach has been used successfully. A theoretical analysis is presented both to explain why this method works well on certain types of problems and to predict when it is likely to be most effective.
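
    The min-conflicts heuristic is easy to state concretely for the n-queens benchmark mentioned in the abstract: repeatedly pick a conflicted queen and move it within its row to the square that minimizes conflicts. The sketch below is an illustrative re-implementation, not the authors' code.

```python
# A minimal min-conflicts solver for n-queens (illustrative, not the
# authors' implementation). One queen per row; a repair step moves the
# queen in a conflicted row to its least-conflicted column.
import random

def conflicts(cols, row, col):
    """Number of queens attacking a queen placed at (row, col)."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts(n, max_steps=100_000):
    cols = [random.randrange(n) for _ in range(n)]   # random initial state
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r])]
        if not conflicted:
            return cols                               # solution found
        row = random.choice(conflicted)
        # repair: move this queen to the column minimizing conflicts
        cols[row] = min(range(n), key=lambda c: conflicts(cols, row, c))
    return None

print(min_conflicts(20))
```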

  3. Optimization of Stereo Matching in 3D Reconstruction Based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Gai, Qiyang

    2018-01-01

    Stereo matching is one of the key steps of 3D reconstruction based on binocular vision. In order to improve the convergence speed and accuracy of 3D reconstruction based on binocular vision, this paper adopts a combination of the epipolar constraint and an ant colony algorithm. The epipolar line constraint is used to reduce the search range, and an ant colony algorithm is then used to optimize the stereo matching feature search function within the reduced range. Through the establishment of an analysis model of the ant colony algorithm's stereo matching optimization process, a globally optimized solution of stereo matching in 3D reconstruction based on binocular vision is realized. The simulation results show that, by combining the advantages of the epipolar constraint and the ant colony algorithm, the stereo matching search range of 3D reconstruction based on binocular vision is simplified, and the convergence speed and accuracy of the stereo matching process are improved.
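
    To make the role of the epipolar constraint concrete: after rectification, the correspondence search collapses from the whole image to a single scanline. The sketch below shows plain block matching under that constraint; the paper goes further and drives the search with an ant colony algorithm, which is not reproduced here.

```python
# Hedged sketch of how the epipolar constraint shrinks the stereo search:
# for rectified images, a block in the left image is matched only along the
# same scanline of the right image. Illustrative only.
import numpy as np

def match_along_scanline(left, right, row, col, half=3, max_disp=32):
    """Return the disparity minimizing the sum of absolute differences
    between a (2*half+1)^2 block in `left` and candidate blocks in `right`
    restricted to the same row (the epipolar line after rectification)."""
    block = left[row-half:row+half+1, col-half:col+half+1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, col - half) + 1):
        cand = right[row-half:row+half+1,
                     col-d-half:col-d+half+1].astype(float)
        cost = np.abs(block - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

rng = np.random.default_rng(1)
L = rng.integers(0, 255, (64, 64))
R = np.roll(L, -4, axis=1)                 # synthetic 4-pixel disparity
print(match_along_scanline(L, R, 32, 40))  # -> 4
```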

  4. In Silico Constraint-Based Strain Optimization Methods: the Quest for Optimal Cell Factories

    PubMed Central

    Maia, Paulo; Rocha, Miguel

    2015-01-01

    SUMMARY Shifting from chemical to biotechnological processes is one of the cornerstones of 21st century industry. The production of a great range of chemicals via biotechnological means is a key challenge on the way toward a bio-based economy. However, this shift is occurring at a pace slower than initially expected. The development of efficient cell factories that allow for competitive production yields is of paramount importance for this leap to happen. Constraint-based models of metabolism, together with in silico strain design algorithms, promise to reveal insights into the best genetic design strategies, a step further toward achieving that goal. In this work, a thorough analysis of the main in silico constraint-based strain design strategies and algorithms is presented, their application in real-world case studies is analyzed, and a path for the future is discussed. PMID:26609052
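
    At the heart of these methods sits a constraint-based model solved as a linear program: maximize a target flux subject to steady-state stoichiometry and flux bounds (flux balance analysis). The toy network below is invented purely to show the shape of the computation.

```python
# Hedged toy illustration of a constraint-based (FBA-style) model: maximize
# a "growth" flux subject to steady-state stoichiometry S v = 0 and flux
# bounds. The two-metabolite network is invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Reactions: v0 uptake (-> A), v1 (A -> B), v2 (B -> biomass sink)
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10
c = np.array([0, 0, -1.0])     # linprog minimizes, so negate growth flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal growth flux:", res.x[2])    # -> 10.0, limited by uptake
```

    Strain-design algorithms layer a second, combinatorial search (which reactions to knock out or tune) on top of this inner linear program; the LP above is only the innermost evaluation step.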

  5. How Low Can You Go? Maximum Constraints on Hydrogen Concentrations Prior to the Great Oxidation Event

    NASA Technical Reports Server (NTRS)

    Domagal-Goldman, Shawn

    2014-01-01

    Shaw postulates that Earth's early atmosphere was rich in reducing gases such as hydrogen, brought to Earth via impact events. This commentary seeks to place constraints on this idea through a very brief review of existing geological and geochemical upper limits on the reducing power of Earth's atmosphere prior to the rise of oxygen. While these constraints place tight limits on this idea for rocks younger than 3.8 Ga, few constraints exist prior to that time, due to a paucity of rocks of that age. The time prior to these constraints is also a time frame for which the proposal is most plausible, and for which it carries the greatest potential to explain other mysteries. Given this potential, several tests are suggested for the H2-rich early Earth hypothesis.

  6. Planning energy-efficient bipedal locomotion on patterned terrain

    NASA Astrophysics Data System (ADS)

    Zamani, Ali; Bhounsule, Pranav A.; Taha, Ahmad

    2016-05-01

    Energy-efficient bipedal walking is essential in realizing practical bipedal systems. However, current energy-efficient bipedal robots (e.g., passive-dynamics-inspired robots) are limited to walking at a single speed and step length. The objective of this work is to address this gap by developing a method of synthesizing energy-efficient bipedal locomotion on patterned terrain consisting of stepping stones using energy-efficient primitives. A model of Cornell Ranger (a passive-dynamics inspired robot) is utilized to illustrate our technique. First, an energy-optimal trajectory control problem for a single step is formulated and solved. The solution minimizes the Total Cost Of Transport (TCOT is defined as the energy used per unit weight per unit distance travelled) subject to various constraints such as actuator limits, foot scuffing, joint kinematic limits, ground reaction forces. The outcome of the optimization scheme is a table of TCOT values as a function of step length and step velocity. Next, we parameterize the terrain to identify the location of the stepping stones. Finally, the TCOT table is used in conjunction with the parameterized terrain to plan an energy-efficient stepping strategy.
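
    The planning layer described here reduces to a dynamic-programming problem once the TCOT table exists: each candidate step costs TCOT(step length) times weight times distance, and the planner picks the stone sequence minimizing the total. The sketch below uses a made-up convex TCOT curve and invented stone positions and limits in place of the optimized table.

```python
# Hedged sketch of stepping-stone planning with a TCOT lookup. The TCOT
# function, stone positions, weight, and step limit are all illustrative
# stand-ins for the quantities produced by the trajectory optimization.
from functools import lru_cache

stones = [0.0, 0.35, 0.9, 1.3, 1.9, 2.5]   # stone positions along path (m)
WEIGHT = 100.0                              # robot weight (N), illustrative
MAX_STEP = 0.8                              # kinematic step-length limit (m)

def tcot(step_len):
    """Stand-in for the optimized TCOT table: cheapest near 0.5 m steps."""
    return 0.05 + 0.3 * (step_len - 0.5) ** 2

@lru_cache(maxsize=None)
def min_energy(i):
    """Minimum energy to reach the last stone starting from stone i."""
    if i == len(stones) - 1:
        return 0.0
    best = float("inf")
    for j in range(i + 1, len(stones)):
        d = stones[j] - stones[i]
        if d > MAX_STEP:
            break                           # farther stones are out of reach
        best = min(best, tcot(d) * WEIGHT * d + min_energy(j))
    return best

print("minimum energy to cross:", min_energy(0))
```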

  7. Between a Map and a Data Rod

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Rui, H.; Strub, R. F.; Vollmer, B.

    2015-12-01

    A "Digital Divide" has long stood between how NASA and other satellite-derived data are typically archived (time-step arrays or "maps") and how hydrology and other point-time series oriented communities prefer to access those data. In essence, the desired method of data access is orthogonal to the way the data are archived. Our approach to bridging the Divide is part of a larger NASA-supported "data rods" project to enhance access to and use of NASA and other data by the Consortium of Universities for the Advancement of Hydrologic Science, Inc. (CUAHSI) Hydrologic Information System (HIS) and the larger hydrology community. Our main objective was to determine a way to reorganize data that is optimal for these communities. Two related objectives were to optimally reorganize data in a way that (1) is operational and fits in and leverages the existing Goddard Earth Sciences Data and Information Services Center (GES DISC) operational environment and (2) addresses the scaling up of data sets available as time series from those archived at the GES DISC to potentially include those from other Earth Observing System Data and Information System (EOSDIS) data archives. Through several prototype efforts and lessons learned, we arrived at a non-database solution that satisfied our objectives/constraints. We describe, in this presentation, how we implemented the operational production of pre-generated data rods and, considering the tradeoffs between length of time series (or number of time steps), resources needed, and performance, how we implemented the operational production of on-the-fly ("virtual") data rods. For the virtual data rods, we leveraged a number of existing resources, including the NASA Giovanni Cache and NetCDF Operators (NCO) and used data cubes processed in parallel. Our current benchmark performance for virtual generation of data rods is about a year's worth of time series for hourly data (~9,000 time steps) in ~90 seconds. Our approach is a specific implementation of the general optimal strategy of reorganizing data to match the desired means of access. Results from our project have already significantly extended NASA data to the large and important hydrology user community that has been, heretofore, mostly unable to easily access and use NASA data.

  8. Uncertainty assessment of 3D instantaneous velocity model from stack velocities

    NASA Astrophysics Data System (ADS)

    Emanuele Maesano, Francesco; D'Ambrogi, Chiara

    2015-04-01

    3D modelling is a powerful tool that is experiencing increasing applications in data analysis and dissemination. At the same time, the need for quantitative uncertainty evaluation is strongly requested in many aspects of the geological sciences and by the stakeholders. In many cases the starting point for 3D model building is the interpretation of seismic profiles, which provide indirect information about the geology of the subsurface in the time domain. The most problematic step in 3D model construction is the conversion of the horizons and faults interpreted in the time domain to the depth domain. In this step the dominant variable that could lead to significantly different results is the velocity. The knowledge of the subsurface velocities is related mainly to punctual data (sonic logs) that are often sparsely distributed in the areas covered by the seismic interpretation. The extrapolation of velocity information to widely extended horizons is thus a critical step to obtain a 3D model in depth that can be used for predictive purposes. In the EU-funded GeoMol Project, the availability of a dense network of seismic lines (confidentially provided by ENI S.p.A.) in the Central Po Plain is paired with the presence of 136 well logs, but few of them have sonic logs, and in some portions of the area the wells are very widely spaced. The depth conversion of the 3D model in the time domain has been performed testing different strategies for the use and the interpolation of velocity data. The final model has been obtained using a 4-layer-cake 3D instantaneous velocity model that considers both the initial velocity (v0) at every reference horizon and the gradient of velocity variation with depth (k). Using this method it is possible to honour both the geological constraint given by the geometries of the horizons and the geo-statistical approach to the interpolation of velocities and gradients. Here we present an experiment based on the use of a set of pseudo-wells obtained from the stack velocities available inside the area, interpolated using the kriging geo-statistical method. The stack velocities are intersected with the position of the horizons in the time domain, and from this information we build a pseudo-well to calculate the initial velocity and the gradient of increase (or decrease) of velocity with depth inside the considered rock volume. The experiment is aimed at obtaining an estimate and a representation of the uncertainty related to the geo-statistical interpolation of velocity data in a 3D model, and at having an independent control of the final results using the well markers available inside the test area as constraints. The project GeoMol is co-funded by the Alpine Space Program as part of the European Territorial Cooperation 2007-2013. The project integrates partners from Austria, France, Germany, Italy, Slovenia and Switzerland and runs from September 2012 to June 2015. Further information on www.geomol.eu
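
    The v0/k ("instantaneous velocity") models mentioned here rest on the standard linear velocity law and its time-depth integral, reproduced below for context (the paper applies it layer by layer in a four-layer cake):

```latex
% Standard time-depth conversion for a linear instantaneous-velocity law
% (shown for context; applied per layer in a layer-cake model):
\begin{align}
  v(z) &= v_0 + k\,z \\
  z(t) &= \frac{v_0}{k}\left(e^{k\,t} - 1\right),
\end{align}
% where t is the one-way travel time to depth z, v_0 the velocity at the
% layer's reference horizon, and k the vertical velocity gradient.
```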

  9. Learning Artificial Phonotactic Constraints: Time Course, Durability, and Relationship to Natural Constraints

    ERIC Educational Resources Information Center

    Taylor, Conrad F.; Houghton, George

    2005-01-01

    G. S. Dell, K. D. Reed, D. R. Adams, and A. S. Meyer (2000) proposed a "breadth-of-constraint" continuum on phoneme errors, using artificial experiment-wide constraints to investigate a putative middle ground between local and language-wide constraints. The authors report 5 experiments that test the idea of the continuum and the location of the…

  10. Balancing antagonistic time and resource utilization constraints in over-subscribed scheduling problems

    NASA Technical Reports Server (NTRS)

    Smith, Stephen F.; Pathak, Dhiraj K.

    1991-01-01

    In this paper, we report work aimed at applying concepts of constraint-based problem structuring and multi-perspective scheduling to an over-subscribed scheduling problem. Previous research has demonstrated the utility of these concepts as a means for effectively balancing conflicting objectives in constraint-relaxable scheduling problems, and our goal here is to provide evidence of their similar potential in the context of HST observation scheduling. To this end, we define and experimentally assess the performance of two time-bounded heuristic scheduling strategies in balancing the tradeoff between resource setup time minimization and satisfaction of absolute time constraints. The first strategy considered is motivated by dispatch-based manufacturing scheduling research, and employs a problem decomposition that concentrates local search on minimizing resource idle time due to setup activities. The second is motivated by research in opportunistic scheduling and advocates a problem decomposition that focuses attention on the goal activities that have the tightest temporal constraints. Analysis of experimental results gives evidence of differential superiority on the part of each strategy in different problem solving circumstances. A composite strategy based on recognition of characteristics of the current problem solving state is then defined and tested to illustrate the potential benefits of constraint-based problem structuring and multi-perspective scheduling in over-subscribed scheduling problems.

  11. Advantages of soft versus hard constraints in self-modeling curve resolution problems. Alternating least squares with penalty functions.

    PubMed

    Gemperline, Paul J; Cash, Eric

    2003-08-15

    A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared spectroscopy (NIR). Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also described. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
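
    To illustrate how a least squares penalty function turns a hard constraint into a soft one (a generic sketch, not Gemperline and Cash's exact algorithm; matrix names and the penalty weight are illustrative), the fragment below augments a plain alternating least squares loop with a quadratic penalty that pulls negative concentrations toward zero instead of clipping them:

        import numpy as np

        def als_soft_nonneg(D, n_comp, weight=10.0, n_iter=200, seed=0):
            # Alternating least squares for D ~ C @ S.T with a *soft*
            # nonnegativity constraint on C: negative entries are pulled
            # toward zero by a penalty rather than clipped (hard constraint).
            rng = np.random.default_rng(seed)
            C = rng.random((D.shape[0], n_comp))
            for _ in range(n_iter):
                S = np.linalg.lstsq(C, D, rcond=None)[0].T    # update spectra
                C = np.linalg.lstsq(S, D.T, rcond=None)[0].T  # update concentrations
                # Penalty step: as weight grows this approaches hard clipping;
                # moderate weights tolerate small noise-driven negativity.
                C -= (weight / (1.0 + weight)) * np.minimum(C, 0.0)
            return C, S

    The tolerance for small negative excursions is what reduces the noise-induced distortions that hard clipping produces in the resolved profiles.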

  12. Model for estimating the penetration depth limit of the time-reversed ultrasonically encoded optical focusing technique

    PubMed Central

    Jang, Mooseok; Ruan, Haowen; Judkewitz, Benjamin; Yang, Changhuei

    2014-01-01

    The time-reversed ultrasonically encoded (TRUE) optical focusing technique is a method that is capable of focusing light deep within a scattering medium. This theoretical study aims to explore the depth limits of the TRUE technique for biological tissues in the context of two primary constraints – the safety limit of the incident light fluence and a limited TRUE recording time (assumed to be 1 ms), as dynamic scatterer movements in a living sample can break the time-reversal scattering symmetry. Our numerical simulation indicates that TRUE has the potential to render an optical focus with a peak-to-background ratio of ~2 at a depth of ~103 mm at a wavelength of 800 nm in a phantom with tissue scattering characteristics. This study sheds light on the allocation of the photon budget in each step of the TRUE technique, the impact of low signal on the phase measurement error, and the eventual impact of the phase measurement error on the strength of the TRUE optical focus. PMID:24663917

  13. SU-F-T-342: Dosimetric Constraint Prediction Guided Automatic Multi-Objective Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, T; Zhou, L; Li, Y

    Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm with the ability to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: There are three main components in this proposed multi-objective optimization framework: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values until no single endpoint has room for further improvement. Lastly, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The number of voxel weighting factor iterations varied from 10 to 30 per updated constraint, and the number of constraint tuning steps varied from 20 to 30 per case until no stricter constraint was allowed. The average total time for the whole optimization procedure was ∼30 min. Comparing DVHs, better OAR dose sparing was observed in the automatically generated plans for 13 of the 20 cases, while the others showed competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No: 81571771)

  14. A dual method for optimal control problems with initial and final boundary constraints.

    NASA Technical Reports Server (NTRS)

    Pironneau, O.; Polak, E.

    1973-01-01

    This paper presents two new algorithms belonging to the family of dual methods of centers. The first can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states. The second one can be used for solving fixed time optimal control problems with inequality constraints on the initial and terminal states and with affine instantaneous inequality constraints on the control. Convergence is established for both algorithms. Qualitative reasoning indicates that the rate of convergence is linear.

  15. Time Poverty Thresholds and Rates for the US Population

    ERIC Educational Resources Information Center

    Kalenkoski, Charlene M.; Hamrick, Karen S.; Andrews, Margaret

    2011-01-01

    Time constraints, like money constraints, affect Americans' well-being. This paper defines what it means to be time poor based on the concepts of necessary and committed time and presents time poverty thresholds and rates for the US population and certain subgroups. Multivariate regression techniques are used to identify the key variables…

  16. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one step to any point in the near-optimal region, and each iterate generates a new, feasible alternative. We use the method to generate alternatives that span the near-optimal regions of simple and more complicated water management problems and may be preferred to optimal solutions. We also discuss extensions to handle non-linear equality constraints.
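
    The core loop can be sketched compactly (a rejection-based simplification of the authors' slice-sampling step, with illustrative names; equality-constraint null-space handling and the water-management model are omitted):

        import numpy as np

        def hit_and_run(x0, feasible, n_samples=1000, step=1.0, seed=0):
            # x0: feasible starting point inside the near-optimal region.
            # feasible(x): True iff x satisfies all constraints, including
            # f(x) <= (1 + tolerance) * f_optimal (the near-optimal cut).
            rng = np.random.default_rng(seed)
            x, alternatives = np.asarray(x0, dtype=float), []
            for _ in range(n_samples):
                d = rng.standard_normal(x.size)
                d /= np.linalg.norm(d)          # random direction
                t = rng.uniform(-step, step)    # random run length
                candidate = x + t * d
                if feasible(candidate):         # accept only feasible hits
                    x = candidate
                alternatives.append(x.copy())   # each iterate is an alternative
            return np.array(alternatives)

    For example, feasible could be lambda x: np.all(x >= 0) and cost(x) <= 1.1 * best_cost for a 10% near-optimal tolerance.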

  17. Modeling the main fungal diseases of winter wheat: constraints and possible solutions

    USDA-ARS?s Scientific Manuscript database

    The first step in the formulation of disease management strategy for any cropping system is to identify the most important risk factors among those on the long list of possible candidates. This is facilitated by basic epidemiological studies of pathogen life cycles, and an understanding of the way i...

  18. 78 FR 23778 - Quivira National Wildlife Refuge, Stafford, KS; Comprehensive Conservation Plan and Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ..., Parks and Tourism. Level of Service staffing at the GPNC would remain the same. Alternative B--Proposed... the constraints imposed by biological, economic, social, political, and legal considerations... meetings are yet to be determined, but will be announced via local media and a planning update. Next Steps...

  19. rSPACE: Spatially based power analysis for conservation and ecology

    Treesearch

    Martha M. Ellis; Jacob S. Ivan; Jody M. Tucker; Michael K. Schwartz

    2015-01-01

    1.) Power analysis is an important step in designing effective monitoring programs to detect trends in plant or animal populations. Although project goals often focus on detecting changes in population abundance, logistical constraints may require data collection on population indices, such as detection/non-detection data for occupancy estimation. 2.) We describe the...

  20. First spin-parity constraint of the 306 keV resonance in 35Cl for nova nucleosynthesis

    DOE PAGES

    Chipps, K. A.; Rutgers Univ., New Brunswick, NJ; Pain, S. D.; ...

    2017-04-28

    Of particular interest in astrophysics is the 34S(p,γ)35Cl reaction, which serves as a stepping stone in thermonuclear runaway reaction chains during a nova explosion. Although the isotopes involved are all stable, the reaction rate of this significant step is not well known, due to a lack of experimental spectroscopic information on states within the Gamow window above the proton separation threshold of 35Cl. Furthermore, measurements of level spins and parities provide input for the calculation of resonance strengths, which ultimately determine the astrophysical reaction rate of the 34S(p,γ)35Cl proton capture reaction. By performing the 37Cl(p,t)35Cl reaction in normal kinematics at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory, we have conducted a study of the region of astrophysical interest in 35Cl, and have made the first-ever constraint on the spin and parity assignment for a level at 6677 ± 15 keV (Er = 306 keV), inside the Gamow window for novae.

  1. Radio Resource Allocation on Complex 4G Wireless Cellular Networks

    NASA Astrophysics Data System (ADS)

    Psannis, Kostas E.

    2015-09-01

    In this article we consider a heuristic algorithm which improves, step by step, wireless data delivery over LTE cellular networks by optimizing the total transmit power subject to constraints on users’ data rates, and the total throughput subject to constraints on both the total transmit power and users’ data rates; these objectives are jointly integrated into a hybrid-layer design framework to perform radio resource allocation for multiple users and to effectively decide optimal system parameters such as the modulation and coding scheme (MCS) in order to adapt to varying channel quality. We propose a new heuristic algorithm which balances the accessible data rate, the initial data rates of each user allocated by the LTE scheduler, the priority indicator which signals the delay, throughput, and packet-loss awareness of the user, and the buffer fullness, achieving maximization of radio resource allocation across multiple users. It is noted that the overall performance improves as the number of users increases, owing to multiuser diversity. Experimental results illustrate and validate the accuracy of the proposed methodology.

  2. First spin-parity constraint of the 306 keV resonance in 35Cl for nova nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Chipps, K. A.; Pain, S. D.; Kozub, R. L.; Bardayan, D. W.; Cizewski, J. A.; Chae, K. Y.; Liang, J. F.; Matei, C.; Moazen, B. H.; Nesaraja, C. D.; O'Malley, P. D.; Peters, W. A.; Pittman, S. T.; Schmitt, K. T.; Smith, M. S.

    2017-04-01

    Of particular interest in astrophysics is the 34S(p,γ)35Cl reaction, which serves as a stepping stone in thermonuclear runaway reaction chains during a nova explosion. Though the isotopes involved are all stable, the reaction rate of this significant step is not well known, due to a lack of experimental spectroscopic information on states within the Gamow window above the proton separation threshold of 35Cl. Measurements of level spins and parities provide input for the calculation of resonance strengths, which ultimately determine the astrophysical reaction rate of the 34S(p,γ)35Cl proton capture reaction. By performing the 37Cl(p,t)35Cl reaction in normal kinematics at the Holifield Radioactive Ion Beam Facility at Oak Ridge National Laboratory, we have conducted a study of the region of astrophysical interest in 35Cl, and have made the first-ever constraint on the spin and parity assignment for a level at 6677 ± 15 keV (Er = 306 keV), inside the Gamow window for novae.

  3. A three-step Maximum-A-Posterior probability method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step Maximum-A-Posterior probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the Fully Bayesian Inversion (FBI) and the Mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same a posteriori PDF with them, and keeps most of their merits, while overcoming their convergence difficulties when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, Adaptive Simulated Annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on slip parameters using the Monte Carlo Inversion (MCI) technique and all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a synthetic model with a 45-degree dipping angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.

  4. Compact pulse transformer for 85 kV, 3.5 μs electron gun anode of compact X-ray cargo scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patel, R.; Sharma, D.K.; Dixit, K.

    The design of a compact and reliable 85 kV HV pulse transformer for electron gun anode pulsing is a major concern when size and space are constraints. This paper describes design procedures and the optimization of various parameters, such as HV insulation, step-up ratio, rise time, and flat top, of a pulse transformer operating with input from a 10-stage PFN of 50 ohm impedance charged at 14 kV. The transformer should deliver a rated output voltage of negative polarity 85 kV, 3 to 4 μs pulse width, less than 2 μs rise time, and flat top within 10% across an electron gun load, equivalent to a parallel combination of 10 kΩ and 200 pF, at a PRF of 250 Hz. Since the cargo scanner has to operate on a movable carrier, the transformer is designed to operate even in inclined positions. The transformer has given a voltage step-up ratio, rise time, and flat top of 13.75, 1.5 μs, and 4.5%, respectively, for a 10 kΩ and 200 pF load at 250 Hz PRF, and has also demonstrated operation in 90° tilted positions. An effort has been made to achieve a maintenance-free pulse transformer by providing effective sealing of the transformer tank to stop breathing action. Also, special flexing walls of the transformer tank accommodate small changes in oil volume due to temperature variations. (author)

  5. Feasibility of intra-acquisition motion correction for 4D DSA reconstruction for applications in the thorax and abdomen

    NASA Astrophysics Data System (ADS)

    Wagner, Martin; Laeseke, Paul; Harari, Colin; Schafer, Sebastian; Speidel, Michael; Mistretta, Charles

    2018-03-01

    The recently proposed 4D DSA technique enables reconstruction of time-resolved 3D volumes from two C-arm CT acquisitions. This provides information on blood flow in neurovascular applications and can be used for the diagnosis and treatment of vascular diseases. For applications in the thorax and abdomen, respiratory motion can prevent successful 4D DSA reconstruction and cause severe artifacts. The purpose of this work is to propose a novel technique for motion-compensated 4D DSA reconstruction to enable applications in the thorax and abdomen. The approach uses deformable 2D registration to align the projection images of a non-contrast and a contrast-enhanced scan. A subset of projection images acquired in a similar respiratory state is then selected, and an iterative simultaneous multiplicative algebraic reconstruction is applied to determine a 3D constraint volume. A 2D-3D registration step then aligns the remaining projection images with the 3D constraint volume. Finally, a constrained back-projection is performed to create a 3D volume for each projection image. A pig study was performed in which 4D DSA acquisitions were carried out with and without respiratory motion to evaluate the feasibility of the approach. The dice similarity coefficient between the reference 3D constraint volume and the motion-compensated reconstruction was 51.12% compared to 35.99% without motion compensation. This technique could improve the workflow for procedures in interventional radiology, e.g. liver embolizations, where changes in blood flow have to be monitored carefully.

  6. Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security

    NASA Astrophysics Data System (ADS)

    Salloum, Ahmed

    Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price-cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The system performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, the impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models. Various scenarios were investigated to capture and compare the impacts of base-case and post-contingency relaxations on real-time system performance, including the presence of both relaxations simultaneously. The effect of penalty prices on the number and magnitude of relaxations was investigated as well.

  7. Ramses-GPU: Second order MUSCL-Hancock finite volume fluid solver

    NASA Astrophysics Data System (ADS)

    Kestener, Pierre

    2017-10-01

    RamsesGPU is a reimplementation of RAMSES (ascl:1011.007) which drops the adaptive mesh refinement (AMR) features to optimize 3D uniform grid algorithms for modern graphics processing units (GPU), providing an efficient software package for astrophysics applications that do not need AMR features but do require a very large number of integration time steps. RamsesGPU provides a very efficient C++/CUDA/MPI software implementation of a second order MUSCL-Hancock finite volume fluid solver for compressible hydrodynamics, as well as a magnetohydrodynamics solver based on the constrained transport technique. Other useful modules include static gravity, dissipative terms (viscosity, resistivity), and a forcing source term for turbulence studies; special care was taken to enhance parallel input/output performance by using state-of-the-art libraries such as HDF5 and parallel-netcdf.

  8. Transport coefficients of liquid CF4 and SF6 computed by molecular dynamics using polycenter Lennard-Jones potentials

    NASA Astrophysics Data System (ADS)

    Hoheisel, C.

    1989-01-01

    For several liquid states of CF4 and SF6, the shear and bulk viscosities as well as the thermal conductivity were determined by equilibrium molecular dynamics (MD) calculations. Lennard-Jones four- and six-center pair potentials were applied, and the method of constraints was chosen for the MD. The computed Green-Kubo integrands show a steep time decay, and no particular long-time behavior occurs. The dependence of the results on the number of molecules is found to be small, and 3×10^5 integration steps allow an accuracy of about 10% for the shear viscosity and the thermal conductivity coefficient. Comparison with experimental data shows fair agreement for CF4, while for SF6 the transport coefficients fall below the experimental ones by about 30%.

  9. Optimization of plasma amplifiers

    DOE PAGES

    Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; ...

    2017-05-24

    Here, plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high-energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.

  10. The promise of advanced technology for future air transports

    NASA Technical Reports Server (NTRS)

    Bower, R. E.

    1978-01-01

    Progress in all-weather 4-D navigation and wake vortex attenuation research is discussed, and the concept of time-based metering of aircraft is recommended for increased emphasis. The far-term advances in aircraft efficiency were shown to be skin friction reduction and advanced configuration types. The promise of very large aircraft, possibly all-wing aircraft, is discussed, as is an advanced concept for an aerial relay transportation system. Very significant technological developments were identified that can improve supersonic transport performance and reduce noise. The hypersonic transport was proposed as the ultimate step in air transportation within the atmosphere. Progress in the key technology areas of propulsion and structures was reviewed. Finally, the impact of alternate fuels on future air transports was considered and shown not to be a growth constraint.

  11. Optimization of plasma amplifiers

    NASA Astrophysics Data System (ADS)

    Sadler, James D.; Trines, Raoul M. G. M.; Tabak, Max; Haberberger, Dan; Froula, Dustin H.; Davies, Andrew S.; Bucht, Sara; Silva, Luís O.; Alves, E. Paulo; Fiúza, Frederico; Ceurvorst, Luke; Ratan, Naren; Kasim, Muhammad F.; Bingham, Robert; Norreys, Peter A.

    2017-05-01

    Plasma amplifiers offer a route to side-step limitations on chirped pulse amplification and generate laser pulses at the power frontier. They compress long pulses by transferring energy to a shorter pulse via the Raman or Brillouin instabilities. We present an extensive kinetic numerical study of the three-dimensional parameter space for the Raman case. Further particle-in-cell simulations find the optimal seed pulse parameters for experimentally relevant constraints. The high-efficiency self-similar behavior is observed only for seeds shorter than the linear Raman growth time. A test case similar to an upcoming experiment at the Laboratory for Laser Energetics is found to maintain good transverse coherence and high-energy efficiency. Effective compression of a 10 kJ, nanosecond-long driver pulse is also demonstrated in a 15-cm-long amplifier.

  12. Working conditions and occupational risk exposure in employees driving for work.

    PubMed

    Fort, Emmanuel; Ndagire, Sheba; Gadegbeku, Blandine; Hours, Martine; Charbotel, Barbara

    2016-04-01

    An analysis of the occupational constraints and exposures to which employees facing road risk at work are subject was performed, with comparison versus non-exposed employees. The objective was to improve knowledge of the characteristics of workers exposed to road risk in France and of the concomitant occupational constraints. The descriptive study was based on data from the 2010 SUMER survey (Medical Monitoring of Occupational Risk Exposure: Surveillance Médicale des Expositions aux Risques professionnels), which included data not only on road risk exposure at work but also on a range of socio-occupational factors and working conditions. The main variable of interest was "driving (car, truck, bus, coach, etc.) on public thoroughfares" for work (during the last week of work). This was a dichotomous "Yes/No" variable, distinguishing employees who drove for work; it also comprised a four-step weekly exposure duration: <2 h, 2-10 h, 10-20 h and ≥20 h. 75% of the employees with driving exposure were male. Certain socio-occupational categories were found significantly more frequently: professional drivers (INSEE occupations and socio-occupational categories (PCS) 64), skilled workers (PCS 61), intermediate professions and teaching, health, civil service (functionaries) and assimilated (PCS 46), and company executives (PCS 36). Employees with driving exposure more often worked in small businesses or establishments. Constraints in terms of schedule and work-time were more frequent in employees with driving exposure. Constraints in terms of work rhythm were more frequent in non-exposed employees, with the exception of external demands requiring immediate response. On Karasek's Job Demand-Control Model, employees with driving exposure less often had low decision latitude. Prevalence of job-strain was also lower, as was prevalence of "iso-strain" (the combination of job-strain and social isolation). Employees with driving exposure were less often concerned by hostile behavior and, when they did report such psychological violence (measured with items inspired by the Leymann questionnaire), it was significantly more frequently due to clients, users or patients. Employees with driving exposure at work showed several specificities. The present study, based on a representative nationwide survey of employees, confirmed the existence of differences in working conditions between employees with and without driving exposure at work. In employees with driving exposure, constraints in terms of work-time and rhythm increased with weekly exposure duration, as did tension at work and exposure to hostile behavior. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A Robust Method for Stereo Visual Odometry Based on Multiple Euclidean Distance Constraint and RANSAC Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Q.; Tong, X.; Liu, S.; Lu, X.; Liu, S.; Chen, P.; Jin, Y.; Xie, H.

    2017-07-01

    Visual Odometry (VO) is a critical component for planetary robot navigation and safety. It estimates the ego-motion using stereo images frame by frame. Feature point extraction and matching is one of the key steps for robotic motion estimation and largely influences the precision and robustness. In this work, we choose the Oriented FAST and Rotated BRIEF (ORB) features by considering both accuracy and speed. For more robustness in challenging environments, e.g., rough terrain or planetary surfaces, this paper presents a robust outlier elimination method based on a Euclidean Distance Constraint (EDC) and the Random Sample Consensus (RANSAC) algorithm. In the matching process, a set of ORB feature points is extracted from the current left and right synchronous images, and the Brute Force (BF) matcher is used to find the correspondences between the two images for the space intersection. Then the EDC and RANSAC algorithms are carried out to eliminate mismatches whose distances are beyond a predefined threshold. Similarly, when the left image at the next time step is matched against the current left image, the EDC and RANSAC are iteratively performed. After these steps, exceptional mismatched points may still remain in some cases, so RANSAC is applied a third time to eliminate the effects of those outliers in the estimation of the ego-motion parameters (Interior Orientation and Exterior Orientation). The proposed approach has been tested on a real-world vehicle dataset and the results demonstrate its high robustness.
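
    The two-stage outlier rejection can be pictured with a small sketch (illustrative only, not the authors' code; the pixel threshold is an assumed tuning parameter), with the EDC filter followed by RANSAC via OpenCV's fundamental matrix estimator:

        import numpy as np
        import cv2

        def filter_matches(pts_a, pts_b, edc_thresh=30.0):
            # pts_a, pts_b: Nx2 float arrays of matched keypoint coordinates.
            # EDC step: drop pairs whose Euclidean displacement exceeds a
            # predefined threshold (implausible for matched frames).
            d = np.linalg.norm(pts_a - pts_b, axis=1)
            keep = d < edc_thresh
            pts_a, pts_b = pts_a[keep], pts_b[keep]
            # RANSAC step: keep only matches consistent with the epipolar
            # geometry estimated from the data themselves.
            F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC,
                                             ransacReprojThreshold=1.0)
            inliers = mask.ravel().astype(bool)
            return pts_a[inliers], pts_b[inliers]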

  14. Optimizing Natural Gas Networks through Dynamic Manifold Theory and a Decentralized Algorithm: Belgium Case Study

    NASA Astrophysics Data System (ADS)

    Koch, Caleb; Winfrey, Leigh

    2014-10-01

    Natural Gas is a major energy source in Europe, yet political instabilities have the potential to disrupt access and supply. Energy resilience is an increasingly essential construct and begins with transmission network design. This study proposes a new way of thinking about modelling natural gas flow. Rather than relying on classical economic models, the problem is cast into a time-dependent Hamiltonian dynamics discussion. Traditional natural gas constraints, including inelastic demand and maximum/minimum pipe flows, are portrayed as energy functions and built into the dynamics of each pipe flow. As time progresses in the model, natural gas flow rates find the minimum energy, and thus the optimal gas flow rates. The most important result of this study is using dynamical principles to ensure that the output of natural gas at demand nodes remains constant, which is important for country-to-country natural gas transmission. Another important step in this study is casting the dynamics of each flow in a decentralized algorithm format. Decentralized regulation has solved congestion problems for internet data flow, traffic flow, and epidemiology, and, as demonstrated in this study, it can solve the problem of natural gas congestion. A mathematical description is provided for how decentralized regulation leads to globally optimized network flow. Furthermore, the dynamical principles and decentralized algorithm are applied to a case study of the Fluxys Belgium Natural Gas Network.
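
    The relax-to-minimum-energy idea can be sketched as a decentralized gradient flow on a toy network (an illustration with assumed quadratic pipe costs and a penalty encoding the demand constraint, not the paper's Hamiltonian formulation):

        import numpy as np

        # Toy network: node-edge incidence A, edge resistances r, injections b.
        A = np.array([[ 1,  1,  0],   # node 0 (supply: +1)
                      [-1,  0,  1],   # node 1 (transit)
                      [ 0, -1, -1]])  # node 2 (demand: -1)
        r = np.array([1.0, 2.0, 1.0])
        b = np.array([1.0, 0.0, -1.0])

        q = np.zeros(3)               # edge flows
        mu, dt = 50.0, 1e-3           # penalty weight, pseudo-time step
        for _ in range(20000):
            # Each edge needs only its own flow and the imbalance at its
            # two endpoints (the rows of A it touches): a decentralized update.
            imbalance = A @ q - b
            q -= dt * (r * q + mu * (A.T @ imbalance))
        print(q, A @ q)               # A @ q approaches b at minimum energy

    As the penalty weight grows, the stationary flows meet the nodal demands increasingly closely; pipe capacity limits can be encoded with similar penalty terms.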

  15. Reactivity of propene, n-butene, and isobutene in the hydrogen transfer steps of n-hexane cracking over zeolites of different structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukyanov, D.B.

    The reaction of n-hexane cracking over HZSM-5, HY zeolite and mordenite (HM) was studied in accordance with the procedure of the β-test recently proposed for quantitative characterization of zeolite hydrogen transfer activity. It is shown that this procedure allows one to obtain quantitative data on propene, n-butene, and isobutene reactivities in the hydrogen transfer steps of the reaction. The results demonstrate that in the absence of steric constraints (large-pore HY and HM zeolites) isobutene is approximately 5 times more reactive in hydrogen transfer than n-butene. The latter, in turn, is about 1.3 times more reactive than propene. With medium-pore HZSM-5, steric inhibition of the hydrogen transfer between n-hexane and isobutene is observed. This results in a sharp decrease in the isobutene reactivity: over HZSM-5 zeolites isobutene is only 1.2 times more reactive in hydrogen transfer than n-butene. On the basis of these data it is concluded that the β-test measures the "real" hydrogen transfer activity of zeolites, i.e., the activity that summarizes the effects of the acidic and structural properties of zeolites. An attempt is made to estimate the "ideal" zeolite hydrogen transfer activity, i.e., the activity determined by the zeolite acidic properties only. The estimations obtained show that this activity is approximately 1.8 and 1.6 times higher for HM zeolite in comparison with HZSM-5 and HY zeolites, respectively. 16 refs., 4 figs., 2 tabs.

  16. Updated tomographic analysis of the integrated Sachs-Wolfe effect and implications for dark energy

    NASA Astrophysics Data System (ADS)

    Stölzner, Benjamin; Cuoco, Alessandro; Lesgourgues, Julien; Bilicki, Maciej

    2018-03-01

    We derive updated constraints on the integrated Sachs-Wolfe (ISW) effect through cross-correlation of the cosmic microwave background with galaxy surveys. We improve with respect to similar previous analyses in several ways. First, we use the most recent versions of extragalactic object catalogs, the SDSS DR12 photometric redshift (photo-z) and 2MASS Photo-z data sets, as well as those employed earlier for ISW, the SDSS QSO photo-z and NVSS samples. Second, we use for the first time the WISE × SuperCOSMOS catalog, which allows us to perform an all-sky analysis of the ISW up to z ~ 0.4. Third, thanks to the use of photo-zs, we separate each data set into different redshift bins, deriving the cross-correlation in each bin. This last step leads to a significant improvement in sensitivity. We remove cross-correlation between catalogs using masks which mutually exclude common regions of the sky. We use two methods to quantify the significance of the ISW effect. In the first one, we fix the cosmological model, derive linear galaxy biases of the catalogs, and then evaluate the significance of the ISW using a single parameter. In the second approach we perform a global fit of the ISW and of the galaxy biases, varying the cosmological model. We find significances of the ISW in the range 4.7-5.0σ, thus reaching, for the first time in such an analysis, the threshold of 5σ. Without the redshift tomography we find a significance of ~4.0σ, which shows the importance of the binning method. Finally we use the ISW data to infer constraints on the dark energy redshift evolution and equation of state. We find that the redshift range covered by the catalogs is still not optimal to derive strong constraints, although this goal will likely be reached using future datasets such as from Euclid, LSST, and SKA.

  17. Conformal and covariant Z4 formulation of the Einstein equations: Strongly hyperbolic first-order reduction and solution with discontinuous Galerkin schemes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Guercilena, Federico; Köppel, Sven; Rezzolla, Luciano; Zanotti, Olindo

    2018-04-01

    We present a strongly hyperbolic first-order formulation of the Einstein equations based on the conformal and covariant Z4 system (CCZ4) with constraint-violation damping, which we refer to as FO-CCZ4. As CCZ4, this formulation combines the advantages of a conformal and traceless formulation, with the suppression of constraint violations given by the damping terms, but being first order in time and space, it is particularly suited for a discontinuous Galerkin (DG) implementation. The strongly hyperbolic first-order formulation has been obtained by making careful use of first and second-order ordering constraints. A proof of strong hyperbolicity is given for a selected choice of standard gauges via an analytical computation of the entire eigenstructure of the FO-CCZ4 system. The resulting governing partial differential equations system is written in nonconservative form and requires the evolution of 58 unknowns. A key feature of our formulation is that the first-order CCZ4 system decouples into a set of pure ordinary differential equations and a reduced hyperbolic system of partial differential equations that contains only linearly degenerate fields. We implement FO-CCZ4 in a high-order path-conservative arbitrary-high-order-method-using-derivatives (ADER)-DG scheme with adaptive mesh refinement and local time-stepping, supplemented with a third-order ADER-WENO subcell finite-volume limiter in order to deal with singularities arising with black holes. We validate the correctness of the formulation through a series of standard tests in vacuum, performed in one, two and three spatial dimensions, and also present preliminary results on the evolution of binary black-hole systems. To the best of our knowledge, these are the first successful three-dimensional simulations of moving punctures carried out with high-order DG schemes using a first-order formulation of the Einstein equations.

  18. Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.

    2011-01-01

    An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
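
    A stripped-down version of the underlying convex program can be written in a few lines (a sketch under strong simplifying assumptions: fixed mass, fixed flight time, no glide-slope or pointing constraints, invented numbers; the flight algorithm is considerably more complete):

        import numpy as np
        import cvxpy as cp

        N, dt = 40, 1.0                          # time steps (assumed)
        g = np.array([0.0, 0.0, -3.71])          # Mars gravity, m/s^2
        r0 = np.array([500.0, 400.0, 2000.0])    # initial position, m
        v0 = np.array([-30.0, 10.0, -60.0])      # initial velocity, m/s
        T_max = 12.0                             # max thrust accel., m/s^2

        r, v = cp.Variable((3, N + 1)), cp.Variable((3, N + 1))
        u = cp.Variable((3, N))                  # thrust acceleration
        cons = [r[:, 0] == r0, v[:, 0] == v0,
                r[:, N] == 0, v[:, N] == 0]      # pinpoint soft landing
        for k in range(N):
            cons += [v[:, k + 1] == v[:, k] + dt * (u[:, k] + g),
                     r[:, k + 1] == r[:, k] + dt * v[:, k],
                     cp.norm(u[:, k]) <= T_max]  # second-order cone
        fuel = dt * sum(cp.norm(u[:, k]) for k in range(N))
        prob = cp.Problem(cp.Minimize(fuel), cons)
        prob.solve()                             # SOCP: global optimum,
        print(prob.status, prob.value)           # or infeasibility certificate

    An infeasible status here corresponds to the fast infeasibility detection of enhancement 1), and sweeping N is a crude stand-in for the time-of-flight line search of enhancement 3).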

  19. Phenotypic plasticity of nest timing in a post-glacial landscape: how do reptiles adapt to seasonal time constraints?

    PubMed

    Edge, Christopher B; Rollinson, Njal; Brooks, Ronald J; Congdon, Justin D; Iverson, John B; Janzen, Fredric J; Litzgus, Jacqueline D

    2017-02-01

    Life histories evolve in response to constraints on the time available for growth and development. Nesting date and its plasticity in response to spring temperature may therefore be important components of fitness in oviparous ectotherms near their northern range limit, as reproducing early provides more time for embryos to complete development before winter. We used data collected over several decades to compare air temperature and nest date plasticity in populations of painted turtles and snapping turtles from a relatively warm environment (southeastern Michigan) near the southern extent of the last glacial maximum to a relatively cool environment (central Ontario) near the northern extent of post-glacial recolonization. For painted turtles, population-level differences in reaction norm elevation for two phenological traits were consistent with adaptation to time constraints, but no differences in reaction norm slopes were observed. For snapping turtle populations, the difference in reaction norm elevation for a single phenological trait was in the opposite direction of what was expected under adaptation to time constraints, and no difference in reaction norm slope was observed. Finally, among-individual variation in individual plasticity for nesting date was detected only in the northern population of snapping turtles, suggesting that reaction norms are less canalized in this northern population. Overall, we observed evidence of phenological adaptation, and possibly maladaptation, to time constraints in long-lived reptiles. Where present, (mal)adaptation occurred by virtue of differences in reaction norm elevation, not reaction norm slope. Glacial history, generation time, and genetic constraint may all play an important role in the evolution of phenological timing and its plasticity in long-lived reptiles. © 2016 by the Ecological Society of America.

  20. Voxel inversion of airborne electromagnetic data for improved groundwater model construction and prediction accuracy

    NASA Astrophysics Data System (ADS)

    Kruse Christensen, Nikolaj; Ferre, Ty Paul A.; Fiandaca, Gianluca; Christensen, Steen

    2017-03-01

    We present a workflow for efficient construction and calibration of large-scale groundwater models that includes the integration of airborne electromagnetic (AEM) data and hydrological data. In the first step, the AEM data are inverted to form a 3-D geophysical model. In the second step, the 3-D geophysical model is translated, using a spatially dependent petrophysical relationship, to form a 3-D hydraulic conductivity distribution. The geophysical models and the hydrological data are used to estimate spatially distributed petrophysical shape factors. The shape factors primarily work as translators between resistivity and hydraulic conductivity, but they can also compensate for structural defects in the geophysical model. The method is demonstrated for a synthetic case study with sharp transitions among various types of deposits. Besides demonstrating the methodology, we demonstrate the importance of using geophysical regularization constraints that conform well to the depositional environment. This is done by inverting the AEM data using either smoothness (smooth) constraints or minimum gradient support (sharp) constraints, where the use of sharp constraints conforms best to the environment. The dependency on AEM data quality is also tested by inverting the geophysical model using data corrupted with four different levels of background noise. Subsequently, the geophysical models are used to construct competing groundwater models for which the shape factors are calibrated. The performance of each groundwater model is tested with respect to four types of prediction that are beyond the calibration base: a pumping well's recharge area and groundwater age, respectively, are predicted by applying the same stress as for the hydrologic model calibration; and head and stream discharge are predicted for a different stress situation. As expected, in this case the predictive capability of a groundwater model is better when it is based on a sharp geophysical model instead of a smoothness constraint. This is true for predictions of recharge area, head change, and stream discharge, while we find no improvement for prediction of groundwater age. Furthermore, we show that the model prediction accuracy improves with AEM data quality for predictions of recharge area, head change, and stream discharge, while there appears to be no accuracy improvement for the prediction of groundwater age.
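
    The translation step can be sketched as a log-log relation with a spatially variable shape factor (an illustrative functional form with invented numbers; the study's actual petrophysical relationship and its regularization are more involved):

        import numpy as np

        def resistivity_to_K(rho, shape, intercept=-6.0):
            # Hypothetical power-law translator:
            #   log10(K) = shape * log10(rho) + intercept
            # with `shape` calibrated against hydrological data so it can
            # also absorb structural defects in the geophysical model.
            return 10.0 ** (shape * np.log10(rho) + intercept)

        rho = np.array([20.0, 80.0, 300.0])   # ohm-m, from the AEM inversion
        shape = np.array([1.2, 1.0, 0.8])     # spatially varying shape factors
        print(resistivity_to_K(rho, shape))   # hydraulic conductivity, m/s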

  1. Investigating the Retention and Time Course of Phonotactic Constraint Learning from Production Experience

    ERIC Educational Resources Information Center

    Warker, Jill A.

    2013-01-01

    Adults can rapidly learn artificial phonotactic constraints such as /"f"/ "occurs only at the beginning of syllables" by producing syllables that contain those constraints. This implicit learning is then reflected in their speech errors. However, second-order constraints in which the placement of a phoneme depends on another…

  2. Radiofrequency pulse design in parallel transmission under strict temperature constraints.

    PubMed

    Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre

    2014-09-01

    To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time-of-flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.

  3. HOROPLAN: computer-assisted nurse scheduling using constraint-based programming.

    PubMed

    Darmoni, S J; Fajner, A; Mahé, N; Leforestier, A; Vondracek, M; Stelian, O; Baldenweck, M

    1995-01-01

    Nurse scheduling is a difficult and time-consuming task. The schedule has to determine the day-to-day shift assignments of each nurse for a specified period of time in a way that satisfies the given requirements as much as possible, taking into account the wishes of nurses as closely as possible. This paper presents a constraint-based, artificial intelligence approach by describing a prototype implementation developed with the Charme language and the first results of its use in the Rouen University Hospital. Horoplan implements non-cyclical constraint-based scheduling, using some heuristics. Four levels of constraints were defined to give maximum flexibility: the French level (e.g. number of hours worked in a year), the hospital level (e.g. specific days off), the department level (e.g. specific shifts) and the care-unit level (e.g. specific patterns for week-ends). Some constraints must always be verified and cannot be overruled, while others can be overruled at a certain cost. Rescheduling is possible at any time, especially in the case of an unscheduled absence.
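
    The distinction between constraints that must always be verified and those that can be overruled at a cost can be illustrated with a tiny greedy sketch (generic Python, not the Charme implementation; the rules and costs are invented):

        SHIFTS = ["day", "night", "off"]

        def hard_ok(sched, n, d, s):
            # Hard rule (never overruled), e.g. a French-level constraint:
            # no day shift immediately after a night shift.
            return not (d > 0 and sched[n][d - 1] == "night" and s == "day")

        def soft_cost(d, s):
            # Overrulable wish, e.g. a care-unit-level pattern:
            # prefer weekends off (days 5 and 6); violating it costs 1.
            return 1 if d in (5, 6) and s != "off" else 0

        def build_schedule(n_nurses, n_days):
            sched = [[None] * n_days for _ in range(n_nurses)]
            cost = 0
            for n in range(n_nurses):
                for d in range(n_days):
                    legal = [s for s in SHIFTS if hard_ok(sched, n, d, s)]
                    best = min(legal, key=lambda s: soft_cost(d, s))
                    sched[n][d] = best
                    cost += soft_cost(d, best)
            return sched, cost   # cost = total of overruled soft constraints

        print(build_schedule(3, 7)[1])

    A real system searches over assignments with constraint propagation instead of committing greedily, but the hard/soft split and the overrule cost play the same roles.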

  4. Starting a new residency program: a step-by-step guide for institutions, hospitals, and program directors

    PubMed Central

    Barajaz, Michelle; Turner, Teri

    2016-01-01

    Although our country faces a looming shortage of doctors, constraints of space, funding, and patient volume in many existing residency programs limit training opportunities for medical graduates. New residency programs need to be created for the expansion of graduate medical education training positions. Partnerships between existing academic institutions and community hospitals with a need for physicians can be a very successful means toward this end. Baylor College of Medicine and The Children's Hospital of San Antonio were affiliated in 2012, and subsequently, we developed and received accreditation for a new categorical pediatric residency program at that site in 2014. We share below a step-by-step guide through the process that includes building of the infrastructure, educational development, accreditation, marketing, and recruitment. It is our hope that the description of this process will help others to spur growth in graduate medical training positions. PMID:27507541

  5. A Cooperative Traffic Control of Vehicle–Intersection (CTCVI) for the Reduction of Traffic Delays and Fuel Consumption

    PubMed Central

    Li, Jinjian; Dridi, Mahjoub; El-Moudni, Abdellah

    2016-01-01

    The problem of reducing traffic delays and decreasing fuel consumption simultaneously in a network of intersections without traffic lights is solved by a cooperative traffic control algorithm, where the cooperation is executed based on Vehicle-to-Infrastructure (V2I) connections. The solution involves two main steps. The first step concerns the itinerary, i.e., which intersections are chosen by each vehicle to reach its destination from its starting point. Based on the principle of minimal travel distance, each vehicle chooses its itinerary dynamically according to the traffic loads at the adjacent intersections. The second step is related to the following proposed cooperative procedures, which allow vehicles to pass through each intersection rapidly and economically: on the one hand, according to the real-time information sent by vehicles via V2I at the edge of the communication zone, each intersection applies Dynamic Programming (DP) to cooperatively optimize the vehicle passing sequence with minimal traffic delays so that the vehicles may rapidly pass the intersection under the relevant safety constraints; on the other hand, after receiving this sequence, each vehicle finds the optimal speed profile with minimal fuel consumption by an exhaustive search. The simulation results reveal that the proposed algorithm can significantly reduce both travel delays and fuel consumption compared with previously published methods under different traffic volumes. PMID:27999333
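
    The sequencing subproblem of the second step can be sketched as a small dynamic program over subsets of vehicles (an illustration under simplified assumptions: a single shared conflict zone, a fixed safety headway, known arrival times; not the paper's exact formulation):

        # Order vehicles through one conflict zone, minimizing total delay,
        # with a fixed safety gap between consecutive entries.
        GAP = 2.0                                  # s, assumed safety headway

        def best_sequence(arrivals):
            n = len(arrivals)
            # bitmask state -> (total delay, time the zone frees up, order)
            dp = {0: (0.0, 0.0, [])}
            for _ in range(n):
                nxt = {}
                for mask, (cost, t_free, order) in dp.items():
                    for v in range(n):
                        if mask & (1 << v):
                            continue               # vehicle v already served
                        enter = max(arrivals[v], t_free)   # wait if zone busy
                        c = cost + (enter - arrivals[v])   # accumulated delay
                        m2 = mask | (1 << v)
                        if m2 not in nxt or c < nxt[m2][0]:
                            nxt[m2] = (c, enter + GAP, order + [v])
                dp = nxt
            return dp[(1 << n) - 1]

        print(best_sequence([0.0, 0.5, 0.6, 3.0]))  # (delay, end time, order)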

  6. Constraint monitoring in TOSCA

    NASA Technical Reports Server (NTRS)

    Beck, Howard

    1992-01-01

    The Job-Shop Scheduling Problem (JSSP) deals with the allocation of resources over time to factory operations. Allocations are subject to various constraints (e.g., production precedence relationships, factory capacity constraints, and limits on the allowable number of machine setups) which must be satisfied for a schedule to be valid. The identification of constraint violations and the monitoring of constraint threats plays a vital role in schedule generation in terms of the following: (1) directing the scheduling process; and (2) informing scheduling decisions. This paper describes a general mechanism for identifying constraint violations and monitoring threats to the satisfaction of constraints throughout schedule generation.

  7. Modelling microbial metabolic rewiring during growth in a complex medium.

    PubMed

    Fondi, Marco; Bosi, Emanuele; Presta, Luana; Natoli, Diletta; Fani, Renato

    2016-11-24

    In their natural environment, bacteria face a wide range of environmental conditions that change over time and that impose continuous rearrangements at all the cellular levels (e.g. gene expression, metabolism). When facing a nutritionally rich environment, for example, microbes first use the preferred compound(s) and only later start metabolizing the other one(s). A systemic re-organization of the overall microbial metabolic network in response to a variation in the composition/concentration of the surrounding nutrients has been suggested, although the range and extent of such modifications in organisms other than a few model microbes have scarcely been described up to now. We used multi-step constraint-based metabolic modelling to simulate the growth in a complex medium over several time steps of the Antarctic model organism Pseudoalteromonas haloplanktis TAC125. As each of these phases is characterized by a specific set of amino acids to be used as carbon and energy source, our modelling framework describes the major consequences of nutrient switching at the system level. The model predicts that a deep metabolic reprogramming might be required to achieve optimal biomass production in different stages of growth (different medium composition), with at least half of the cellular metabolic network involved (more than 50% of the metabolic genes). Additionally, we show that our modelling framework is able to capture metabolic functional associations and/or common regulatory features of the genes embedded in our reconstruction (e.g. the presence of common regulatory motifs). Finally, to explore the possibility of a sub-optimal biomass objective function (i.e. that cells use resources in alternative metabolic processes at the expense of optimal growth) we implemented a MOMA-based approach (called nutritional-MOMA) and compared the outcomes with those obtained with Flux Balance Analysis (FBA). Growth simulations under this scenario revealed the deep impact of choosing among alternative objective functions on the resulting predictions of flux distributions. Here we provide a time-resolved, systems-level scheme of PhTAC125 metabolic re-wiring as a consequence of carbon source switching in a nutritionally complex medium. Our analyses suggest the presence of a potentially efficient metabolic reprogramming machinery to continuously and promptly adapt to this nutritionally changing environment, consistent with adaptation to fast growth in a rich, but probably inconstant and highly competitive, environment. We also show i) how functional partnership and co-regulation features can be predicted by integrating multi-step constraint-based metabolic modelling with fed-batch growth data and ii) that performing simulations under a sub-optimal objective function may lead to flux distributions that differ from those of canonical FBA.
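
    Constraint-based growth simulation of this kind reduces, at each time step, to a flux balance analysis (FBA) linear program. A minimal sketch with SciPy on a toy three-reaction network follows; the stoichiometry, bounds, and reaction roles are illustrative only, not the PhTAC125 reconstruction.

    ```python
    # FBA: maximize biomass flux subject to steady-state mass balance S @ v = 0
    # and per-reaction flux bounds.
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix (rows: metabolites A, B; columns: reactions)
    # R1: -> A, R2: A -> B, R3: B -> biomass
    S = np.array([[1, -1,  0],
                  [0,  1, -1]])
    bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds per reaction
    c = np.array([0, 0, -1])               # maximize v3 (linprog minimizes)

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("biomass flux:", -res.fun, "fluxes:", res.x)
    ```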

  8. Automatic scheduling of outages of nuclear power plants with time windows. Final report, January-December 1995

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gomes, C.

    This report describes a successful project for transference of advanced AI technology into the domain of planning of outages of nuclear power plants as part of DOD's dual-use program. ROMAN (Rome Lab Outage Manager) is the prototype system that was developed as a result of this project. ROMAN's main innovation compared to the current state-of-the-art of outage management tools is its capability to automatically enforce safety constraints during the planning and scheduling phase. Another innovative aspect of ROMAN is the generation of more robust schedules that are feasible over time windows. In other words, ROMAN generates a family of schedules by assigning time intervals as start times to activities rather than single start times, without affecting the overall duration of the project. ROMAN uses a constraint satisfaction paradigm combining a global search tactic with constraint propagation. The derivation of very specialized representations for the constraints to perform efficient propagation is a key aspect for the generation of very fast schedules - constraints are compiled into the code, which is a novel aspect of our work using an automatic programming system, KIDS.

  9. Trade-off between Multiple Constraints Enables Simultaneous Formation of Modules and Hubs in Neural Systems

    PubMed Central

    Chen, Yuhan; Wang, Shengjun; Hilgetag, Claus C.; Zhou, Changsong

    2013-01-01

    The formation of the complex network architecture of neural systems is subject to multiple structural and functional constraints. Two obvious but apparently contradictory constraints are low wiring cost and high processing efficiency, characterized by short overall wiring length and a small average number of processing steps, respectively. Growing evidence shows that neural networks result from a trade-off between the physical cost and the functional value of the topology. However, the relationship between these competing constraints and complex topology is not well understood quantitatively. We explored this relationship systematically by reconstructing two known neural networks, Macaque cortical connectivity and C. elegans neuronal connections, from combinatorial optimization of wiring cost and processing efficiency constraints, weighted by a control parameter, and comparing the reconstructed networks to the real networks. We found that in both neural systems, the reconstructed networks derived from the two constraints can reveal some important relations between the spatial layout of nodes and the topological connectivity, and match several properties of the real networks. The reconstructed and real networks had a similar modular organization over a broad range of the control parameter, resulting from spatial clustering of network nodes. Hubs emerged due to the competition of the two constraints, and their positions were close to, and partly coincided with, the real hubs over a range of parameter values. The degree of nodes was correlated with the density of nodes in their spatial neighborhood in both reconstructed and real networks. Generally, the rebuilt networks matched a significant portion of real links, especially short-distance ones. These findings provide clear evidence to support the hypothesis of trade-off between multiple constraints on brain networks. The two constraints of wiring cost and processing efficiency, however, cannot explain all salient features in the real networks. The discrepancy suggests that there are further relevant factors that are not yet captured here. PMID:23505352
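
    The combined objective behind such a reconstruction can be sketched as a weighted sum of the two constraints. Below is a small illustration under our own simplifications: Euclidean edge length stands in for wiring cost, BFS path length for processing steps, and the parameter name alpha is ours.

    ```python
    import math
    from collections import deque

    def objective(nodes, edges, alpha):
        """nodes: {id: (x, y)}; edges: list of (i, j) pairs; smaller is better."""
        wiring = sum(math.dist(nodes[i], nodes[j]) for i, j in edges)
        adj = {i: [] for i in nodes}
        for i, j in edges:
            adj[i].append(j)
            adj[j].append(i)
        steps = pairs = 0
        for s in nodes:                        # BFS: processing steps between all pairs
            dist, queue = {s: 0}, deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        queue.append(v)
            for t, d in dist.items():
                if t != s:
                    steps += d
                    pairs += 1
        return alpha * wiring + (1 - alpha) * steps / pairs

    # toy layout: a chain of three nodes vs. a triangle with one extra (longer) wire
    nodes = {0: (0, 0), 1: (1, 0), 2: (2, 0)}
    print(objective(nodes, [(0, 1), (1, 2)], alpha=0.5))
    print(objective(nodes, [(0, 1), (1, 2), (0, 2)], alpha=0.5))
    ```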

  10. Robust and fast nonlinear optimization of diffusion MRI microstructure models.

    PubMed

    Harms, R L; Fritz, F J; Tobisch, A; Goebel, R; Roebroeck, A

    2017-07-15

    Advances in biophysical multi-compartment modeling for diffusion MRI (dMRI) have gained popularity because of greater specificity than DTI in relating the dMRI signal to underlying cellular microstructure. A large range of these diffusion microstructure models have been developed, and each of the popular models comes with its own, often different, optimization algorithm, noise model and initialization strategy to estimate its parameter maps. Since data fit, accuracy and precision are hard to verify, this creates additional challenges to comparability and generalization of results from diffusion microstructure models. In addition, non-linear optimization is computationally expensive, leading to very long run times, which can be prohibitive in large group or population studies. In this technical note we investigate the performance of several optimization algorithms and initialization strategies over a few of the most popular diffusion microstructure models, including NODDI and CHARMED. We evaluate whether a single well-performing optimization approach exists that could be applied to many models and would perform comparably in both run time and fit aspects. All models, algorithms and strategies were implemented on the Graphics Processing Unit (GPU) to remove run time constraints, with which we achieve whole-brain dataset fits in seconds to minutes. We then evaluated fit, accuracy, precision and run time for different models of differing complexity against three common optimization algorithms and three parameter initialization strategies. Variability of the achieved quality of fit in actual data was evaluated on ten subjects of each of two population studies with different acquisition protocols. We find that optimization algorithms and multi-step optimization approaches have a considerable influence on performance and stability over subjects and over acquisition protocols. The gradient-free Powell conjugate-direction algorithm was found to outperform other common algorithms in terms of run time, fit, accuracy and precision. Parameter initialization approaches were found to be relevant especially for more complex models, such as those involving several fiber orientations per voxel. For these, a fitting cascade that initializes or fixes parameter values in a later optimization step from simpler models fitted in an earlier optimization step further improved run time, fit, accuracy and precision compared to a single-step fit. This establishes and makes available standards by which robust fit and accuracy can be achieved in shorter run times. This is especially relevant for the use of diffusion microstructure modeling in large group or population studies and in combining microstructure parameter maps with tractography results. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
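
    The cascaded initialization strategy the authors find beneficial can be sketched with SciPy's gradient-free Powell optimizer. The mono- and bi-exponential models below are our stand-ins for the actual dMRI compartment models (NODDI, CHARMED), which are far richer.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def sse(predict, params, x, y):
        """Sum of squared errors between model prediction and data."""
        return np.sum((predict(params, x) - y) ** 2)

    def simple_model(p, x):            # mono-exponential decay (stand-in)
        s0, d = p
        return s0 * np.exp(-d * x)

    def complex_model(p, x):           # bi-exponential decay (stand-in)
        s0, d1, d2, f = p
        return s0 * (f * np.exp(-d1 * x) + (1 - f) * np.exp(-d2 * x))

    x = np.linspace(0, 3, 50)
    y = complex_model([1.0, 2.0, 0.3, 0.6], x) + 0.01 * np.random.randn(50)

    # Step 1: fit the simple model from a generic starting point.
    step1 = minimize(lambda p: sse(simple_model, p, x, y), x0=[1.0, 1.0],
                     method="Powell")
    s0, d = step1.x
    # Step 2 (cascade): initialize the complex model from step-1 estimates.
    step2 = minimize(lambda p: sse(complex_model, p, x, y),
                     x0=[s0, d * 2, d / 2, 0.5], method="Powell")
    print(step2.x)
    ```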

  11. Scheduling Results for the THEMIS Observation Scheduling Tool

    NASA Technical Reports Server (NTRS)

    Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip

    2011-01-01

    We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
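
    The greedy stage over set-covering constraints can be illustrated in a few lines. The observation identifiers and grid cells below are hypothetical, and the real scheduler also weighs operations constraints and science priorities.

    ```python
    def greedy_cover(universe, candidates):
        """universe: set of grid cells; candidates: {obs_id: set of cells covered}."""
        uncovered, schedule = set(universe), []
        while uncovered:
            # pick the observation covering the most still-uncovered cells
            best = max(candidates, key=lambda o: len(candidates[o] & uncovered))
            gain = candidates[best] & uncovered
            if not gain:                 # remaining cells cannot be covered
                break
            schedule.append(best)
            uncovered -= gain
        return schedule, uncovered

    plan, missed = greedy_cover({1, 2, 3, 4, 5},
                                {"obsA": {1, 2, 3}, "obsB": {3, 4}, "obsC": {5}})
    print(plan, missed)
    ```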

  12. Method and apparatus for creating time-optimal commands for linear systems

    NASA Technical Reports Server (NTRS)

    Seering, Warren P. (Inventor); Tuttle, Timothy D. (Inventor)

    2004-01-01

    A system for and method of determining an input command profile for substantially any dynamic system that can be modeled as a linear system, the input command profile for transitioning an output of the dynamic system from one state to another state. The present invention involves identifying characteristics of the dynamic system, selecting a command profile which defines an input to the dynamic system based on the identified characteristics, wherein the command profile comprises one or more pulses which rise and fall at switch times, imposing a plurality of constraints on the dynamic system, at least one of the constraints being defined in terms of the switch times, and determining the switch times for the input to the dynamic system based on the command profile and the plurality of constraints. The characteristics may be related to poles and zeros of the dynamic system, and the plurality of constraints may include a dynamics cancellation constraint which specifies that the input moves the dynamic system from a first state to a second state such that the dynamic system remains substantially at the second state.

  13. Movies of Finite Deformation within Western North American Plate Boundary Zone

    NASA Astrophysics Data System (ADS)

    Holt, W. E.; Birkes, B.; Richard, G. A.

    2004-12-01

    Animations of finite strain within deforming continental zones can be an important tool for both education and research. We present finite strain models for western North America. We have found that these moving images, which portray plate motions, landform uplift, and subsidence, are highly useful for enabling students to conceptualize the dramatic changes that can occur within plate boundary zones over geologic time. These models use instantaneous rates of strain inferred from both space geodetic observations and Quaternary fault slip rates. Geodetic velocities and Quaternary strain rates are interpolated to define a continuous, instantaneous velocity field for western North America. This velocity field is then used to track topography points and fault locations through time (both backward and forward in time), using small time steps, to produce a 6 million year image. The strain rate solution is updated at each time step, accounting for changes in boundary conditions of plate motion, and changes in fault orientation. Assuming zero volume change, Airy isostasy, and a ratio of erosion rate to tectonic uplift rate, the topography is also calculated as a function of time. The animations provide interesting moving images of the transform boundary, highlighting ongoing extension and subsidence, convergence and uplift, and large translations taking place within the strike-slip regime. Moving images of the strain components, uplift volume through time, and inferred erosion volume through time, have also been produced. These animations are an excellent demonstration for education purposes and also hold potential as an important tool for research enabling the quantification of finite rotations of fault blocks, potential erosion volume, uplift volume, and the influence of climate on these parameters. The models, however, point to numerous shortcomings of taking constraints from instantaneous calculations to provide insight into time evolution and reconstruction models. More rigorous calculations are needed to account for changes in dynamics (body forces) through time and resultant changes in fault behavior and crustal rheology.
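
    The core numerical step described here, advecting tracked points through an instantaneous velocity field that is re-solved every small time step, can be sketched as forward (or, with negative dt, backward) Euler integration. The field function below is a toy stand-in for the interpolated geodetic/Quaternary strain-rate solution.

    ```python
    import numpy as np

    def track(points, velocity, t0, t1, dt):
        """points: (N, 2) positions; velocity(p, t) -> (N, 2); dt < 0 runs backward."""
        t, p = t0, points.astype(float).copy()
        while (dt > 0 and t < t1) or (dt < 0 and t > t1):
            p = p + dt * velocity(p, t)   # the field would be re-solved here each step
            t += dt
        return p

    # toy stand-in field: rigid rotation about the origin
    rotation = lambda p, t: np.stack([-p[:, 1], p[:, 0]], axis=1)
    print(track(np.array([[1.0, 0.0]]), rotation, 0.0, 3.14159, 1e-3))
    ```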

  14. The Impact of Resource Constraints on the Psychological Well-Being of Survivors of Intimate Partner Violence over Time

    ERIC Educational Resources Information Center

    Beeble, Marisa L.; Bybee, Deborah; Sullivan, Cris M.

    2010-01-01

    This study examined the impact of resource constraints on the psychological well-being of survivors of intimate partner violence (IPV), testing whether resource constraints is one mechanism that partially mediates the relationship between IPV and women's well-being. Although within-woman changes in resource constraints did not mediate the…

  15. Dynamic control of moisture during hot pressing of wood composites

    Treesearch

    Cheng Piao; Todd F. Shupe; Chung Y. Hse

    2006-01-01

    Hot pressing is an important step in the manufacture of wood composites. In the conventional pressing system, hot press output often acts as a constraint to increased production. Severe drying of the furnish (e.g., particles, flakes, or fibers) required by this process substantially increases the manufacturing cost and creates air-polluting emissions of volatile...

  16. Kriging Direct and Indirect Estimates of Sulfate Deposition: A Comparison

    Treesearch

    Gregory A. Reams; Manuela M.P. Huso; Richard J. Vong; Joseph M. McCollum

    1997-01-01

    Due to logistical and cost constraints, acidic deposition is rarely measured at forest research or sampling locations. A crucial first step to assessing the effects of acid rain on forests is an accurate estimate of acidic deposition at forest sample sites. We examine two methods (direct and indirect) for estimating sulfate deposition at atmospherically unmonitored...

  17. Positivity-preserving cell-centered Lagrangian schemes for multi-material compressible flows: From first-order to high-orders. Part I: The one-dimensional case

    NASA Astrophysics Data System (ADS)

    Vilar, François; Shu, Chi-Wang; Maire, Pierre-Henri

    2016-05-01

    One of the main issues in the field of numerical schemes is to ally robustness with accuracy. Considering gas dynamics, numerical approximations may generate negative density or pressure, which may lead to nonlinear instability and crash of the code. This phenomenon is even more critical using a Lagrangian formalism, the grid moving and being deformed during the calculation. Furthermore, most of the problems studied in this framework contain very intense rarefaction and shock waves. In this paper, the admissibility of numerical solutions obtained by high-order finite-volume-scheme-based methods, such as the discontinuous Galerkin (DG) method, the essentially non-oscillatory (ENO) and the weighted ENO (WENO) finite volume schemes, is addressed in the one-dimensional Lagrangian gas dynamics framework. After briefly recalling how to derive Lagrangian forms of the 1D gas dynamics system of equations, a discussion on positivity-preserving approximate Riemann solvers, ensuring first-order finite volume schemes to be positive, is then given. This study is conducted for both ideal gas and non-ideal gas equations of state (EOS), such as the Jones-Wilkins-Lee (JWL) EOS or the Mie-Grüneisen (MG) EOS, and relies on two different techniques: either a particular definition of the local approximation of the acoustic impedances arising from the approximate Riemann solver, or an additional time step constraint relative to the cell volume variation. Then, making use of the work presented in [89,90,22], this positivity study is extended to high-orders of accuracy, where new time step constraints are obtained, and proper limitation is required. Through this new procedure, scheme robustness is highly improved and hence new problems can be tackled. Numerical results are provided to demonstrate the effectiveness of these methods. This paper is the first part of a series of two. The whole analysis presented here is extended to the two-dimensional case in [85], and proves to fit a wide range of numerical schemes in the literature, such as those presented in [19,64,15,82,84].
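
    One common way to write such a volume-variation time-step constraint, in a schematic form we supply here (the paper derives its own precise conditions), is to cap the predicted relative change of any cell volume over one step:

    ```latex
    % Schematic positivity-preserving time-step restriction (our notation):
    % keeping the volume change of every cell c below a fraction C_V of its
    % current volume ensures V_c^{n+1} = V_c^n + \Delta t\, \dot{V}_c^n > 0.
    \Delta t^{n} \;\le\; C_V \, \min_{c} \frac{V_c^{n}}{\bigl|\dot{V}_c^{n}\bigr|},
    \qquad 0 < C_V < 1 .
    ```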

  18. Advanced timeline systems

    NASA Technical Reports Server (NTRS)

    Bulfin, R. L.; Perdue, C. A.

    1994-01-01

    The Mission Planning Division of the Mission Operations Laboratory at NASA's Marshall Space Flight Center is responsible for scheduling experiment activities for space missions controlled at MSFC. In order to draw statistically relevant conclusions, all experiments must be scheduled at least once and may have repeated performances during the mission. An experiment consists of a series of steps which, when performed, provide results pertinent to the experiment's functional objective. Since these experiments require a set of resources such as crew and power, the task of creating a timeline of experiment activities for the mission is one of resource constrained scheduling. For each experiment, a computer model with detailed information of the steps involved in running the experiment, including crew requirements, processing times, and resource requirements is created. These models are then loaded into the Experiment Scheduling Program (ESP) which attempts to create a schedule which satisfies all resource constraints. ESP uses a depth-first search technique to place each experiment into a time interval, and a scoring function to evaluate the schedule. The mission planners generate several schedules and choose one with a high value of the scoring function to send through the approval process. The process of approving a mission timeline can take several months. Each timeline must meet the requirements of the scientists, the crew, and various engineering departments as well as enforce all resource restrictions. No single objective is considered in creating a timeline. The experiment scheduling problem is: given a set of experiments, place each experiment along the mission timeline so that all resource requirements and temporal constraints are met and the timeline is acceptable to all who must approve it. Much work has been done on multicriteria decision making (MCDM). When there are two criteria, schedules which perform well with respect to one criterion will often perform poorly with respect to the other. One schedule dominates another if it performs strictly better on one criterion, and no worse on the other. Clearly, dominated schedules are undesirable. A nondominated schedule can be generated by solving an optimization problem. Generally there are two approaches: the first is a hierarchical approach while the second requires optimizing a weighting or scoring function.
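
    The dominance notion used in the MCDM discussion is easy to make concrete. A minimal sketch for bi-criteria schedules, scored so that larger is better on both criteria:

    ```python
    def dominates(a, b):
        """a, b: (criterion1, criterion2) scores; a dominates b if it is no
        worse on both criteria and strictly better on at least one."""
        return a[0] >= b[0] and a[1] >= b[1] and a != b

    def nondominated(schedules):
        return [s for s in schedules
                if not any(dominates(t, s) for t in schedules)]

    print(nondominated([(5, 1), (3, 3), (1, 4), (2, 2)]))  # (2, 2) is dominated
    ```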

  19. Robust optimization for nonlinear time-delay dynamical system of dha regulon with cost sensitivity constraint in batch culture

    NASA Astrophysics Data System (ADS)

    Yuan, Jinlong; Zhang, Xu; Liu, Chongyang; Chang, Liang; Xie, Jun; Feng, Enmin; Yin, Hongchao; Xiu, Zhilong

    2016-09-01

    Time-delay dynamical systems, which depend on both the current state of the system and the state at delayed times, have been an active area of research in many real-world applications. In this paper, we consider a nonlinear time-delay dynamical system of the dha regulon with unknown time-delays in batch culture of glycerol bioconversion to 1,3-propanediol induced by Klebsiella pneumoniae. Some important properties and strong positive invariance are discussed. Because of the difficulty in accurately measuring the concentrations of intracellular substances and the absence of equilibrium points for the time-delay system, a quantitative biological robustness for the concentrations of intracellular substances is defined by penalizing a weighted sum of the expectation and variance of the relative deviation between system outputs before and after the time-delays are perturbed. Our goal is to determine optimal values of the time-delays. To this end, we formulate an optimization problem in which the time-delays are decision variables and the cost function is to minimize the biological robustness. This optimization problem is subject to the time-delay system, parameter constraints, continuous state inequality constraints for ensuring that the concentrations of extracellular and intracellular substances lie within specified limits, a quality constraint to reflect operational requirements and a cost sensitivity constraint for ensuring that an acceptable level of the system performance is achieved. It is approximated as a sequence of nonlinear programming sub-problems through the application of constraint transcription and local smoothing approximation techniques. Due to the highly complex nature of this optimization problem, the computational cost is high. Thus, a parallel algorithm is proposed to solve these nonlinear programming sub-problems based on the filled function method. Finally, it is observed that the obtained optimal estimates for the time-delays are highly satisfactory via numerical simulations.

  20. Dynamic minimum set problem for reserve design: Heuristic solutions for large problems

    PubMed Central

    Sabbadin, Régis; Johnson, Fred A.; Stith, Bradley

    2018-01-01

    Conversion of wild habitats to human-dominated landscape is a major cause of biodiversity loss. An approach to mitigate the impact of habitat loss consists of designating reserves where habitat is preserved and managed. Determining the most valuable areas to preserve in a landscape is called the reserve design problem. There exist several possible formulations of the reserve design problem, depending on the objectives and the constraints. In this article, we considered the dynamic problem of designing a reserve that contains a desired area of several key habitats. The dynamic case implies that the reserve cannot be designed in one time step, due to budget constraints, and that habitats can be lost before they are reserved, due for example to climate change or human development. We proposed two heuristic strategies that can be used to select sites to reserve each year for large reserve design problems. The first heuristic is a combination of the Marxan and site-ordering algorithms and the second heuristic is an augmented version of the common naive myopic heuristic. We evaluated the strategies on several simulated examples and showed that the augmented greedy heuristic is particularly interesting when some of the habitats to protect are particularly threatened and/or the compactness of the network is accounted for. PMID:29543830
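
    A minimal sketch of the augmented greedy (myopic) idea, under our own simplifications: each site carries an illustrative habitat value, loss probability, and cost, and the score favors valuable, threatened, cheap sites.

    ```python
    def select_sites(sites, budget):
        """sites: {id: (habitat_value, loss_probability, cost)}; one year's greedy pick."""
        chosen, remaining = [], dict(sites)
        while remaining and budget > 0:
            # favor valuable sites likely to be lost if not reserved now, per unit cost
            sid = max(remaining,
                      key=lambda s: remaining[s][0] * remaining[s][1] / remaining[s][2])
            value, p_loss, cost = remaining.pop(sid)
            if cost <= budget:
                chosen.append(sid)
                budget -= cost
        return chosen

    print(select_sites({"A": (10, 0.9, 4), "B": (8, 0.1, 2), "C": (6, 0.8, 3)}, budget=5))
    ```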

  1. Enhancing molecular logic through modulation of temporal and spatial constraints with quantum dot-based systems that use fluorescent (Förster) resonance energy transfer

    NASA Astrophysics Data System (ADS)

    Claussen, Jonathan C.; Algar, W. Russ; Hildebrandt, Niko; Susumu, Kimihiro; Ancona, Mario G.; Medintz, Igor L.

    2013-10-01

    Luminescent semiconductor nanocrystals or quantum dots (QDs) contain favorable photonic properties (e.g., resistance to photobleaching, size-tunable PL, and large effective Stokes shifts) that make them well-suited for fluorescence (Förster) resonance energy transfer (FRET) based applications including monitoring proteolytic activity, elucidating the effects of nanoparticles-mediated drug delivery, and analyzing the spatial and temporal dynamics of cellular biochemical processes. Herein, we demonstrate how unique considerations of temporal and spatial constraints can be used in conjunction with QD-FRET systems to open up new avenues of scientific discovery in information processing and molecular logic circuitry. For example, by conjugating both long lifetime luminescent terbium(III) complexes (Tb) and fluorescent dyes (A647) to a single QD, we can create multiple FRET lanes that change temporally as the QD acts as both an acceptor and donor at distinct time intervals. Such temporal FRET modulation creates multi-step FRET cascades that produce a wealth of unique photoluminescence (PL) spectra that are well-suited for the construction of a photonic alphabet and photonic logic circuits. These research advances in bio-based molecular logic open the door to future applications including multiplexed biosensing and drug delivery for disease diagnostics and treatment.

  2. Automatic design of synthetic gene circuits through mixed integer non-linear programming.

    PubMed

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits.

  3. Scalable algorithms for 3D extended MHD.

    NASA Astrophysics Data System (ADS)

    Chacon, Luis

    2007-11-01

    In the modeling of plasmas with extended MHD (XMHD), the challenge is to resolve long time scales while rendering the whole simulation manageable. In XMHD, this is particularly difficult because fast (dispersive) waves are supported, resulting in a very stiff set of PDEs. In explicit schemes, such stiffness results in stringent numerical stability time-step constraints, rendering them inefficient and algorithmically unscalable. In implicit schemes, it yields very ill-conditioned algebraic systems, which are difficult to invert. In this talk, we present recent theoretical and computational progress that demonstrates a scalable 3D XMHD solver (i.e., CPU time ∼ N, with N the number of degrees of freedom). The approach is based on Newton-Krylov methods, which are preconditioned for efficiency. The preconditioning stage admits suitable approximations without compromising the quality of the overall solution. In this work, we employ optimal (CPU time ∼ N) multilevel methods on a parabolized XMHD formulation, which renders the whole algorithm scalable. The (crucial) parabolization step is required to render XMHD multilevel-friendly. Algebraically, the parabolization step can be interpreted as a Schur factorization of the Jacobian matrix, thereby providing a solid foundation for the current (and future extensions of the) approach. We will build towards 3D extended MHD [L. Chacón, Comput. Phys. Comm., 163 (3), 143-171 (2004); L. Chacón et al., 33rd EPS Conf. Plasma Physics, Rome, Italy, 2006] by discussing earlier algorithmic breakthroughs in 2D reduced MHD [L. Chacón et al., J. Comput. Phys., 178 (1), 15-36 (2002)] and 2D Hall MHD [L. Chacón et al., J. Comput. Phys., 188 (2), 573-592 (2003)].

  4. Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.

    PubMed

    Friedrich, Tobias; Neumann, Frank

    2015-01-01

    Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a 1/(k + δ)-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^{k+6} log(n)/ε).
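
    A (1+1) EA of the kind analyzed here is short to state. The sketch below maximizes a submodular coverage function under a cardinality constraint, with infeasible offspring simply rejected (a simplification of the paper's constraint handling).

    ```python
    import random

    def coverage(x, sets_):
        """Monotone submodular: number of elements covered by the chosen sets."""
        return len(set().union(*(s for s, bit in zip(sets_, x) if bit)))

    def one_plus_one_ea(sets_, k, iters=10_000):
        n = len(sets_)
        x = [0] * n                           # start from the empty selection
        for _ in range(iters):
            # standard bit mutation: flip each bit independently with prob. 1/n
            y = [bit ^ (random.random() < 1 / n) for bit in x]
            if sum(y) <= k and coverage(y, sets_) >= coverage(x, sets_):
                x = y                         # accept feasible, no-worse offspring
        return x

    print(one_plus_one_ea([{1, 2}, {2, 3}, {3}, {4, 5}], k=2))
    ```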

  5. Multi-scale dynamics and relaxation of a tethered membrane in a solvent by Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Pandey, Ras; Anderson, Kelly; Farmer, Barry

    2006-03-01

    A tethered membrane modeled by a flexible sheet dissipates entropy as it wrinkles and crumples. Nodes of a coarse-grained membrane are connected via multiple pathways for dynamical modes to propagate. We consider a sheet with nodes connected by fluctuating bonds on a cubic lattice. The empty lattice sites constitute an effective solvent medium via a node-solvent interaction. Each node executes its stochastic motion with the Metropolis algorithm, subject to bond fluctuations, excluded volume constraints, and the interaction energy. Dynamics and conformation of the sheet are examined at a low and a high temperature, with attractive and repulsive node-node interactions for contrast, in an attractive solvent medium. Variations of the mean square displacement of the center node of the sheet and that of its center of mass with the time steps are examined in detail, and show different power-law motion from the short- to the long-time regime. Relaxation of the gyration radius and scaling of its asymptotic value with the molecular weight are examined.

  6. An adaptive, implicit, conservative, 1D-2V multi-species Vlasov-Fokker-Planck multi-scale solver in planar geometry

    NASA Astrophysics Data System (ADS)

    Taitano, W. T.; Chacón, L.; Simakov, A. N.

    2018-07-01

    We consider a 1D-2V Vlasov-Fokker-Planck multi-species ionic description coupled to fluid electrons. We address temporal stiffness with implicit time stepping, suitably preconditioned. To address temperature disparity in time and space, we extend the conservative adaptive velocity-space discretization scheme proposed in [Taitano et al., J. Comput. Phys., 318, 391-420, (2016)] to a spatially inhomogeneous system. In this approach, we normalize the velocity-space coordinate to a temporally and spatially varying local characteristic speed per species. We explicitly consider the resulting inertial terms in the Vlasov equation, and derive a discrete formulation that conserves mass, momentum, and energy up to a prescribed nonlinear tolerance upon convergence. Our conservation strategy employs nonlinear constraints to enforce these properties discretely for both the Vlasov operator and the Fokker-Planck collision operator. Numerical examples of varying degrees of complexity, including shock-wave propagation, demonstrate the favorable efficiency and accuracy properties of the scheme.

  7. Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints.

    PubMed

    Fan, Bo; Yang, Qinmin; Tang, Xiaoyu; Sun, Youxian

    2018-06-01

    In this paper, a novel robust adaptive dynamic programming (RADP)-based control strategy is presented for the optimal control of a class of output-constrained continuous-time unknown nonlinear systems. Our contribution includes a step forward beyond the usual optimal control result to show that the output of the plant is always within user-defined bounds. To achieve the new results, an error transformation technique is first established to generate an equivalent nonlinear system, whose asymptotic stability guarantees both the asymptotic stability and the satisfaction of the output restriction of the original system. Furthermore, RADP algorithms are developed to solve the transformed nonlinear optimal control problem with completely unknown dynamics as well as a robust design to guarantee the stability of the closed-loop systems in the presence of unavailable internal dynamic state. Via small-gain theorem, asymptotic stability of the original and transformed nonlinear system is theoretically guaranteed. Finally, comparison results demonstrate the merits of the proposed control policy.

  8. Extended Lagrangian formulation of charge-constrained tight-binding molecular dynamics.

    PubMed

    Cawkwell, M J; Coe, J D; Yadav, S K; Liu, X-Y; Niklasson, A M N

    2015-06-09

    The extended Lagrangian Born-Oppenheimer molecular dynamics formalism [Niklasson, Phys. Rev. Lett., 2008, 100, 123004] has been applied to a tight-binding model under the constraint of local charge neutrality to yield microcanonical trajectories with both precise, long-term energy conservation and a reduced number of self-consistent field optimizations at each time step. The extended Lagrangian molecular dynamics formalism restores time reversal symmetry in the propagation of the electronic degrees of freedom, and it enables the efficient and accurate self-consistent optimization of the chemical potential and atomwise potential energy shifts in the on-site elements of the tight-binding Hamiltonian that are required when enforcing local charge neutrality. These capabilities are illustrated with microcanonical molecular dynamics simulations of a small metallic cluster using an sd-valent tight-binding model for titanium. The effects of weak dissipation, which is used to counteract the accumulation of numerical noise during trajectories, on the propagation of the auxiliary degrees of freedom for the chemical potential and on-site Hamiltonian matrix elements were also investigated.

  9. Design and Benchmarking of a Network-In-the-Loop Simulation for Use in a Hardware-In-the-Loop System

    NASA Technical Reports Server (NTRS)

    Aretskin-Hariton, Eliot; Thomas, George; Culley, Dennis; Kratz, Jonathan

    2017-01-01

    Distributed engine control (DEC) systems alter aircraft engine design constraints because of fundamental differences in the input and output communication between DEC and centralized control architectures. The change in the way communication is implemented may create new optimum engine-aircraft configurations. This paper continues the exploration of digital network communication by demonstrating a Network-In-the-Loop simulation at the NASA Glenn Research Center. This simulation incorporates a real-time network protocol, the Engine Area Distributed Interconnect Network Lite (EADIN Lite), with the Commercial Modular Aero-Propulsion System Simulation 40k (C-MAPSS40k) software. The objective of this study is to assess the impact of the digital control network on the control system. Performance is evaluated relative to a truth model for large transient maneuvers and a typical flight profile for commercial aircraft. Results show that a decrease in network bandwidth from 250 Kbps (sampling all sensors every time step) to 40 Kbps resulted in very small differences in control system performance.

  11. Discretized torsional dynamics and the folding of an RNA chain.

    PubMed

    Fernández, A; Salthú, R; Cendra, H

    1999-08-01

    The aim of this work is to implement a discrete coarse codification of local torsional states of the RNA chain backbone in order to explore the long-time limit dynamics and ultimately obtain a coarse solution to the RNA folding problem. A discrete representation of the soft-mode dynamics is turned into an algorithm for a rough structure prediction. The algorithm itself is inherently parallel, as it evaluates concurrent folding possibilities by pattern recognition, but it may be implemented in a personal computer as a chain of perturbation-translation-renormalization cycles performed on a binary matrix of local topological constraints. This requires suitable representational tools and a periodic quenching of the dynamics for system renormalization. A binary coding of local topological constraints associated with each structural motif is introduced, with each local topological constraint corresponding to a local torsional state. This treatment enables us to adopt a computation time step far larger than hydrodynamic drag time scales. Accordingly, the solvent is no longer treated as a hydrodynamic drag medium. Instead we incorporate its capacity for forming local conformation-dependent dielectric domains. Each translation of the matrix of local topological constraints (LTM's) depends on the conformation-dependent local dielectric created by a confined solvent. Folding pathways are resolved as transitions between patterns of locally encoded structural signals which change within the 1 ns-100 ms time scale range. These coarse folding pathways are generated by a search at regular intervals for structural patterns in the LTM. Each pattern is recorded as a base-pairing pattern (BPP) matrix, a consensus-evaluation operation subject to a renormalization feedback loop. Since several mutually conflicting consensus evaluations might occur at a given time, the need arises for a probabilistic approach appropriate for an ensemble of RNA molecules. Thus, a statistical dynamics of consensus formation is determined by the time evolution of the base pairing probability matrix. These dynamics are generated for a functional RNA molecule, a representative of the so-called group I ribozymes, in order to test the model. The resulting ensemble of conformations is sharply peaked and the most probable structure features the predominance of all phylogenetically conserved intrachain helices tantamount to ribozyme function. Furthermore, the magnesium-aided cooperativity that leads to the shaping of the catalytic core is elucidated. Once the predictive folding algorithm has been implemented, the validity of the so-called "adiabatic approximation" is tested. This approximation requires that conformational microstates be lumped up into BPP's which are treated as quasiequilibrium states, while folding pathways are coarsely represented as sequences of BPP transitions. To test the validity of this adiabatic ansatz, a computation of the coarse Shannon information entropy sigma associated to the specific partition of conformation space into BPP's is performed taking into account the LTM evolution and contrasted with the adiabatic computation. The results reveal a subordination of torsional microstate dynamics to BPP transitions within time scales relevant to folding. This adiabatic entrainment in the long-time limit is thus identified as responsible for the expediency of the folding process.

  12. Statistical aspects of point count sampling

    USGS Publications Warehouse

    Barker, R.J.; Sauer, J.R.; Ralph, C.J.; Sauer, J.R.; Droege, S.

    1995-01-01

    The dominant feature of point counts is that they do not census birds, but instead provide incomplete counts of individuals present within a survey plot. Considering a simple model for point count sampling, we demonstrate that use of these incomplete counts can bias estimators and testing procedures, leading to inappropriate conclusions. A large portion of the variability in point counts is caused by the incomplete counting, and this within-count variation can be confounded with ecologically meaningful variation. We recommend caution in the analysis of estimates obtained from point counts. Using our model, we also consider optimal allocation of sampling effort. The critical step in the optimization process is in determining the goals of the study and the methods that will be used to meet these goals. By explicitly defining the constraints on sampling and by estimating the relationship between precision and bias of estimators and time spent counting, we can predict the optimal time at a point for each of several monitoring goals. In general, time spent at a point will differ depending on the goals of the study.

  13. A Selfish Constraint Satisfaction Genetic Algorithms for Planning a Long-Distance Transportation Network

    NASA Astrophysics Data System (ADS)

    Onoyama, Takashi; Maekawa, Takuya; Kubota, Sen; Tsuruta, Setuso; Komoda, Norihisa

    To build a cooperative logistics network covering multiple enterprises, a planning method that can build a long-distance transportation network is required. Many strict constraints are imposed on this type of problem. To solve these strictly constrained problems, a selfish constraint satisfaction genetic algorithm (GA) is proposed. In this GA, each gene of an individual satisfies only its own constraint selfishly, disregarding the constraints of the other genes in the same individual. Moreover, a constraint pre-checking method is also applied to improve the GA convergence speed. The experimental results show that the proposed method can obtain an accurate solution in a practical response time.

  14. ISS Solar Array Management

    NASA Technical Reports Server (NTRS)

    Williams, James P.; Martin, Keith D.; Thomas, Justin R.; Caro, Samuel

    2010-01-01

    The International Space Station (ISS) Solar Array Management (SAM) software toolset provides the capabilities necessary to operate a spacecraft with complex solar array constraints. It monitors spacecraft telemetry and provides interpretations of solar array constraint data in an intuitive manner. The toolset provides extensive situational awareness to ensure mission success by analyzing power generation needs, array motion constraints, and structural loading situations. The software suite consists of several components including samCS (constraint set selector), samShadyTimers (array shadowing timers), samWin (visualization GUI), samLock (array motion constraint computation), and samJet (attitude control system configuration selector). It provides high availability and uptime for extended and continuous mission support. It is able to support two-degrees-of-freedom (DOF) array positioning and supports up to ten simultaneous constraints with intuitive 1D and 2D decision support visualizations of constraint data. Display synchronization is enabled across a networked control center and multiple methods for constraint data interpolation are supported. Use of this software toolset increases flight safety, reduces mission support effort, optimizes solar array operation for achieving mission goals, and has run for weeks at a time without issues. The SAM toolset is currently used in ISS real-time mission operations.

  15. Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints

    NASA Astrophysics Data System (ADS)

    Prince, E. R.; Carr, R. W.; Cobb, R. G.

    This paper details improvements made to the authors' most recent work to find fuel-optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial-guess methodologies are developed for the low-fidelity-model nonlinear programming (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions may then be fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel-optimal guidance for NMC injections while adhering to multiple lighting constraints.
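
    For reference, CW targeting is built on the Clohessy-Wiltshire equations of linearized relative motion about a circular reference orbit, with x radial, y along-track, z cross-track, and n the mean motion of the reference orbit:

    ```latex
    \ddot{x} - 3n^{2}x - 2n\dot{y} = 0, \qquad
    \ddot{y} + 2n\dot{x} = 0, \qquad
    \ddot{z} + n^{2}z = 0 .
    ```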

  16. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend toward enabling control strategies with inference-based rule-checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.

  17. Gold-Catalyzed Solid-Phase Synthesis of 3,4-Dihydropyrazin-2(1H)-ones: Relevant Pharmacophores and Peptide Backbone Constraints.

    PubMed

    Přibylka, Adam; Krchňák, Viktor

    2017-11-13

    Here, we report the efficient solid-phase synthesis of N-propargyl peptides using Fmoc-amino acids and propargyl alcohol as key building blocks. Gold-catalyzed nucleophilic addition to the triple bond induced C-N bond formation, which triggered intramolecular cyclization, yielding 1,3,4-trisubstituted-5-methyl-3,4-dihydropyrazin-2(1H)-ones. Conformations of acyclic and constrained peptides were compared using a two-step conformer distribution analysis at the molecular mechanics level and density functional theory. The results indicated that the incorporation of heterocyclic molecular scaffold into a short peptide sequence adopted extended conformation of peptide chain. The amide bond adjacent to the constraint did not show significant preference for either cis or trans isomerism. Prepared model compounds demonstrate a proof of concept for gold-catalyzed polymer-supported synthesis of variously substituted 3,4-dihydropyrazin-2(1H)-ones for applications in drug discovery and peptide backbone constraints.

  18. Non-iterative distance constraints enforcement for cloth drapes simulation

    NASA Astrophysics Data System (ADS)

    Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno

    2016-03-01

    Cloth simulation, which represents the behavior of cloth objects such as flags, tablecloths, and garments, has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, they suffer from overstretching. We introduce a new cloth simulation algorithm that replaces iterative distance-constraint enforcement steps with non-iterative ones to prevent overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position-correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model that is initially in a horizontal position with one point fixed and is allowed to drape under its own weight. Our simulation achieves a plausible cloth drape, as in reality. This paper aims to demonstrate the reliability of our approach in overcoming overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
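
    A minimal sketch of such a non-iterative correction pass, under our own assumptions about data layout (springs ordered outward from the fixed points, so each anchor end is already corrected or fixed when its spring is processed):

    ```python
    import numpy as np

    def enforce_rest_lengths(pos, springs, rest):
        """pos: (N, 3) positions after integration; springs: (anchor, free) index
        pairs ordered outward from fixed points; rest: rest length per spring."""
        for (a, b), r in zip(springs, rest):
            d = pos[b] - pos[a]
            dist = np.linalg.norm(d)
            if dist > r:                  # single pass: pull back only the free end
                pos[b] = pos[a] + d * (r / dist)
        return pos

    pos = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.5]])   # spring stretched to 1.5
    print(enforce_rest_lengths(pos, springs=[(0, 1)], rest=[1.0]))
    ```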

  19. Solving an inverse eigenvalue problem with triple constraints on eigenvalues, singular values, and diagonal elements

    NASA Astrophysics Data System (ADS)

    Wu, Sheng-Jhih; Chu, Moody T.

    2017-08-01

    An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.

  20. A Method for Optimal Load Dispatch of a Multi-zone Power System with Zonal Exchange Constraints

    NASA Astrophysics Data System (ADS)

    Hazarika, Durlav; Das, Ranjay

    2018-04-01

    This paper presents a method for economic generation scheduling of a multi-zone power system with inter-zonal operational constraints. For this purpose, generator rescheduling for a multi-area power system with inter-zonal operational constraints is represented as a two-step optimal generation scheduling problem. First, optimal generation scheduling is carried out for each zone having surplus or deficient generation, with proper spinning reserve, using the coordination equation. The power exchange required for the deficit zones and for zones having no generation is estimated based on the load demand and generation of each zone. Incremental transmission loss formulas are formulated for the transmission lines participating in the power transfer among the zones. Using these incremental transmission loss expressions in the coordination equation, the optimal generation scheduling for the zonal exchange is determined. Simulation is carried out on the IEEE 118-bus test system to examine the applicability and validity of the method.
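
    The first step's coordination equation (equal incremental cost, losses neglected) can be sketched for quadratic cost units. The unit data and the bisection on the system incremental cost lambda are illustrative, not the paper's zonal formulation.

    ```python
    # With quadratic costs C_i(P) = a_i + b_i P + c_i P^2, the coordination
    # equation dC_i/dP_i = b_i + 2 c_i P_i = lambda gives P_i = (lambda - b_i) / (2 c_i),
    # clipped to unit limits; lambda is found so total generation meets demand.
    def dispatch(units, demand, lo=0.0, hi=1000.0, tol=1e-6):
        """units: list of (b, c, p_min, p_max); returns per-unit outputs."""
        def outputs(lam):
            return [min(max((lam - b) / (2 * c), pmin), pmax)
                    for b, c, pmin, pmax in units]
        while hi - lo > tol:              # bisection on the incremental cost
            lam = (lo + hi) / 2
            if sum(outputs(lam)) < demand:
                lo = lam
            else:
                hi = lam
        return outputs((lo + hi) / 2)

    print(dispatch([(2.0, 0.01, 0, 300), (3.0, 0.02, 0, 200)], demand=350.0))
    ```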

  1. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.

  2. Giant Panda Maternal Care: A Test of the Experience Constraint Hypothesis.

    PubMed

    Snyder, Rebecca J; Perdue, Bonnie M; Zhang, Zhihe; Maple, Terry L; Charlton, Benjamin D

    2016-06-07

    The body condition constraint and the experience condition constraint hypotheses have both been proposed to account for differences in reproductive success between multiparous (experienced) and primiparous (first-time) mothers. However, because primiparous mothers are typically characterized by both inferior body condition and lack of experience when compared to multiparous mothers, interpreting experience related differences in maternal care as support for either the body condition constraint hypothesis or the experience constraint hypothesis is extremely difficult. Here, we examined maternal behaviour in captive giant pandas, allowing us to simultaneously control for body condition and provide a rigorous test of the experience constraint hypothesis in this endangered animal. We found that multiparous mothers spent more time engaged in key maternal behaviours (nursing, grooming, and holding cubs) and had significantly less vocal cubs than primiparous mothers. This study provides the first evidence supporting the experience constraint hypothesis in the order Carnivora, and may have utility for captive breeding programs in which it is important to monitor the welfare of this species' highly altricial cubs, whose survival is almost entirely dependent on receiving adequate maternal care during the first few weeks of life.

  3. MO-D-213-01: Workflow Monitoring for a High Volume Radiation Oncology Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laub, S; Dunn, M; Galbreath, G

    2015-06-15

    Purpose: Implement a center-wide communication system that increases interdepartmental transparency and accountability while decreasing redundant work and treatment delays by actively monitoring treatment planning workflow. Methods: Intake Management System (IMS), a program developed by ProCure Treatment Centers Inc., is a multi-function database that stores treatment planning process information. It was devised to work with the oncology information system (Mosaiq) to streamline interdepartmental workflow. Each step in the treatment planning process is visually represented and timelines for completion of individual tasks are established within the software. The currently active step of each patient's planning process is highlighted either red or green according to whether the initially allocated amount of time has passed for the given process. This information is displayed as a Treatment Planning Process Monitor (TPPM), which is shown on screens in the relevant departments throughout the center. This display also includes the individuals who are responsible for each task. IMS is driven by Mosaiq's quality checklist (QCL) functionality. Each step in the workflow is initiated by a Mosaiq user sending the responsible party a QCL assignment. IMS is connected to Mosaiq, and the sending or completing of a QCL updates the associated field in the TPPM to the appropriate status. Results: Approximately one patient a week is identified during the workflow process as needing to have his/her treatment start date modified or resources re-allocated to address the most urgent cases. Being able to identify a realistic timeline for planning each patient and having multiple departments communicate their limitations and time constraints allows for quality plans to be developed and implemented without overburdening any one department. Conclusion: Monitoring the progression of the treatment planning process has increased transparency between departments, which enables efficient communication. Having built-in timelines allows easy prioritization of tasks and resources and facilitates effective time management.

  4. Middleware Case Study: MeDICi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wynne, Adam S.

    2011-05-05

    In many application domains in science and engineering, data produced by sensors, instruments and networks is naturally processed by software applications structured as a pipeline. Pipelines comprise a sequence of software components that progressively process discrete units of data to produce a desired outcome. For example, in a Web crawler that is extracting semantics from text on Web sites, the first stage in the pipeline might be to remove all HTML tags to leave only the raw text of the document. The second step may parse the raw text to break it down into its constituent grammatical parts, such as nouns, verbs and so on. Subsequent steps may look for names of people or places, interesting events or times so documents can be sequenced on a time line. Each of these steps can be written as a specialized program that works in isolation with other steps in the pipeline. In many applications, simple linear software pipelines are sufficient. However, more complex applications require topologies that contain forks and joins, creating pipelines comprising branches where parallel execution is desirable. It is also increasingly common for pipelines to process very large files or high volume data streams which impose end-to-end performance constraints. Additionally, processes in a pipeline may have specific execution requirements and hence need to be distributed as services across a heterogeneous computing and data management infrastructure. From a software engineering perspective, these more complex pipelines become problematic to implement. While simple linear pipelines can be built using minimal infrastructure such as scripting languages, complex topologies and large, high volume data processing requires suitable abstractions, run-time infrastructures and development tools to construct pipelines with the desired qualities-of-service and flexibility to evolve to handle new requirements. The above summarizes the reasons we created the MeDICi Integration Framework (MIF) that is designed for creating high-performance, scalable and modifiable software pipelines. MIF exploits a low friction, robust, open source middleware platform and extends it with component and service-based programmatic interfaces that make implementing complex pipelines simple. The MIF run-time automatically handles queues between pipeline elements in order to handle request bursts, and automatically executes multiple instances of pipeline elements to increase pipeline throughput. Distributed pipeline elements are supported using a range of configurable communications protocols, and the MIF interfaces provide efficient mechanisms for moving data directly between two distributed pipeline elements.
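
    The fork/join topology described above can be illustrated with a generic queued pipeline. This is a minimal Python sketch of the pattern (queues absorbing bursts, parallel branches, a join stage), not the MIF API itself; all stage names are made up for illustration.

      import queue, threading

      # Generic fork/join pipeline: a source queue feeds two parallel branches
      # through queues (which absorb request bursts); a join queue merges results.
      src_q, branch_a_q, branch_b_q, join_q = (queue.Queue() for _ in range(4))

      def strip_tags(text):   # stage 1: stand-in for HTML tag removal
          return text.replace("<p>", "").replace("</p>", "")

      def fork():
          while (item := src_q.get()) is not None:
              text = strip_tags(item)
              branch_a_q.put(text)    # branch A: e.g. named-entity extraction
              branch_b_q.put(text)    # branch B: e.g. timeline extraction
          branch_a_q.put(None); branch_b_q.put(None)

      def worker(in_q, label):
          while (item := in_q.get()) is not None:
              join_q.put((label, item.upper()))   # stand-in for real processing
          join_q.put((label, None))

      threads = [threading.Thread(target=fork),
                 threading.Thread(target=worker, args=(branch_a_q, "entities")),
                 threading.Thread(target=worker, args=(branch_b_q, "timeline"))]
      for t in threads: t.start()
      for doc in ["<p>alice went to paris</p>", None]: src_q.put(doc)
      for t in threads: t.join()

      done = 0
      while done < 2:   # join stage: drain until both branches signal completion
          label, result = join_q.get()
          if result is None: done += 1
          else: print(label, "->", result)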

  5. On the interpretations of Langevin stochastic equation in different coordinate systems

    NASA Astrophysics Data System (ADS)

    Martínez, E.; López-Díaz, L.; Torres, L.; Alejos, O.

    2004-01-01

    The stochastic Langevin Landau-Lifshitz equation is usually utilized in the micromagnetics formalism to account for thermal effects. Commonly, two different interpretations of the stochastic integrals can be made: Ito and Stratonovich. In this work, the Langevin-Landau-Lifshitz (LLL) equation is written in both Cartesian and spherical coordinates. If spherical coordinates are employed, the noise is additive, and therefore the Ito and Stratonovich solutions are equal. This is not the case when the LLL equation is written in Cartesian coordinates. In this case, the Langevin equation must be interpreted in the Stratonovich sense in order to reproduce correct statistical results. Nevertheless, the statistics of the numerical results obtained from the Euler-Ito and Euler-Stratonovich schemes are equivalent due to the additional numerical constraint imposed in the Cartesian system after each time step, which itself ensures that the magnitude of the magnetization is preserved.
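
    A minimal sketch of the numerical constraint mentioned at the end: an explicit Euler integration of a Langevin-type Landau-Lifshitz equation in Cartesian coordinates, with the magnetization renormalized after each time step so that |m| = 1 is preserved. The damping, field, noise strength, and time step below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      gamma, alpha = 1.0, 0.1              # gyromagnetic ratio, damping (illustrative)
      dt, steps = 1e-3, 10000
      h_eff = np.array([0.0, 0.0, 1.0])    # static effective field, illustrative units
      noise = 0.05                         # thermal field strength, illustrative

      m = np.array([1.0, 0.0, 0.0])
      for _ in range(steps):
          # Thermal field as white noise added to the effective field. (A plain
          # Euler step is shown; a Stratonovich-consistent scheme such as Heun
          # is what one would use for production statistics.)
          h = h_eff + noise * rng.standard_normal(3) / np.sqrt(dt)
          mxh = np.cross(m, h)
          dm = -gamma * (mxh + alpha * np.cross(m, mxh)) * dt
          m = m + dm
          m /= np.linalg.norm(m)   # the constraint applied after each time step:
                                   # renormalize so |m| = 1 is preserved
      print(m)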

  6. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are certain not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen as a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.

  7. Perceived Barriers to Healthy Eating and Physical Activity Among Participants in a Workplace Obesity Intervention.

    PubMed

    Stankevitz, Kayla; Dement, John; Schoenfisch, Ashley; Joyner, Julie; Clancy, Shayna M; Stroo, Marissa; Østbye, Truls

    2017-08-01

    To characterize barriers to healthy eating (BHE) and physical activity (BPA) among participants in a workplace weight management intervention. Steps to Health participants completed a questionnaire to ascertain the barriers to physical activity and healthy eating they faced. Exploratory factor analysis was used to determine the factor structure for BPA and BHE. The relationships of these factors with accelerometer data and dietary behaviors were assessed using linear regression. Barriers to physical activity included time constraints and lack of interest and motivation; barriers to healthy eating included lack of self-control and convenience and lack of access to healthy foods. Higher BHE correlated with higher sugary beverage intake but not with fruit, vegetable, or fat intake. To improve their effectiveness, workplace weight management programs should consider addressing and reducing barriers to healthy eating and physical activity.

  8. Seismic constraints on caldera dynamics from the 2015 Axial Seamount eruption.

    PubMed

    Wilcock, William S D; Tolstoy, Maya; Waldhauser, Felix; Garcia, Charles; Tan, Yen Joe; Bohnenstiehl, DelWayne R; Caplan-Auerbach, Jacqueline; Dziak, Robert P; Arnulf, Adrien F; Mann, M Everett

    2016-12-16

    Seismic observations in volcanically active calderas are challenging. A new cabled observatory atop Axial Seamount on the Juan de Fuca ridge allows unprecedented real-time monitoring of a submarine caldera. Beginning on 24 April 2015, the seismic network captured an eruption that culminated in explosive acoustic signals where lava erupted on the seafloor. Extensive seismic activity preceding the eruption shows that inflation is accommodated by the reactivation of an outward-dipping caldera ring fault, with strong tidal triggering indicating a critically stressed system. The ring fault accommodated deflation during the eruption and provided a pathway for a dike that propagated south and north beneath the caldera's east wall. Once north of the caldera, the eruption stepped westward, and a dike propagated along the extensional north rift. Copyright © 2016, American Association for the Advancement of Science.

  9. Reproductive constraints, direct fitness and indirect fitness benefits explain helping behaviour in the primitively eusocial wasp, Polistes canadensis.

    PubMed

    Sumner, Seirian; Kelstrup, Hans; Fanelli, Daniele

    2010-06-07

    A key step in the evolution of sociality is the abandonment of independent breeding in favour of helping. In cooperatively breeding vertebrates and primitively eusocial insects, helpers are capable of leaving the group and reproducing independently, and yet many do not. A fundamental question therefore is why do helpers help? Helping behaviour may be explained by constraints on independent reproduction and/or benefits to individuals from helping. Here, we examine simultaneously the reproductive constraints and fitness benefits underlying helping behaviour in a primitively eusocial paper wasp. We gave 31 helpers the opportunity to become egg-layers on their natal nests by removing nestmates. This allowed us to determine whether helpers are reproductively constrained in any way. We found that age strongly influenced whether an ex-helper could become an egg-layer, such that young ex-helpers could become egg-layers while old ex-helpers were less able. These differential reproductive constraints enabled us to make predictions about the behaviours of ex-helpers, depending on the relative importance of direct and indirect fitness benefits. We found little evidence that indirect fitness benefits explain helping behaviour, as 71 per cent of ex-helpers left their nests before the end of the experiment. In the absence of reproductive constraints, however, young helpers value direct fitness opportunities over indirect fitness. We conclude that a combination of reproductive constraints and potential for future direct reproduction explain helping behaviour in this species. Testing several competing explanations for helping behaviour simultaneously promises to advance our understanding of social behaviour in animal groups.

  10. Simultaneous multislice refocusing via time optimal control.

    PubMed

    Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf

    2018-02-09

    Joint design of minimum duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level, and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or the echo spacing in turbo spin echo sequences. © 2018 International Society for Magnetic Resonance in Medicine.

  11. Local constraints on cosmic string loops from photometry and pulsar timing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pshirkov, M. S.; Tuntsov, A. V.

    2010-04-15

    We constrain the cosmological density of cosmic string loops using two observational signatures--gravitational microlensing and the Kaiser-Stebbins effect. Photometry from the RXTE and CoRoT space missions and pulsar timing from the Parkes Pulsar Timing Array, Arecibo and Green Bank radio telescopes allow us to probe cosmic strings in a wide range of tensions Gμ/c^2 = 10^-16 to 10^-10. We find that pulsar timing data provide the most stringent constraints on the abundance of light strings, at the level Ω_s ≈ 10^-3. Future observational facilities such as the Square Kilometer Array will allow one to improve these constraints by orders of magnitude.

  12. Ammonia oxidizer populations vary with nitrogen cycling across a tropical montane mean annual temperature gradient

    Treesearch

    S. Pierre; I. Hewson; J. P. Sparks; C. M. Litton; C. Giardina; P. M. Groffman; T. J. Fahey

    2017-01-01

    Functional gene approaches have been used to better understand the roles of microbes in driving forest soil nitrogen (N) cycling rates and bioavailability. Ammonia oxidation is a rate limiting step in nitrification, and is a key area for understanding environmental constraints on N availability in forests. We studied how increasing temperature affects the role of...

  13. Randomized Pilot Trial of a Telephone Symptom Management Intervention for Symptomatic Lung Cancer Patients and Their Family Caregivers.

    PubMed

    Mosher, Catherine E; Winger, Joseph G; Hanna, Nasser; Jalal, Shadia I; Einhorn, Lawrence H; Birdas, Thomas J; Ceppa, DuyKhanh P; Kesler, Kenneth A; Schmitt, Jordan; Kashy, Deborah A; Champion, Victoria L

    2016-10-01

    Lung cancer is one of the most common cancers affecting both men and women and is associated with high symptom burden and psychological distress. Lung cancer patients' family caregivers also show high rates of distress. However, few interventions have been tested to alleviate the significant problems of this population. This study examined the preliminary efficacy of telephone-based symptom management (TSM) for symptomatic lung cancer patients and their family caregivers. Symptomatic lung cancer patients and caregivers (n = 106 dyads) were randomly assigned to four sessions of TSM consisting of cognitive-behavioral and emotion-focused therapy or to an education/support condition. Patients completed measures of physical and psychological symptoms, self-efficacy for managing symptoms, and perceived social constraints from the caregiver; caregivers completed measures of psychological symptoms, self-efficacy for helping the patient manage symptoms and managing their own emotions, perceived social constraints from the patient, and caregiving burden. No significant group differences were found for any patient outcome, or for caregiver self-efficacy for helping the patient manage symptoms or caregiving burden, at two and six weeks post-intervention. Small effects in favor of TSM were found for caregiver self-efficacy for managing their own emotions and perceived social constraints from the patient. Study outcomes did not significantly change over time in either group. Findings suggest that our brief telephone-based psychosocial intervention is not efficacious for symptomatic lung cancer patients and their family caregivers. Next steps include examining specific intervention components in relation to study outcomes, mechanisms of change, and differing intervention doses and modalities. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  14. Formal Specification and Automatic Analysis of Business Processes under Authorization Constraints: An Action-Based Approach

    NASA Astrophysics Data System (ADS)

    Armando, Alessandro; Giunchiglia, Enrico; Ponta, Serena Elisa

    We present an approach to the formal specification and automatic analysis of business processes under authorization constraints based on the action language C. The use of C allows for a natural and concise modeling of the business process and the associated security policy and for the automatic analysis of the resulting specification by using the Causal Calculator (CCALC). Our approach improves upon previous work by greatly simplifying the specification step while retaining the ability to perform a fully automatic analysis. To illustrate the effectiveness of the approach we describe its application to a version of a business process taken from the banking domain and use CCALC to determine resource allocation plans complying with the security policy.

  15. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm is proposed in this paper. The cutting path optimization problem of sheet metal parts is taken as the research object, and the essence and optimization goal of the problem are presented. The traditional serial cutting constraint rule is improved, and a cutting constraint rule that permits cross cutting is proposed. The contour lines of the parts are discretized and a mathematical model of cutting path optimization is established, converting the problem into one of selecting among the contour lines of the parts. The ant colony algorithm is used to solve the problem, and the principle and steps of the algorithm are analyzed.
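
    The skeleton of an ant colony search over a cutting sequence might look as follows. This is the standard ACO template applied to ordering piercing points (a TSP-like selection over contours), with illustrative coordinates and parameters, not the paper's exact constraint rules.

      import numpy as np

      rng = np.random.default_rng(1)
      pts = rng.uniform(0, 100, size=(8, 2))   # piercing points of 8 contours, illustrative
      dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(8)

      n, alpha, beta, rho, n_ants, n_iter = 8, 1.0, 2.0, 0.1, 20, 100
      tau = np.ones((n, n))                    # pheromone on each contour-to-contour move

      best_len, best_tour = np.inf, None
      for _ in range(n_iter):
          for _ in range(n_ants):
              tour = [0]
              while len(tour) < n:
                  i = tour[-1]
                  unvisited = [j for j in range(n) if j not in tour]
                  # Transition weights: pheromone**alpha * (1/distance)**beta.
                  w = np.array([tau[i, j]**alpha * (1.0 / dist[i, j])**beta
                                for j in unvisited])
                  tour.append(rng.choice(unvisited, p=w / w.sum()))
              length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
              if length < best_len:
                  best_len, best_tour = length, tour
          tau *= 1.0 - rho                     # evaporation
          for a, b in zip(best_tour, best_tour[1:]):
              tau[a, b] += 1.0 / best_len      # reinforce the best tour found

      print(best_tour, round(best_len, 1))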

  16. Inferring segmented dense motion layers using 5D tensor voting.

    PubMed

    Min, Changki; Medioni, Gérard

    2008-09-01

    We present a novel local spatiotemporal approach to produce motion segmentation and dense temporal trajectories from an image sequence. A common representation of image sequences is a 3D spatiotemporal volume, (x, y, t), and its corresponding mathematical formalism is the fiber bundle. However, directly enforcing the spatiotemporal smoothness constraint is difficult in the fiber bundle representation. Thus, we convert the representation into a new 5D space (x, y, t, v_x, v_y) with an additional velocity domain, where each moving object produces a separate 3D smooth layer. The smoothness constraint is now enforced by extracting 3D layers using the tensor voting framework in a single step that solves both correspondence and segmentation simultaneously. Motion segmentation is achieved by identifying those layers, and the dense temporal trajectories are obtained by converting the layers back into the fiber bundle representation. We proceed to address three applications (tracking, mosaic, and 3D reconstruction) that are hard to solve from the video stream directly because of the segmentation and dense matching steps, but become straightforward with our framework. The approach does not make restrictive assumptions about the observed scene or camera motion and is therefore generally applicable. We present results on a number of data sets.

  17. Defining a genetic ideotype for crop improvement.

    PubMed

    Trethowan, Richard M

    2014-01-01

    While plant breeders traditionally base selection on phenotype, the development of genetic ideotypes can help focus the selection process. This chapter provides a road map for the establishment of a refined genetic ideotype. The first step is an accurate definition of the target environment including the underlying constraints, their probability of occurrence, and impact on phenotype. Once the environmental constraints are established, the wealth of information on plant physiological responses to stresses, known gene information, and knowledge of genotype × environment and gene × environment interaction help refine the target ideotype and form a basis for cross prediction. Once a genetic ideotype is defined, the challenge remains to build the ideotype in a plant breeding program. A number of strategies including marker-assisted recurrent selection and genomic selection can be used that also provide valuable information for the optimization of the genetic ideotype. However, the informatics required to underpin the realization of the genetic ideotype then becomes crucial. The reduced cost of genotyping and the need to combine pedigree, phenotypic, and genetic data in a structured way for analysis and interpretation often become the rate-limiting steps, thus reducing genetic gain. Systems for managing these data and an example of ideotype construction for a defined environment type are discussed.

  18. Precision reconstruction of manufactured free-form components

    NASA Astrophysics Data System (ADS)

    Ristic, Mihailo; Brujic, Djordje; Ainsworth, Iain

    2000-03-01

    Manufacturing needs in many industries, especially the aerospace and the automotive, involve CAD remodeling of manufactured free-form parts using NURBS. This is typically performed as part of 'first article inspection' or 'closing the design loop.' The reconstructed model must satisfy requirements such as accuracy, compatibility with the original CAD model and adherence to various constraints. The paper outlines a methodology for realizing this task. Efficiency and quality of the results are achieved by utilizing the nominal CAD model. It is argued that measurement and remodeling steps are equally important. We explain how the measurement was optimized in terms of accuracy, point distribution and measuring speed using a CMM. Remodeling steps include registration, data segmentation, parameterization and surface fitting. Enforcement of constraints such as continuity was performed as part of the surface fitting process. It was found necessary that the relevant algorithms are able to perform in the presence of measurement noise, while making no special assumptions about regularity of data distribution. In order to deal with real life situations, a number of supporting functions for geometric modeling were required and these are described. The presented methodology was applied using real aeroengine parts and the experimental results are presented.

  19. SENSITIVITY OF HELIOSEISMIC TRAVEL TIMES TO THE IMPOSITION OF A LORENTZ FORCE LIMITER IN COMPUTATIONAL HELIOSEISMOLOGY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moradi, Hamed; Cally, Paul S., E-mail: hamed.moradi@monash.edu

    The rapid exponential increase in the Alfvén wave speed with height above the solar surface presents a serious challenge to physical modeling of the effects of magnetic fields on solar oscillations, as it introduces a significant Courant-Friedrichs-Lewy time-step constraint for explicit numerical codes. A common approach adopted in computational helioseismology, where long simulations in excess of 10 hr (hundreds of wave periods) are often required, is to cap the Alfvén wave speed by artificially modifying the momentum equation when the ratio between the Lorentz and hydrodynamic forces becomes too large. However, recent studies have demonstrated that the Alfvén wave speed plays a critical role in the MHD mode conversion process, particularly in determining the reflection height of the upwardly propagating helioseismic fast wave. Using numerical simulations of helioseismic wave propagation in constant inclined (relative to the vertical) magnetic fields, we demonstrate that the imposition of such artificial limiters significantly affects time-distance travel times unless the Alfvén wave-speed cap is chosen comfortably in excess of the horizontal phase speeds under investigation.
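
    A sketch of the kind of limiter discussed, assuming a common smooth form (scaling the Lorentz force by c_cap^2 / (c_cap^2 + v_A^2)) rather than the exact expression used in the cited codes, together with the CFL-style time-step bound the cap is meant to relieve:

      import numpy as np

      def limited_lorentz(force, rho, B, c_cap):
          # Scale the Lorentz force where the Alfven speed v_A = |B|/sqrt(mu0*rho)
          # exceeds a cap c_cap. The smooth factor below is an assumed generic
          # form of such a limiter, not the expression used in the cited codes.
          mu0 = 4.0e-7 * np.pi
          v_a2 = (B ** 2).sum(axis=-1) / (mu0 * rho)
          factor = c_cap ** 2 / (c_cap ** 2 + v_a2)   # -> 1 where v_A << c_cap
          return force * factor[..., None]

      def cfl_dt(dx, v_max, courant=0.4):
          # The Courant-Friedrichs-Lewy bound the cap relieves:
          # dt <= C * dx / (fastest signal speed).
          return courant * dx / v_max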

  20. In search of methods enhancing fluency in reading: An examination of the relations between time constraints and processes of reading in readers of German.

    PubMed

    Bar-Kochva, Irit; Hasselhorn, Marcus

    2015-12-01

    The attainment of fluency in reading is a major difficulty for reading-disabled people. Manipulations applied to the presentation of texts, leading to "on-line" effects on reading (i.e., while texts are manipulated), are one direction of examination in the search for methods affecting reading. The imposition of time constraints, by deleting one letter after the other from texts presented on a computer screen, has been established as such a method. In an attempt to further understand its nature, we tested the relations between time constraints and processes of reading: phonological decoding of small orthographic units and the addressing of orthographic representations from the mental lexicon. We also examined whether the type of orthographic unit deleted (lexical, sublexical, or nonlexical unit) has any additional effect. Participants were German fifth graders with (n = 29) or without (n = 34) reading disability. Time constraints enhanced fluency in reading in both groups, and to a similar extent, across conditions. Comprehension was unimpaired. These results place the very principle of time constraints, regardless of the orthographic unit manipulated, as a critical factor affecting fluency in reading. However, phonological decoding explained a significant amount of variance in fluency in reading across all conditions in reading-disabled children, whereas the addressing of orthographic representations was the consistent predictor of fluency in reading in regular readers. These results indicate a qualitative difference in the processes explaining the variance in fluency in reading in regular and reading-disabled readers and suggest that time constraints might not have an effect on the relations between these processes and reading performance. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. A Radiation Transfer Solver for Athena Using Short Characteristics

    NASA Astrophysics Data System (ADS)

    Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei

    2012-03-01

    We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.
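
    The radiation-diffusion time-step constraint mentioned for the operator-split integration can be illustrated with the generic explicit-diffusion stability bound dt <= C * dx^2 / (2 * d * D). The function and sample values below are assumptions for illustration, not the module's actual formula.

      def diffusion_time_step(dx, diffusivity, ndim=3, safety=0.5):
          # Generic explicit stability limit for a diffusion mode:
          # dt <= safety * dx**2 / (2 * ndim * D). The constraint derived in
          # the paper may take a different detailed form.
          return safety * dx ** 2 / (2.0 * ndim * diffusivity)

      # Example with an illustrative radiative diffusivity D ~ c/(3*kappa*rho),
      # in cgs units (cm/s, cm^2/g, g/cm^3).
      c_light, kappa, rho = 3.0e10, 0.4, 1.0e-7
      D = c_light / (3.0 * kappa * rho)
      print(diffusion_time_step(dx=1.0e8, diffusivity=D))   # seconds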

  2. A two-step initial mass function:. Consequences of clustered star formation for binary properties

    NASA Astrophysics Data System (ADS)

    Durisen, R. H.; Sterzik, M. F.; Pickett, B. K.

    2001-06-01

    If stars originate in transient bound clusters of moderate size, these clusters will decay due to dynamic interactions in which a hard binary forms and ejects most or all the other stars. When the cluster members are chosen at random from a reasonable initial mass function (IMF), the resulting binary characteristics do not match current observations. We find a significant improvement in the trends of binary properties from this scenario when an additional constraint is taken into account, namely that there is a distribution of total cluster masses set by the masses of the cloud cores from which the clusters form. Two distinct steps then determine final stellar masses - the choice of a cluster mass and the formation of the individual stars. We refer to this as a "two-step" IMF. Simple statistical arguments are used in this paper to show that a two-step IMF, combined with typical results from dynamic few-body system decay, tends to give better agreement between computed binary characteristics and observations than a one-step mass selection process.
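
    The two-step selection can be sketched as a small Monte Carlo: first draw a cluster mass from an assumed power-law core mass distribution, then draw individual stellar masses from an IMF until the cluster mass is exhausted. The slopes and mass limits below are illustrative, not the paper's fitted values.

      import numpy as np

      rng = np.random.default_rng(2)

      def sample_power_law(slope, m_lo, m_hi):
          # Inverse-transform sampling of p(m) ~ m**(-slope) on [m_lo, m_hi].
          u = rng.uniform()
          a = 1.0 - slope
          return (m_lo ** a + u * (m_hi ** a - m_lo ** a)) ** (1.0 / a)

      def two_step_cluster(slope_cluster=1.7, slope_star=2.35):
          # Step 1: draw the total cluster mass from the core mass distribution.
          m_cluster = sample_power_law(slope_cluster, 1.0, 100.0)
          # Step 2: draw individual stars from the IMF until the mass is used up.
          stars = []
          while sum(stars) < m_cluster:
              stars.append(sample_power_law(slope_star, 0.1, 10.0))
          return m_cluster, stars

      m_cl, stars = two_step_cluster()
      print(round(m_cl, 1), "solar masses in", len(stars), "stars")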

  3. Motionless phase stepping in X-ray phase contrast imaging with a compact source

    PubMed Central

    Miao, Houxun; Chen, Lei; Bennett, Eric E.; Adamo, Nick M.; Gomella, Andrew A.; DeLuca, Alexa M.; Patel, Ajay; Morgan, Nicole Y.; Wen, Han

    2013-01-01

    X-ray phase contrast imaging offers a way to visualize the internal structures of an object without the need to deposit significant radiation, and thereby alleviate the main concern in X-ray diagnostic imaging procedures today. Grating-based differential phase contrast imaging techniques are compatible with compact X-ray sources, which is a key requirement for the majority of clinical X-ray modalities. However, these methods are substantially limited by the need for mechanical phase stepping. We describe an electromagnetic phase-stepping method that eliminates mechanical motion, thus removing the constraints in speed, accuracy, and flexibility. The method is broadly applicable to both projection and tomography imaging modes. The transition from mechanical to electromagnetic scanning should greatly facilitate the translation of X-ray phase contrast techniques into mainstream applications. PMID:24218599

  4. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
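
    For contrast with the proposed projection matrix, the standard feasible-direction flow uses the orthogonal projector onto the null space of the constraint Jacobian, which requires J J^T to be invertible -- exactly the regularity assumption the paper relaxes. A minimal sketch with a toy objective and one linear equality constraint:

      import numpy as np

      # Toy problem: minimize f(x) = x1^2 + 2*x2^2 subject to h(x) = x1 + x2 - 1 = 0.
      def grad_f(x):
          return np.array([2.0 * x[0], 4.0 * x[1]])

      def jac_h(x):
          return np.array([[1.0, 1.0]])   # constraint Jacobian J

      x = np.array([0.8, 0.2])            # feasible starting point
      dt = 0.01
      for _ in range(2000):
          J = jac_h(x)
          # Standard orthogonal projector onto the tangent space of {h = 0};
          # it needs J @ J.T to be invertible, i.e. the regularity assumption
          # that the paper's new projector avoids.
          P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)
          x = x - dt * (P @ grad_f(x))    # forward-Euler step of x' = -P grad f(x)

      print(x)   # -> approx [2/3, 1/3], the constrained minimizer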

  5. A step forward in understanding step-overs: the case of the Dead Sea Fault in northern Israel

    NASA Astrophysics Data System (ADS)

    Dembo, Neta; Granot, Roi; Hamiel, Yariv

    2017-04-01

    The rotational deformation field around step-overs between segments of strike-slip faults is poorly resolved. Vertical-axis paleomagnetic rotations can be used to characterize the deformation field, and together with mechanical modeling, can provide constraints on the characteristics of the adjacent fault segments. The northern Dead Sea Fault, a major segmented sinistral transform fault that straddles the boundary between the Arabian Plate and Sinai Subplate, offers an appropriate tectonic setting for our detailed mechanical and paleomagnetic investigation. We examine the paleomagnetic vertical-axis rotations of Neogene-Pleistocene basalt outcrops surrounding a right step-over between two prominent segments of the fault: the Jordan Gorge section and the Hula East Boundary Fault. Results from 20 new paleomagnetic sites reveal significant (>20˚) counterclockwise rotations within the step-over and small clockwise rotations in the vicinity. Sites located further (>2.5 km) away from the step-over generally experience negligible to minor rotations. Finally, we construct a mechanical model guided by the observed rotational field that allows us to characterize the structural, mechanical and kinematic behavior of the Dead Sea Fault in northern Israel.

  6. Approximation-Based Adaptive Neural Tracking Control of Nonlinear MIMO Unknown Time-Varying Delay Systems With Full State Constraints.

    PubMed

    Li, Da-Peng; Li, Dong-Juan; Liu, Yan-Jun; Tong, Shaocheng; Chen, C L Philip

    2017-10-01

    This paper deals with the tracking control problem for a class of nonlinear multiple-input multiple-output unknown time-varying delay systems with full state constraints. To overcome the challenges caused by the simultaneous presence of unknown time-varying delays and full state constraints in the systems, an adaptive control method is presented for such systems for the first time. Appropriate Lyapunov-Krasovskii functionals and a separation technique are employed to eliminate the effect of the unknown time-varying delays. Barrier Lyapunov functions are employed to prevent violation of the full state constraints. The singularity problems are dealt with by introducing the signal function. Finally, it is proven that the proposed method guarantees good tracking performance of the system output, that all states remain within the constrained intervals, and that all closed-loop signals are bounded, provided that appropriate design parameters are chosen. The practicability of the proposed control technique is demonstrated by a simulation study in this paper.

  7. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When simulating the variogram over a large range, an optimal fit cannot be obtained directly, but an interactive human-computer fitting method can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model, and the linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides the best linear unbiased estimate under the constraint of unbiasedness. The sums of squared deviations between estimated and measured values were computed for the various theoretical models, and the corresponding graphs were presented. It was shown that the fit based on the two-step spherical model was the best, and that the one-step spherical model was better than the linear function model.
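
    A compact sketch of the procedure described -- a spherical variogram model and an ordinary Kriging solve in which a Lagrange multiplier enforces the unbiasedness constraint (weights summing to one). The variogram parameters and sample values are illustrative, not values from the paper.

      import numpy as np

      def spherical_variogram(h, nugget=0.1, sill=1.0, a=30.0):
          # One-step spherical model: gamma(h) rises to the sill at range a,
          # with gamma(0) = 0; all parameters are illustrative.
          h = np.asarray(h, dtype=float)
          g = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
          return np.where(h < a, g, sill) * (h > 0)

      def ordinary_kriging(xy, z, x0):
          n = len(z)
          d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
          # Kriging system: variogram matrix bordered by the unbiasedness
          # constraint sum(w) = 1, enforced through a Lagrange multiplier.
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = spherical_variogram(d)
          A[n, n] = 0.0
          b = np.append(spherical_variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
          w = np.linalg.solve(A, b)[:n]
          return w @ z        # best linear unbiased estimate at x0

      xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      z = np.array([1.2, 2.3, 0.9, 1.8])   # e.g. insect counts at sample points
      print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))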

  8. Solar electric geocentric transfer with attitude constraints: Analysis

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.; Malchow, H. L.; Delbaum, T. N.

    1975-01-01

    A time-optimal or nearly time-optimal trajectory program was developed for solar electric geocentric transfer with or without attitude constraints and with an optional initial high-thrust stage. The method of averaging reduces computation time. A nonsingular set of orbital elements is used. The constraints, which are those of one of the SERT-C designs, introduce complexities into the analysis, and the solution yields possible discontinuous changes in thrust direction. The power degradation due to Van Allen radiation is modeled analytically. A wide range of solar cell characteristics is assumed. Effects such as oblateness and shadowing are included. The analysis and the results of many example runs are included.

  9. Development of a breast self-examination program for the Internet: health information for Korean women.

    PubMed

    Kim, H S; Kim, E; Kim, J W

    2001-04-01

    Internet-based health information will enable us to interact with many people despite distance and time constraints. Informational media by computer are expected to become an important factor affecting health behavior. This study was done to develop an accessible multimedia program about breast self-examination on the Internet. The study was designed in two steps: needs assessment and program development. For the needs assessment step, a survey was carried out; the sample consisted of 82 women of Yonsei University selected by convenient random sampling. At the program development step, screen design took into account perspectives from computer engineering. A storyboard for every screen was made via screen design and then ported to the computer using the Netscape Navigator program. A breast self-examination program was developed using Netscape 4.0 on the Windows 98 platform. The multimedia program, including text, graphics, animation, and sound, was constructed in HTML using Memo Sheet in Netscape Navigator. The contents of the health information posted on the Internet included general information about breast cancer, the importance of breast self-examination, self-risk appraisal for breast cancer, the diverse methods of breast self-examination, a monthly check-list graph, and a social network for consultation. It is possible to interact with clients through the Question and Answer function on screen. This Internet-based health information program provides ample information, which can be accessed using search systems on the Internet.

  10. Time management displays for shuttle countdown

    NASA Technical Reports Server (NTRS)

    Beller, Arthur E.; Hadaller, H. Greg; Ricci, Mark J.

    1992-01-01

    The Intelligent Launch Decision Support System project is developing a Time Management System (TMS) for the NASA Test Director (NTD) to use for time management during Shuttle terminal countdown. TMS is being developed in three phases: an information phase; a tool phase; and an advisor phase. The information phase is an integrated display (TMID) of firing room clocks, of graphic timelines with Ground Launch Sequencer events, and of constraints. The tool phase is a what-if spreadsheet (TMWI) for devising plans for resuming from unplanned hold situations. It is tied to information in TMID, propagates constraints forward and backward to complete unspecified values, and checks the plan against constraints. The advisor phase is a situation advisor (TMSA), which proactively suggests tactics. A concept prototype for TMSA is under development. The TMID is currently undergoing field testing. Displays for TMID and TMWI are described. Descriptions include organization, rationale for organization, implementation choices and constraints, and use by NTD.

  11. Giant Panda Maternal Care: A Test of the Experience Constraint Hypothesis

    PubMed Central

    Snyder, Rebecca J.; Perdue, Bonnie M.; Zhang, Zhihe; Maple, Terry L.; Charlton, Benjamin D.

    2016-01-01

    The body condition constraint and the experience condition constraint hypotheses have both been proposed to account for differences in reproductive success between multiparous (experienced) and primiparous (first-time) mothers. However, because primiparous mothers are typically characterized by both inferior body condition and lack of experience when compared to multiparous mothers, interpreting experience related differences in maternal care as support for either the body condition constraint hypothesis or the experience constraint hypothesis is extremely difficult. Here, we examined maternal behaviour in captive giant pandas, allowing us to simultaneously control for body condition and provide a rigorous test of the experience constraint hypothesis in this endangered animal. We found that multiparous mothers spent more time engaged in key maternal behaviours (nursing, grooming, and holding cubs) and had significantly less vocal cubs than primiparous mothers. This study provides the first evidence supporting the experience constraint hypothesis in the order Carnivora, and may have utility for captive breeding programs in which it is important to monitor the welfare of this species’ highly altricial cubs, whose survival is almost entirely dependent on receiving adequate maternal care during the first few weeks of life. PMID:27272352

  12. Concurrent schedules: Effects of time- and response-allocation constraints

    PubMed Central

    Davison, Michael

    1991-01-01

    Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632

  13. Seeking Time within Time: Exploring the Temporal Constraints of Women Teachers' Experiences as Graduate Students and Novice Researchers

    ERIC Educational Resources Information Center

    Kukner, Jennifer Mitton

    2014-01-01

    The primary focus of this qualitative study is an inquiry into three female teachers' experiences as novice researchers. Over the course of an academic year I maintained a focus upon participants' research experiences and their use of time as they conducted research studies. Delving into the temporal constraints that informed participants'…

  14. Examining the Effect of Time Constraint on the Online Mastery Learning Approach towards Improving Postgraduate Students' Achievement

    ERIC Educational Resources Information Center

    Ee, Mong Shan; Yeoh, William; Boo, Yee Ling; Boulter, Terry

    2018-01-01

    Time control plays a critical role within the online mastery learning (OML) approach. This paper examines the two commonly implemented mastery learning strategies--personalised system of instructions and learning for mastery (LFM)--by focusing on what occurs when there is an instructional time constraint. Using a large data set from a postgraduate…

  15. A Pilot Study Examining the Effects of Time Constraints on Student Performance in Accounting Classes

    ERIC Educational Resources Information Center

    Morris, David E., Sr.; Scott, John

    2017-01-01

    The purpose of this study was to examine the effects, if any, of time constraints on the success of accounting students completing exams. This study examined how time allowed to take exams affected the grades on examinations in three different accounting classes. Two were sophomore classes and one was a senior accounting class. This limited pilot…

  16. A study of the rate-controlled constrained-equilibrium dimension reduction method and its different implementations

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Pope, Stephen B.

    2013-04-01

    The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e., the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate-equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate-equations obtained by projecting the n_s rate-equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE which uses a direct projection of the rate-equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints. We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
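
    The direct projection in the classical implementation can be sketched generically: with constraints r = B c, applying the constraint matrix to the full-space species rates gives a reduced system of n_r equations. The toy rate function and constraint matrix below are illustrative stand-ins, and recovering the full composition from r requires the constrained-equilibrium solve, which is omitted.

      import numpy as np

      # Toy full composition space with n_s = 3 species and n_r = 2 constraints
      # r = B c; both B and the rate function are illustrative.
      B = np.array([[1.0, 1.0, 2.0],
                    [0.0, 1.0, 1.0]])

      def species_rates(c):
          # Illustrative reversible kinetics: A <-> B (k = 2, 1), B <-> C (k = 0.5, 0.1).
          cA, cB, cC = c
          return np.array([-2.0 * cA + cB,
                            2.0 * cA - cB - 0.5 * cB + 0.1 * cC,
                            0.5 * cB - 0.1 * cC])

      def constraint_rates(c):
          # Direct projection: r = B c implies dr/dt = B f(c), i.e. n_r
          # rate-equations instead of n_s.
          return B @ species_rates(c)

      print(constraint_rates(np.array([1.0, 0.5, 0.2])))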

  17. Effect of exoskeletal joint constraint and passive resistance on metabolic energy expenditure: Implications for walking in paraplegia.

    PubMed

    Chang, Sarah R; Kobetic, Rudi; Triolo, Ronald J

    2017-01-01

    An important consideration in the design of a practical system to restore walking in individuals with spinal cord injury is to minimize metabolic energy demand on the user. In this study, the effects of exoskeletal constraints on metabolic energy expenditure were evaluated in able-bodied volunteers to gain insight into the demands of walking with a hybrid neuroprosthesis after paralysis. The exoskeleton had a hydraulic mechanism to reciprocally couple hip flexion and extension, unlocked hydraulic stance controlled knee mechanisms, and ankles fixed at neutral by ankle-foot orthoses. These mechanisms added passive resistance to the hip (15 Nm) and knee (6 Nm) joints while the exoskeleton constrained joint motion to the sagittal plane. The average oxygen consumption when walking with the exoskeleton was 22.5 ± 3.4 ml O2/min/kg as compared to 11.7 ± 2.0 ml O2/min/kg when walking without the exoskeleton at a comparable speed. The heart rate and physiological cost index with the exoskeleton were at least 30% and 4.3 times higher, respectively, than walking without it. The maximum average speed achieved with the exoskeleton was 1.2 ± 0.2 m/s, at a cadence of 104 ± 11 steps/min, and step length of 70 ± 7 cm. Average peak hip joint angles (25 ± 7°) were within normal range, while average peak knee joint angles (40 ± 8°) were less than normal. Both hip and knee angular velocities were reduced with the exoskeleton as compared to normal. While the walking speed achieved with the exoskeleton could be sufficient for community ambulation, metabolic energy expenditure was significantly increased and unsustainable for such activities. This suggests that passive resistance, constraining leg motion to the sagittal plane, reciprocally coupling the hip joints, and weight of exoskeleton place considerable limitations on the utility of the device and need to be minimized in future designs of practical hybrid neuroprostheses for walking after paraplegia.

  18. MapMaker and PathTracer for tracking carbon in genome-scale metabolic models

    PubMed Central

    Tervo, Christopher J.; Reed, Jennifer L.

    2016-01-01

    Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089

  19. Preparing for Science at Sea - a Chief Scientists Training Cruise on Board the RV Sikuliaq

    NASA Astrophysics Data System (ADS)

    Coakley, B.; Pockalny, R. A.

    2017-12-01

    As part of their education, marine geology and geophysics students spend time at sea, collecting, processing and interpreting data to earn their degrees. While this is a critical component of their preparation, it is an incomplete introduction to the process of doing science at sea. Most students are unfamiliar with the proposal process. Many students spend their time at sea performing assigned tasks without responsibility for, or participation in, cruise planning and execution. In December 2016, we conducted a two-week-long, NSF-funded "Chief Scientist Training Cruise" aboard the R/V Sikuliaq designed to complete their introduction to seagoing science by giving the students the opportunity to plan and execute surveys based on hypotheses they formulated. The educational process began with applicants responding to a request for proposals (RFP), which provided a framework for the scientific potential of the cruise. Training continued through two days of workshops and presentations at the Hawai'i Institute of Geophysics. The students used existing data to define hypotheses, plan surveys, and collect and analyze data to test their hypotheses. The survey design was subject to the time constraints imposed by the ship schedule and the physical constraints imposed by the ship's equipment. The training and sea time made it possible to address all steps of the scientific process, including proposal writing. Once underway, the combination of conducting the planned surveys and attending daily presentations helped familiarize the students with at-sea operations, the equipment on board the R/V Sikuliaq, and the process of writing proposals to NSF for seagoing science. Questionnaires conducted prior to the cruise and in the final days before arriving in port document the success of this training program in developing the students' abilities and confidence in identifying significant scientific problems, preparing proposals to secure funding, and planning and directing ship surveys.

  20. Step-by-step guideline for disease-specific costing studies in low- and middle-income countries: a mixed methodology

    PubMed Central

    Hendriks, Marleen E.; Kundu, Piyali; Boers, Alexander C.; Bolarinwa, Oladimeji A.; te Pas, Mark J.; Akande, Tanimola M.; Agbede, Kayode; Gomez, Gabriella B.; Redekop, William K.; Schultsz, Constance; Tan, Siok Swan

    2014-01-01

    Background Disease-specific costing studies can be used as input into cost-effectiveness analyses and provide important information for efficient resource allocation. However, limited data availability and limited expertise constrain such studies in low- and middle-income countries (LMICs). Objective To describe a step-by-step guideline for conducting disease-specific costing studies in LMICs where data availability is limited and to illustrate how the guideline was applied in a costing study of cardiovascular disease prevention care in rural Nigeria. Design The step-by-step guideline provides practical recommendations on methods and data requirements for six sequential steps: 1) definition of the study perspective, 2) characterization of the unit of analysis, 3) identification of cost items, 4) measurement of cost items, 5) valuation of cost items, and 6) uncertainty analyses. Results We discuss the necessary tradeoffs between the accuracy of estimates and data availability constraints at each step and illustrate how a mixed methodology of accurate bottom-up micro-costing and more feasible approaches can be used to make optimal use of all available data. An illustrative example from Nigeria is provided. Conclusions An innovative, user-friendly guideline for disease-specific costing in LMICs is presented, using a mixed methodology to account for limited data availability. The illustrative example showed that the step-by-step guideline can be used by healthcare professionals in LMICs to conduct feasible and accurate disease-specific cost analyses. PMID:24685170

  1. Powered Descent Guidance with General Thrust-Pointing Constraints

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Acikmese, Behcet; Blackmore, Lars

    2013-01-01

    The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
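
    The structure of the relaxed constraints can be illustrated with a small convex program. The sketch below assumes the CVXPY library and made-up thrust bounds and pointing half-angle; it encodes a single time step of the lossless relaxation (a slack Gamma with ||T|| <= Gamma, rho1 <= Gamma <= rho2, and n.T >= Gamma cos(theta)), not the flight algorithm itself:

        import numpy as np
        import cvxpy as cp

        rho1, rho2 = 4.0, 12.0                # illustrative thrust bounds
        theta = np.deg2rad(30.0)              # illustrative pointing half-angle
        n_hat = np.array([0.0, 0.0, 1.0])     # required pointing direction

        T = cp.Variable(3)                    # thrust vector at one time step
        Gamma = cp.Variable()                 # slack of the lossless relaxation

        constraints = [
            cp.norm(T, 2) <= Gamma,               # relaxed magnitude constraint
            rho1 <= Gamma, Gamma <= rho2,         # convex bounds on the slack
            n_hat @ T >= Gamma * np.cos(theta),   # relaxed pointing constraint
        ]
        # Minimizing Gamma stands in for the fuel term of the full problem.
        cp.Problem(cp.Minimize(Gamma), constraints).solve()
        print(T.value, Gamma.value)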

  2. Constraints on Ho from Time-Delay Measurements of PG1115+080

    NASA Technical Reports Server (NTRS)

    Chartas, George

    2003-01-01

    The observations that were performed as part of the award titled "Constraints on Ho From Time-Delay Measurements of PG1115+080" resulted in several scientific publications and presentations. We list these publications and presentations and provide a brief description of the important science presented in them.

  3. SU-F-J-97: A Joint Registration and Segmentation Approach for Large Bladder Deformations in Adaptive Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derksen, A; Koenig, L; Heldmann, S

    Purpose: To improve results of deformable image registration (DIR) in adaptive radiotherapy for large bladder deformations in CT/CBCT pelvis imaging. Methods: A variational multi-modal DIR algorithm is incorporated in a joint iterative scheme, alternating between segmentation-based bladder matching and registration. Using an initial DIR to propagate the bladder contour to the CBCT, a segmentation step improves the contour by discrete image-gradient sampling along all surface normals, adapting the delineation to match the location of each maximum (with a search range of ±5/2 mm at the superior/inferior bladder side and a step size of 0.5 mm). An additional graph-cut based constraint limits the maximum difference between neighboring points. This improved contour is utilized in a subsequent DIR with a surface-matching constraint. By calculating a Euclidean distance map of the improved contour surface, the new constraint enforces the DIR to map each point of the original contour onto the improved contour. The resulting deformation is then used as a starting guess to compute a deformation update, which can again be used for the next segmentation step. The result is a dense deformation able to capture much larger bladder deformations. The new method is evaluated on ten CT/CBCT male pelvis datasets, calculating Dice similarity coefficients (DSC) between the final propagated bladder contour and a manually delineated gold standard on the CBCT image. Results: Over all ten cases, an average DSC of 0.93±0.03 is achieved on the bladder. Compared with the initial DIR (0.88±0.05), the DSC is equal (2 cases) or improved (8 cases). Additionally, DSC accuracy of femoral bones (0.94±0.02) was not affected. Conclusion: The new approach shows that, using the presented alternating segmentation/registration scheme, the results of bladder DIR in the pelvis region can be greatly improved, especially for cases with large variations in bladder volume. Fraunhofer MEVIS received funding from a research grant by Varian Medical Systems.
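
    The surface-matching constraint rests on a Euclidean distance map of the improved contour. A minimal sketch of that ingredient, assuming SciPy and a binary surface mask on the image grid (illustrative shapes, not the clinical pipeline):

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        # Binary mask of the improved bladder contour surface (toy 3D grid).
        surface = np.zeros((64, 64, 64), dtype=bool)
        surface[32, 20:44, 20:44] = True

        # Distance of every voxel to the improved contour; a registration penalty
        # can sample this map at the deformed positions of the original contour
        # points, so the optimum maps them onto the improved surface.
        dist_map = distance_transform_edt(~surface)

        def surface_penalty(deformed_points):
            """Sum of distances of deformed contour points to the new surface."""
            idx = np.round(deformed_points).astype(int)
            return dist_map[idx[:, 0], idx[:, 1], idx[:, 2]].sum()

        print(surface_penalty(np.array([[30.0, 25.0, 25.0], [32.0, 30.0, 30.0]])))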

  4. Relative sea-level data from southwest Scotland constrain meltwater-driven sea-level jumps prior to the 8.2 kyr BP event

    NASA Astrophysics Data System (ADS)

    Lawrence, Thomas; Long, Antony J.; Gehrels, W. Roland; Jackson, Luke P.; Smith, David E.

    2016-11-01

    The most significant climate cooling of the Holocene is centred on 8.2 kyr BP (the '8.2 event'). Its cause is widely attributed to an abrupt slowdown of the Atlantic Meridional Overturning Circulation (AMOC) associated with the sudden drainage of Laurentide proglacial Lakes Agassiz and Ojibway, but model simulations have difficulty reproducing the event with a single-pulse scenario of freshwater input. Several lines of evidence point to multiple episodes of freshwater release from the decaying Laurentide Ice Sheet (LIS) between ∼8900 and ∼8200 cal yr BP, yet the precise number, timing and magnitude of these events - critical constraints for AMOC simulations - are far from resolved. Here we present a high-resolution relative sea level (RSL) record for the period 8800 to 7800 cal yr BP developed from estuarine and salt-marsh deposits in SW Scotland. We find that RSL rose abruptly in three steps by 0.35 m, 0.7 m and 0.4 m (mean) at 8760-8640, 8595-8465 and 8323-8218 cal yr BP, respectively. The timing of these RSL steps correlates closely with short-lived events expressed in North Atlantic proxy climate and oceanographic records, providing evidence of at least three distinct episodes of enhanced meltwater discharge from the decaying LIS prior to the 8.2 event. Our observations can be used to test the fidelity of both climate and ice-sheet models in simulating abrupt change during the early Holocene.

  5. Issues in knowledge representation to support maintainability: A case study in scientific data preparation

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Kandt, R. Kirk; Roden, Joseph; Burleigh, Scott; King, Todd; Joy, Steve

    1992-01-01

    Scientific data preparation is the process of extracting usable scientific data from raw instrument data. This task involves noise detection (and subsequent noise classification and flagging or removal), extracting data from compressed forms, and construction of derivative or aggregate data (e.g. spectral densities or running averages). A software system called PIPE provides intelligent assistance to users developing scientific data preparation plans using a programming language called Master Plumber. PIPE provides this assistance capability by using a process description to create a dependency model of the scientific data preparation plan. This dependency model can then be used to verify syntactic and semantic constraints on processing steps to perform limited plan validation. PIPE also provides capabilities for using this model to assist in debugging faulty data preparation plans. In this case, the process model is used to focus the developer's attention upon those processing steps and data elements that were used in computing the faulty output values. Finally, the dependency model of a plan can be used to perform plan optimization and runtime estimation. These capabilities allow scientists to spend less time developing data preparation procedures and more time on scientific analysis tasks. Because the scientific data processing modules (called fittings) evolve to match scientists' needs, issues regarding maintainability are of prime importance in PIPE. This paper describes the PIPE system and describes how issues in maintainability affected the knowledge representation used in PIPE to capture knowledge about the behavior of fittings.
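
    The dependency-model idea can be sketched as a small directed graph over processing steps. The sketch below uses hypothetical step names (not Master Plumber syntax) and shows how such a model supports the fault-focusing use described above:

        # Minimal dependency model of a data-preparation plan: each step lists
        # the steps whose outputs it consumes (hypothetical names, not PIPE's).
        plan = {
            "read_raw": [],
            "flag_noise": ["read_raw"],
            "decompress": ["read_raw"],
            "spectral_density": ["decompress", "flag_noise"],
        }

        def upstream(step, plan):
            """All steps that could have contributed to a faulty output of `step`."""
            seen, stack = set(), [step]
            while stack:
                for dep in plan[stack.pop()]:
                    if dep not in seen:
                        seen.add(dep)
                        stack.append(dep)
            return seen

        # Focus debugging attention on the steps that fed the faulty value:
        print(upstream("spectral_density", plan))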

  6. Specific arithmetic calculation deficits in children with Turner syndrome.

    PubMed

    Rovet, J; Szekely, C; Hockenberry, M N

    1994-12-01

    Study 1 compared arithmetic processing skills on the WRAT-R in 45 girls with Turner syndrome (TS) and 92 age-matched female controls. Results revealed significant underachievement by subjects with TS, which reflected their poorer performance on problems requiring the retrieval of addition and multiplication facts and procedural knowledge for addition and division operations. TS subjects did not differ qualitatively from controls in type of procedural error committed. Study 2, which compared the performance of 10 subjects with TS and 31 controls on the Keymath Diagnostic Arithmetic Test, showed that the TS group had less adequate knowledge of arithmetic, subtraction, and multiplication procedures but did not differ from controls on Fact items. Error analyses revealed that TS subjects were more likely to confuse component steps or fail to separate intermediate steps or to complete problems. TS subjects relied to a greater degree on verbal than visual-spatial abilities in arithmetic processing while their visual-spatial abilities were associated with retrieval of simple multidigit addition facts and knowledge of subtraction, multiplication, and division procedures. Differences between the TS and control groups increased with age for Keymath, but not WRAT-R, procedures. Discrepant findings are related to the different task constraints (timed vs. untimed, single vs. alternate versions, size of item pool) and the use of different strategies (counting vs. fact retrieval). It is concluded that arithmetic difficulties in females with TS are due to less adequate procedural skills, combined with poorer fact retrieval in timed testing situations, rather than to inadequate visual-spatial abilities.

  7. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  8. Development of a High-Order Navier-Stokes Solver Using Flux Reconstruction to Simulate Three-Dimensional Vortex Structures in a Curved Artery Model

    NASA Astrophysics Data System (ADS)

    Cox, Christopher

    Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit nonlinear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo-time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate the implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large-scale three-dimensional problems with a high-order polynomial basis remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the shear stress is both oscillatory and multidirectional. Also, the combined effect of curvature and pulsatility in cardiovascular flows produces unsteady vortices. The aim of this research as it relates to cardiovascular fluid dynamics is to predict the spatial and temporal evolution of vortical structures generated by secondary flows, as well as to assess the correlation between multiple vortex pairs and wall shear stress. We use a physiologically relevant (pulsatile) flow rate and generate results using both fully developed and uniform entrance conditions, the latter being motivated by the fact that flow upstream of a curved artery may not have sufficient straight entrance length to become fully developed. Under the two pulsatile inflow conditions, we characterize the morphology and evolution of various vortex pairs and their subsequent effect on relevant haemodynamic wall shear stress metrics.
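
    The dual time-stepping idea, independent of the flux-reconstruction machinery, can be sketched for a generic semi-discrete system du/dt = R(u): each physical step defines a pseudo-steady problem that is marched in pseudo-time until the unsteady residual vanishes. A minimal sketch (scalar model problem, explicit pseudo-time marching for brevity; the paper uses an implicit LU-SGS pseudo-time scheme):

        import numpy as np

        def R(u):
            # Model spatial residual (stands in for the flux-reconstruction operator).
            return -u * (u - 1.0)

        def dual_time_step(u_n, u_nm1, dt, dtau=1e-3, tol=1e-10, max_iter=100000):
            """Advance one physical step with second-order backward Euler (BDF2)
            in physical time, driving the unsteady residual to zero in pseudo-time."""
            u = u_n.copy()
            for _ in range(max_iter):
                # BDF2 unsteady residual: R(u) - (3u - 4u_n + u_nm1) / (2 dt) = 0
                res = R(u) - (3.0 * u - 4.0 * u_n + u_nm1) / (2.0 * dt)
                u = u + dtau * res          # explicit pseudo-time march
                if np.max(np.abs(res)) < tol:
                    break
            return u

        u_nm1 = np.array([0.2]); u_n = np.array([0.21])
        print(dual_time_step(u_n, u_nm1, dt=0.05))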

  9. Proximate effects of temperature versus evolved intrinsic constraints for embryonic development times among temperate and tropical songbirds

    USGS Publications Warehouse

    Ton, Riccardo; Martin, Thomas E.

    2017-01-01

    The relative importance of intrinsic constraints imposed by evolved physiological trade-offs versus the proximate effects of temperature for interspecific variation in embryonic development time remains unclear. Understanding this distinction is important because slow development due to evolved trade-offs can yield phenotypic benefits, whereas slow development from low temperature can yield costs. We experimentally increased embryonic temperature in free-living tropical and north temperate songbird species to test these alternatives. Warmer temperatures consistently shortened development time without costs to embryo mass or metabolism. However, proximate effects of temperature played an increasingly stronger role than intrinsic constraints for development time among species with colder natural incubation temperatures. Long development times of tropical birds have been thought to primarily reflect evolved physiological trade-offs that facilitate their greater longevity. In contrast, our results indicate a much stronger role of temperature in embryonic development time than currently thought.

  10. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time-dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction-splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two- and three-dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variables are treated explicitly at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by the solution of a sequence of one-dimensional second-order elliptic boundary value problems in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
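
    The computational payoff of direction splitting is that each sub-step reduces to one-dimensional tridiagonal solves. A minimal sketch of one such implicit 1D sub-step (second-order derivative treated implicitly, simple Dirichlet ends, solved with SciPy's banded solver; illustrative parameters only):

        import numpy as np
        from scipy.linalg import solve_banded

        def implicit_1d_substep(u, dt, dx, nu=1.0):
            """Solve (I - dt*nu*d2/dx2) u_new = u on one grid line.
            This is the 1D building block that replaces a full Poisson solve."""
            n = len(u)
            r = dt * nu / dx**2
            ab = np.zeros((3, n))            # banded storage: super, main, sub
            ab[0, 1:] = -r                   # superdiagonal
            ab[1, :] = 1.0 + 2.0 * r         # main diagonal
            ab[2, :-1] = -r                  # subdiagonal
            ab[1, 0] = ab[1, -1] = 1.0       # Dirichlet rows at both ends
            ab[0, 1] = ab[2, -2] = 0.0
            rhs = u.copy(); rhs[0] = rhs[-1] = 0.0
            return solve_banded((1, 1), ab, rhs)

        u = np.sin(np.linspace(0.0, np.pi, 101))
        u_new = implicit_1d_substep(u, dt=1e-3, dx=np.pi / 100)
        print(u_new.max())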

  11. SU-E-T-502: Initial Results of a Comparison of Treatment Plans Produced From Automated Prioritized Planning Method and a Commercial Treatment Planning System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiwari, P; Chen, Y; Hong, L

    2015-06-15

    Purpose We developed an automated treatment planning system based on a hierarchical goal programming approach. To demonstrate the feasibility of our method, we report a comparison of prostate treatment plans produced by the automated treatment planning system with those produced by a commercial treatment planning system. Methods In our approach, we prioritize the goals of the optimization and solve one goal at a time. The purpose of prioritization is to ensure that higher-priority dose-volume planning goals are not sacrificed to improve lower-priority goals. The algorithm has four steps. The first step optimizes dose to the target structures while sparing key sensitive organs from radiation. In the second step, the algorithm finds the best beamlet weights to reduce toxicity risks to normal tissue while holding the objective value achieved in the first step as a constraint, with a small amount of allowed slip. Likewise, the third and fourth steps introduce lower-priority normal tissue goals and beam smoothing. We compared with prostate treatment plans from Memorial Sloan Kettering Cancer Center developed using Eclipse, with a prescription dose of 72 Gy. A combination of linear, quadratic, and gEUD objective functions was used with a modified open-source solver (IPOPT). Results Initial plan results on 3 different cases show that the automated planning system is capable of matching or improving on expert-driven Eclipse plans. Compared to the Eclipse planning system, the automated system produced up to 26% less mean dose to the rectum and 24% less mean dose to the bladder while having the same D95 (after matching) to the target. Conclusion We have demonstrated that Pareto optimal treatment plans can be generated automatically without a trial-and-error process. The solver finds an optimal plan for the given patient, as opposed to database-driven approaches that set parameters based on geometry and population modeling.
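
    The prioritized scheme is a lexicographic (preemptive goal) optimization: each stage re-optimizes a lower-priority objective while the previous stage's optimum, relaxed by a small slip, becomes a constraint. A schematic CVXPY sketch with made-up convex stand-in objectives f1 and f2 (not the clinical dose functions):

        import cvxpy as cp

        x = cp.Variable(5, nonneg=True)      # stands in for beamlet weights

        # Made-up convex surrogates for two prioritized planning goals.
        f1 = cp.sum_squares(x - 1.0)         # priority 1: target-coverage surrogate
        f2 = cp.sum(x)                       # priority 2: normal-tissue surrogate

        # Step 1: optimize the highest-priority goal alone.
        cp.Problem(cp.Minimize(f1)).solve()
        f1_star = f1.value

        # Step 2: optimize the next goal, holding goal 1 near its optimum;
        # a small fractional "slip" keeps the problem from over-constraining.
        slip = 0.01
        cp.Problem(cp.Minimize(f2), [f1 <= f1_star * (1 + slip) + 1e-9]).solve()
        print(x.value)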

  12. Maximizing and minimizing investment concentration with constraints of budget and investment risk

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-01-01

    In this paper, as a first step in examining the properties of a feasible portfolio subset that is characterized by budget and risk constraints, we assess the maximum and minimum of the investment concentration using replica analysis. To do this, we apply an analytical approach of statistical mechanics. We note that the optimization problem considered in this paper is the dual problem of the portfolio optimization problem discussed in the literature, and we verify that these optimal solutions are also dual. We also present numerical experiments, in which we use the method of steepest descent that is based on Lagrange's method of undetermined multipliers, and we compare the numerical results to those obtained by replica analysis in order to assess the effectiveness of our proposed approach.
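
    The flavor of the numerical side of this problem can be reproduced with a generic constrained optimizer: extremize the concentration q_w = (1/N) sum_i w_i^2 under the budget and risk constraints. The sketch below is an illustrative toy (SciPy's SLSQP with a made-up covariance matrix), not the replica-analysis or steepest-descent machinery of the paper:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        N = 20
        X = rng.normal(size=(40, N))
        C = X.T @ X / 40.0                  # toy return-covariance matrix
        kappa = 2.0                         # illustrative risk budget

        def concentration(w):
            return np.sum(w**2) / N

        cons = [
            {"type": "eq",   "fun": lambda w: np.sum(w) - N},          # budget
            {"type": "ineq", "fun": lambda w: kappa - w @ C @ w / (2 * N)},  # risk
        ]

        w0 = np.ones(N)                     # feasible starting portfolio
        w_min = minimize(concentration, w0, constraints=cons).x
        w_max = minimize(lambda w: -concentration(w), w0, constraints=cons).x
        print(concentration(w_min), concentration(w_max))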

  13. Constraints and Opportunities with Interview Transcription: Towards Reflection in Qualitative Research

    PubMed Central

    Oliver, Daniel G.; Serovich, Julianne M.; Mason, Tina L.

    2006-01-01

    In this paper we discuss the complexities of interview transcription. While often seen as a behind-the-scenes task, we suggest that transcription is a powerful act of representation. Transcription is practiced in multiple ways, often using naturalism, in which every utterance is captured in as much detail as possible, and/or denaturalism, in which grammar is corrected, interview noise (e.g., stutters, pauses, etc.) is removed and nonstandard accents (i.e., non-majority) are standardized. In this article, we discuss the constraints and opportunities of our transcription decisions and point to an intermediate, reflective step. We suggest that researchers incorporate reflection into their research design by interrogating their transcription decisions and the possible impact these decisions may have on participants and research outcomes. PMID:16534533

  14. A fast algorithm for solving a linear feasibility problem with application to Intensity-Modulated Radiation Therapy.

    PubMed

    Herman, Gabor T; Chen, Wei

    2008-03-01

    The goal of Intensity-Modulated Radiation Therapy (IMRT) is to deliver sufficient doses to tumors to kill them, but without causing irreparable damage to critical organs. This requirement can be formulated as a linear feasibility problem. The sequential (i.e., iteratively treating the constraints one after another in a cyclic fashion) algorithm ART3 is known to find a solution to such problems in a finite number of steps, provided that the feasible region is full dimensional. We present a faster algorithm called ART3+. The idea of ART3+ is to avoid unnecessary checks on constraints that are likely to be satisfied. The superior performance of the new algorithm is demonstrated by mathematical experiments inspired by the IMRT application.
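
    The flavor of a sequential feasibility algorithm of this kind can be seen in a few lines: cycle through the half-space constraints a_i . x <= b_i and project only when a constraint is violated. The sketch below is a generic cyclic-projection scheme in the spirit of ART3, not the authors' exact algorithm; the ART3+ refinement additionally drops constraints from the cycle once they are reliably satisfied:

        import numpy as np

        def cyclic_projection(A, b, x, sweeps=200, tol=1e-9):
            """Find x with A @ x <= b by cyclically projecting onto violated
            half-spaces (generic ART-style scheme)."""
            for _ in range(sweeps):
                violated = False
                for a_i, b_i in zip(A, b):
                    r = a_i @ x - b_i
                    if r > tol:                          # constraint violated
                        x = x - (r / (a_i @ a_i)) * a_i  # orthogonal projection
                        violated = True
                if not violated:
                    break
            return x

        A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
        b = np.array([1.0, 0.0, 0.0])
        print(cyclic_projection(A, b, x=np.array([2.0, 2.0])))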

  15. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes, decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d, k), where d is the minimum allowable number of zeros between ones and k is the maximum allowable number of zeros between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d, ∞) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
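
    As a concrete reading of the (d, k) notation, the sketch below (a hypothetical helper, not the paper's decoder) checks that a binary pulse sequence respects the minimum and maximum run lengths of zeros between ones:

        def satisfies_dk(bits, d, k):
            """True if every run of zeros between consecutive ones in `bits`
            has length between d and k (the dead-time constraint pair)."""
            ones = [i for i, b in enumerate(bits) if b == 1]
            for i, j in zip(ones, ones[1:]):
                run = j - i - 1          # zeros between this pulse and the next
                if run < d or run > k:
                    return False
            return True

        # A (d=2, k=7) check: runs of 3 and 2 zeros between pulses -> valid.
        print(satisfies_dk([1, 0, 0, 0, 1, 0, 0, 1], d=2, k=7))   # True
        print(satisfies_dk([1, 0, 1], d=2, k=7))                  # False (run of 1)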

  16. The future of health technology assessment in healthcare decision making in Asia.

    PubMed

    Yang, Bong-Min

    2009-01-01

    Most countries have healthcare resource constraints and it is easy to identify new health technologies as an area in need of resource management, particularly given that new health technologies usually increase rather than save costs. Resource constraints are even more noticeable in Asia than in other regions, with a comparatively greater speed of population aging and the development of health security systems. The healthcare industry and policy makers in Asia generally understand that rationing in healthcare delivery is inevitable and have come to accept health technology assessment (HTA) as a policy option. The HTA policy framework is slowly penetrating Asia; South Korea was the first country to regulate the use of pharmacoeconomic evidence in drug reimbursement decision making. The South Korean HTA policy was initially a surprise in Asia in that the policy was suddenly introduced with a short period of preparation, but industry, researchers and policy makers both in- and outside of South Korea have come to accept it as necessary and logical. Thailand and Taiwan have also taken steps towards using pharmacoeconomic evidence in HTA, while other Asian countries are planning to implement such policies. However, it could be some time before a legitimate pharmacoeconomic-based HTA policy is actually implemented in each country, and the course of action will vary depending on the policy culture, healthcare system and public trust in bureaucracy of each country.

  17. A methodology for analysing lateral coupled behavior of high speed railway vehicles and structures

    NASA Astrophysics Data System (ADS)

    Antolín, P.; Goicolea, J. M.; Astiz, M. A.; Alonso, A.

    2010-06-01

    The continuing increase in the speed of high-speed trains entails a corresponding increase in their kinetic energy. The main goal of this article is to study the coupled lateral behavior of vehicle-structure systems for high-speed trains. Nonlinear finite element methods are used for the structures, whereas multibody dynamics methods are employed for the vehicles. Particular care is needed when dealing with rolling contact constraints coupling bridge decks and train wheels: the dynamic models must include mixed variables (displacements and creepages), and the contact algorithms must be adequate for wheel-rail contact. The coupled vehicle-structure system is studied in an implicit dynamic framework. Because trains and bridges are very different systems, widely separated frequencies are involved in the problem, leading to stiff systems. For normal contact between train wheels and bridge decks, the penalty method is studied. For tangential contact, the FastSim algorithm solves the contact at each time step by integrating a differential equation involving relative displacements and creepage variables. Integration of the total forces over the contact ellipse domain is performed for each train wheel at each solver iteration. Coupling between trains and bridges requires special treatment of the kinematic constraints imposed at the wheel-rail pair and of the load transmission. A numerical example is presented.

  18. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    PubMed Central

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guaranties on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfy the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398

  19. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints, and the cost of each resource, the algorithm generates for each time period a set of commands for controllable resources that achieves the technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of real networks, and in order to validate the behaviour of the algorithm and identify possible critical conditions, a complete simulation phase has been started. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and resource "costs" that define a valid context in which to test the algorithm, with particular focus on battery and RES management. Initial results from simulations on test networks (based on real MV grids) with actual battery characteristics are given, together with prospective performance in real-case applications.

  20. Projections onto the Pareto surface in multicriteria radiation therapy optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokrantz, Rasmus, E-mail: bokrantz@kth.se, E-mail: rasmus.bokrantz@raysearchlabs.com; Miettinen, Kaisa

    2015-10-15

    Purpose: To eliminate or reduce the error to Pareto optimality that arises in Pareto surface navigation when the Pareto surface is approximated by a small number of plans. Methods: The authors propose to project the navigated plan onto the Pareto surface as a postprocessing step to the navigation. The projection attempts to find a Pareto optimal plan that is at least as good as or better than the initial navigated plan with respect to all objective functions. An augmented form of projection is also suggested where dose-volume histogram constraints are used to prevent the projection from causing a violation of some clinical goal. The projections were evaluated with respect to planning for intensity modulated radiation therapy delivered by step-and-shoot and sliding window and spot-scanned intensity modulated proton therapy. Retrospective plans were generated for a prostate and a head and neck case. Results: The projections led to improved dose conformity and better sparing of organs at risk (OARs) for all three delivery techniques and both patient cases. The mean dose to OARs decreased by 3.1 Gy on average for the unconstrained form of the projection and by 2.0 Gy on average when dose-volume histogram constraints were used. No consistent improvements in target homogeneity were observed. Conclusions: There are situations when Pareto navigation leaves room for improvement in OAR sparing and dose conformity, for example, if the approximation of the Pareto surface is coarse or the problem formulation has too permissive constraints. A projection onto the Pareto surface can identify an inaccurate Pareto surface representation and, if necessary, improve the quality of the navigated plan.
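
    In optimization terms, the projection solves a problem of the form: find a feasible plan whose every objective is no worse than the navigated plan's, and then improve all objectives together. A schematic CVXPY sketch with made-up convex objectives and navigated values (not the clinical dose functions):

        import cvxpy as cp

        x = cp.Variable(4, nonneg=True)

        # Made-up convex stand-ins for the plan's objective functions.
        objectives = [cp.sum_squares(x - 1.0), cp.sum(x), cp.norm(x - 0.5, 1)]
        f_nav = [2.0, 3.5, 1.5]   # assumed objective values of the navigated plan

        # Dominate-or-match constraints, then push all objectives down together;
        # the optimum is Pareto optimal and at least as good as the navigated plan.
        constraints = [f <= f_val for f, f_val in zip(objectives, f_nav)]
        cp.Problem(cp.Minimize(sum(objectives)), constraints).solve()
        print(x.value)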

  1. An OpenMI Implementation of a Water Resources System using Simple Script Wrappers

    NASA Astrophysics Data System (ADS)

    Steward, D. R.; Aistrup, J. A.; Kulcsar, L.; Peterson, J. M.; Welch, S. M.; Andresen, D.; Bernard, E. A.; Staggenborg, S. A.; Bulatewicz, T.

    2013-12-01

    This team has developed an adaptation of the Open Modelling Interface (OpenMI) that utilizes Simple Script Wrappers. Code is made OpenMI compliant through organization within three modules that initialize, perform time steps, and finalize results. A configuration file specifies the variables a model expects to receive as input and those it will make available as output. An example is presented for groundwater, economic, and agricultural production models in the High Plains Aquifer region of Kansas. Our models use the programming environments of Scilab and Matlab, along with legacy Fortran code, and our Simple Script Wrappers can also use Python. These models are run collectively within this interdisciplinary framework from initial conditions into the future. It will be shown that, by applying constraints to one model, their impact on the rest of the water resources system can be assessed.
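
    The three-module organization can be illustrated with a toy component in the spirit of the Simple Script Wrapper approach. Function names, configuration keys, and the dynamics below are all hypothetical; the actual wrapper conventions may differ:

        # Toy OpenMI-style component: three entry points plus a declaration of
        # the exchange items, mirroring the initialize / time-step / finalize split.
        CONFIG = {
            "inputs":  ["pumping_rate"],     # values the model expects to receive
            "outputs": ["water_table"],      # values it makes available
        }

        STATE = {}

        def initialize():
            STATE["water_table"] = 100.0     # initial head (illustrative units)

        def perform_time_step(pumping_rate):
            # One coupled step: drawdown proportional to pumping (toy dynamics).
            STATE["water_table"] -= 0.01 * pumping_rate
            return {"water_table": STATE["water_table"]}

        def finalize():
            print("final water table:", STATE["water_table"])

        initialize()
        for rate in [50.0, 80.0, 60.0]:      # inputs from the economic model
            perform_time_step(rate)
        finalize()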

  2. Recursive optimal pruning with applications to tree structured vector quantizers

    NASA Technical Reports Server (NTRS)

    Kiang, Shei-Zein; Baker, Richard L.; Sullivan, Gary J.; Chiu, Chung-Yen

    1992-01-01

    A pruning algorithm of Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion rate function without time sharing by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search vector quantizers (VQs) for a large range of rates.

  3. Solving Assembly Sequence Planning using Angle Modulated Simulated Kalman Filter

    NASA Astrophysics Data System (ADS)

    Mustapa, Ainizar; Yusof, Zulkifli Md.; Adam, Asrul; Muhammad, Badaruddin; Ibrahim, Zuwairie

    2018-03-01

    This paper presents an implementation of the Simulated Kalman Filter (SKF) algorithm for optimizing an Assembly Sequence Planning (ASP) problem. The SKF search strategy consists of three simple steps: predict, measure, and estimate. The main objective of ASP is to determine the sequence of component installation that shortens assembly time or saves assembly costs. Initially, a permutation sequence is generated to represent each agent. Each agent is then subjected to a precedence matrix constraint to produce a feasible assembly sequence. Next, the Angle Modulated SKF (AMSKF) is proposed for solving the ASP problem. The main idea of the angle-modulated approach to combinatorial optimization is to use a function, g(x), to create a continuous signal. The performance of the proposed AMSKF is compared against previous works that solved ASP by applying BGSA, BPSO, and MSPSO. Using a case study of ASP, the results show that AMSKF outperformed all the other algorithms in obtaining the best solution.
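
    The angle-modulation idea, as used in binary variants of PSO and related optimizers, evolves only four real coefficients (a, b, c, d) of a trigonometric generating function and thresholds its samples to obtain a bit string; a permutation for ASP can then be decoded from such keys. A generic sketch of the bit-generation step (the standard angle-modulation form, not necessarily the exact function used by the authors):

        import numpy as np

        def am_bits(a, b, c, d, n_bits):
            """Standard angle-modulation generating function sampled at 0..n-1;
            positive samples map to 1, the rest to 0."""
            x = np.arange(n_bits)
            g = np.sin(2 * np.pi * (x - a) * b * np.cos(2 * np.pi * (x - a) * c)) + d
            return (g > 0).astype(int)

        # Four continuous parameters (what the SKF agents would search over):
        print(am_bits(a=0.1, b=0.8, c=0.05, d=0.0, n_bits=12))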

  4. Teaching psychotherapy to psychiatric residents in Israel.

    PubMed

    Shalev, Arieh Y

    2007-01-01

    This work examines the rationale for, and the feasibility of teaching psychotherapy to psychiatric residents, and the "what if" of dropping it from the curriculum. Psychotherapy is one of the pillars of psychiatry. However, current economic constraints and the increasing weight of phenomenological and biological psychiatry make it more difficult to prioritize and allocate resources to its teaching. The term psychotherapy encompasses several techniques, some of which are extremely effective. It often confounds skills, attitudes, theory, body of knowledge and specific practices. Looking at each component separately, a stepped curriculum for teaching is outlined; alternatives to traditional theories are offered; and the need to allocate time and resources for teaching and learning are shown as the rate-limiting factor for the survival of psychotherapy within psychiatry. Not limited to residents, the debate about psychotherapy in psychiatry concerns the profession's core identity and its traditional person-centered nature.

  5. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  6. Observation Planning Made Simple with Science Opportunity Analyzer (SOA)

    NASA Technical Reports Server (NTRS)

    Streiffert, Barbara A.; Polanskey, Carol A.

    2004-01-01

    As NASA undertakes the exploration of the Moon and Mars as well as the rest of the Solar System, while continuing to investigate Earth's oceans, winds, atmosphere, and weather, the ever-present need for operations users to easily define their observations grows. Operations teams need to be able to determine the best time to perform an observation, as well as its duration and other parameters such as the observation target. In addition, operations teams need to be able to check the observation for validity against objectives and intent, as well as against spacecraft constraints such as turn rates and accelerations or pointing exclusion zones. Science Opportunity Analyzer (SOA), in development for the last six years, is a multi-mission toolset built to meet those needs. Operations team members can follow six simple steps to define their observations without having to know the complexities of orbital mechanics, coordinate transformations, or the spacecraft itself.

  7. Computer software tool REALM for sustainable water allocation and management.

    PubMed

    Perera, B J C; James, B; Kularathna, M D U

    2005-12-01

    REALM (REsource ALlocation Model) is a generalised computer simulation package that models harvesting and bulk distribution of water resources within a water supply system. It is a modeling tool, which can be applied to develop specific water allocation models. Like other water resource simulation software tools, REALM uses mass-balance accounting at nodes, while the movement of water within carriers is subject to capacity constraints. It uses a fast network linear programming algorithm to optimise the water allocation within the network during each simulation time step, in accordance with user-defined operating rules. This paper describes the main features of REALM and provides potential users with an appreciation of its capabilities. In particular, it describes two case studies covering major urban and rural water supply systems. These case studies illustrate REALM's capabilities in the use of stochastically generated data in water supply planning and management, modelling of environmental flows, and assessing security of supply issues.
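
    The per-time-step allocation can be pictured as a small network linear program: mass balance at each node, capacity bounds on each carrier, and an objective encoding the operating rules. A minimal sketch with SciPy's linprog (a toy two-demand system, not REALM's actual formulation or its fast network LP solver):

        import numpy as np
        from scipy.optimize import linprog

        # Toy system: one reservoir supplies two demand nodes through two
        # carriers. Decision variables: carrier flows q1, q2 (one time step).
        demand = np.array([30.0, 45.0])
        capacity = np.array([40.0, 50.0])
        penalty = np.array([1.0, 2.0])   # operating-rule preference on carriers

        # Minimize weighted flow subject to meeting demand within capacity.
        res = linprog(
            c=penalty,
            A_ub=-np.eye(2), b_ub=-demand,          # q_i >= demand_i
            bounds=[(0.0, cap) for cap in capacity],
        )
        print(res.x)   # -> [30., 45.]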

  8. Upscaling of Hydraulic Conductivity using the Double Constraint Method

    NASA Astrophysics Data System (ADS)

    El-Rawy, Mustafa; Zijl, Wouter; Batelaan, Okke

    2013-04-01

    The mathematics and modeling of flow through porous media play an increasingly important role in groundwater supply, subsurface contaminant remediation and petroleum reservoir engineering. In hydrogeology, hydraulic conductivity data are often collected at a scale that is smaller than the grid block dimensions of a groundwater model (e.g. MODFLOW). For instance, hydraulic conductivities determined from the field using slug and packer tests are measured at scales on the order of centimeters to meters, whereas numerical groundwater models require conductivities representative of tens to hundreds of meters of grid cell length. Therefore, there is a need for upscaling to decrease the number of grid blocks in a groundwater flow model. Moreover, models with relatively few grid blocks are simpler to apply, especially when the model has to run many times, as is the case when it is used to assimilate time-dependent data. Since the 1960s, different methods have been used to transform a detailed description of the spatial variability of hydraulic conductivity into a coarser description. In this work we investigate a relatively simple but instructive approach, the Double Constraint Method (DCM), to identify coarse-scale conductivities and thereby decrease the number of grid blocks. Its main advantages are robustness and easy implementation, enabling computations to be based on any standard flow code with some post-processing added. The inversion step of the double constraint method is based on a first forward run with all known fluxes on the boundary and in the wells, followed by a second forward run based on the heads measured on the phreatic surface (i.e. measured in shallow observation wells) and in deeper observation wells. Upscaling, in turn, is inverse modeling (DCM) to determine conductivities in coarse-scale grid blocks from conductivities in fine-scale grid blocks, in such a way that the head and flux boundary conditions applied to the fine-scale model are also honored at the coarse scale. An example is presented for the Kleine Nete catchment, Belgium. As a result, we identified coarse-scale conductivities while decreasing the number of grid blocks, with the advantage that a model run requires less computation time and memory. In addition, ranking of models was investigated.
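
    In one dimension the double-constraint idea has a transparent form: one common variant pairs the flux field of the flux-constrained run with the head gradient of the head-constrained run through Darcy's law. The sketch below assumes that variant with made-up numbers; it is an illustration, not the authors' catchment-scale scheme:

        import numpy as np

        def dcm_update(q_flux_run, h_head_run, dx):
            """One Double Constraint Method update for cell conductivities (1D).
            q_flux_run: cell fluxes from the forward run with flux BCs;
            h_head_run: node heads from the forward run with measured-head BCs."""
            grad_h = np.diff(h_head_run) / dx      # head gradient per cell
            # Darcy's law q = -K grad(h): pair flux of run 1, gradient of run 2.
            return -q_flux_run / grad_h

        # Illustrative numbers (made up): uniform flux, measured heads.
        K = dcm_update(q_flux_run=np.full(4, 0.2),
                       h_head_run=np.array([10.0, 9.5, 9.1, 8.6, 8.0]),
                       dx=100.0)
        print(K)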

  9. On the exact solvability of the anisotropic central spin model: An operator approach

    NASA Astrophysics Data System (ADS)

    Wu, Ning

    2018-07-01

    Using an operator approach based on a commutator scheme that has previously been applied to Richardson's reduced BCS model and the inhomogeneous Dicke model, we obtain general exact solvability requirements for an anisotropic central spin model with XXZ-type hyperfine coupling between the central spin and the spin bath, without any prior knowledge of the integrability of the model. We outline the basic steps of the operator approach and pedagogically summarize them into two Lemmas and two Constraints. Through a step-by-step construction of the eigenproblem, we show that the condition g'_j^2 - g_j^2 = c naturally arises for the model to be exactly solvable, where c is a constant independent of the bath-spin index j, and {g_j} and {g'_j} are the longitudinal and transverse hyperfine interactions, respectively. The obtained conditions and the resulting Bethe ansatz equations are consistent with those in the previous literature.

  10. Novel adaptive neural control design for a constrained flexible air-breathing hypersonic vehicle based on actuator compensation

    NASA Astrophysics Data System (ADS)

    Bu, Xiangwei; Wu, Xiaoyan; He, Guangjun; Huang, Jiaqi

    2016-03-01

    This paper investigates the design of a novel adaptive neural controller for the longitudinal dynamics of a flexible air-breathing hypersonic vehicle with control input constraints. To reduce the complexity of controller design, the vehicle dynamics is decomposed into a velocity subsystem and an altitude subsystem. For each subsystem, only one neural network is utilized to approximate the lumped unknown function. By employing a minimal-learning parameter method to estimate the norm of the ideal weight vectors rather than their elements, only two adaptive parameters are required for the neural approximation. Thus, the computational burden is lower than that of neural back-stepping schemes. Specifically, to deal with the control input constraints, additional systems are exploited to compensate for the actuators. Lyapunov synthesis proves that all the closed-loop signals involved are uniformly ultimately bounded. Finally, simulation results show that the adopted compensation scheme can handle actuator constraints effectively, and that velocity and altitude stably track their reference trajectories even when the physical limitations on the control inputs are in effect.

  11. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

  12. High-order tracking differentiator based adaptive neural control of a flexible air-breathing hypersonic vehicle subject to actuators constraints.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen

    2015-09-01

    In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). Using a functional decomposition methodology, the dynamic model is decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic-inversion-based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller, considerably simpler than those derived from back-stepping, is designed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is introduced to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints.

  13. A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem.

    PubMed

    Dang, C; Xu, L

    2001-03-01

    In this paper a globally convergent Lagrange and barrier function iterative algorithm is proposed for approximating a solution of the traveling salesman problem. The algorithm employs an entropy-type barrier function to deal with nonnegativity constraints and Lagrange multipliers to handle linear equality constraints, and attempts to produce a solution of high quality by generating a minimum point of a barrier problem for a sequence of descending values of the barrier parameter. For any given value of the barrier parameter, the algorithm searches for a minimum point of the barrier problem in a feasible descent direction, which has a desired property that the nonnegativity constraints are always satisfied automatically if the step length is a number between zero and one. At each iteration the feasible descent direction is found by updating Lagrange multipliers with a globally convergent iterative procedure. For any given value of the barrier parameter, the algorithm converges to a stationary point of the barrier problem without any condition on the objective function. Theoretical and numerical results show that the algorithm seems more effective and efficient than the softassign algorithm.
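
    The inner structure of entropy-barrier methods on the assignment polytope is conveniently illustrated with Sinkhorn-style normalization: for a fixed barrier parameter T, alternating row/column scalings play the role of updating the Lagrange multipliers of the doubly-stochastic equality constraints, and T is then decreased. The sketch below is a minimal softassign-style illustration of that inner loop, not the paper's feasible-descent algorithm:

        import numpy as np

        def entropy_barrier_assignment(C, T0=1.0, decay=0.9, n_outer=30, n_inner=60):
            """Approximate a permutation minimizing sum C[i,j]*X[i,j] by annealing
            an entropy barrier; row/column scaling updates the Lagrange
            multipliers of the doubly-stochastic equality constraints."""
            T = T0
            X = np.ones_like(C) / C.shape[0]
            for _ in range(n_outer):
                X = np.exp(-C / T)
                for _ in range(n_inner):               # alternate normalizations
                    X /= X.sum(axis=1, keepdims=True)  # row constraints
                    X /= X.sum(axis=0, keepdims=True)  # column constraints
                T *= decay                             # descend barrier parameter
            return X

        C = np.array([[0.0, 2.0, 3.0],
                      [2.0, 0.0, 1.0],
                      [3.0, 1.0, 0.0]])
        print(np.round(entropy_barrier_assignment(C), 2))  # -> near-identity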

  14. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in coded structured light systems. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid with embedded geometrical shapes. Our decoding method makes advances in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points at the intersections of orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem, and a deep neural network is applied for accurate classification of pattern elements. Beforehand, a training dataset is established containing a large number of pattern elements with various blurrings and distortions. Third, an error-correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy but also exhibits strong robustness to surface color and complex textures.
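
    Of the three error-correction constraints, the epipolar one is the easiest to state concretely: a candidate correspondence (x, x') is kept only if the algebraic residual x'^T F x is near zero. A minimal check, assuming a known fundamental matrix F (all numbers below are illustrative):

        import numpy as np

        def epipolar_residual(F, x, x_prime):
            """Algebraic epipolar residual x'^T F x for homogeneous image points."""
            return float(x_prime @ F @ x)

        def keep_match(F, x, x_prime, tol=1e-2):
            """Reject a candidate pattern-element match that violates the
            epipolar constraint (one of the three error-correction tests)."""
            return abs(epipolar_residual(F, x, x_prime)) < tol

        F = np.array([[0.0, -1e-4, 1e-2],     # illustrative fundamental matrix
                      [1e-4, 0.0, -2e-2],
                      [-1e-2, 2e-2, 0.0]])
        x = np.array([120.0, 80.0, 1.0])      # feature point in the pattern image
        x_p = np.array([121.0, 80.5, 1.0])    # candidate match in the camera image
        print(keep_match(F, x, x_p))          # True for this nearby point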

  15. Toward a unifying model for the late Neoproterozoic sulfur cycle

    NASA Astrophysics Data System (ADS)

    Johnston, D. T.; Gill, B. C.; Ries, J. B.; OBrien, T.; Macdonald, F. A.

    2011-12-01

    The latest Proterozoic has always fascinated Earth historians. Between the long-identified enigmas surrounding the sudden appearance of animals and the more recent infatuation with large-scale geochemical anomalies (i.e. the Shuram-Wonoka event), the closing 90 million years of the Proterozoic - the Ediacaran - house a number of important and unanswered questions. Detailed redox geochemistry and stable isotope reconstructions of stratigraphic units covering this time interval have begun to unravel some of its mysteries, but much remains to be explained. The sulfur cycle, with its intimate links to both the marine carbon cycle (through remineralization reactions) and overall oxidant budgets (via seawater sulfate), sits poised to provide a sharp tool to track environmental change. Previous work has recognized this potential and serves as the point of entry for our current work. What is lacking - and the goal of this study - is to place quantitative constraints on the geochemical evolution of marine basins through this interval. Here we present multiple sulfur isotope data from pyrites and sulfates through Ediacaran stratigraphy from the Yukon, Russia and Namibia. To maximize the utility of sulfur isotope studies, we have focused on Ediacaran stratigraphic sections from multiple continents that record both the Shuram anomaly and contain rich fossil records. These sections provide, when interpreted together, a fresh opportunity to revisit the geochemical setting that gave rise to animals. Importantly, the inclusion of multiple sulfur isotope data allows us to place further constraints on the mechanisms underpinning isotopic variability. For instance, when coupled with new experimental data, tighter constraints are provided on how fractionation scales with sulfate concentration. This may allow changes in biological fractionations to be decoupled from modifications to the global sulfur cycle (i.e. changes in seawater sulfate concentrations or the vigor of the oxidative sulfur cycle). Much of this added interpretability comes from an accompanying quantitative modeling treatment. In closing, a unified picture of the late Neoproterozoic sulfur cycle, and how it evolved through time, must provide a quantitative and coherent solution to each of these seemingly disparate observations (paleontology requiring increases in O2, remineralization requiring the consumption of oxidants). This work presents a step toward such a solution.

  16. Prototype Flight Management Capabilities to Explore Temporal RNP Concepts

    NASA Technical Reports Server (NTRS)

    Ballin, Mark G.; Williams, David H.; Allen, Bonnie Danette; Palmer, Michael T.

    2008-01-01

    Next Generation Air Transportation System (NextGen) concepts of operation may require aircraft to fly planned trajectories in four dimensions: three spatial dimensions and time. A prototype 4D flight management capability is being developed by NASA to facilitate the development of these concepts. New trajectory generation functions extend today's flight management system (FMS) capabilities from meeting a single Required Time of Arrival (RTA) to trajectory solutions that comply with multiple RTA constraints. When a solution is not possible, a constraint management capability relaxes constraints to achieve a trajectory solution that meets the most important constraints as specified by candidate NextGen concepts. New flight guidance functions provide continuous guidance to the aircraft's flight control system to enable it to fly specified 4D trajectories. Guidance options developed for research investigations include a moving time window with tolerances that vary as a function of proximity to imposed constraints, and guidance that recalculates the aircraft's planned trajectory as a function of the estimated current compliance. Compliance tolerances are related to required navigation performance (RNP) through the extension of existing RNP concepts for lateral containment. A conceptual temporal RNP implementation and prototype display symbology are proposed.

  17. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

    In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the estimation weight error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
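
    The abstract does not give the functional explicitly; a common choice in the constrained-ADP literature, and a plausible reading of the "suitable non-quadratic functional", bounds each control component by a saturation level λ through

        W(u) = 2 \int_0^u (\lambda \tanh^{-1}(\nu / \lambda))^T R \, d\nu,

    which grows without bound as any component of u approaches ±λ, so minimizing the cost keeps the control inside the constraint set without an explicit inequality.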

  18. Four-dimensional electrical conductivity monitoring of stage-driven river water intrusion: Accounting for water table effects using a transient mesh boundary and conditional inversion constraints

    DOE PAGES

    Johnson, Tim; Versteeg, Roelof; Thomle, Jon; ...

    2015-08-01

    Our paper describes and demonstrates two methods of providing a priori information to the surface-based time-lapse three-dimensional electrical resistivity tomography (ERT) problem for monitoring stage-driven or tide-driven surface water intrusion into aquifers. First, a mesh boundary is implemented that conforms to the known location of the water table through time, thereby enabling the inversion to place a sharp bulk conductivity contrast at that boundary without penalty. Moreover, a nonlinear inequality constraint is used to allow only positive or negative transient changes in EC to occur within the saturated zone, dependent on the relative contrast in fluid electrical conductivity between surface water and groundwater. A 3-D field experiment demonstrates that time-lapse imaging results using traditional smoothness constraints are unable to delineate river water intrusion. The water table and inequality constraints provide the inversion with the additional information necessary to resolve the spatial extent of river water intrusion through time.
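
    A minimal sketch of the conditional sign constraint, assuming river water is more conductive than groundwater so only positive changes are allowed in the saturated zone; the projection step below is an illustrative simplification of how such an inequality can be enforced between inversion iterations, not the paper's algorithm.

        import numpy as np

        def project_changes(delta_sigma, saturated_mask, sign=+1):
            """Clip time-lapse conductivity changes so that, inside the
            saturated zone, only changes of the expected sign survive.

            delta_sigma    : array of bulk-conductivity changes per cell
            saturated_mask : boolean array, True below the water table
            sign           : +1 if surface water is more conductive, -1 otherwise
            """
            out = delta_sigma.copy()
            wrong_sign = saturated_mask & (sign * out < 0.0)
            out[wrong_sign] = 0.0
            return out

        # toy usage
        changes = np.array([0.02, -0.01, 0.05, -0.03])
        mask = np.array([True, True, True, False])
        print(project_changes(changes, mask))   # -> [0.02, 0.0, 0.05, -0.03]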

  19. Four-dimensional electrical conductivity monitoring of stage-driven river water intrusion: Accounting for water table effects using a transient mesh boundary and conditional inversion constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Tim; Versteeg, Roelof; Thomle, Jon

    Our paper describes and demonstrates two methods of providing a priori information to the surface-based time-lapse three-dimensional electrical resistivity tomography (ERT) problem for monitoring stage-driven or tide-driven surface water intrusion into aquifers. First, a mesh boundary is implemented that conforms to the known location of the water table through time, thereby enabling the inversion to place a sharp bulk conductivity contrast at that boundary without penalty. Moreover, a nonlinear inequality constraint is used to allow only positive or negative transient changes in EC to occur within the saturated zone, dependent on the relative contrast in fluid electrical conductivity between surface water and groundwater. A 3-D field experiment demonstrates that time-lapse imaging results using traditional smoothness constraints are unable to delineate river water intrusion. The water table and inequality constraints provide the inversion with the additional information necessary to resolve the spatial extent of river water intrusion through time.

  20. Motion coordination and programmable teleoperation between two industrial robots

    NASA Technical Reports Server (NTRS)

    Luh, J. Y. S.; Zheng, Y. F.

    1987-01-01

    Tasks for two coordinated industrial robots always bring the robots into contact with the same object, so motion coordination among the robots and the object must be maintained at all times. To plan the coordinated tasks, only one robot's motion is planned according to the required motion of the object. The motion of the second robot follows the first as specified by a set of holonomic equality constraints at every time instant. If any modification of the object's motion is needed in real time, only the first robot's motion has to be modified accordingly; the modification for the second robot is made implicitly through the constraint conditions, which simplifies the operation. If the object is physically removed, the second robot still continues to follow the first through the constraint conditions. If the first robot is maneuvered through either the teach pendant or the keyboard, the second one moves accordingly, forming a teleoperation linked through software programming. Notably, the second robot does not simply duplicate the first robot's motion; the programming of the constraints specifies their relative motions.
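
    As a toy illustration of the holonomic constraint, the follower pose can be computed from the leader pose and a fixed relative transform; the planar homogeneous-transform example below, including the grasp offset, is hypothetical rather than the controllers in the paper.

        import numpy as np

        def pose_to_T(x, y, theta):
            """2D homogeneous transform for a planar pose."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, x],
                             [s,  c, y],
                             [0,  0, 1]])

        # Constraint: follower pose = leader pose composed with a fixed
        # grasp offset (hypothetical geometry for a shared rigid object).
        T_offset = pose_to_T(0.0, -0.5, np.pi)

        def follower_pose(leader_T):
            return leader_T @ T_offset

        leader = pose_to_T(1.0, 2.0, 0.3)
        print(follower_pose(leader))  # follower tracks leader through the constraint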

  1. Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations

    PubMed Central

    König, Gerhard; Brooks, Bernard R.

    2014-01-01

    Background: Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. Methods: The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. Results: We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. Conclusions: The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. General Significance: The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. PMID:25218695
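
    In cycle form, the post-processing correction is simple bookkeeping. Writing ΔG_c(X) for the free energy cost of applying the constraints to end state X (the quantity estimated from gradients and Hessians in post-processing), a sketch of the cycle is

        \Delta G_{A \to B} = \Delta G_c(A) + \Delta G^{c}_{A \to B} - \Delta G_c(B),

    i.e., the unconstrained free energy difference is recovered from the constrained alchemical step plus two correction legs. The notation here is ours, not the paper's.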

  2. Correcting for the free energy costs of bond or angle constraints in molecular dynamics simulations.

    PubMed

    König, Gerhard; Brooks, Bernard R

    2015-05-01

    Free energy simulations are an important tool in the arsenal of computational biophysics, allowing the calculation of thermodynamic properties of binding or enzymatic reactions. This paper introduces methods to increase the accuracy and precision of free energy calculations by calculating the free energy costs of constraints during post-processing. The primary purpose of employing constraints for these free energy methods is to increase the phase space overlap between ensembles, which is required for accuracy and convergence. The free energy costs of applying or removing constraints are calculated as additional explicit steps in the free energy cycle. The new techniques focus on hard degrees of freedom and use both gradients and Hessian estimation. Enthalpy, vibrational entropy, and Jacobian free energy terms are considered. We demonstrate the utility of this method with simple classical systems involving harmonic and anharmonic oscillators, four-atomic benchmark systems, an alchemical mutation of ethane to methanol, and free energy simulations between alanine and serine. The errors for the analytical test cases are all below 0.0007 kcal/mol, and the accuracy of the free energy results of ethane to methanol is improved from 0.15 to 0.04 kcal/mol. For the alanine to serine case, the phase space overlaps of the unconstrained simulations range between 0.15 and 0.9%. The introduction of constraints increases the overlap up to 2.05%. On average, the overlap increases by 94% relative to the unconstrained value and precision is doubled. The approach reduces errors arising from constraints by about an order of magnitude. Free energy simulations benefit from the use of constraints through enhanced convergence and higher precision. The primary utility of this approach is to calculate free energies for systems with disparate energy surfaces and bonded terms, especially in multi-scale molecular mechanics/quantum mechanics simulations. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Published by Elsevier B.V.

  3. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    NASA Astrophysics Data System (ADS)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable-order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three-degree-of-freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous-time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.
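
    GPOPS-II itself uses Gauss-Radau collocation; purely to illustrate the transcription idea, the simpler trapezoidal defect constraints over one phase can be written as follows (hypothetical dynamics function f; an NLP solver would drive all defects to zero while minimizing the cost).

        import numpy as np

        def trapezoid_defects(x, u, t, f):
            """Defect constraints c_k = x_{k+1} - x_k - h/2 (f_k + f_{k+1}).

            x : (N, nx) states at the N collocation nodes
            u : (N, nu) controls at the nodes
            t : (N,) node times
            f : dynamics function f(x_k, u_k) -> (nx,)
            """
            fs = np.array([f(xk, uk) for xk, uk in zip(x, u)])
            h = np.diff(t)[:, None]
            return x[1:] - x[:-1] - 0.5 * h * (fs[:-1] + fs[1:])

        # e.g. a double integrator: f = lambda x, u: np.array([x[1], u[0]])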

  4. Optimization of Aerospace Structure Subject to Damage Tolerance Criteria

    NASA Technical Reports Server (NTRS)

    Akgun, Mehmet A.

    1999-01-01

    The objective of this cooperative agreement was to seek computationally efficient ways to optimize aerospace structures subject to damage tolerance criteria. Optimization was to involve sizing as well as topology optimization. The work was done in collaboration with Steve Scotti, Chauncey Wu and Joanne Walsh at the NASA Langley Research Center. Computation of constraint sensitivity is normally the most time-consuming step of an optimization procedure. The cooperative work first focused on this issue and implemented the adjoint method of sensitivity computation in an optimization code (runstream) written in Engineering Analysis Language (EAL). The method was implemented both for bar and plate elements including buckling sensitivity for the latter. Lumping of constraints was investigated as a means to reduce the computational cost. Adjoint sensitivity computation was developed and implemented for lumped stress and buckling constraints. Cost of the direct method and the adjoint method was compared for various structures with and without lumping. The results were reported in two papers. It is desirable to optimize topology of an aerospace structure subject to a large number of damage scenarios so that a damage tolerant structure is obtained. Including damage scenarios in the design procedure is critical in order to avoid large mass penalties at later stages. A common method for topology optimization is that of compliance minimization which has not been used for damage tolerant design. In the present work, topology optimization is treated as a conventional problem aiming to minimize the weight subject to stress constraints. Multiple damage configurations (scenarios) are considered. Each configuration has its own structural stiffness matrix and, normally, requires factoring of the matrix and solution of the system of equations. Damage that is expected to be tolerated is local and represents a small change in the stiffness matrix compared to the baseline (undamaged) structure. The exact solution to a slightly modified set of equations can be obtained from the baseline solution economically without actually solving the modified system. Sherman-Morrison-Woodbury (SMW) formulas are matrix update formulas that allow this. SMW formulas were therefore used here to compute adjoint displacements for sensitivity computation and structural displacements in damaged configurations.
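
    A minimal numerical sketch of the SMW idea for damage scenarios: factor the baseline stiffness once, then solve each locally modified system (K + U Vᵀ)x = b by reusing that factorization. The low-rank form of the damage update is assumed here for illustration.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def smw_solve(lu_piv, U, V, b):
            """Solve (K + U @ V.T) x = b reusing the LU factors of the
            baseline K (Sherman-Morrison-Woodbury)."""
            Kinv_b = lu_solve(lu_piv, b)
            Kinv_U = lu_solve(lu_piv, U)
            r = U.shape[1]
            small = np.eye(r) + V.T @ Kinv_U     # r x r capacitance matrix
            return Kinv_b - Kinv_U @ np.linalg.solve(small, V.T @ Kinv_b)

        # toy usage: one baseline factorization, many damage scenarios
        rng = np.random.default_rng(0)
        K = rng.standard_normal((50, 50)) + 50 * np.eye(50)
        lu_piv = lu_factor(K)
        b = rng.standard_normal(50)
        U = rng.standard_normal((50, 2)); V = rng.standard_normal((50, 2))
        x = smw_solve(lu_piv, U, V, b)
        assert np.allclose((K + U @ V.T) @ x, b)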

  5. The effects of perceived leisure constraints among Korean university students

    Treesearch

    Sae-Sook Oh; Sei-Yi Oh; Linda L. Caldwell

    2002-01-01

    This study is based on Crawford, Jackson, and Godbey's model of leisure constraints (1991), and examines the relationships between the influences of perceived constraints, frequency of participation, and health status in the context of leisure-time outdoor activities. The study was based on a sample of 234 Korean university students. This study provides further...

  6. The Airlift Planning Problem

    DTIC Science & Technology

    2017-01-02

    to model various aspects of the problem, such as continuous travel times, constraints on weight and rest/active time constraints, using the language of... travel time; if no feasible insertion point exists, then a new vehicle is added to the solution. This procedure resembles our initialization heuristic... and only if aircraft a is used in the schedule. Each event e must occur at a specific location or "port". We let τ_{e,e′,a} denote the travel time of

  7. A New Method to Test the Einstein’s Weak Equivalence Principle

    NASA Astrophysics Data System (ADS)

    Yu, Hai; Xi, Shao-Qiang; Wang, Fa-Yin

    2018-06-01

    Einstein's weak equivalence principle (WEP) is one of the foundational assumptions of general relativity and some other gravity theories. In the parametrized post-Newtonian (PPN) framework, the difference between the PPN parameters γ of different particles, or of the same type of particle at different energies, Δγ, represents the violation of WEP. Current constraints on Δγ are derived from the observed time delay between correlated particles from astronomical sources. However, the observed time delay is contaminated by other effects, such as time delays due to different particle emission times, potential Lorentz invariance violation, and a nonzero photon rest mass. Therefore, current constraints are only upper limits. Here, we propose a new method to test WEP based on the fact that the gravitational time delay is direction-dependent while the others are not. This is the first method that can naturally correct for the other time-delay effects. Using the time-delay measurements of the BATSE gamma-ray burst sample and the gravitational potential of the local supercluster Laniakea, we find that the constraint on Δγ for photons of different energies can be as low as 10^-14. In the future, if more gravitational wave events and fast radio bursts with much more precise time-delay measurements are observed, this method can give a reliable and tight constraint on WEP.
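
    For orientation, the gravitational term used in such tests is commonly written, for two particles (or photons of two energies) whose PPN parameters differ by Δγ, as the relative Shapiro delay

        \Delta t_{\rm gra} = \frac{\Delta\gamma}{c^3} \int_{r_e}^{r_o} U(r(l)) \, dl,

    where U is the gravitational potential integrated along the path from emitter to observer. Because this term depends on the line of sight through the potential, it is direction-dependent, while the competing delay contributions are not.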

  8. Walking to the Beat of Their Own Drum: How Children and Adults Meet Timing Constraints

    PubMed Central

    Gill, Simone V.

    2015-01-01

    Walking requires adapting to meet task constraints. Between 5 and 7 years old, children's walking approximates adult walking without constraints. To examine how children and adults adapt to meet timing constraints, 57 5- to 7-year-olds and 20 adults walked to slow and fast audio metronome paces. Both children and adults modified their walking. However, at the slow pace, children had more trouble matching the metronome compared to adults. The youngest children's walking patterns deviated most from the slow metronome pace, and practice improved their performance. Five-year-olds were the only group that did not display carryover effects to the metronome paces. Findings are discussed in relation to what contributes to the development of adaptation in children. PMID:26011538

  9. An algebraic method for constructing stable and consistent autoregressive filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu

    2015-02-15

    In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order-two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR-models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
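
    For context, the classical stability condition referred to above requires all roots of the AR characteristic polynomial to lie strictly inside the unit circle; a small numerical check (generic numpy, not the paper's algebraic construction):

        import numpy as np

        def ar_is_stable(phi):
            """Check stability of x_t = phi[0] x_{t-1} + ... + phi[p-1] x_{t-p} + e_t.
            Stable iff all roots of z^p - phi[0] z^{p-1} - ... - phi[p-1]
            lie strictly inside the unit circle."""
            roots = np.roots(np.concatenate(([1.0], -np.asarray(phi))))
            return bool(np.all(np.abs(roots) < 1.0))

        print(ar_is_stable([0.5, 0.3]))   # True: stable AR(2)
        print(ar_is_stable([1.2, 0.1]))   # False: explosive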

  10. Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?

    NASA Astrophysics Data System (ADS)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2016-12-01

    Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth in addition to fundamental physical and chemical constants.
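
    The quoted time scale follows directly from the speed scaling: if the maximum relative speed is about ten body lengths L per second, then

        \tau = L / v_{\max} \approx L / (10 L \, {\rm s}^{-1}) = 0.1 \, {\rm s},

    independent of L, which is why bacteria and ostriches share it.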

  11. Time constraints in temperate-breeding species: Influence of growing season length on reproductive strategies

    USGS Publications Warehouse

    Gurney, K. E. B.; Clark, R.G.; Slattery, S.M.; Smith-Downey, N. V.; Walker, J.; Armstrong, L.M.; Stephens, S.E.; Petrula, M.; Corcoran, R.M.; Martin, K.H.; Degroot, K.A.; Brook, Rodney W.; Afton, A.D.; Cutting, K.; Warren, J.M.; Fournier, M.; Koons, D.N.

    2011-01-01

    Organisms that reproduce in temperate regions have limited time to produce offspring successfully, and this constraint is expected to be more pronounced in areas with short growing seasons. Information concerning how reproductive ecology of endotherms might be influenced by growing season length (GSL) is rare, and species that breed over a broad geographic range provide an opportunity to study the effects of time constraints on reproductive strategies. We analyzed data from a temperate-breeding bird, the lesser scaup Aythya affinis; hereafter scaup, collected at eight sites across a broad gradient of GSL to evaluate three hypotheses related to reproductive compensation in response to varying time constraints. Clutch initiation date in scaup was unaffected by GSL and was unrelated to latitude; spring thaw dates had a marginal impact on timing of breeding. Clutch size declined during the nesting season, as is reported frequently in bird species, but was also unaffected by GSL. Scaup do not appear to compensate for shorter growing seasons by more rapidly reducing clutch size. This study demonstrates that this species is remarkably consistent in terms of timing of breeding and clutch size, regardless of growing season characteristics. Such inflexibility could make this species particularly sensitive to environmental changes that affect resource availabilities. © 2011 The Authors. Ecography © 2011 Ecography.

  12. Time constraints in temperate-breeding species: influence of growing season length on reproductive strategies

    USGS Publications Warehouse

    Gurney, K. E. B.; Clark, Russell G.; Slattery, Stuart; Smith-Downey, N. V.; Walker, Jordan I.; Armstrong, L.M.; Stephens, S.E.; Petrula, Michael J.; Corcoran, R.M.; Martin, K.; Degroot, K.A.; Brook, Rodney W.; Afton, Alan D.; Cutting, K.; Warren, J.M.; Fournier, M.; Koons, David N.

    2011-01-01

    Organisms that reproduce in temperate regions have limited time to produce offspring successfully, and this constraint is expected to be more pronounced in areas with short growing seasons. Information concerning how reproductive ecology of endotherms might be influenced by growing season length (GSL) is rare, and species that breed over a broad geographic range provide an opportunity to study the effects of time constraints on reproductive strategies. We analyzed data from a temperate-breeding bird, the lesser scaup Aythya affinis; hereafter scaup, collected at eight sites across a broad gradient of GSL to evaluate three hypotheses related to reproductive compensation in response to varying time constraints. Clutch initiation date in scaup was unaffected by GSL and was unrelated to latitude; spring thaw dates had a marginal impact on timing of breeding. Clutch size declined during the nesting season, as is reported frequently in bird species, but was also unaffected by GSL. Scaup do not appear to compensate for shorter growing seasons by more rapidly reducing clutch size. This study demonstrates that this species is remarkably consistent in terms of timing of breeding and clutch size, regardless of growing season characteristics. Such inflexibility could make this species particularly sensitive to environmental changes that affect resource availabilities.

  13. Cannibalism and activity rate in larval damselflies increase along a latitudinal gradient as a consequence of time constraints.

    PubMed

    Sniegula, Szymon; Golab, Maria J; Johansson, Frank

    2017-07-14

    Predation is ubiquitous in nature. One form of predation is cannibalism, which is affected by many factors such as size structure and resource density. However, cannibalism may also be influenced by abiotic factors such as seasonal time constraints. Since time constraints are greater at high latitudes, cannibalism could be stronger at such latitudes, but we know next to nothing about latitudinal variation in cannibalism. In this study, we examined cannibalism and activity in larvae of the damselfly Lestes sponsa along a latitudinal gradient across Europe. We did this by raising larvae from the egg stage at different temperatures and photoperiods corresponding to different latitudes. We found that the more seasonally time-constrained populations at northern latitudes, and individuals subjected to greater seasonal time constraints, exhibited a higher level of cannibalism. We also found that activity was higher under north-latitude conditions and thus correlated with cannibalism, suggesting that this behaviour mediates higher levels of cannibalism in time-constrained animals. Our results run counter to the classical latitude-predation pattern, which predicts higher predation at lower latitudes, since we found that predation was stronger at higher latitudes. The differences in cannibalism might have implications for population dynamics along latitudinal gradients, but further experiments are needed to explore this.

  14. Algebra of implicitly defined constraints for gravity as the general form of embedding theory

    NASA Astrophysics Data System (ADS)

    Paston, S. A.; Semenova, E. N.; Franke, V. A.; Sheykin, A. A.

    2017-01-01

    We consider the embedding theory, the approach to gravity proposed by Regge and Teitelboim, in which 4D space-time is treated as a surface in high-dimensional flat ambient space. In its general form, which does not contain artificially imposed constraints, this theory can be viewed as an extension of GR. In the present paper we study the canonical description of the embedding theory in this general form. In this case, one of the natural constraints cannot be written explicitly, in contrast to the case where additional Einsteinian constraints are imposed. Nevertheless, it is possible to calculate all Poisson brackets with this constraint. We prove that the algebra of four emerging constraints is closed, i.e., all of them are first-class constraints. The explicit form of this algebra is also obtained.
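
    For orientation, the defining relation of embedding theory expresses the 4D metric as the metric induced on the surface y^a(x^μ) by the flat ambient metric η_ab:

        g_{\mu\nu}(x) = \eta_{ab} \, \partial_\mu y^a(x) \, \partial_\nu y^b(x),

    and the canonical analysis summarized above is carried out for an action written in terms of the embedding function y^a rather than the metric itself.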

  15. Automated identification of brain tumors from single MR images based on segmentation with refined patient-specific priors

    PubMed Central

    Sanjuán, Ana; Price, Cathy J.; Mancini, Laura; Josse, Goulven; Grogan, Alice; Yamamoto, Adam K.; Geva, Sharon; Leff, Alex P.; Yousry, Tarek A.; Seghier, Mohamed L.

    2013-01-01

    Brain tumors can have different shapes or locations, making their identification very challenging. In functional MRI, it is not unusual that patients have only one anatomical image due to time and financial constraints. Here, we provide a modified automatic lesion identification (ALI) procedure which enables brain tumor identification from single MR images. Our method rests on (A) a modified segmentation-normalization procedure with an explicit “extra prior” for the tumor and (B) an outlier detection procedure for abnormal voxel (i.e., tumor) classification. To minimize tissue misclassification, the segmentation-normalization procedure requires prior information of the tumor location and extent. We therefore propose that ALI is run iteratively so that the output of Step B is used as a patient-specific prior in Step A. We test this procedure on real T1-weighted images from 18 patients, and the results were validated in comparison to two independent observers' manual tracings. The automated procedure identified the tumors successfully with an excellent agreement with the manual segmentation (area under the ROC curve = 0.97 ± 0.03). The proposed procedure increases the flexibility and robustness of the ALI tool and will be particularly useful for lesion-behavior mapping studies, or when lesion identification and/or spatial normalization are problematic. PMID:24381535
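
    The iterative coupling of Steps A and B amounts to a fixed-point loop in which the tumor map from Step B becomes the extra prior for Step A; the stand-in functions below are hypothetical simplifications, not the ALI implementation.

        import numpy as np

        def segment_normalize(image, extra_prior=None):
            """Stand-in for Step A: a real implementation would run unified
            segmentation-normalization with the extra tumor prior added to
            the tissue classes."""
            prior = 0.0 if extra_prior is None else 0.5 * extra_prior
            return image - prior

        def detect_outliers(tissue_maps, z=2.0):
            """Stand-in for Step B: flag abnormal voxels by a z-score rule."""
            mu, sd = tissue_maps.mean(), tissue_maps.std()
            return (np.abs(tissue_maps - mu) > z * sd).astype(float)

        def iterative_ali(image, n_iter=3):
            """Alternate Steps A and B, feeding B's tumor map back into A
            as a refined patient-specific prior."""
            tumor_prior = None                 # first pass: no prior
            for _ in range(n_iter):
                tissue_maps = segment_normalize(image, extra_prior=tumor_prior)
                tumor_prior = detect_outliers(tissue_maps)
            return tumor_prior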

  16. Performance constraints and compensation for teleoperation with delay

    NASA Technical Reports Server (NTRS)

    Mclaughlin, J. S.; Staunton, B. D.

    1989-01-01

    A classical control perspective is used to characterize performance constraints and evaluate compensation techniques for teleoperation with delay. Use of control concepts such as open- and closed-loop performance, stability, and bandwidth yields insight into the delay problem. Teleoperator performance constraints are viewed as an open-loop time delay lag and as a delay-induced closed-loop bandwidth constraint. These constraints are illustrated with a simple analytical tracking example which is corroborated by a real-time, 'man-in-the-loop' tracking experiment. The experiment also provides insight into those controller characteristics which are unique to a human operator. Predictive displays and feedforward commands are shown to provide open-loop compensation for delay lag. Low-pass filtering of telemetry or feedback signals is interpreted as closed-loop compensation used to maintain a sufficiently low bandwidth for stability. A new closed-loop compensation approach is proposed that uses a reactive (or force feedback) hand controller to restrict system bandwidth by impeding operator inputs.
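
    As a toy version of the closed-loop compensation, feedback signals can be low-pass filtered so the loop bandwidth stays safely below the limit the delay imposes; the filter below is a generic first-order discrete low-pass, and the cutoff rule is an illustrative assumption, not one taken from the paper.

        import numpy as np

        def lowpass(signal, dt, f_cut):
            """First-order discrete low-pass: y[k] = a*y[k-1] + (1-a)*x[k]."""
            a = np.exp(-2.0 * np.pi * f_cut * dt)
            y = np.zeros_like(signal, dtype=float)
            y[0] = signal[0]
            for k in range(1, len(signal)):
                y[k] = a * y[k - 1] + (1.0 - a) * signal[k]
            return y

        # illustrative rule of thumb: keep bandwidth well under 1/(2*delay)
        delay = 0.5                 # seconds of round-trip delay (assumed)
        f_cut = 0.1 / delay         # hypothetical, conservative cutoff in Hz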

  17. Marketers Understanding Engineers and Engineers Understanding Marketers: The Opportunities and Constraints of a Cross-Discipline Course Using 3D Printing to Develop Marketable Innovations

    ERIC Educational Resources Information Center

    Reifschneider, Louis; Kaufman, Peter; Langrehr, Frederick W.; Kaufman, Kristina

    2015-01-01

    Marketers are criticized for not understanding the steps in the engineering research and development process and the challenges of manufacturing a new product at a profit. Engineers are criticized for not considering the marketability of and customer interest in such a product during the planning stages. With the development of 3D printing, rapid…

  18. Analytical and Experimental Characterization of Thick-Section Fiber-Metal Laminates

    DTIC Science & Technology

    2013-06-01

    individual metal layers as loading increases. The off-axis deformation properties of the prepreg layers were modeled by using equivalent constraint models... the degraded stiffness of the prepreg layer is found. At each loading step the stiffness properties of individual layers are calculated. These... predicts stress-strain curves on-axis; additional work is needed to study the local interactions between metal and prepreg layers as damage occurs in each

  19. Behavioural variability and motor performance: Effect of practice specialization in front crawl swimming.

    PubMed

    Seifert, L; De Jesus, K; Komar, J; Ribeiro, J; Abraldes, J A; Figueiredo, P; Vilas-Boas, J P; Fernandes, R J

    2016-06-01

    The aim was to examine behavioural variability within and between individuals in a swimming task, exploring how swimmers with different specialties (competitive short-distance swimming vs. triathlon) adapt to repetitive events of sub-maximal intensity, controlled in speed but of various distances. Five swimmers and five triathletes randomly performed three variants (with steps of 200, 300 and 400 m distances) of a front crawl incremental step test until exhaustion. A multi-camera system was used to collect and analyse eight kinematical and swimming efficiency parameters. Analysis of variance showed significant differences between swimmers and triathletes, with a significant individual effect. Cluster analysis grouped these parameters to investigate whether each individual used the same pattern(s), and one or several patterns, to achieve the task goal. Results exhibited ten patterns for the whole population, with only two behavioural patterns shared between swimmers and triathletes. Swimmers tended to use higher hand velocity and index of coordination than triathletes. Mono-stability occurred in swimmers whatever the task constraint, showing high stability, while triathletes revealed bi-stability because they switched to another pattern at mid-distance of the task. Finally, our analysis helped to explain and understand the effect of specialty and, more broadly, individual adaptation to task constraints. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Spacecraft Attitude Maneuver Planning Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Kornfeld, Richard P.

    2004-01-01

    A key enabling technology that leads to greater spacecraft autonomy is the capability to autonomously and optimally slew the spacecraft from and to different attitudes while operating under a number of celestial and dynamic constraints. The task of finding an attitude trajectory that meets all the constraints is a formidable one, in particular for orbiting or fly-by spacecraft where the constraints and initial and final conditions are of time-varying nature. This approach for attitude path planning makes full use of a priori constraint knowledge and is computationally tractable enough to be executed onboard a spacecraft. The approach is based on incorporating the constraints into a cost function and using a Genetic Algorithm to iteratively search for and optimize the solution. This results in a directed random search that explores a large part of the solution space while maintaining the knowledge of good solutions from iteration to iteration. A solution obtained this way may be used as is or may initialize additional deterministic optimization algorithms. A number of representative case examples for time-fixed and time-varying conditions yielded search times that are typically on the order of minutes, thus demonstrating the viability of this method. This approach is applicable to all deep space and planet Earth missions requiring greater spacecraft autonomy, and greatly facilitates navigation and science observation planning.
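
    A minimal sketch of the penalty-based search, shrunk to a single scalar "attitude" variable for brevity; the constraint, weights, and evolutionary operators are hypothetical (selection plus mutation only, crossover omitted), not the flight implementation.

        import numpy as np

        rng = np.random.default_rng(1)

        def cost(theta):
            """Slew cost plus a weighted penalty for violating a keep-out
            zone near 1 rad. All values are hypothetical."""
            slew = (theta - 2.0) ** 2                    # prefer theta near 2 rad
            keep_out = max(0.0, 0.5 - abs(theta - 1.0))  # violation depth
            return slew + 100.0 * keep_out               # penalty-weighted sum

        def ga_minimize(cost, pop=40, gens=60, mut=0.1):
            x = rng.uniform(0.0, np.pi, pop)
            for _ in range(gens):
                f = np.array([cost(xi) for xi in x])
                parents = x[np.argsort(f)][: pop // 2]   # truncation selection
                children = parents + mut * rng.standard_normal(pop // 2)
                x = np.concatenate([parents, children])  # keep good solutions
            return min(x, key=cost)

        print(ga_minimize(cost))   # converges near 2.0, outside the keep-out zone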

  1. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.
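
    For illustration, the adaptive step-size logic described above can be sketched in a generic coordinate pattern search; the numeric objective replaces the paired comparisons of listening comfort, so this is an analogy rather than the authors' procedure.

        import numpy as np

        def pattern_search(f, x0, step=1.0, shrink=0.5, grow=1.5,
                           min_step=1e-3, max_iter=200):
            """Coordinate pattern search with an adaptive step size: grow
            the step after an improving sweep (keeps differences between
            settings perceptible), shrink it after a failed sweep (adds
            accuracy near the optimum)."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            for _ in range(max_iter):
                improved = False
                for i in range(len(x)):
                    for d in (step, -step):
                        trial = x.copy()
                        trial[i] += d
                        ft = f(trial)
                        if ft < fx:
                            x, fx, improved = trial, ft, True
                            break
                step *= grow if improved else shrink
                if step < min_step:
                    break
            return x, fx

        print(pattern_search(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0]))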

  2. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
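
    Generically, the ACE idea can be sketched as a loop that alternates optimization over two mutually exclusive parameter subsets until the objective stabilizes; the toy objective below stands in for the (partial) likelihood of the flexible hazard model, and the split is arbitrary.

        import numpy as np
        from scipy.optimize import minimize

        def ace(neg_loglik, a0, b0, n_cycles=20, tol=1e-8):
            """Alternating conditional estimation: optimize subset a with b
            fixed, then b with a fixed, until the objective stabilizes."""
            a, b = np.asarray(a0, float), np.asarray(b0, float)
            prev = np.inf
            for _ in range(n_cycles):
                a = minimize(lambda a_: neg_loglik(a_, b), a).x
                b = minimize(lambda b_: neg_loglik(a, b_), b).x
                cur = neg_loglik(a, b)
                if prev - cur < tol:
                    break
                prev = cur
            return a, b

        # toy objective, nonlinear in the product of the two subsets
        f = lambda a, b: (a[0] * b[0] - 3.0) ** 2 + 0.1 * (a[0] ** 2 + b[0] ** 2)
        print(ace(f, [1.0], [1.0]))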

  3. An embedded mesh method using piecewise constant multipliers with stabilization: mathematical and numerical aspects

    DOE PAGES

    Puso, M. A.; Kokko, E.; Settgast, R.; ...

    2014-10-22

    An embedded mesh method using piecewise constant multipliers originally proposed by Puso et al. (CMAME, 2012) is analyzed here to determine effects of the pressure stabilization term and small cut cells. The approach is implemented for transient dynamics using the central difference scheme for the time discretization. It is shown that the resulting equations of motion are a stable linear system with a condition number independent of mesh size. Furthermore, we show that the constraints and the stabilization terms can be recast as non-proportional damping such that the time integration of the scheme is provably stable with a critical time step computed from the undamped equations of motion. Effects of small cuts are discussed throughout the presentation. A mesh study is conducted to evaluate the effects of the stabilization on the discretization error and conditioning and is used to recommend an optimal value for stabilization scaling parameter. Several nonlinear problems are also analyzed and compared with comparable conforming mesh results. Finally, we show several demanding problems highlighting the robustness of the proposed approach.
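
    For context, the classical central-difference stability bound the authors invoke is, with ω_max the largest natural frequency of the undamped system M \ddot{x} + K x = 0,

        \Delta t_{\rm crit} = 2 / \omega_{\max},

    and the paper's observation is that recasting the constraint and stabilization terms as non-proportional damping leaves this undamped bound valid for the full scheme.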

  4. Design principles and optimal performance for molecular motors under realistic constraints

    NASA Astrophysics Data System (ADS)

    Tu, Yuhai; Cao, Yuansheng

    2018-02-01

    The performance of a molecular motor, characterized by its power output and energy efficiency, is investigated in the motor design space spanned by the stepping rate function and the motor-track interaction potential. Analytic results and simulations show that a gating mechanism that restricts forward stepping in a narrow window in configuration space is needed for generating high power at physiologically relevant loads. By deriving general thermodynamics laws for nonequilibrium motors, we find that the maximum torque (force) at stall is less than its theoretical limit for any realistic motor-track interactions due to speed fluctuations. Our study reveals a tradeoff for the motor-track interaction: while a strong interaction generates a high power output for forward steps, it also leads to a higher probability of wasteful spontaneous back steps. Our analysis and simulations show that this tradeoff sets a fundamental limit to the maximum motor efficiency in the presence of spontaneous back steps, i.e., loose-coupling. Balancing this tradeoff leads to an optimal design of the motor-track interaction for achieving a maximum efficiency close to 1 for realistic motors that are not perfectly coupled with the energy source. Comparison with existing data and suggestions for future experiments are discussed.

  5. Multi-objective optimization for an automated and simultaneous phase and baseline correction of NMR spectral data

    NASA Astrophysics Data System (ADS)

    Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus

    2018-04-01

    Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
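
    For reference, a basic (unmodified) Whittaker smoother solves (I + λ DᵀD) z = y with D a difference operator, so larger λ yields a stiffer baseline; a compact sparse implementation of that standard building block, not the authors' modified version:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import spsolve

        def whittaker(y, lam=1e4):
            """Smooth y by solving (I + lam * D.T @ D) z = y, with D the
            second-difference operator penalizing rough baselines."""
            n = len(y)
            D = sp.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
            A = (sp.eye(n) + lam * (D.T @ D)).tocsc()
            return spsolve(A, y)

        rng = np.random.default_rng(0)
        y = np.sin(np.linspace(0, 3, 500)) + 0.1 * rng.standard_normal(500)
        baseline = whittaker(y)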

  6. Analysis of Aircraft Clusters to Measure Sector-Independent Airspace Congestion

    NASA Technical Reports Server (NTRS)

    Bilimoria, Karl D.; Lee, Hilda Q.

    2005-01-01

    The Distributed Air/Ground Traffic Management (DAG-TM) concept of operations permits appropriately equipped aircraft to conduct Free Maneuvering operations. These independent aircraft have the freedom to optimize their trajectories in real time according to user preferences; however, they also take on the responsibility to separate themselves from other aircraft while conforming to any local Traffic Flow Management (TFM) constraints imposed by the air traffic service provider (ATSP). Examples of local TFM constraints include temporal constraints such as a required time of arrival (RTA), as well as spatial constraints such as regions of convective weather, special use airspace, and congested airspace. Under current operations, congested airspace typically refers to a sector(s) that cannot accept additional aircraft due to controller workload limitations; hence Dynamic Density (a metric that is indicative of controller workload) can be used to quantify airspace congestion. However, for Free Maneuvering operations under DAG-TM, an additional metric is needed to quantify the airspace congestion problem from the perspective of independent aircraft. Such a metric would enable the ATSP to prevent independent aircraft from entering any local areas of congestion in which the flight deck based systems and procedures may not be able to ensure separation. This new metric, called Gaggle Density, offers the ATSP a mode of control to regulate normal operations and to ensure safety and stability during rare-normal or off-normal situations (e.g., system failures). It may be difficult to certify Free Maneuvering systems for unrestricted operations, but it may be easier to certify systems and procedures for specified levels of Gaggle Density that could be monitored by the ATSP, and maintained through relatively minor flow-rate (RTA type) restrictions. Since flight deck based separation assurance is airspace independent, the challenge is to measure congestion independent of sector boundaries. Figure 1, reproduced from Ref. 1, depicts an example traffic situation. When the situation is analyzed by sector boundaries (left side of figure), a Dynamic Density metric would identify excessive congestion in the central sector. When the same traffic situation is analyzed independent of sector boundaries (right side of figure), a Gaggle Density metric would identify congestion in two dynamically defined areas covering portions of several sectors. The first step towards measuring airspace-independent congestion is to identify aircraft clusters, i.e., groups of closely spaced aircraft. The objective of this work is to develop techniques to detect and classify clusters of aircraft.
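
    A first cut at detecting clusters independent of sector boundaries is single-linkage grouping: two aircraft belong to the same gaggle if they are connected by a chain of pairs closer than a link distance. The sketch below (threshold and positions hypothetical) is illustrative, not the metric definition used in the paper.

        import numpy as np

        def find_clusters(positions, link_dist):
            """Group aircraft into clusters by transitive closure of the
            pairwise relation dist(i, j) <= link_dist (union-find)."""
            n = len(positions)
            parent = list(range(n))
            def root(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]   # path halving
                    i = parent[i]
                return i
            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(positions[i] - positions[j]) <= link_dist:
                        parent[root(i)] = root(j)   # union
            clusters = {}
            for i in range(n):
                clusters.setdefault(root(i), []).append(i)
            return list(clusters.values())

        pos = np.array([[0, 0], [1, 0], [10, 10], [11, 10], [50, 50]], float)
        print(find_clusters(pos, link_dist=2.0))   # -> [[0, 1], [2, 3], [4]]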

  7. An uncertainty principle for star formation - II. A new method for characterising the cloud-scale physics of star formation and feedback across cosmic history

    NASA Astrophysics Data System (ADS)

    Kruijssen, J. M. Diederik; Schruba, Andreas; Hygate, Alexander P. S.; Hu, Chia-Yu; Haydon, Daniel T.; Longmore, Steven N.

    2018-05-01

    The cloud-scale physics of star formation and feedback represent the main uncertainty in galaxy formation studies. Progress is hampered by the limited empirical constraints outside the restricted environment of the Local Group. In particular, the poorly quantified time evolution of the molecular cloud lifecycle, star formation, and feedback obstructs robust predictions on the scales smaller than the disc scale height that are resolved in modern galaxy formation simulations. We present a new statistical method to derive the evolutionary timeline of molecular clouds and star-forming regions. By quantifying the excess or deficit of the gas-to-stellar flux ratio around peaks of gas or star formation tracer emission, we directly measure the relative rarity of these peaks, which allows us to derive their lifetimes. We present a step-by-step, quantitative description of the method and demonstrate its practical application. The method's accuracy is tested in nearly 300 experiments using simulated galaxy maps, showing that it is capable of constraining the molecular cloud lifetime and feedback time-scale to <0.1 dex precision. Access to the evolutionary timeline provides a variety of additional physical quantities, such as the cloud-scale star formation efficiency, the feedback outflow velocity, the mass loading factor, and the feedback energy or momentum coupling efficiencies to the ambient medium. We show that the results are robust for a wide variety of gas and star formation tracers, spatial resolutions, galaxy inclinations, and galaxy sizes. Finally, we demonstrate that our method can be applied out to high redshift (z ≲ 4) with a feasible time investment on current large-scale observatories. This is a major shift from previous studies that constrained the physics of star formation and feedback in the immediate vicinity of the Sun.

  8. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
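
    As a generic illustration of the posterior evaluation step, a random-walk Metropolis sampler with a positivity constraint can be sketched as below; the log-posterior is left abstract, and nothing here reproduces the paper's likelihood-derived prior or smoothing construction.

        import numpy as np

        rng = np.random.default_rng(0)

        def metropolis(log_post, x0, n_steps=5000, scale=0.1):
            """Random-walk Metropolis sampling of a (possibly nonanalytic,
            positivity-constrained) posterior over discretized ISR values."""
            x = np.asarray(x0, float)
            lp = log_post(x)
            samples = []
            for _ in range(n_steps):
                prop = x + scale * rng.standard_normal(x.shape)
                # enforce positivity of the estimated parameters
                lp_prop = log_post(prop) if np.all(prop > 0) else -np.inf
                if np.log(rng.uniform()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                samples.append(x.copy())
            return np.array(samples)

        # usage: metropolis(my_log_posterior, x0=np.ones(20))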

  9. Sentinel-3 coverage-driven mission design: Coupling of orbit selection and instrument design

    NASA Astrophysics Data System (ADS)

    Cornara, S.; Pirondini, F.; Palmade, J. L.

    2017-11-01

    The first satellite of the Sentinel-3 series was launched in February 2016. The Sentinel-3 payload suite encompasses the Ocean and Land Colour Instrument (OLCI) with a swath of 1270 km, the Sea and Land Surface Temperature Radiometer (SLSTR) yielding a dual-view scan with swaths of 1420 km (nadir) and 750 km (oblique view), the Synthetic Aperture Radar Altimeter (SRAL) working in Ku-band and C-band, and the dual-frequency Microwave Radiometer (MWR). In the early stages of mission and system design, the main driver for the Sentinel-3 reference orbit selection was the requirement to achieve a revisit time of two days or less globally over ocean areas with two satellites (i.e. 4-day global coverage with one satellite). The orbit selection was seamlessly coupled with the OLCI instrument design in terms of field of view (FoV) definition driven by the observation zenith angle (OZA) and sunglint constraints applied to ocean observations. The criticality of the global coverage requirement for ocean monitoring derives from the sunglint phenomenon, i.e., the impact on visible channels of solar reflection from the water surface. This constraint was finally overcome thanks to the concurrent optimisation of the orbit parameters, notably the Local Time at Descending Node (LTDN), and the OLCI instrument FoV definition. The orbit selection process started with the identification of orbits with short repeat cycle (2-4 days), firstly to minimise the time required to achieve global coverage with existing constraints, and then to minimise the swath required to obtain global coverage and the maximum required OZA. This step yielded the selection of a 4-day repeat cycle orbit, thus allowing 2-day coverage with two adequately spaced satellites. Then suitable candidate orbits with higher repeat cycles were identified in the proximity of the selected altitudes and the reference orbit was ultimately chosen. The rationale was to keep the swath for global coverage as close as possible to the previous optimum value, but to tailor the repeat cycle length (i.e. the ground-track grid) to optimise the topography mission performances. The final choice converged on the sun-synchronous orbit 14 + 7/27, reference altitude ∼800 km, LTDN = 10h00. Extensive coverage analyses were carried out to characterise the mission performance and the fulfilment of the requirements, encompassing revisit time, number of acquisitions, observation viewing geometry and swath properties. This paper presents a comprehensive overview of the Sentinel-3 orbit selection, starting from coverage requirements and highlighting the close interaction with the instrument design activity.
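
    The repeat-cycle notation fixes the ground-track geometry directly: 14 + 7/27 revolutions per day over a 27-day cycle gives 14 × 27 + 7 = 385 distinct ground tracks, so the equatorial track spacing is roughly Earth's circumference divided by 385. A quick check (the circumference value is an assumed round number):

        revs_per_day = 14 + 7 / 27                  # Sentinel-3 repeat pattern
        cycle_days = 27
        tracks = round(revs_per_day * cycle_days)   # 385 orbits per cycle
        earth_circumference_km = 40075.0            # assumed equatorial value
        print(tracks, earth_circumference_km / tracks)  # 385 tracks, ~104 km spacing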

  10. Using LDPC Code Constraints to Aid Recovery of Symbol Timing

    NASA Technical Reports Server (NTRS)

    Jones, Christopher; Villasenor, John; Lee, Dong-U; Vales, Esteban

    2008-01-01

    A method of utilizing information available in the constraints imposed by a low-density parity-check (LDPC) code has been proposed as a means of aiding the recovery of symbol timing in the reception of a binary-phase-shift-keying (BPSK) signal representing such a code in the presence of noise, timing error, and/or Doppler shift between the transmitter and the receiver. This method and the receiver architecture in which it would be implemented belong to a class of timing-recovery methods and corresponding receiver architectures characterized as pilotless in that they do not require transmission and reception of pilot signals. Acquisition and tracking of a signal of the type described above have traditionally been performed upstream of, and independently of, decoding and have typically involved utilization of a phase-locked loop (PLL). However, the LDPC decoding process, which is iterative, provides information that can be fed back to the timing-recovery receiver circuits to improve performance significantly over that attainable in the absence of such feedback. Prior methods of coupling LDPC decoding with timing recovery had focused on the use of output code words produced as the iterations progress. In contrast, in the present method, one exploits the information available from the metrics computed for the constraint nodes of an LDPC code during the decoding process. In addition, the method involves the use of a waveform model that captures, better than do the waveform models of the prior methods, distortions introduced by receiver timing errors and transmitter/ receiver motions. An LDPC code is commonly represented by use of a bipartite graph containing two sets of nodes. In the graph corresponding to an (n,k) code, the n variable nodes correspond to the code word symbols and the n-k constraint nodes represent the constraints that the code places on the variable nodes in order for them to form a valid code word. The decoding procedure involves iterative computation of values associated with these nodes. A constraint node represents a parity-check equation using a set of variable nodes as inputs. A valid decoded code word is obtained if all parity-check equations are satisfied. After each iteration, the metrics associated with each constraint node can be evaluated to determine the status of the associated parity check. Heretofore, normally, these metrics would be utilized only within the LDPC decoding process to assess whether or not variable nodes had converged to a codeword. In the present method, it is recognized that these metrics can be used to determine accuracy of the timing estimates used in acquiring the sampled data that constitute the input to the LDPC decoder. In fact, the number of constraints that are satisfied exhibits a peak near the optimal timing estimate. Coarse timing estimation (or first-stage estimation as described below) is found via a parametric search for this peak. The present method calls for a two-stage receiver architecture illustrated in the figure. The first stage would correct large time delays and frequency offsets; the second stage would track random walks and correct residual time and frequency offsets. In the first stage, constraint-node feedback from the LDPC decoder would be employed in a search algorithm in which the searches would be performed in successively narrower windows to find the correct time delay and/or frequency offset. 
The second stage would include a conventional first-order PLL with a decision-aided timing-error detector that would utilize, as its decision aid, decoded symbols from the LDPC decoder. The method has been tested by means of computational simulations in cases involving various timing and frequency errors. The results of the simulations showed performance approaching that attained in the ideal case of perfect timing in the receiver.
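
    To make the constraint-metric idea concrete, the following is a minimal sketch — not the authors' receiver — of scoring candidate sampling phases by the number of satisfied parity checks. It uses a (7,4) Hamming code as a toy stand-in for an LDPC code; the half-sine pulse, noise level, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Parity-check matrix of the (7,4) Hamming code, a toy stand-in for an
# LDPC code; rows are constraint nodes, columns are variable nodes.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
n = H.shape[1]

# Enumerate the valid code words once (feasible only at this toy size).
codewords = []
for i in range(2 ** n):
    w = np.array([(i >> b) & 1 for b in range(n)])
    if not np.any((H @ w) % 2):
        codewords.append(w)

sps = 8                                           # samples per symbol
pulse = np.sin(np.pi * (np.arange(sps) + 0.5) / sps)  # half-sine pulse

def transmit(word, noise_std):
    """BPSK waveform (bit 0 -> +1, bit 1 -> -1) plus white noise."""
    symbols = 1.0 - 2.0 * word
    wave = (symbols[:, None] * pulse[None, :]).ravel()
    return wave + noise_std * rng.standard_normal(wave.size)

def satisfied_checks(hard_bits):
    """Number of parity-check equations satisfied by the hard decisions."""
    return int(np.sum((H @ hard_bits) % 2 == 0))

# Parametric search: score each candidate sampling phase by the mean
# number of satisfied constraints; the score peaks at the phase nearest
# the pulse peak, i.e. near the correct timing.
scores = np.zeros(sps)
for _ in range(400):
    wave = transmit(codewords[rng.integers(len(codewords))], noise_std=0.5)
    for off in range(sps):
        hard = (wave[off::sps][:n] < 0).astype(int)
        scores[off] += satisfied_checks(hard)
print("best sampling phase:", int(np.argmax(scores)))  # expect near sps // 2
```

    In a real receiver, this coarse search would be followed by the second-stage PLL tracking described above.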

  11. Implicit Formulation of Muscle Dynamics in OpenSim

    NASA Technical Reports Server (NTRS)

    Humphreys, Brad; Dembia, Chris; Lewandowski, Beth; Van Den Bogert, Antonie

    2017-01-01

    Astronauts lose bone and muscle mass during spaceflight. Exercise countermeasures are the primary method for counteracting bone and muscle mass loss in space. New spacecraft exercise device concepts are currently being developed for NASA's new crew exploration vehicle. The NASA Digital Astronaut Project (DAP) uses computational modeling to help determine whether the new exercise devices will be effective as countermeasures. The project is developing the ability to use predictive simulation to provide insight into the change in kinematics and kinetics with a change in device and gravitational environment (1-g versus 0-g). For example, in space exercise the subject's body weight must be applied in addition to the loads prescribed for musculoskeletal maintenance, and how and where these loads are applied directly impacts bone and tissue loads. Additionally, due to space vehicle structural requirements, exercise devices are often placed on vibration isolation systems, which changes the apparent impedance or stiffness of the device as seen by the user. Data collection under these conditions is often impractical and limited. Predictive simulation provides a virtual subject with which to perform studies, such as sensitivity to device loading and vibration isolation, without the need for laboratory kinematic or kinetic test data. Direct Collocation optimization provides an efficient means to perform task-based optimization and predictive modeling. It is relatively straightforward to structure a physical exercise task as a Direct Collocation mathematical formulation: perform a motion such that you start at an initial pose, achieve a given amount of deflection (e.g., a squat), return to the initial pose, and minimize a muscle activation cost. Direct Collocation is advantageous in that it does not require numerical integration to evaluate the objective function. Instead, the system dynamics are transformed to discrete time and the optimizer is constrained such that a solution is not considered valid unless the dynamic equations are satisfied at all time points; the simulation and optimization are effectively done simultaneously. Because of the implicit integration, time steps can be coarser than in a differential equation solver. In a gait scenario, this means that the model constraints and cost function are evaluated at 100 nodes in the gait cycle versus 10,000 integration steps in a variable-step forward dynamic simulation. Furthermore, no time is wasted on accurate simulations of movements that are far from the optimum. Constrained optimization algorithms require a Jacobian matrix that contains the partial derivatives of each of the dynamic constraints with respect to each of the state and control variables at all time points. This is a large but sparse matrix. An implicit dynamics formulation requires computation of the dynamic residuals f as a function of the states x, their derivatives, and the controls u: f(x, dx/dt, u) = 0. If the dynamics of the musculoskeletal system are formulated implicitly, the Jacobian elements are often available analytically, eliminating the need for numerical differentiation, which is computationally advantageous. Additionally, implicit formulations of musculoskeletal dynamics do not suffer from singularities arising from low-mass bodies, zero muscle activation, or other stiff system behavior.
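
    A minimal sketch of the implicit-residual idea, using first-order muscle activation dynamics da/dt = (u - a)/tau as a stand-in for full musculoskeletal dynamics. The backward-Euler discretization, time constant, and grid values are illustrative assumptions, not the DAP/OpenSim implementation; the point is that each defect's Jacobian entries are analytic and sparse.

```python
import numpy as np

TAU = 0.05  # activation time constant [s]; illustrative value

def residual(x, xdot, u):
    """Implicit dynamics f(x, xdot, u) = xdot - (u - x)/TAU = 0."""
    return xdot - (u - x) / TAU

def defects(x_nodes, u_nodes, h):
    """Backward-Euler defect per interval: f(x_{k+1}, (x_{k+1}-x_k)/h, u_{k+1})."""
    xdot = np.diff(x_nodes) / h
    return residual(x_nodes[1:], xdot, u_nodes[1:])

def defect_jacobian(n_nodes, h):
    """Analytic, sparse Jacobian of the defects w.r.t. the node states.
    Row k has only two nonzeros: d/dx_k = -1/h, d/dx_{k+1} = 1/h + 1/TAU."""
    J = np.zeros((n_nodes - 1, n_nodes))
    for k in range(n_nodes - 1):
        J[k, k] = -1.0 / h
        J[k, k + 1] = 1.0 / h + 1.0 / TAU
    return J

# A coarse 11-node grid: a collocation optimizer would drive these
# defects to zero as equality constraints while minimizing an
# activation cost over the controls u.
h, n_nodes = 0.1, 11
x = np.linspace(0.0, 0.8, n_nodes)   # candidate state trajectory
u = np.full(n_nodes, 0.5)            # candidate controls
print("defects:", defects(x, u, h))
print("Jacobian nonzeros per row: 2 of", n_nodes)
```

    With full musculoskeletal dynamics the residual and its partial derivatives are higher-dimensional, but the structure — analytic entries, two coupled nodes per defect row — is the same.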

  12. TOTAL ORE PROCESSING INTEGRATION AND MANAGEMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leslie Gertsch; Richard Gertsch

    2005-05-16

    The lessons learned from ore segregation test No. 3 were presented to Minntac Mine personnel during the reporting period. Ore was segregated by A-Factor, with low values going to Step 1&2 and high values going to Step 3. During the test, the mine maintained the best split possible for the given production and location constraints, and Step 1&2 A-Factor was lowered more than Step 3 was raised. The other ore qualities were not manipulated, but the segregation by A-Factor affected most of them: magnetic iron, coarse tails, fine tails, silica, and grind changed in response to the split. Segregation was achieved by adding ore from HIS to the Step 3 blend, lowering the amount of LC 1&2, and somewhat lowering the amount of LC 3&4. Conversely, Step 1&2 received less HIS with a corresponding increase in LC 1&2. The amount of IBC to both Steps was increased about one-third of the way into the test, and for about the center half of the test, LC 3&4 was reduced to both Steps. The most noticeable layer changes were, then: an increase in the HIS split; a decrease in the LC 1&2 split; adding IBC to both Steps; and lowering LC 3&4 to both Steps. Statistical analysis of the dataset collected during ordinary, non-segregated operation of the mine and mill is continuing. Graphical analysis of blast patterns according to drill monitor data was slowed by student classwork and is expected to resume after the semester ends in May.

  13. Balancing Healthy Meals and Busy Lives: Associations between Work, School, and Family Responsibilities and Perceived Time Constraints among Young Adults

    ERIC Educational Resources Information Center

    Pelletier, Jennifer E.; Laska, Melissa N.

    2012-01-01

    Objective: To characterize associations between perceived time constraints for healthy eating and work, school, and family responsibilities among young adults. Design: Cross-sectional survey. Setting: A large, Midwestern metropolitan region. Participants: A diverse sample of community college (n = 598) and public university (n = 603) students.…

  14. Discretionary Time of Chinese College Students: Activities and Impact of SARS-Induced Constraints on Choices

    ERIC Educational Resources Information Center

    Yang, He; Hutchinson, Susan; Zinn, Harry; Watson, Alan

    2011-01-01

    How people make choices about activity engagement during discretionary time is a topic of increasing interest to those studying quality of life issues. Assuming choices are made to maximize individual welfare, several factors are believed to influence these choices. Constraints theory from the leisure research literature suggests these choices are…

  15. Dependent Measure and Time Constraints Modulate the Competition between Conflicting Feature-Based and Rule-Based Generalization Processes

    ERIC Educational Resources Information Center

    Cobos, Pedro L.; Gutiérrez-Cobo, María J.; Morís, Joaquín; Luque, David

    2017-01-01

    In our study, we tested the hypothesis that feature-based and rule-based generalization involve different types of processes that may affect each other producing different results depending on time constraints and on how generalization is measured. For this purpose, participants in our experiments learned cue-outcome relationships that followed…

  16. Test Design and Speededness

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    2011-01-01

    A critical component of test speededness is the distribution of the test taker's total time on the test. A simple set of constraints on the item parameters in the lognormal model for response times is derived that can be used to control the distribution when assembling a new test form. As the constraints are linear in the item parameters, they can…
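
    The abstract is truncated, but the idea it describes — controlling the total-time distribution through item parameters — can be illustrated. A minimal sketch, assuming the standard lognormal response-time model (ln T_i normally distributed with time intensity beta_i, discrimination alpha_i, and test-taker speed tau); all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([3.8, 4.1, 3.9, 4.3, 4.0])   # item time intensities (log-seconds)
alpha = np.array([1.5, 1.2, 1.4, 1.1, 1.3])  # item discriminations
tau = 0.0                                    # reference test-taker speed

# Monte Carlo distribution of total test time for this candidate form:
# ln T_i ~ Normal(beta_i - tau, 1/alpha_i**2), total = sum of item times.
draws = rng.normal(beta - tau, 1.0 / alpha, size=(20000, beta.size))
total = np.exp(draws).sum(axis=1)
print("median total time [s]:", np.median(total))
print("95th percentile  [s]:", np.quantile(total, 0.95))

# Because the total-time distribution is governed by sums of item
# parameters, an assembly constraint such as sum(beta_i * x_i) <= B is
# linear in the 0/1 item-selection variables x_i of a test-assembly MIP.
```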

  17. The Timing of Island Effects in Nonnative Sentence Processing

    ERIC Educational Resources Information Center

    Felser, Claudia; Cunnings, Ian; Batterham, Claire; Clahsen, Harald

    2012-01-01

    Using the eye-movement monitoring technique in two reading comprehension experiments, this study investigated the timing of constraints on wh-dependencies (so-called island constraints) in first- and second-language (L1 and L2) sentence processing. The results show that both L1 and L2 speakers of English are sensitive to extraction islands during…

  18. Optimization-based power management of hybrid power systems with applications in advanced hybrid electric vehicles and wind farms with battery storage

    NASA Astrophysics Data System (ADS)

    Borhan, Hoseinali

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility higher than that of their individual components. The power or energy management control, the "brain" of these hybrid systems, adaptively determines the power split between the multiple subsystems based on the power demand and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (also known as Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management control. More importantly, the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focuses on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "if-then-else" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy, and a gap exists between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solving this optimal control problem in real time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; the computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying, nonlinear hard constraints that system variables must satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by-step improvements in fuel economy while keeping the algorithmic computational requirements in a real-time implementable framework. More specifically, a linear time-varying model predictive control approach is employed first, using sequential quadratic programming to find sub-optimal solutions to the power management problem. Next, the objective function is further refined and broken into a short-horizon and a long-horizon segment; the latter is approximated as a function of the state using the connection between the Pontryagin minimum principle and the Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver, and the fuel economy is further improved.
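
    A toy illustration of the receding-horizon power split solved by dynamic programming over a discretized battery state of charge. The fuel-rate map, limits, and demand preview below are invented for illustration and are not the dissertation's production-model setup.

```python
import numpy as np

SOC_GRID = np.linspace(0.4, 0.8, 41)     # feasible state-of-charge window
P_BATT = np.linspace(-20e3, 20e3, 21)    # battery power candidates [W]
E_BATT = 5e3 * 3600.0                    # battery energy capacity [J]
DT = 1.0                                 # time step [s]

def fuel_rate(p_eng):
    """Toy convex engine fuel map [g/s]; no fuel below zero engine power."""
    p = np.maximum(p_eng, 0.0)
    return 1e-4 * p + 2e-9 * p ** 2

def dp_power_split(p_demand, soc0):
    """Backward DP over the preview horizon; returns first-step battery power."""
    N = len(p_demand)
    cost = np.zeros(SOC_GRID.size)              # zero terminal cost
    policy = np.zeros((N, SOC_GRID.size))
    for k in reversed(range(N)):
        new_cost = np.full(SOC_GRID.size, np.inf)
        for i, soc in enumerate(SOC_GRID):
            soc_next = soc - P_BATT * DT / E_BATT
            ok = (soc_next >= SOC_GRID[0]) & (soc_next <= SOC_GRID[-1])
            stage = fuel_rate(p_demand[k] - P_BATT) * DT  # engine covers the rest
            total = stage + np.interp(soc_next, SOC_GRID, cost)
            total[~ok] = np.inf                 # enforce SOC hard constraints
            j = int(np.argmin(total))
            new_cost[i], policy[k, i] = total[j], P_BATT[j]
        cost = new_cost
    return np.interp(soc0, SOC_GRID, policy[0])

# Receding horizon: re-solve each step with the newest demand preview,
# apply only the first control, then shift the horizon forward.
demand = np.array([15e3, 30e3, 25e3, 10e3, 5e3])  # demand preview [W]
print("battery power command [W]:", dp_power_split(demand, soc0=0.6))
```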
Simplifying academic assumptions are kept to a minimum throughout this work, thanks to close collaboration with research scientists at Ford's research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
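
    A sketch of the two-stage structure under strong simplifying assumptions: stage 1 here uses a crude closed-form tracking profile in place of a constrained optimization, and the Ah-throughput proxy and its weights are illustrative, not the dissertation's battery life model.

```python
import numpy as np

DT, E_BATT = 60.0, 10e3 * 3600.0          # step [s], battery capacity [J]
wind = np.array([4e3, 6e3, 3e3, 5e3])     # forecast wind power [W]
demand = np.array([5e3, 5e3, 5e3, 5e3])   # operator power demand [W]

def stage1(soc0, soc_end):
    """Tracking profile landing on the requested terminal state of charge.
    (A real implementation would solve a constrained QP here.)"""
    p_batt = demand - wind                           # ideal tracking power
    energy_err = (soc0 - soc_end) * E_BATT - np.sum(p_batt) * DT
    p_batt = p_batt + energy_err / (len(wind) * DT)  # spread the correction
    err = np.sum((wind + p_batt - demand) ** 2)      # resulting power error
    return p_batt, err

def ah_throughput(p_batt, weight=1.5):
    """Weighted Ah-throughput proxy for battery wear (discharge weighted more)."""
    w = np.where(p_batt > 0, weight, 1.0)
    return np.sum(w * np.abs(p_batt)) * DT / E_BATT

# Stage 2: screen the family of stage-1 solutions, one per candidate
# terminal SOC, and pick the best balance of power error and wear.
candidates = []
for soc_end in np.linspace(0.4, 0.8, 9):
    p_batt, err = stage1(soc0=0.6, soc_end=soc_end)
    candidates.append((err + 1e4 * ah_throughput(p_batt), soc_end, p_batt))
cost, soc_end, p_batt = min(candidates, key=lambda c: c[0])
print("chosen terminal SOC:", round(float(soc_end), 2))
```

    The modularity claimed in the abstract is visible here: either stage can be swapped for a more elaborate model without changing the other.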

  19. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    PubMed

    Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda

    2011-03-15

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.
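
    A least-squares sketch in the spirit of the first two steps — inferring a "ranking potential" per individual and minimizing network-wide error in those potentials. It is not the authors' three-step method, and the conflict counts below are fabricated toy data.

```python
import numpy as np

# wins[i, j] = number of conflicts individual i won against individual j
wins = np.array([[0, 8, 6, 7],
                 [2, 0, 5, 6],
                 [1, 3, 0, 4],
                 [1, 2, 2, 0]])
n = wins.shape[0]

# One equation per observed dyad: s_i - s_j ~ empirical dominance log-odds,
# weighted by the number of conflicts (handles sampling heterogeneity crudely).
rows, targets, weights = [], [], []
for i in range(n):
    for j in range(i + 1, n):
        total = wins[i, j] + wins[j, i]
        if total == 0:
            continue                              # unobserved dyad: no equation
        p = (wins[i, j] + 0.5) / (total + 1.0)    # smoothed win rate
        row = np.zeros(n)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        targets.append(np.log(p / (1 - p)))
        weights.append(total)

A = np.array(rows) * np.sqrt(weights)[:, None]
b = np.array(targets) * np.sqrt(weights)
A = np.vstack([A, np.ones(n)])                    # fix the gauge: sum(s) = 0
b = np.append(b, 0.0)
s, *_ = np.linalg.lstsq(A, b, rcond=None)
print("rank order (most dominant first):", np.argsort(-s))
```

    Transitivity is encouraged here only implicitly, through the shared scalar potentials; the authors' method treats it as an explicit constraint and adds a third step for confidence bounds.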

  20. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    NASA Astrophysics Data System (ADS)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective, or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks, and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, together with discussions of advanced sampling methodologies, are presented to give a glimpse of the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges, and shortcomings associated with the use of surrogate models by aircraft industrial aerodynamicists, despite their increased interest among the research communities.
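
    A minimal sketch of the workflow the review surveys — a Latin hypercube design of experiments plus a radial-basis-function surrogate — using SciPy. The "expensive model" below is a toy analytic stand-in for a full-order aerodynamic solver, and the variable ranges are illustrative.

```python
import numpy as np
from scipy.stats import qmc
from scipy.interpolate import RBFInterpolator

def expensive_model(x):
    """Stand-in for a full-order solver: x columns are (Mach, angle of attack)."""
    mach, alpha = x[:, 0], x[:, 1]
    return np.sin(3 * mach) * alpha + 0.1 * alpha ** 2

# Design of experiments: 40 Latin hypercube samples over the 2-D space.
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(40), l_bounds=[0.3, 0.0], u_bounds=[0.9, 10.0])
y = expensive_model(X)

# Fit a cheap surrogate, then validate it on fresh sample points; the
# surrogate can now answer many queries at negligible cost.
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
X_test = qmc.scale(sampler.random(10), [0.3, 0.0], [0.9, 10.0])
rmse = np.sqrt(np.mean((surrogate(X_test) - expensive_model(X_test)) ** 2))
print("validation RMSE:", rmse)
```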
