Sample records for constrained minimum cost

  1. Minimum-Cost Aircraft Descent Trajectories with a Constrained Altitude Profile

    NASA Technical Reports Server (NTRS)

    Wu, Minghong G.; Sadovsky, Alexander V.

    2015-01-01

    An analytical formula is derived for the speed profile that accrues minimum cost during an aircraft descent with a constrained altitude profile. The optimal speed profile first reaches a certain speed, called the minimum-cost speed, as quickly as possible using an appropriate extreme value of thrust. It then stays at the minimum-cost speed as long as possible before switching to an extreme value of thrust for the remainder of the descent. The formula is applied to an actual arrival route, and its sensitivity to winds and to airlines' business objectives is analyzed.
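
    The reach-hold-switch structure described above can be illustrated with a toy kinematic sketch in Python. All numbers (speeds, acceleration magnitude, descent distance) are illustrative assumptions, not values from the paper, and a real descent would integrate point-mass dynamics along the constrained altitude profile rather than simple one-dimensional kinematics:

        import numpy as np

        def descent_speed_profile(v0, v_mc, v_end, a_max, d_total, dt=1.0):
            """Toy 'reach, hold, switch' speed profile over distance d_total (m).
            v0: initial speed, v_mc: hypothetical minimum-cost speed,
            v_end: final speed (all m/s); a_max: extreme-thrust accel (m/s^2)."""
            # Distance consumed by the final transition v_mc -> v_end at a_max.
            d_final = abs(v_end**2 - v_mc**2) / (2 * a_max)
            s, v, profile = 0.0, v0, []
            while s < d_total:
                target = v_end if s >= d_total - d_final else v_mc
                v += np.clip(target - v, -a_max * dt, a_max * dt)
                s += v * dt
                profile.append((s, v))
            return profile

        # Decelerate from ~118 m/s, hold 95 m/s, finish the descent at 70 m/s.
        prof = descent_speed_profile(118.0, 95.0, 70.0, 0.5, 60_000.0)
        print(f"{len(prof)} time steps, final speed {prof[-1][1]:.1f} m/s")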

  2. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals, and a positive constant D_0, we seek a minimum-cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  3. Determining the Optimal Solution for Quadratically Constrained Quadratic Programming (QCQP) on Energy-Saving Generation Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Lesmana, E.; Chaerani, D.; Khansa, H. N.

    2018-03-01

    Energy-Saving Generation Dispatch (ESGD) is a scheme introduced by the Chinese government in an attempt to minimize the CO2 emissions produced by power plants. The scheme is motivated by global warming, which is primarily caused by excess CO2 in the Earth's atmosphere: while the need for electricity is absolute, the plants producing it are mostly thermal power plants, which emit large amounts of CO2. Several approaches to fulfilling this scheme have been proposed; one of them, based on Minimum Cost Flow, results in a Quadratically Constrained Quadratic Programming (QCQP) formulation. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using Lagrange's multiplier method.
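
    For the special case of a quadratic cost with linear equality constraints, the Lagrange multiplier method amounts to solving a single KKT linear system. The sketch below uses a hypothetical two-generator dispatch; the coefficients are made up, and the full QCQP in the paper has quadratic constraints, which make the resulting KKT system nonlinear:

        import numpy as np

        # Toy dispatch: two generators with quadratic costs must meet 100 MW.
        Q = np.diag([0.10, 0.16])        # quadratic cost coefficients
        c = np.array([20.0, 18.0])       # linear cost coefficients
        A = np.array([[1.0, 1.0]])       # power balance: g1 + g2 = demand
        b = np.array([100.0])

        # Lagrange / KKT conditions for min (1/2)x'Qx + c'x s.t. Ax = b:
        #     [Q  A'] [x     ]   [-c]
        #     [A  0 ] [lambda] = [ b]
        K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, np.concatenate([-c, b]))
        x, lam = sol[:2], sol[2]
        print("dispatch:", x.round(2), "multiplier (shadow price):", round(lam, 2))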

  4. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

    A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to the power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are treated as operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of any diseconomy of scale.

  5. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.

  6. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to lie within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. Finally, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics, including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
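
    The nominal (non-robust) chance constraint with Gaussian uncertainty has a standard deterministic equivalent, which the robust formulation then hardens by taking the worst case over an uncertainty set for the distribution parameters. A quick numerical check of the nominal reformulation, written with generic symbols rather than the paper's notation:

        import numpy as np
        from scipy.stats import norm

        # Nominal chance constraint with Gaussian uncertainty w ~ N(mu, s^2):
        #   P(f + w <= fmax) >= 1 - eps   <=>   f + mu + z_{1-eps}*s <= fmax,
        # where z_{1-eps} is the standard normal quantile. The robust (RCC)
        # version instead guards against all (mu, s) in an uncertainty set.
        mu, s, eps, fmax = 2.0, 3.0, 0.05, 20.0
        z = norm.ppf(1 - eps)
        f_limit = fmax - mu - z * s      # largest deterministic f still feasible
        w = np.random.default_rng(0).normal(mu, s, 1_000_000)
        print(f"f limit {f_limit:.3f}, "
              f"empirical P = {np.mean(f_limit + w <= fmax):.4f}")  # ~ 0.95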

  7. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
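
    The core computation, choosing a scheduled arrival time that minimizes expected cost subject to a punctuality floor, can be sketched with a truncated normal transit time. The distribution parameters, cost weights, and 90% punctuality level below are illustrative assumptions, and the paper's general cost-benefit functions are replaced here by simple linear earliness/lateness penalties:

        import numpy as np
        from scipy.stats import truncnorm

        # Transit time T (minutes): normal(60, 8) truncated to [45, 90].
        lo, hi, mu, sd = 45.0, 90.0, 60.0, 8.0
        rv = truncnorm((lo - mu) / sd, (hi - mu) / sd, loc=mu, scale=sd)
        t = np.linspace(lo, hi, 2001)
        pdf = rv.pdf(t)

        def expected_cost(t_sched, c_early=1.0, c_late=4.0):
            """Linear earliness/lateness penalties integrated against the pdf."""
            penalty = (c_early * np.maximum(t_sched - t, 0.0)
                       + c_late * np.maximum(t - t_sched, 0.0))
            return np.trapz(penalty * pdf, t)

        # Grid search over scheduled arrival times, subject to a punctuality
        # floor: P(T <= t_sched) >= 0.90.
        candidates = np.linspace(lo, hi, 451)
        feasible = candidates[rv.cdf(candidates) >= 0.90]
        best = min(feasible, key=expected_cost)
        print(f"scheduled arrival {best:.1f} min, "
              f"punctuality {rv.cdf(best):.3f}, "
              f"expected cost {expected_cost(best):.2f}")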

  8. Impact of longitudinal flying qualities upon the design of a transport with active controls

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Direct constrained parameter optimization was used to optimally size a medium range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed with relaxed static stability. Additionally, a number of handling-quality-related design constants were studied with respect to their impact on the design.

  9. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology from diets to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; addressing this would also help in implementation and in ensuring the feasibility of the suggested recommendations. The objective was to extend the previously used linear programming methodology from diet optimization to food formulation by adding consistency constraints, and to exemplify its usability in the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies of starch swelling in soft porridges. The new method was exemplified by formulating a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only, illustrating the challenge of formulating nutritious yet economically feasible foods from local ingredients. The high cost was driven by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including zinc and calcium salt supplements as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
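
    A minimal version of the formulation is an ordinary linear program: minimize ingredient cost subject to nutrient floors, a mass-balance equality, and a linearized consistency (swelling) window standing in for the paper's empirical starch-swelling constraint. All ingredient data below are hypothetical placeholders, not the Mozambican figures:

        from scipy.optimize import linprog

        # Hypothetical ingredients: cost per 100 g, and per-100 g contents of
        # energy (kcal), protein (g), zinc (mg); 'swell' is a linear proxy for
        # porridge consistency (the paper uses empirical starch-swelling data).
        #           maize   soy   sugar  zinc salt
        cost     = [0.05,  0.12,  0.08,  2.50]
        energy   = [365,   446,   387,   0]
        protein  = [9.4,   36.5,  0.0,   0]
        zinc     = [2.2,   4.9,   0.0,   50.0]
        swell    = [1.0,   0.4,   0.1,   0.0]

        # Nutrient floors per 100 g of dry mix, plus a consistency window.
        A_ub = [[-v for v in energy],    # energy   >= 380 kcal
                [-v for v in protein],   # protein  >= 12 g
                [-v for v in zinc],      # zinc     >= 4 mg
                swell,                   # swelling <= 0.9 (not too thick)
                [-v for v in swell]]     # swelling >= 0.5 (not too thin)
        b_ub = [-380, -12, -4, 0.9, -0.5]
        A_eq = [[1, 1, 1, 1]]            # ingredient fractions sum to one
        b_eq = [1]

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * 4)
        print("fractions:", res.x.round(3), "cost:", round(res.fun, 4))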

  10. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system, using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for the gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of being lumped into the cost function. This feature can be used to modify a control law to meet individual root-mean-square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.

  11. Can re-regulation reservoirs and batteries cost-effectively mitigate sub-daily hydropeaking?

    NASA Astrophysics Data System (ADS)

    Haas, J.; Nowak, W.; Anindito, Y.; Olivares, M. A.

    2017-12-01

    To compensate for mismatches between generation and load, hydropower plants frequently operate in strong hydropeaking schemes, which is harmful to the downstream ecosystem. Furthermore, new power market structures and variable renewable systems may exacerbate this behavior. Ecological constraints (minimum flows, maximum ramps) are frequently used to mitigate hydropeaking, but these stand in direct tradeoff with the operational flexibility required for integrating renewable technologies. Fortunately, there are also physical mitigation options (i.e., re-regulation reservoirs and batteries), but to date there are no studies of their cost-effectiveness for hydropeaking mitigation. This study aims to fill that gap. For this, we formulate an hourly mixed-integer linear optimization model to plan the weekly operation of a hydro-thermal-renewable power system from southern Chile. The opportunity cost of water (needed for this weekly scheduling) is obtained from a mid-term model solved with dynamic programming. We compare the current (unconstrained) hydropower operation with an ecologically constrained operation. The resulting cost increase is then contrasted with the annual payments necessary for the physical hydropeaking mitigation options. For highly constrained operations, both re-regulation reservoirs and batteries prove economically attractive for hydropeaking mitigation. For intermediately constrained scenarios, re-regulation reservoirs are still economic, whereas batteries can be a viable solution only if they become cheaper in the future. Given current cost projections, their break-even point (for hydropeaking mitigation) is expected within the next ten years. Finally, less stringent hydropeaking constraints do not justify physical mitigation measures, as the necessary flexibility can be provided by other power plants of the system.

  12. Economic evaluation of flying-qualities design criteria for a transport configured with relaxed static stability

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Direct constrained parameter optimization was used to optimally size a medium range transport for minimum direct operating cost. Several stability and control constraints were varied to study the sensitivity of the configuration to specifying the unaugmented flying qualities of transports designed to take maximum advantage of relaxed static stability augmentation systems. Additionally, a number of handling qualities related design constants were studied with respect to their impact on the design.

  13. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.

  14. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete. Thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen andmore » Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. In conclusion, the first approximation algorithm for DamkS has an approximation ratio of n-1/k-1, where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O (n δ), for some δ < 1/3.« less
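
    The unconstrained minimum s-t cut remains polynomial via max-flow, which makes the NP-completeness of the size-constrained variants notable. The sketch below computes a plain minimum cut with networkx and then brute-forces the at-least-k variant on the same toy graph (the exhaustive search is exponential and only for illustration):

        import networkx as nx
        from itertools import combinations

        # Toy directed graph with edge capacities.
        G = nx.DiGraph()
        G.add_weighted_edges_from([("s", "a", 2), ("s", "b", 2), ("a", "b", 1),
                                   ("a", "t", 3), ("b", "t", 3)],
                                  weight="capacity")

        cut_value, (s_side, t_side) = nx.minimum_cut(G, "s", "t")
        print("unconstrained min cut:", cut_value, sorted(s_side))  # 4, ['s']

        def cut_capacity(S):
            return sum(d["capacity"] for u, v, d in G.edges(data=True)
                       if u in S and v not in S)

        k = 3                      # require >= k vertices on the source side
        inner = [n for n in G if n not in ("s", "t")]
        best = min((frozenset({"s"}) | frozenset(extra)
                    for r in range(k - 1, len(inner) + 1)
                    for extra in combinations(inner, r)),
                   key=cut_capacity)
        print("min cut with |S| >=", k, ":", cut_capacity(best), sorted(best))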

  15. Tabu search algorithm for the distance-constrained vehicle routing problem with split deliveries by order.

    PubMed

    Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua

    2018-01-01

    The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) was studied. We show that the customer demand, which cannot be split in the classical VRP model, can only be split into discrete deliveries by order. A double-objective programming model is constructed, taking the minimum number of vehicles used and the minimum vehicle traveling cost as the first and second objectives, respectively. The model contains a series of constraints, such as single depot, single vehicle type, distance and load capacity limits, and split delivery by order. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and testing on examples shows the efficiency of the proposed algorithm. This paper focuses on constructing a double-objective mathematical programming model for DCVRPSDO and designing an adaptive tabu search algorithm (ATSA) with good performance for solving it. The performance of the ATSA is improved by adding several strategies to the search process: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, whereby the best solution must be feasible while the current solution may be infeasible, helps to balance the quality of the solution and the diversity of the neighborhood solutions; (e) an adaptive penalty mechanism helps the candidate solution move closer to the neighborhood of feasible solutions; and (f) a tabu-releasing strategy is used to transfer the current solution into a new neighborhood of a better solution.
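
    The ATSA ingredients listed above sit on top of a generic tabu-search skeleton: evaluate a neighborhood, forbid recently reversed moves, and accept the best non-tabu neighbor even when it is worse than the current solution. The sketch below shows that skeleton on a toy tour problem with a swap neighborhood; it is not the paper's split-delivery VRP model, and the tenure and iteration counts are arbitrary:

        import itertools
        import random

        def tour_cost(tour, dist):
            return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                       for i in range(len(tour)))

        def tabu_search(dist, iters=200, tenure=8, seed=0):
            """Generic tabu-search skeleton with a swap neighborhood: always
            move to the best non-tabu neighbor (even if worse), forbid the
            reverse move for `tenure` iterations, track the best-so-far."""
            rng = random.Random(seed)
            n = len(dist)
            cur = list(range(n))
            rng.shuffle(cur)
            best, best_cost = cur[:], tour_cost(cur, dist)
            tabu = {}                      # move -> iteration when it expires
            for it in range(iters):
                candidates = []
                for i, j in itertools.combinations(range(n), 2):
                    if tabu.get((i, j), -1) >= it:
                        continue           # move is tabu, skip it
                    nbr = cur[:]
                    nbr[i], nbr[j] = nbr[j], nbr[i]
                    candidates.append((tour_cost(nbr, dist), (i, j), nbr))
                cost, move, cur = min(candidates)
                tabu[move] = it + tenure   # forbid undoing this swap for a while
                if cost < best_cost:
                    best, best_cost = cur[:], cost
            return best, best_cost

        rng = random.Random(42)
        pts = [(rng.random(), rng.random()) for _ in range(12)]
        dist = [[((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 for xb, yb in pts]
                for xa, ya in pts]
        tour, cost = tabu_search(dist)
        print(f"best tour cost: {cost:.3f}")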

  16. Aircraft Optimization for Minimum Environmental Impact

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas; Kroo, Ilan M.

    2001-01-01

    The objective of this research is to investigate the tradeoff between operating cost and environmental acceptability of commercial aircraft. This involves optimizing the aircraft design and mission to minimize operating cost while constraining exterior noise and emissions. Growth in air traffic and airport neighboring communities has resulted in increased pressure to severely penalize airlines that do not meet strict local noise and emissions requirements. As a result, environmental concerns have become potent driving forces in commercial aviation. Traditionally, aircraft have been first designed to meet performance and cost goals, and adjusted to satisfy the environmental requirements at given airports. The focus of the present study is to determine the feasibility of including noise and emissions constraints in the early design of the aircraft and mission. This paper introduces the design tool and results from a case study involving a 250-passenger airliner.

  17. When Dijkstra Meets Vanishing Point: A Stereo Vision Approach for Road Detection.

    PubMed

    Zhang, Yigong; Su, Yingna; Yang, Jian; Ponce, Jean; Kong, Hui

    2018-05-01

    In this paper, we propose a vanishing-point constrained Dijkstra road model for road detection in a stereo-vision paradigm. First, the stereo camera is used to generate the u- and v-disparity maps of the road image, from which the horizon can be extracted. With the horizon and ground region constraints, we can robustly locate the vanishing point of the road region. Second, a weighted graph is constructed using all pixels of the image, and the detected vanishing point is treated as the source node of the graph. By computing a vanishing-point constrained Dijkstra minimum-cost map, where both the disparity and the gradient of the gray image are used to calculate the cost between two neighboring pixels, the problem of detecting road borders in the image is transformed into that of finding the two shortest paths that originate from the vanishing point and end at two pixels in the last row of the image. The proposed approach has been implemented and tested on 2600 grayscale images of different road scenes in the KITTI data set. The experimental results demonstrate that this training-free approach can detect the horizon, vanishing point, and road regions accurately and robustly, achieving promising performance.
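
    The minimum-cost map at the heart of the method is plain Dijkstra on a pixel graph seeded at the vanishing point. The sketch below runs it on a synthetic per-pixel cost array (a cheap 'road' band inside a noisy background) instead of the paper's disparity-plus-gray-gradient cost:

        import heapq
        import numpy as np

        def dijkstra_min_cost_map(cost, source):
            """Cumulative minimum-cost map over a 4-connected pixel grid:
            dist[p] is the cheapest path cost from `source` to pixel p."""
            h, w = cost.shape
            dist = np.full((h, w), np.inf)
            dist[source] = 0.0
            heap = [(0.0, source)]
            while heap:
                d, (r, c) = heapq.heappop(heap)
                if d > dist[r, c]:
                    continue                   # stale queue entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        nd = d + cost[nr, nc]  # enter-pixel cost convention
                        if nd < dist[nr, nc]:
                            dist[nr, nc] = nd
                            heapq.heappush(heap, (nd, (nr, nc)))
            return dist

        # Synthetic cost image: a cheap 'road' band inside a noisy background.
        rng = np.random.default_rng(0)
        cost = 5.0 + rng.random((60, 80))
        cost[:, 30:50] = 0.1
        dist = dijkstra_min_cost_map(cost, source=(0, 40))  # vanishing point
        print("cheapest last-row column:", int(np.argmin(dist[-1])))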

  18. Blind Channel Equalization Using Constrained Generalized Pattern Search Optimization and Reinitialization Strategy

    NASA Astrophysics Data System (ADS)

    Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles

    2008-12-01

    In this paper, we propose a globally convergent baud-spaced blind equalization method. The method is based on the application of both generalized pattern search optimization and channel surfing reinitialization. The cost function used, which is potentially unimodal, relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency-selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performance is obtained for constant modulus signals.

  19. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  1. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum weight design using simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In the semi-analytical approach used, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress is always less than the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weights obtained by the two methods are almost identical. The PSO method gives a profile with slightly less weight (6.9% less than SA), while both the PSO and SA methods are easy to implement and provide more flexibility compared with classical methods.

  2. Constrained binary classification using ensemble learning: an application to cost-efficient targeted PrEP strategies.

    PubMed

    Zheng, Wenjing; Balzer, Laura; van der Laan, Mark; Petersen, Maya

    2018-01-30

    Binary classification problems are ubiquitous in health and social sciences. In many cases, one wishes to balance two competing optimality considerations for a binary classifier. For instance, in resource-limited settings, a human immunodeficiency virus prevention program based on offering pre-exposure prophylaxis (PrEP) to select high-risk individuals must balance the sensitivity of the binary classifier in detecting future seroconverters (and hence offering them PrEP regimens) with the total number of PrEP regimens that is financially and logistically feasible for the program. In this article, we consider a general class of constrained binary classification problems wherein the objective function and the constraint are both monotonic with respect to a threshold. These include the minimization of the rate of positive predictions subject to a minimum sensitivity, the maximization of sensitivity subject to a maximum rate of positive predictions, and the Neyman-Pearson paradigm, which minimizes the type II error subject to an upper bound on the type I error. We propose an ensemble approach to these binary classification problems based on the Super Learner methodology. This approach linearly combines a user-supplied library of scoring algorithms, with combination weights and a discriminating threshold chosen to minimize the constrained optimality criterion. We then illustrate the application of the proposed classifier to develop an individualized PrEP targeting strategy in a resource-limited setting, with the goal of minimizing the number of PrEP offerings while achieving a minimum required sensitivity. This proof-of-concept data analysis uses baseline data from the ongoing Sustainable East Africa Research in Community Health study. Copyright © 2017 John Wiley & Sons, Ltd.
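
    Because both the objective and the constraint are monotone in the threshold, the constrained threshold can be found by a simple sweep once scores are fixed; the Super Learner weighting of the score library is not reproduced here. A sketch on synthetic risk scores (the data and the 20% budget are made up):

        import numpy as np

        def constrained_threshold(scores, labels, max_positive_rate):
            """Maximize sensitivity subject to a cap on the rate of positive
            predictions (the PrEP budget analogue). Both quantities are
            monotone in the threshold, so a descending sweep suffices."""
            best = None
            for thr in np.sort(np.unique(scores))[::-1]:
                preds = scores >= thr
                pos_rate = preds.mean()
                if pos_rate > max_positive_rate:
                    break                  # lower thresholds only cost more
                sens = preds[labels == 1].mean()
                best = (thr, sens, pos_rate)
            return best

        rng = np.random.default_rng(1)
        labels = (rng.random(1000) < 0.1).astype(int)  # 10% seroconverters
        scores = rng.normal(labels * 1.2, 1.0)         # imperfect risk scores
        thr, sens, rate = constrained_threshold(scores, labels,
                                                max_positive_rate=0.2)
        print(f"threshold {thr:.2f}: sensitivity {sens:.2f} "
              f"at positive rate {rate:.2f}")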

  3. Survival of primary condylar-constrained total knee arthroplasty at a minimum of 7 years.

    PubMed

    Maynard, Lance M; Sauber, Timothy J; Kostopoulos, Vasileios K; Lavigne, Gregory S; Sewecke, Jeffrey J; Sotereanos, Nicholas G

    2014-06-01

    The purpose of the present study is to retrospectively analyze clinical and radiographic outcomes in primary constrained condylar knee arthroplasty at a minimum follow-up of 7 years. Given the concern for early aseptic loosening in constrained implants, we focused on this outcome. Our cohort consists of 127 constrained condylar knees. The mean age of patients in the study was 68.3 years, with a mean follow-up of 110.7 months. The diagnosis was primary osteoarthritis in 92%. There were four periprosthetic distal femur fractures, with a rate of revision of 0.8%. No implants were revised for aseptic loosening. Kaplan-Meier survivorship analysis with removal of any component as the end point revealed that the 10-year rate of survival of the primary CCK was 97.6% (95% CI, 94%-100%). Copyright © 2014. Published by Elsevier Inc.

  4. Spectrum and orbit conservation as a factor in future mobile satellite system design

    NASA Technical Reports Server (NTRS)

    Bowen, Robert R.

    1990-01-01

    Access to the radio spectrum and geostationary orbit is essential to current and future mobile satellite systems. This access is difficult to obtain for current systems, and may be even more so for larger future systems. In this environment, satellite systems that minimize the amount of spectrum orbit resource required to meet a specific traffic requirement are essential. Several spectrum conservation techniques are discussed, some of which are complementary to designing the system at minimum cost. All may need to be implemented to the limits of technological feasibility if network growth is not to be constrained because of the lack of available spectrum-orbit resource.

  5. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  6. Energetic tradeoffs control the size distribution of aquatic mammals

    NASA Astrophysics Data System (ADS)

    Gearty, William; McClain, Craig R.; Payne, Jonathan L.

    2018-04-01

    Four extant lineages of mammals have invaded and diversified in the water: Sirenia, Cetacea, Pinnipedia, and Lutrinae. Most of these aquatic clades are larger bodied, on average, than their closest land-dwelling relatives, but the extent to which potential ecological, biomechanical, and physiological controls contributed to this pattern remains untested quantitatively. Here, we use previously published data on the body masses of 3,859 living and 2,999 fossil mammal species to examine the evolutionary trajectories of body size in aquatic mammals through both comparative phylogenetic analysis and examination of the fossil record. Both methods indicate that the evolution of an aquatic lifestyle is driving three of the four extant aquatic mammal clades toward a size attractor at ~500 kg. The existence of this body size attractor and the relatively rapid selection toward, and limited deviation from, this attractor rule out most hypothesized drivers of size increase. These three independent body size increases and a shared aquatic optimum size are consistent with control by differences in the scaling of energetic intake and cost functions with body size between the terrestrial and aquatic realms. Under this energetic model, thermoregulatory costs constrain minimum size, whereas limitations on feeding efficiency constrain maximum size. The optimum size occurs at an intermediate value where thermoregulatory costs are low but feeding efficiency remains high. Rather than being released from size pressures, water-dwelling mammals are driven and confined to larger body sizes by the strict energetic demands of the aquatic medium.

  7. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Publications (published) during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Variable Projection Method for Blind Deconvolution"; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Numerical Optimization Methods for Blind Deconvolution," Numerical Algorithms, volume 65, issue 1.

  8. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented in two MATLAB programs to solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared at the website www.ptomethod.org. PMID:26678849

  9. Approximate error conjugation gradient minimization methods

    DOEpatents

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  10. Minimum energy control and optimal-satisfactory control of Boolean control network

    NASA Astrophysics Data System (ADS)

    Li, Fangfei; Lu, Xiwen

    2013-12-01

    In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, designs for minimum energy, constrained minimum energy, and optimal-satisfactory control of Boolean control networks are given. A numerical example is presented to illustrate the efficiency of the obtained results.
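
    The Floyd's-algorithm step reduces minimum-energy control to an all-pairs shortest path computation on the state-transition graph, with one-step control energies as edge weights. A sketch on a hypothetical four-state graph (the energies are made up; a real Boolean control network would derive these transitions from its semi-tensor-product state-space form):

        import numpy as np

        INF = np.inf
        # Hypothetical one-step control energies between four network states.
        E = np.array([[0.0, 2.0, INF, INF],
                      [INF, 0.0, 1.0, 5.0],
                      [INF, INF, 0.0, 1.0],
                      [INF, INF, INF, 0.0]])

        dist = E.copy()
        for k in range(len(E)):          # Floyd-Warshall relaxation
            dist = np.minimum(dist, dist[:, k:k + 1] + dist[k:k + 1, :])

        print("minimum energy 0 -> 3:", dist[0, 3])   # 4.0 via 0 -> 1 -> 2 -> 3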

  11. An indirect method for numerical optimization using the Kreisselmeier-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT (sequential unconstrained minimization technique) algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and the fact that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
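
    The envelope referred to above is the Kreisselmeier-Steinhauser function, a smooth, conservative upper bound on the maximum of the function set that tightens as the draw-down parameter rho grows. A numerically stable form (the shifted version avoids overflow in the exponentials):

        import numpy as np

        def ks_envelope(g, rho=50.0):
            """Kreisselmeier-Steinhauser envelope of the function values g:
            a smooth upper bound on max(g) that tightens as rho grows. The
            shifted form avoids overflow in exp() for large rho."""
            gmax = np.max(g)
            return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

        g = np.array([-0.30, 0.10, 0.05])   # reduced objectives + constraints
        for rho in (5.0, 50.0, 500.0):
            print(rho, round(ks_envelope(g, rho), 4))  # -> max(g) = 0.10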

  12. Dispatching power system for preventive and corrective voltage collapse problem in a deregulated power system

    NASA Astrophysics Data System (ADS)

    Alemadi, Nasser Ahmed

    Deregulation has brought opportunities for increasing the efficiency of production and delivery and for reducing costs to customers. Deregulation has also brought great challenges in providing the reliability and security customers have come to expect and demand from the electrical delivery system. One of the challenges in the deregulated power system is voltage instability, which has become the principal constraint on power system operation for many utilities. Voltage instability is a unique problem because it can produce an uncontrollable, cascading instability that results in blackout for a large region or an entire country. In this work we define a system of advanced analytical methods and tools for secure and efficient operation of the power system in the deregulated environment. The work consists of two modules: (a) a contingency selection module and (b) a security constrained optimization module. The contingency selection module to be used for voltage instability is the Voltage Stability Security Assessment and Diagnosis (VSSAD). VSSAD shows that each voltage control area and its reactive reserve basin describe a subsystem or agent that has a unique voltage instability problem. VSSAD identifies each such agent, assesses proximity to voltage instability for each agent, and ranks voltage instability agents for each contingency simulated. Contingency selection and ranking for each agent are also performed, as is diagnosis of where, why, when, and what can be done to cure voltage instability for each equipment outage and transaction change combination that has no load flow solution. The security constrained optimization module developed solves a minimum control solvability problem, which obtains the reactive reserves, through action of the voltage control devices that VSSAD determines are needed in each agent, required to obtain a solution of the load flow. VSSAD makes a physically impossible recommendation of adding reactive generation capability to specific generators to allow a load flow solution to be obtained; the minimum control solvability problem can instead obtain a solution of the load flow without curtailing transactions that shed load and generation as recommended by VSSAD. The minimum control solvability problem is implemented as a corrective control that achieves the above objectives using minimum control changes. The control includes: (1) voltage setpoints at generator bus voltage terminals; (2) under-load tap changer tap positions and switchable shunt capacitors; and (3) active generation at generator buses. The minimum control solvability problem uses the VSSAD recommendation to obtain a feasible stable starting point but completely eliminates the impossible or onerous recommendation made by VSSAD. This thesis reviews the capabilities of Voltage Stability Security Assessment and Diagnosis and how it can be used to implement a contingency selection module for the Open Access System Dispatch (OASYDIS). The OASYDIS will also use the corrective control computed by security constrained dispatch. The corrective control is computed off line and stored for each contingency that produces voltage instability; it is triggered and implemented to correct the voltage instability in the agent experiencing voltage instability only after the equipment outage or operating changes predicted to produce voltage instability have occurred. The advantages of and the requirements to implement the corrective control are also discussed.

  13. Routing Algorithm based on Minimum Spanning Tree and Minimum Cost Flow for Hybrid Wireless-optical Broadband Access Network

    NASA Astrophysics Data System (ADS)

    Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen

    2012-03-01

    In order to minimize the average end-to-end delay for data transport in a hybrid wireless-optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model, and the corresponding algorithm procedures are given. To verify the effectiveness of the MSTMCF algorithm, extensive simulations based on OWNS were carried out under different types of traffic sources.
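
    The two stages of such an algorithm can be sketched directly with networkx: a minimum spanning tree fixes a low-cost backbone, then a minimum-cost flow routes traffic over it. The topology, link weights, capacities, and demands below are invented for illustration and are not taken from the paper:

        import networkx as nx

        # Stage 1: minimum spanning tree over the wireless routers gives a
        # low-cost backbone (weights are invented link costs).
        G = nx.Graph()
        G.add_weighted_edges_from([("r1", "r2", 4), ("r1", "r3", 1),
                                   ("r2", "r3", 2), ("r2", "gw", 5),
                                   ("r3", "gw", 3)])
        mst = nx.minimum_spanning_tree(G)
        print("backbone:", sorted(mst.edges(data="weight")))

        # Stage 2: route traffic over the backbone as a minimum-cost flow;
        # each router injects load that the optical gateway must absorb.
        F = nx.DiGraph()
        for u, v, w in mst.edges(data="weight"):
            F.add_edge(u, v, weight=w, capacity=10)
            F.add_edge(v, u, weight=w, capacity=10)
        for router, load in (("r1", 3), ("r2", 2), ("r3", 1)):
            F.nodes[router]["demand"] = -load     # negative demand = supply
        F.nodes["gw"]["demand"] = 6               # gateway sinks all traffic

        flow = nx.min_cost_flow(F)
        print({u: {v: f for v, f in d.items() if f} for u, d in flow.items()})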

  14. Minimum cost to control bovine tuberculosis in cow-calf herds

    PubMed Central

    Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.

    2014-01-01

    Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601

  15. Spatial optimization of cropping pattern for sustainable food and biofuel production with minimal downstream pollution.

    PubMed

    Femeena, P V; Sudheer, K P; Cibin, R; Chaubey, I

    2018-04-15

    Biofuel has emerged as a substantial source of energy in many countries. In order to avoid the 'food versus fuel' competition arising from grain-based ethanol production, the United States has passed regulations that require second-generation or cellulosic biofeedstocks to be used for the majority of biofuel production by 2022. Agricultural residue, such as corn stover, is currently the largest source of cellulosic feedstock. However, increased harvesting of crop residue may lead to increased application of fertilizers in order to recover the soil nutrients lost with residue removal. Alternatively, the introduction of less fertilizer-intensive perennial grasses such as switchgrass (Panicum virgatum L.) and Miscanthus (Miscanthus x giganteus Greef et Deu.) can provide a viable source for biofuel production. Even though these grasses are shown to reduce nutrient loads to a great extent, high production costs have constrained their wide adoption as a viable feedstock. Nonetheless, there is an opportunity to optimize feedstock production to meet bioenergy demand while improving water quality. This study presents a multi-objective simulation-optimization framework using the Soil and Water Assessment Tool (SWAT) and the Multi-Algorithm Genetically Adaptive Method (AMALGAM) to develop an optimal cropping pattern with minimum nutrient delivery and minimum biomass production cost. The computational time required for optimization was significantly reduced by loosely coupling SWAT with an external in-stream solute transport model. Optimization was constrained by food security and biofuel production targets that ensured no more than a 10% reduction in grain yield and at least 100 million gallons of ethanol production. A case study was carried out in the St. Joseph River Watershed, which covers 280,000 ha in the Midwest U.S. Results of the study indicated that the introduction of corn stover removal and perennial grass production reduces nitrate and total phosphorus loads without compromising food and biofuel production. Optimization runs yielded an optimal cropping pattern with 32% of the watershed area in stover removal, 15% in switchgrass, and 2% in Miscanthus. The optimal scenario resulted in a 14% reduction in nitrate and a 22% reduction in total phosphorus from the baseline. This framework can be used as an effective tool for making decisions regarding environmentally and economically sustainable strategies that minimize nutrient delivery at minimal biomass production cost, while simultaneously meeting food and biofuel production targets. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Missing Title

    NASA Astrophysics Data System (ADS)

    Cook, T. A.; Chakrabarti, S.; Bifano, T. G.; Lane, B.; Levine, B. M.; Shao, M.

    2004-05-01

    The study of extrasolar planets is one of the most exciting research endeavors of modern astrophysics. While the list of known planets continues to grow, no direct image of any extrasolar planet has been obtained to date. Ground-breaking radial velocity measurements have identified many potential targets but other measurements are needed to obtain physical parameters of the extrasolar planets. For example, for most extrasolar giant planets we only know their minimum projected mass (M sin i). Even a single image of one extrasolar planet will fully determine its orbital parameters and thus its true mass. A single image would also provide albedo information which would begin to constrain their atmospheric properties. This is the objective of PICTURE, a low-cost space mission specifically designed to obtain the first direct image of extrasolar giant planets.

  17. Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2011-01-01

    Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: of an information maximum to minimum. The progressive information loss is evident in accumulating mutations, disordered morphology, and functional decline characteristics of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.

  18. Optimizing conceptual aircraft designs for minimum life cycle cost

    NASA Technical Reports Server (NTRS)

    Johnson, Vicki S.

    1989-01-01

    A life cycle cost (LCC) module has been added to the FLight Optimization System (FLOPS), allowing the additional optimization variables of life cycle cost, direct operating cost, and acquisition cost. Extensive use of the methodology on short-, medium-, and medium-to-long-range aircraft has demonstrated that the system works well. Results from the study show that the optimization parameter has a definite effect on the aircraft, and that optimizing an aircraft for minimum LCC results in a different airplane than optimizing for minimum take-off gross weight (TOGW), fuel burned, direct operating cost (DOC), or acquisition cost. Additionally, the economic assumptions can have a strong impact on the configurations optimized for minimum LCC or DOC. Also, results show that advanced technology can be worthwhile, even if it results in higher manufacturing and operating costs. Examining the number of engines a configuration should have demonstrated a real payoff of including life cycle cost in the conceptual design process: the aircraft with minimum TOGW or minimum fuel burned did not always have the lowest life cycle cost when the number of engines was considered.

  19. 7 CFR 701.10 - Qualifying minimum cost of restoration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    Title 7 (Agriculture), 2010-01-01: Qualifying minimum cost of restoration. 7 CFR 701.10, Department of Agriculture, Agricultural Conservation Program; Emergency Conservation Program and Certain Related Programs Previously Administered Under This Part; § 701.10 Qualifying minimum cost of restoration...

  1. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constrained PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at the cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.

  2. Wildfire Suppression Costs for Canada under a Changing Climate

    PubMed Central

    Stocks, Brian J.; Gauthier, Sylvie

    2016-01-01

    Climate-influenced changes in fire regimes in northern temperate and boreal regions will have both ecological and economic ramifications. We examine possible future wildfire area burned and suppression costs using a recently compiled historical (i.e., 1980–2009) fire management cost database for Canada and several Intergovernmental Panel on Climate Change (IPCC) climate projections. Area burned was modelled as a function of a climate moisture index (CMI), and fire suppression costs then estimated as a function of area burned. Future estimates of area burned were generated from projections of the CMI under two emissions pathways for four General Circulation Models (GCMs); these estimates were constrained to ecologically reasonable values by incorporating a minimum fire return interval of 20 years. Total average annual national fire management costs are projected to increase to just under $1 billion (a 60% real increase from the 1980–2009 period) under the low greenhouse gas emissions pathway and $1.4 billion (119% real increase from the base period) under the high emissions pathway by the end of the century. For many provinces, annual costs that are currently considered extreme (i.e., occur once every ten years) are projected to become commonplace (i.e., occur once every two years or more often) as the century progresses. It is highly likely that evaluations of current wildland fire management paradigms will be necessary to avoid drastic and untenable cost increases as the century progresses. PMID:27513660

  3. A Bootstrap Approach to an Affordable Exploration Program

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.

    2011-01-01

    This paper examines the potential to build an affordable, sustainable exploration program by adopting an approach that requires investing in technologies that can be used to build a space infrastructure from very modest initial capabilities. Human exploration has had a history of flight programs with high development and operational costs. Since Apollo, human exploration has had very constrained budgets, and they are expected to be constrained in the future. Due to their high operations costs it becomes necessary to consider retiring established space facilities in order to move on to the next exploration challenge. This practice may save cost in the near term, but it does so by sacrificing part of the program's future architecture. Human exploration also has a history of sacrificing fully functional flight hardware to achieve mission objectives. An affordable exploration program cannot be built when it involves billions of dollars of discarded space flight hardware; instead, the program must emphasize preserving its high-value space assets and building a suitable permanent infrastructure. Further, this infrastructure must reduce operational and logistics cost. The paper examines the importance of achieving a high level of logistics independence by minimizing resource consumption, minimizing the dependency on external logistics, and maximizing the utility of resources available. The approach involves the development and deployment of a core suite of technologies that have minimum initial needs yet are able to expand upon initial capability in an incremental bootstrap fashion. The bootstrap approach incrementally creates an infrastructure that grows, becomes self-sustaining, and eventually begins producing the energy, products, and consumable propellants that support human exploration. The bootstrap technologies involve new methods of delivering and manipulating energy and materials. These technologies will exploit the space environment, minimize dependencies, and minimize the need for imported resources. They will provide the widest range of utility in a resource-scarce environment and pave the way to an affordable exploration program.

  4. Advanced General Aviation Turbine Engine (GATE) concepts

    NASA Technical Reports Server (NTRS)

    Lays, E. J.; Murray, G. L.

    1979-01-01

    Concepts are discussed that project turbine engine cost savings through use of geometrically constrained components designed for low rotational speeds and low stress to permit manufacturing economies. Aerodynamic development of geometrically constrained components is recommended to maximize component efficiency. Conceptual engines, airplane applications, airplane performance, engine cost, and engine-related life cycle costs are presented. The powerplants proposed offer encouragement with respect to fuel efficiency and life cycle costs, and make possible remarkable airplane performance gains.

  5. Coding for Communication Channels with Dead-Time Constraints

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Hamkins, Jon

    2004-01-01

    Coding schemes have been designed and investigated specifically for optical and electronic data-communication channels in which information is conveyed via pulse-position modulation (PPM) subject to dead-time constraints. These schemes involve the use of error-correcting codes concatenated with codes denoted constrained codes. These codes are decoded using an iterative method. In pulse-position modulation, time is partitioned into frames of M slots of equal duration. Each frame contains one pulsed slot (all others are non-pulsed). For a given channel, the dead-time constraints are defined as a maximum and a minimum on the allowable time between pulses. For example, if a Q-switched laser is used to transmit the pulses, then the minimum allowable dead time is the time needed to recharge the laser for the next pulse. In the case of bits recorded on a magnetic medium, the minimum allowable time between pulses depends on the recording/playback speed and the minimum distance between pulses needed to prevent interference between adjacent bits during readout. The maximum allowable dead time for a given channel is the maximum time for which it is possible to satisfy the requirement to synchronize slots. In mathematical shorthand, the dead-time constraints for a given channel are represented by the pair of integers (d,k), where d is the minimum allowable number of zeroes between ones and k is the maximum allowable number of zeroes between ones. A system of the type to which the present schemes apply is represented by a binary-input, real-valued-output channel model. At the transmitting end, information bits are first encoded by use of an error-correcting code, then further encoded by use of a constrained code. Several constrained codes for channels subject to constraints of (d,infinity) have been investigated theoretically and computationally. The baseline codes chosen for purposes of comparison were simple PPM codes characterized by M-slot PPM frames separated by d-slot dead times.
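
    The (d,k) constraint is easy to state operationally. The helper below is mine, not from the paper; it checks a binary sequence against the run-length definition above, and the small example shows why M-slot PPM frames separated by d-slot dead times satisfy a (d,infinity) constraint.

      import math

      def satisfies_dk(bits, d, k=math.inf):
          """True if every pair of consecutive ones is separated by >= d and <= k zeros."""
          last_one = None
          for i, b in enumerate(bits):
              if b == 1:
                  if last_one is not None:
                      gap = i - last_one - 1        # zeros since the previous one
                      if gap < d or gap > k:
                          return False
                  last_one = i
          return True

      def ppm_frame(slot, M):
          """One M-slot PPM frame with a single pulse in the given slot."""
          return [1 if j == slot else 0 for j in range(M)]

      seq = ppm_frame(3, 8) + [0] * 2 + ppm_frame(0, 8)   # two 8-PPM frames, d = 2 dead slots
      print(satisfies_dk(seq, d=2))                        # True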

  6. An algorithm for minimum-cost set-point ordering in a cryogenic wind tunnel

    NASA Technical Reports Server (NTRS)

    Tripp, J. S.

    1981-01-01

    An algorithm for minimum cost ordering of set points in a cryogenic wind tunnel is developed. The procedure generates a matrix of dynamic state-transition costs, which is evaluated by means of a single-volume lumped model of the cryogenic wind tunnel and the use of some idealized minimum-cost state-transition control strategies. A branch and bound algorithm is employed to determine the least costly sequence of state transitions from the transition-cost matrix. Some numerical results based on data for the National Transonic Facility are presented which show a strong preference for state transitions that consume no coolant. Results also show that the choice of the terminal set point in an open ordering can produce a wide variation in total cost.
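
    The record gives enough structure for a generic sketch: given a matrix of state-transition costs, branch and bound searches for the least costly open ordering of set points. The Python below is an assumed reconstruction, not Tripp's code; the cost matrix is invented and the lower bound (cheapest way to enter each unvisited set point) is one conventional choice.

      import math

      def min_cost_ordering(cost, start=0):
          """Least costly open ordering of all set points, beginning at `start`."""
          n = len(cost)
          best_cost, best_path = math.inf, None
          # Cheapest way to *enter* each set point: an admissible bound component.
          cheapest_in = [min(cost[i][j] for i in range(n) if i != j) for j in range(n)]

          def branch(path, partial, remaining):
              nonlocal best_cost, best_path
              if not remaining:
                  if partial < best_cost:
                      best_cost, best_path = partial, list(path)
                  return
              # Prune: even entering every remaining point as cheaply as possible loses.
              if partial + sum(cheapest_in[j] for j in remaining) >= best_cost:
                  return
              for nxt in sorted(remaining, key=lambda j: cost[path[-1]][j]):
                  path.append(nxt)
                  branch(path, partial + cost[path[-2]][nxt], remaining - {nxt})
                  path.pop()

          branch([start], 0.0, set(range(n)) - {start})
          return best_cost, best_path

      cost = [[0, 4, 9, 7],
              [4, 0, 3, 8],
              [9, 3, 0, 2],
              [7, 8, 2, 0]]
      print(min_cost_ordering(cost))   # terminal set point is left free, as in an open ordering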

  7. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
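
    For a finite toy instance, the linear programming formulation referred to here can be written directly over discounted occupation measures x(s,a): flow-balance equalities encode the dynamics, one linear inequality encodes the constraint, and a stationary policy is recovered by normalizing x. The sketch below uses invented sizes, random data, and scipy's HiGHS solver; it illustrates the LP structure only, not the paper's Borel-space setting.

      import numpy as np
      from scipy.optimize import linprog

      S, A = 3, 2                                       # states, actions (toy sizes)
      gamma = 0.9
      rng = np.random.default_rng(2)
      P = rng.dirichlet(np.ones(S), size=(S, A))        # P[s, a, s'] transition kernel
      c = rng.uniform(0.0, 1.0, size=(S, A))            # cost to minimize
      d = rng.uniform(0.0, 0.5, size=(S, A))            # constrained auxiliary cost
      mu = np.full(S, 1.0 / S)                          # initial distribution
      D = 4.0                                           # budget on discounted d-cost

      # Variables x[s, a] >= 0, flattened row-major.  Flow balance:
      #   sum_a x(s', a) - gamma * sum_{s, a} P(s'|s, a) x(s, a) = mu(s')
      A_eq = np.zeros((S, S * A))
      for sp in range(S):
          for s in range(S):
              for a in range(A):
                  A_eq[sp, s * A + a] = float(sp == s) - gamma * P[s, a, sp]

      res = linprog(c.ravel(), A_ub=d.ravel()[None, :], b_ub=[D],
                    A_eq=A_eq, b_eq=mu, bounds=(0, None), method="highs")
      assert res.status == 0, res.message
      x = res.x.reshape(S, A)
      policy = x / x.sum(axis=1, keepdims=True)         # optimal stationary policy
      print(res.fun, policy, sep="\n")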

  9. The Effect of Shorter Treatment Regimens for Hepatitis C on Population Health and Under Fixed Budgets.

    PubMed

    Morgan, Jake R; Kim, Arthur Y; Naggie, Susanna; Linas, Benjamin P

    2018-01-01

    Direct acting antiviral hepatitis C virus (HCV) therapies are highly effective but costly. Wider adoption of an 8-week ledipasvir/sofosbuvir treatment regimen could result in significant savings, but may be less efficacious compared with a 12-week regimen. We evaluated outcomes under a constrained budget and cost-effectiveness of 8 vs 12 weeks of therapy in treatment-naïve, noncirrhotic, genotype 1 HCV-infected black and nonblack individuals and considered scenarios of IL28B and NS5A resistance testing to determine treatment duration in sensitivity analyses. We developed a decision tree to use in conjunction with Monte Carlo simulation to investigate the cost-effectiveness of recommended treatment durations and the population health effect of these strategies given a constrained budget. Outcomes included the total number of individuals treated and attaining sustained virologic response (SVR) given a constrained budget and incremental cost-effectiveness ratios. We found that treating eligible (treatment-naïve, noncirrhotic, HCV-RNA <6 million copies) individuals with 8 weeks rather than 12 weeks of therapy was cost-effective and allowed for 50% more individuals to attain SVR given a constrained budget among both black and nonblack individuals, and our results suggested that NS5A resistance testing is cost-effective. Eight-week therapy provides good value, and wider adoption of shorter treatment could allow more individuals to attain SVR on the population level given a constrained budget. This analysis provides an evidence base to justify movement of the 8-week regimen to the preferred regimen list for appropriate patients in the HCV treatment guidelines and suggests expanding that recommendation to black patients in settings where cost and relapse trade-offs are considered.
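
    The budget arithmetic behind the population-level claim is worth making explicit: under a fixed budget, an 8-week course treats 12/8 = 1.5 times as many patients as a 12-week course, so it yields more total cures unless its per-patient SVR rate is more than a third lower. All numbers in the sketch below are hypothetical, not the study's inputs.

      budget = 10_000_000          # fixed drug budget ($), hypothetical
      cost_per_week = 15_000       # drug cost per treatment-week ($), hypothetical

      for weeks, svr_rate in [(8, 0.95), (12, 0.97)]:   # hypothetical SVR rates
          n_treated = budget // (cost_per_week * weeks)
          print(f"{weeks} wk: {n_treated} treated, {n_treated * svr_rate:.0f} cures")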

  10. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include: main line control program, ground segment model, space segment model, cost models, and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  11. Estimating Stresses, Fault Friction and Fluid Pressure from Topography and Coseismic Slip Models

    NASA Astrophysics Data System (ADS)

    Styron, R. H.; Hetland, E. A.

    2014-12-01

    Stress is a first-order control on the deformation state of the earth. However, stress is notoriously hard to measure, and researchers typically only estimate the directions and relative magnitudes of principal stresses, with little quantification of the uncertainties or absolute magnitude. To improve upon this, we have developed methods to constrain the full stress tensor field in a region surrounding a fault, including tectonic, topographic, and lithostatic components, as well as static friction and pore fluid pressure on the fault. Our methods are based on elastic halfspace techniques for estimating topographic stresses from a DEM, and we use a Bayesian approach to estimate accumulated tectonic stress, fluid pressure, and friction from fault geometry and slip rake, assuming Mohr-Coulomb fault mechanics. The nature of the tectonic stress inversion is such that either the stress maximum or minimum is better constrained, depending on the topography and fault deformation style. Our results from the 2008 Wenchuan event yield shear stresses from topography up to 20 MPa (normal-sinistral shear sense) and topographic normal stresses up to 80 MPa on the faults; tectonic stress had to be large enough to overcome topography to produce the observed reverse-dextral slip. Maximum tectonic stress is constrained to be >0.3 * lithostatic stress (depth-increasing), with a most likely value around 0.8, trending 90-110°E. Minimum tectonic stress is about half of maximum. Static fault friction is constrained at 0.1-0.4, and fluid pressure at 0-0.6 * total pressure on the fault. Additionally, the patterns of topographic stress and slip suggest that topographic normal stress may limit fault slip once failure has occurred. Preliminary results from the 2013 Balochistan earthquake are similar, but yield stronger constraints on the upper limits of maximum tectonic stress, as well as tight constraints on the magnitude of minimum tectonic stress and stress orientation. Work in progress on the Wasatch fault suggests that maximum tectonic stress may also be able to be constrained, and that some of the shallow rupture segmentation may be due in part to localized topographic loading. Future directions of this work include regions where high relief influences fault kinematics (such as Tibet).

  12. Cutting planes for the multistage stochastic unit commitment problem

    DOE PAGES

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    2016-04-20

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  13. Algorithm to solve a chance-constrained network capacity design problem with stochastic demands and finite support

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumacher, Kathryn M.; Chen, Richard Li-Yang; Cohn, Amy E. M.

    2016-04-15

    Here, we consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real-world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum-cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance-constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user-defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut-sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches.

  14. Cutting planes for the multistage stochastic unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Ruiwei; Guan, Yongpei; Watson, Jean -Paul

    As renewable energy penetration rates continue to increase in power systems worldwide, new challenges arise for system operators in both regulated and deregulated electricity markets to solve the security-constrained coal-fired unit commitment problem with intermittent generation (due to renewables) and uncertain load, in order to ensure system reliability and maintain cost effectiveness. In this paper, we study a security-constrained coal-fired stochastic unit commitment model, which we use to enhance the reliability unit commitment process for day-ahead power system operations. In our approach, we first develop a deterministic equivalent formulation for the problem, which leads to a large-scale mixed-integer linear program. Then, we verify that the turn on/off inequalities provide a convex hull representation of the minimum-up/down time polytope under the stochastic setting. Next, we develop several families of strong valid inequalities mainly through lifting schemes. In particular, by exploring sequence independent lifting and subadditive approximation lifting properties for the lifting schemes, we obtain strong valid inequalities for the ramping and general load balance polytopes. Lastly, branch-and-cut algorithms are developed to employ these valid inequalities as cutting planes to solve the problem. Our computational results verify the effectiveness of the proposed approach.

  15. Constraining Forest Certificate’s Market to Improve Cost-Effectiveness of Biodiversity Conservation in São Paulo State, Brazil

    PubMed Central

    Blumentrath, Stefan; Barton, David N.; Rusch, Graciela M.; Romeiro, Ademar R.

    2016-01-01

    The recently launched Brazilian “forest certificates” market is expected to reduce environmental compliance costs for landowners through an offset mechanism, after a long history of conservation laws based on command-and-control and strict rules. In this paper we assessed potential costs and evaluated the cost-effectiveness of the instrument when introducing constraints to this market that aim to address conservation objectives more specifically. Using the conservation planning software Marxan with Zones we simulated different scopes for the “forest certificates” market, and compared their cost-effectiveness with that of existing command-and-control (C&C), i.e., compliance with the Legal Reserve on the landowner's own property, in the state of São Paulo. The simulations showed a clear potential of the constrained “forest certificates” market to improve conservation effectiveness and increase cost-effectiveness in the allocation of Legal Reserves. Although the inclusion of an additional constraint of targeting the BIOTA Conservation Priority Areas doubled the cost (+95%) compared with a “free trade” scenario constrained only by biome, this option was still 50% less costly than the baseline scenario of compliance with the Legal Reserve at the property. PMID:27780220

  16. Constraining Forest Certificate's Market to Improve Cost-Effectiveness of Biodiversity Conservation in São Paulo State, Brazil.

    PubMed

    Bernasconi, Paula; Blumentrath, Stefan; Barton, David N; Rusch, Graciela M; Romeiro, Ademar R

    2016-01-01

    The recently launched Brazilian "forest certificates" market is expected to reduce environmental compliance costs for landowners through an offset mechanism, after a long history of conservation laws based on command-and-control and strict rules. In this paper we assessed potential costs and evaluated the cost-effectiveness of the instrument when introducing constraints to this market that aim to address conservation objectives more specifically. Using the conservation planning software Marxan with Zones we simulated different scopes for the "forest certificates" market, and compared their cost-effectiveness with that of existing command-and-control (C&C), i.e., compliance with the Legal Reserve on the landowner's own property, in the state of São Paulo. The simulations showed a clear potential of the constrained "forest certificates" market to improve conservation effectiveness and increase cost-effectiveness in the allocation of Legal Reserves. Although the inclusion of an additional constraint of targeting the BIOTA Conservation Priority Areas doubled the cost (+95%) compared with a "free trade" scenario constrained only by biome, this option was still 50% less costly than the baseline scenario of compliance with the Legal Reserve at the property.

  17. Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Venter, Gerhard; Sobieszczanski-Sobieski Jaroslaw

    2002-01-01

    The purpose of this paper is to show how the search algorithm known as particle swarm optimization performs. Here, particle swarm optimization is applied to structural design problems, but the method has a much wider range of possible applications. The paper's new contributions are improvements to the particle swarm optimization algorithm and conclusions and recommendations as to the utility of the algorithm. Results of numerical experiments for both continuous and discrete applications are presented in the paper. The results indicate that the particle swarm optimization algorithm does locate the constrained minimum design in continuous applications with very good precision, albeit at a much higher computational cost than that of a typical gradient-based optimizer. However, the true potential of particle swarm optimization is primarily in applications with discrete and/or discontinuous functions and variables. Additionally, particle swarm optimization has the potential of efficient computation with very large numbers of concurrently operating processors.
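
    A minimal textbook particle swarm optimizer makes the mechanics concrete. The sketch below is a generic variant, not the authors' improved algorithm; it handles a constraint with a simple quadratic penalty, and the hyperparameters (inertia w, cognitive/social weights c1 and c2) are conventional defaults.

      import numpy as np

      def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5.0, 5.0, (n_particles, dim))        # positions
          v = np.zeros_like(x)                                   # velocities
          pbest = x.copy()                                       # personal bests
          pbest_f = np.apply_along_axis(f, 1, x)
          gbest = pbest[pbest_f.argmin()].copy()                 # global best
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = x + v
              fx = np.apply_along_axis(f, 1, x)
              improved = fx < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], fx[improved]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest, f(gbest)

      # Constrained test: minimize x^2 + y^2 subject to x + y >= 1 (penalized).
      f = lambda p: p[0]**2 + p[1]**2 + 1e3 * max(0.0, 1.0 - (p[0] + p[1]))**2
      print(pso(f, dim=2))          # converges near (0.5, 0.5)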

  18. Guidance and Control Architecture Design and Demonstration for Low Ballistic Coefficient Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Swei, Sean

    2014-01-01

    We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.

  19. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B. E.; Hamkins, J.

    2002-01-01

    We investigate methods of coding for a channel subject to a large dead-time constraint, i.e. a constraint on the minimum spacing between transmitted pulses, with the deep-space optical channel as the motivating example.

  20. Can households earning minimum wage in Nova Scotia afford a nutritious diet?

    PubMed

    Williams, Patricia L; Johnson, Christine P; Kratzmann, Meredith L V; Johnson, C Shanthi Jacob; Anderson, Barbara J; Chenhall, Cathy

    2006-01-01

    To assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia. Food costing data were collected in 43 randomly selected grocery stores throughout NS in 2002 using the National Nutritious Food Basket (NNFB). To estimate the affordability of a nutritious diet for households earning minimum wage, average monthly costs for essential expenses were subtracted from overall income to see if enough money remained for the cost of the NNFB. This was calculated for three types of household: 1) two parents and two children; 2) lone parent and two children; and 3) single male. Calculations were also made for the proposed 2006 minimum wage increase with expenses adjusted using the Consumer Price Index (CPI). The monthly cost of the NNFB priced in 2002 for the three types of household was 572.90 dollars, 351.68 dollars, and 198.73 dollars, respectively. Put into the context of basic living, these data showed that Nova Scotians relying on minimum wage could not afford to purchase a nutritious diet and meet their basic needs, placing their health at risk. These basic expenses do not include other routine costs, such as personal hygiene products, household and laundry cleaners, and prescriptions and costs associated with physical activity, education or savings for unexpected expenses. People working at minimum wage in Nova Scotia have not had adequate income to meet basic needs, including a nutritious diet. The 2006 increase in minimum wage to 7.15 dollars/hr is inadequate to ensure that Nova Scotians working at minimum wage are able to meet these basic needs. Wage increases and supplements, along with supports for expenses such as childcare and transportation, are indicated to address this public health problem.

  1. Child-Care Provider Survey Reveals Cost Constrains Quality. Research Brief. Volume 96, Number 5

    ERIC Educational Resources Information Center

    Public Policy Forum, 2008

    2008-01-01

    A survey of 414 child care providers in southeastern Wisconsin reveals that cost as well as low wages and lack of benefits for workers can constrain providers from pursuing improvements to child-care quality. Of survey respondents, approximately half of whom are home-based and half center-based, 13% have at least three of five structural factors…

  2. A simulation-optimization model for water-resources management, Santa Barbara, California

    USGS Publications Warehouse

    Nishikawa, Tracy

    1998-01-01

    In times of drought, the local water supplies of the city of Santa Barbara, California, are insufficient to satisfy water demand. In response, the city has built a seawater desalination plant and gained access to imported water in 1997. Of primary concern to the city is delivering water from the various sources at a minimum cost while satisfying water demand and controlling seawater intrusion that might result from the overpumping of ground water. A simulation-optimization model has been developed for the optimal management of Santa Barbara's water resources. The objective is to minimize the cost of water supply while satisfying various physical and institutional constraints such as meeting water demand, maintaining minimum hydraulic heads at selected sites, and not exceeding water-delivery or pumping capacities. The model is formulated as a linear programming problem with monthly management periods and a total planning horizon of 5 years. The decision variables are water deliveries from surface water (Gibraltar Reservoir, Cachuma Reservoir, Cachuma Reservoir cumulative annual carryover, Mission Tunnel, State Water Project, and desalinated seawater) and ground water (13 production wells). The state variables are hydraulic heads. Basic assumptions for all simulations are that (1) the cost of water varies with source but is fixed over time, and (2) only existing or planned city wells are considered; that is, the construction of new wells is not allowed. The drought of 1947–51 is Santa Barbara's worst drought on record, and simulated surface-water supplies for this period were used as a basis for testing optimal management of current water resources under drought conditions. Assumptions that were made for this base case include a head constraint equal to sea level at the coastal nodes; Cachuma Reservoir carryover of 3,000 acre-feet per year, with a maximum carryover of 8,277 acre-feet; a maximum annual demand of 15,000 acre-feet; and average monthly capacities for the Cachuma and the Gibraltar Reservoirs. The base-case results indicate that water demands can be met, with little water required from the most expensive water source (desalinated seawater), at a total cost of $5.56 million over the 5-year planning horizon. The simulation model has drains, which operate as nonlinear functions of heads and could affect the model solutions. However, numerical tests show that the drains have little effect on the optimal solution. Sensitivity analyses on the base case yield the following results: If allowable Cachuma Reservoir carryover is decreased by about 50 percent, then costs increase by about 14 percent; if the peak demand is decreased by 7 percent, then costs will decrease by about 14 percent; if the head constraints are loosened to -30 feet, then the costs decrease by about 18 percent; if the heads are constrained such that a zero hydraulic gradient condition occurs at the ocean boundary, then the optimization problem does not have a solution; if the capacity of the desalination plant is constrained to zero acre-feet, then the cost increases by about 2 percent; and if the carryover of State Water Project water is implemented, then the cost decreases by about 0.5 percent.
Four additional monthly diversion distribution scenarios for the reservoirs were tested: average monthly Cachuma Reservoir deliveries with the actual (scenario 1) and proposed (scenario 2) monthly distributions of Gibraltar Reservoir water, and variable monthly Cachuma Reservoir deliveries with the actual (scenario 3) and proposed (scenario 4) monthly distributions of Gibraltar Reservoir water. Scenario 1 resulted in a total cost of about $7.55 million, scenario 2 resulted in a total cost of about $5.07 million, and scenarios 3 and 4 resulted in a total cost of about $4.53 million. Sensitivities of scenarios 1 and 2 to desalination-plant capacity and State Water Project water carryover were tested. The scenario 1 sensitivity analysis indicated that incorpo
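
    Stripped of the groundwater simulation, the core of such a formulation is a small linear program: meet demand from several sources at minimum cost subject to per-source capacities. The toy below keeps only that structure; the source names echo the record, but every cost, capacity, and demand figure is invented.

      from scipy.optimize import linprog

      sources = ["Gibraltar", "Cachuma", "State Water", "wells", "desalination"]
      cost = [90, 110, 250, 150, 600]           # $ per acre-foot (hypothetical)
      capacity = [400, 700, 300, 500, 600]      # acre-feet per month (hypothetical)
      demand = 1250                             # acre-feet this month (hypothetical)

      # minimize cost^T x  subject to  sum(x) = demand,  0 <= x_i <= capacity_i
      res = linprog(cost, A_eq=[[1] * len(sources)], b_eq=[demand],
                    bounds=list(zip([0] * len(sources), capacity)), method="highs")
      for name, x in zip(sources, res.x):
          print(f"{name:>12}: {x:7.1f} acre-ft")
      print(f"total cost: ${res.fun:,.0f}")     # desalination stays unused, echoing the base case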

  3. Evidence for behavioural thermoregulation by the world's largest fish

    PubMed Central

    Thums, Michele; Meekan, Mark; Stevens, John; Wilson, Steven; Polovina, Jeff

    2013-01-01

    Many fishes make frequent ascents to surface waters and often show prolonged surface swimming following descents to deep water. This affinity for the surface is thought to be related to the recovery of body heat lost at depth. We tested this hypothesis using data from time–depth recorders deployed on four whale sharks (Rhincodon typus). We summarized vertical movements into bouts of dives and classified these into three main types, using cluster analysis. In addition to day and night ‘bounce’ dives where sharks rapidly descended and ascended, we found a third type: single deep (mean: 340 m), long (mean: 169 min) dives, occurring in daytime with extremely long post-dive surface durations (mean: 146 min). Only sharks that were not constrained by shallow bathymetry performed these dives. We found a negative relationship between the mean surface duration of dives in the bout and the mean minimum temperature of dives in the bout that is consistent with the hypothesis that thermoregulation was a major factor driving use of the surface. The relationship broke down when sharks were diving in mean minimum temperatures around 25°C, suggesting that warmer waters did not incur a large metabolic cost for diving and that other factors may also influence surface use. PMID:23075547

  4. Ignition threshold for non-Maxwellian plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hay, Michael J., E-mail: hay@princeton.edu; Fisch, Nathaniel J.; Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543

    2015-11-15

    An optically thin p-¹¹B plasma loses more energy to bremsstrahlung than it gains from fusion reactions, unless the ion temperature can be elevated above the electron temperature. In thermal plasmas, the temperature differences required are possible in small Coulomb logarithm regimes, characterized by high density and low temperature. Ignition could be reached more easily if the fusion reactivity can be improved with nonthermal ion distributions. To establish an upper bound for the potential utility of a nonthermal distribution, we consider a monoenergetic beam with particle energy selected to maximize the beam-thermal reactivity. Comparing deuterium-tritium (DT) and p-¹¹B, the minimum Lawson criteria and minimum ρR required for inertial confinement fusion (ICF) volume ignition are calculated with and without the nonthermal feature. It turns out that channeling fusion alpha energy to maintain such a beam facilitates ignition at lower densities and ρR, improves reactivity at constant pressure, and could be used to remove helium ash. On the other hand, while the reactivity gains that could be realized in DT plasmas are significant, the excess electron density in p-¹¹B plasmas increases the recirculated power cost to maintain a nonthermal feature and thereby constrains its utility to ash removal.

  5. Multi-objective trajectory optimization for the space exploration vehicle

    NASA Astrophysics Data System (ADS)

    Qin, Xiaoli; Xiao, Zhen

    2016-07-01

    The research determines temperature-constrained optimal trajectories for the space exploration vehicle by developing an optimal control formulation and solving it using a variable-order quadrature collocation method with a Non-linear Programming (NLP) solver. The vehicle is assumed to be a space reconnaissance aircraft that has specified takeoff/landing locations, specified no-fly zones, and specified targets for sensor data collection. A three-degree-of-freedom aircraft model is adapted from previous work and includes flight dynamics and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and exploration of space targets. In addition, the vehicle models include the environmental models (gravity and atmosphere). How these models are appropriately employed is key to gaining confidence in the results and conclusions of the research. Optimal trajectories are developed using several performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum distance. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for large-scale space exploration.

  6. Software Development Cost Estimation Executive Summary

    NASA Technical Reports Server (NTRS)

    Hihn, Jairus M.; Menzies, Tim

    2006-01-01

    Identify simple, fully validated cost models that provide estimation uncertainty with the cost estimate, based on the COCOMO variable set. Use machine learning techniques to determine: a) the minimum number of cost drivers required for NASA domain-based cost models; b) the minimum number of data records required; and c) estimation uncertainty. Build a repository of software cost estimation information. Coordinate tool development and data collection with: a) tasks funded by PA&E Cost Analysis; b) the IV&V Effort Estimation Task; and c) NASA SEPG activities.

  7. A cost-constrained model of strategic service quality emphasis in nursing homes.

    PubMed

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationships between three aspects of the environmental context of nursing homes (Medicaid dependence, ownership status, and market demand) and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  8. Secondary electric power generation with minimum engine bleed

    NASA Technical Reports Server (NTRS)

    Tagge, G. E.

    1983-01-01

    Secondary electric power generation with minimum engine bleed is discussed. Present and future jet engine systems are compared. The role of auxiliary power units is evaluated. Details of secondary electric power generation systems with and without auxiliary power units are given. Advanced bleed systems are compared with minimum bleed systems. A cost model of ownership is given. The difference in the cost of ownership between a minimum bleed system and an advanced bleed system is given.

  9. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.

  10. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    NASA Astrophysics Data System (ADS)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  11. Minimum entropy deconvolution and blind equalisation

    NASA Technical Reports Server (NTRS)

    Satorius, E. H.; Mulligan, J. J.

    1992-01-01

    Relationships between minimum entropy deconvolution, developed primarily for geophysics applications, and blind equalization are pointed out. It is seen that a large class of existing blind equalization algorithms are directly related to the scale-invariant cost functions used in minimum entropy deconvolution. Thus the extensive analyses of these cost functions can be directly applied to blind equalization, including the important asymptotic results of Donoho.

  12. Optimization of memory use of fragment extension-based protein-ligand docking with an original fast minimum cost flow algorithm.

    PubMed

    Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka

    2018-06-01

    The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
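
    The paper's contribution is a specialized minimum cost flow algorithm that exploits the structure of its particular graphs; as a generic reference point, the underlying primitive can be exercised with networkx's standard solver, as in this small example (the graph and numbers are mine).

      import networkx as nx

      G = nx.DiGraph()
      G.add_node("s", demand=-4)                # negative demand = supply of 4 units
      G.add_node("t", demand=4)                 # positive demand = requirement of 4 units
      G.add_edge("s", "a", capacity=3, weight=1)
      G.add_edge("s", "b", capacity=2, weight=2)
      G.add_edge("a", "t", capacity=3, weight=1)
      G.add_edge("b", "t", capacity=2, weight=1)

      flow = nx.min_cost_flow(G)                # dict of dicts: flow[u][v]
      print(flow, nx.cost_of_flow(G, flow))     # 3 units via a, 1 via b: cost 3*2 + 1*3 = 9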

  13. Magnetic Footpoint Velocities: A Combination Of Minimum Energy Fit AndLocal Correlation Tracking

    NASA Astrophysics Data System (ADS)

    Belur, Ravindra; Longcope, D.

    2006-06-01

    Many numerical and time-dependent MHD simulations of the solar atmosphere require underlying velocity fields that are consistent with the induction equation. Recently, Longcope (2004) introduced a new technique to infer the photospheric velocity field from a sequence of vector magnetograms in agreement with the induction equation. The method, the Minimum Energy Fit (MEF), determines a set of velocities and selects the one with the smallest overall flow speed by minimizing an energy functional. The inferred velocity can be further constrained by information about the velocity inferred from other techniques. With this adopted technique we would expect that the inferred velocity will be close to the photospheric velocity of magnetic footpoints. Here, we demonstrate that the horizontal velocities inferred from LCT can be used to constrain the MEF velocities. We also apply this technique to actual vector magnetogram sequences and compare these velocities with velocities from LCT alone. This work is supported by DoD MURI and NSF SHINE programs.

  14. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.
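
    The closed form underlying CMOV-type equalizers is the standard linearly constrained minimum-variance solution: minimizing w^T R w subject to c^T w = 1 gives w = R^{-1} c / (c^T R^{-1} c). The sketch below computes it from a sample covariance; the data and constraint vector are illustrative, and this is the generic result rather than the authors' new constraint.

      import numpy as np

      rng = np.random.default_rng(3)
      X = rng.standard_normal((5, 1000))        # 5 equalizer taps x 1000 received samples
      R = X @ X.T / X.shape[1]                  # sample covariance of the equalizer input
      c = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # linear constraint vector (e.g., center tap)

      Rinv_c = np.linalg.solve(R, c)
      w = Rinv_c / (c @ Rinv_c)                 # constrained minimum-variance weights
      print(w, w @ R @ w)                       # weights and the minimized output variance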

  15. Planning maximally smooth hand movements constrained to nonplanar workspaces.

    PubMed

    Liebermann, Dario G; Krasovsky, Tal; Berman, Sigal

    2008-11-01

    The article characterizes hand paths and speed profiles for movements performed in a nonplanar, 2-dimensional workspace (a hemisphere of constant curvature). The authors assessed endpoint kinematics (i.e., paths and speeds) under the minimum-jerk model assumptions and calculated minimal amplitude paths (geodesics) and the corresponding speed profiles. The authors also calculated hand speeds using the 2/3 power law. They then compared modeled results with the empirical observations. In all, 10 participants moved their hands forward and backward from a common starting position toward 3 targets located within a hemispheric workspace of small or large curvature. Comparisons of modeled-observed differences using 2-way RM-ANOVAs showed that movement direction had no clear influence on hand kinematics (p < .05). Workspace curvature affected the hand paths, which seldom followed geodesic lines. Constraining the paths to different curvatures did not affect the hand speed profiles. Minimum-jerk speed profiles closely matched the observations and were superior to those predicted by the 2/3 power law (p < .001). The authors conclude that speed and path cannot be unambiguously linked under the minimum-jerk assumption when individuals move the hand in a nonplanar 2-dimensional workspace. In such a case, the hands do not follow geodesic paths, but they preserve the speed profile, regardless of the geometric features of the workspace.
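
    For reference, the minimum-jerk model the authors test has a well-known closed form for a point-to-point move of duration T: x(t) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5) with tau = t/T, whose speed profile is a symmetric bell peaking at 1.875 times amplitude/duration. The sketch below simply evaluates it; the 30 cm, 1 s reach is an arbitrary example.

      import numpy as np

      def minimum_jerk(x0, xf, T, n=101):
          t = np.linspace(0.0, T, n)
          tau = t / T
          x = x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
          v = (xf - x0) / T * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)   # dx/dt
          return t, x, v

      t, x, v = minimum_jerk(0.0, 0.3, T=1.0)   # 30 cm reach in 1 s
      print(f"peak speed {v.max():.3f} m/s at t = {t[v.argmax()]:.2f} s")  # bell peaks at midpoint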

  16. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier problem in remote sensing. Unmixing algorithms based on geometry have become popular since hyperspectral images possess abundant spectral information and the mixed model is easy to understand. However, most such algorithms are based on the pure-pixel assumption, and since the non-linear mixed model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can obtain the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.

  17. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus

    PubMed Central

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-01-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Conclusions: Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. (Hepatology 2015;61:1174–1182) PMID:25482139

  18. Minimum target prices for production of direct-acting antivirals and associated diagnostics to combat hepatitis C virus.

    PubMed

    van de Ven, Nikolien; Fortunak, Joe; Simmons, Bryony; Ford, Nathan; Cooke, Graham S; Khoo, Saye; Hill, Andrew

    2015-04-01

    Combinations of direct-acting antivirals (DAAs) can cure hepatitis C virus (HCV) in the majority of treatment-naïve patients. Mass treatment programs to cure HCV in developing countries are only feasible if the costs of treatment and laboratory diagnostics are very low. This analysis aimed to estimate minimum costs of DAA treatment and associated diagnostic monitoring. Clinical trials of HCV DAAs were reviewed to identify combinations with consistently high rates of sustained virological response across hepatitis C genotypes. For each DAA, molecular structures, doses, treatment duration, and components of retrosynthesis were used to estimate costs of large-scale, generic production. Manufacturing costs per gram of DAA were based upon treating at least 5 million patients per year and a 40% margin for formulation. Costs of diagnostic support were estimated based on published minimum prices of genotyping, HCV antigen tests plus full blood count/clinical chemistry tests. Predicted minimum costs for 12-week courses of combination DAAs with the most consistent efficacy results were: US$122 per person for sofosbuvir+daclatasvir; US$152 for sofosbuvir+ribavirin; US$192 for sofosbuvir+ledipasvir; and US$115 for MK-8742+MK-5172. Diagnostic testing costs were estimated at US$90 for genotyping, US$34 for two HCV antigen tests, and US$22 for two full blood count/clinical chemistry tests. Minimum costs of treatment and diagnostics to cure hepatitis C virus infection were estimated at US$171-360 per person without genotyping or US$261-450 per person with genotyping. These cost estimates assume that existing large-scale treatment programs can be established. © 2014 The Authors. Hepatology published by Wiley Periodicals, Inc., on behalf of the American Association for the Study of Liver Diseases.

  19. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.

  20. A chance-constrained stochastic approach to intermodal container routing problems

    PubMed Central

    Zhao, Yi; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389

  1. On the constrained minimization of smooth Kurdyka—Łojasiewicz functions with the scaled gradient projection method

    NASA Astrophysics Data System (ADS)

    Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone

    2016-10-01

    The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
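
    The basic iteration is compact: x_{k+1} = P(x_k - alpha_k D_k grad f(x_k)), where P projects onto the feasible set and D_k is the scaling matrix. The sketch below is a bare-bones version on a box-constrained quadratic, with a fixed steplength and a Jacobi-style diagonal scaling; the paper's SGP uses variable steplength rules and scaling choices not reproduced here.

      import numpy as np

      def sgp_box(grad, x0, lo, hi, D, alpha=0.1, iters=500):
          """Scaled gradient projection on a box: x <- clip(x - alpha * D * grad f(x))."""
          x = np.clip(x0, lo, hi)
          for _ in range(iters):
              x = np.clip(x - alpha * D * grad(x), lo, hi)   # scaled step, then projection
          return x

      # f(x) = 0.5 x^T A x - b^T x over the box [0, 1]^3
      A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 10.0]])
      b = np.array([1.0, 2.0, 30.0])
      grad = lambda x: A @ x - b
      D = 1.0 / np.diag(A)                      # simple diagonal (Jacobi-style) scaling
      print(sgp_box(grad, np.zeros(3), 0.0, 1.0, D))   # ~ (0.09, 0.64, 1.00); z hits the bound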

  2. Extending the Aircraft Availability Model to a Constrained Depot Environment Using Activity-Based Costing and the Theory of Constraints

    DTIC Science & Technology

    2004-06-01

  3. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy is outlined for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  4. Profile negotiation - A concept for integrating airborne and ground-based automation for managing arrival traffic

    NASA Technical Reports Server (NTRS)

    Green, Steven M.; Den Braven, Wim; Williams, David H.

    1991-01-01

    The profile negotiation process (PNP) concept as applied to the management of arrival traffic within the extended terminal area is presented, focusing on functional issues from the ground-based perspective. The PNP is an interactive process between an aircraft and air traffic control (ATC) which combines airborne and ground-based automation capabilities to determine conflict-free trajectories that are as close to an aircraft's preference as possible. Preliminary results from a real-time simulation study show that the controller teams are able to consistently and effectively negotiate conflict-free vertical profiles with 4D-equipped aircraft. The ability of the airborne 4D flight management system to adapt to ATC specified 4D trajectory constraints is found to be a requirement for successful execution of the PNP. It is recommended that the conventional method of cost index iteration for obtaining the minimum fuel 4D trajectory be supplemented by a method which constrains the profile speeds to those desired by ATC.

  5. A comparison of Frequency Domain Multiple Access (FDMA) and Time Domain Multiple Access (TDMA) approaches to satellite service for low data rate Earth stations

    NASA Technical Reports Server (NTRS)

    Stevens, G.

    1983-01-01

    A technological and economic assessment is made of providing low data rate service to small earth stations by satellite at Ka-band. Various Frequency Domain Multiple Access (FDMA) and Time Domain Multiple Access (TDMA) scenarios are examined and compared on the basis of cost to the end user. Very small stations (1 to 2 meters in diameter) are found not to be viable alternatives to available terrestrial services. However, medium size (3 to 5 meters) earth stations appear to be very competitive if a minimum throughput of about 1.5 Mbps is maintained. This constrains the use of such terminals to large users and shared use by smaller users. No advantage was found to the use of FDMA. TDMA had a slight advantage from a total system viewpoint and a very significant advantage in the space segment (about 1/3 the required payload weight for an equivalent capacity).

  6. Cost of and soil loss on "minimum-standard" forest truck roads constructed in the central Appalachians

    Treesearch

    J. N. Kochenderfer; G. W. Wendel; H. Clay Smith

    1984-01-01

    A "minimum-standard" forest truck road that provides efficient and environmentally acceptable access for several forest activities is described. Cost data are presented for eight of these roads constructed in the central Appalachians. The average cost per mile excluding gravel was $8,119. The range was $5,048 to $14,424. Soil loss was measured from several...

  7. Constrained signal reconstruction from wavelet transform coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, C.M.

    1991-12-31

    A new method is introduced for reconstructing a signal from an incomplete sampling of its Discrete Wavelet Transform (DWT). The algorithm yields a minimum-norm estimate satisfying a priori upper and lower bounds on the signal. The method is based on a finite-dimensional representation theory for minimum-norm estimates of bounded signals developed by R.E. Cole. Cole's work has its origins in earlier techniques of maximum-entropy spectral estimation due to Lang and McClellan, which were adapted by Steinhardt, Goodrich and Roberts for minimum-norm spectral estimation. Cole's extension of their work provides a representation for minimum-norm estimates of a class of generalized transforms in terms of general correlation data (not just DFTs of autocorrelation lags, as in spectral estimation). One virtue of this great generality is that it includes the inverse DWT. 20 refs.
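
    As a toy illustration of the minimum-norm idea (not Cole's representation theory, and without the a priori signal bounds): the minimum-norm signal consistent with an incomplete set of linear transform samples is the pseudoinverse solution. The matrix M below is an illustrative stand-in for the observed analysis rows:

        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 64, 24                    # signal length, number of observed coefficients
        M = rng.standard_normal((m, n))  # stand-in for rows of a wavelet analysis operator
        x_true = rng.standard_normal(n)
        c = M @ x_true                   # the observed (incomplete) transform coefficients

        # Minimum-norm estimate consistent with the observations: x = M^+ c
        x_mn, *_ = np.linalg.lstsq(M, c, rcond=None)
        assert np.allclose(M @ x_mn, c)  # reproduces the observed coefficients
        # x_mn has the smallest norm among all signals matching the observations
        assert np.linalg.norm(x_mn) <= np.linalg.norm(x_true) + 1e-9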

  8. Using Parameter Constraints to Choose State Structures in Cost-Effectiveness Modelling.

    PubMed

    Thom, Howard; Jackson, Chris; Welton, Nicky; Sharples, Linda

    2017-09-01

    This article addresses the choice of state structure in a cost-effectiveness multi-state model. Key model outputs, such as treatment recommendations and prioritisation of future research, may be sensitive to state structure choice. For example, it may be uncertain whether to consider similar disease severities or similar clinical events as the same state or as separate states. Standard statistical methods for comparing models require a common reference dataset but merging states in a model aggregates the data, rendering these methods invalid. We propose a method that involves re-expressing a model with merged states as a model on the larger state space in which particular transition probabilities, costs and utilities are constrained to be equal between states. This produces a model that gives identical estimates of cost effectiveness to the model with merged states, while leaving the data unchanged. The comparison of state structures can be achieved by comparing maximised likelihoods or information criteria between constrained and unconstrained models. We can thus test whether the costs and/or health consequences for a patient in two states are the same, and hence if the states can be merged. We note that different structures can be used for rates, costs and utilities, as appropriate. We illustrate our method with applications to two recent models evaluating the cost effectiveness of prescribing anti-depressant medications by depression severity and the cost effectiveness of diagnostic tests for coronary artery disease. State structures in cost-effectiveness models can be compared using standard methods to compare constrained and unconstrained models.
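
    A minimal sketch of the comparison logic under simplifying assumptions (two candidate states with multinomially distributed transition counts; data and names are illustrative):

        import numpy as np
        from scipy.stats import chi2

        def multinom_loglik(counts, p):
            # Multinomial log-likelihood up to a constant that cancels in the ratio
            return np.sum(counts * np.log(p))

        # Observed transition counts out of candidate states A and B (toy data)
        counts_A = np.array([30, 10, 5])
        counts_B = np.array([28, 12, 4])

        # Unconstrained: each state keeps its own transition probabilities (4 free params)
        pA = counts_A / counts_A.sum()
        pB = counts_B / counts_B.sum()
        ll_u = multinom_loglik(counts_A, pA) + multinom_loglik(counts_B, pB)

        # Constrained ("merged"): A and B share one probability vector (2 free params)
        p_shared = (counts_A + counts_B) / (counts_A + counts_B).sum()
        ll_c = multinom_loglik(counts_A, p_shared) + multinom_loglik(counts_B, p_shared)

        # Likelihood-ratio test: can the two states be merged?
        lr = 2 * (ll_u - ll_c)
        p_value = chi2.sf(lr, df=2)      # df = difference in free parameters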

  9. Menu Plans: Maximum Nutrition for Minimum Cost.

    ERIC Educational Resources Information Center

    Texas Child Care, 1995

    1995-01-01

    Suggests that menu planning is the key to getting maximum nutrition in day care meals and snacks for minimum cost. Explores United States Department of Agriculture food pyramid guidelines for children and tips for planning menus and grocery shopping. Includes suggested meal patterns and portion sizes. (HTH)

  10. Key node selection in minimum-cost control of complex networks

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Wen, Changyun; Li, Guoqi

    2017-11-01

    Finding the key node set that is connected with a given number of external control sources for driving complex networks from an initial state to any predefined state with minimum cost, known as the minimum-cost control problem, is critically important but remains largely open. By defining an importance index for each node, we propose a revisited projected gradient method extension (R-PGME) in a Monte-Carlo scenario to determine the key node set. It is found that the importance index of a node is strongly correlated with the occurrence rate of that node being selected as a key node in Monte-Carlo realizations for three elementary topologies, Erdős-Rényi and scale-free networks. We also discover the distribution patterns of key nodes when the control cost reaches its minimum. Specifically, the importance indices of all nodes in an elementary stem show a quasi-periodic distribution with high peak values at the beginning and end of a quasi-period, while they approach a uniform distribution in an elementary cycle. We further point out that an elementary dilation can be regarded as two elementary stems whose lengths are the closest, with the importance indices in each stem presenting a similar distribution as in an elementary stem. Our results provide a better understanding of, and deeper insight into, locating the key nodes in different topologies with minimum control cost.
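
    For context, the control cost in such problems is typically the control energy. For linear network dynamics \dot{x} = Ax + Bu, the textbook minimum-energy input steering x(0) = x_0 to x(T) = x_f is

        u^*(t) = B^\top e^{A^\top (T-t)}\, W^{-1}(T)\,\big(x_f - e^{AT}x_0\big), \qquad W(T) = \int_0^T e^{A\tau} B B^\top e^{A^\top \tau}\, d\tau ,

    with minimum cost \int_0^T \|u^*\|^2\,dt = (x_f - e^{AT}x_0)^\top W^{-1}(T)\,(x_f - e^{AT}x_0). This is a standard result of linear control theory, not the paper's R-PGME derivation, but it is the quantity that key-node selection aims to reduce.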

  11. Energy Efficiency Building Code for Commercial Buildings in Sri Lanka

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Busch, John; Greenberg, Steve; Rubinstein, Francis

    2000-09-30

    1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants, and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.

  12. A New Control Paradigm for Stochastic Differential Equations

    NASA Astrophysics Data System (ADS)

    Schmid, Matthias J. A.

    This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. 
This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.

  13. 42 CFR 412.348 - Exception payments.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... beginning on or after October 1, 1991 and before October 1, 2001. (c) Minimum payment level by class of hospital. (1) CMS establishes a minimum payment level by class of hospital. The minimum payment level for a hospital will equal a fixed percentage of the hospital's capital-related costs. The minimum payment levels...

  14. Sexual harassment induces a temporary fitness cost but does not constrain the acquisition of environmental information in fruit flies.

    PubMed

    Teseo, Serafino; Veerus, Liisa; Moreno, Céline; Mery, Frédéric

    2016-01-01

    Across animals, sexual harassment induces fitness costs for females and males. However, little is known about the cognitive costs involved, i.e. whether it constrains learning processes, which could ultimately affect an individual's fitness. Here we evaluate the acquisition of environmental information in groups of fruit flies challenged with various levels of male sexual harassment. We show that, although high sexual harassment induces a temporary fitness cost for females, all fly groups of both sexes exhibit similar levels of learning. This suggests that, in fruit flies, the fitness benefits of acquiring environmental information are not affected by the fitness costs of sexual harassment, and that selection may favour cognition even in unfavourable social contexts. Our study provides novel insights into the relationship between sexual conflicts and cognition and the evolution of female counterstrategies against male sexual harassment. © 2016 The Author(s).

  15. Constraint elimination in dynamical systems

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
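
    A minimal numerical sketch of the idea (assuming constraints already linearized as J q̇ = 0 with constant J; names are illustrative): the SVD of the constraint Jacobian gives an orthonormal null-space basis N, and projecting the dynamics onto N yields equations of minimum dimension.

        import numpy as np

        def reduced_dynamics(M, f, J):
            """Project M qdd = f subject to J qd = 0 onto the null space of J,
            eliminating the constraints via the SVD."""
            U, s, Vt = np.linalg.svd(J)
            r = np.sum(s > 1e-12 * s.max())   # numerical rank of the constraints
            N = Vt[r:].T                      # orthonormal null-space basis of J
            M_red = N.T @ M @ N               # reduced (minimum-dimension) mass matrix
            f_red = N.T @ f
            z_dd = np.linalg.solve(M_red, f_red)
            return N @ z_dd                   # admissible accelerations, J @ qdd = 0

        # Toy example: 3 coordinates, one constraint qd1 + qd2 = 0
        M = np.diag([2.0, 1.0, 1.0])
        f = np.array([0.0, -9.81, 1.0])
        J = np.array([[1.0, 1.0, 0.0]])
        qdd = reduced_dynamics(M, f, J)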

  16. Local pursuit strategy-inspired cooperative trajectory planning algorithm for a class of nonlinear constrained dynamical systems

    NASA Astrophysics Data System (ADS)

    Xu, Yunjun; Remeikas, Charles; Pham, Khanh

    2014-03-01

    Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.

  17. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic, problems in Optimization Theory, known as the "trust region subproblem" or "constrained least-squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. a reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) generically has two almost degenerate minima with Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity, with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation, by finding both the rate function and the leading pre-exponential factor.

  18. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study.

    PubMed

    Brennan, Alan; Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-09-30

    To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Modelling study using the Sheffield Alcohol Policy Model version 2.5. England 2014-15. Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45 p, and 50 p per unit (7.9 g/10 mL) of pure alcohol. Changes in mean consumption in terms of units of alcohol, drinkers' expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45 p minimum unit price. Below cost selling is estimated to reduce harmful drinkers' mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45 p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health-saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45 p minimum unit price is estimated to save 624 deaths and 23,700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40 p and 50 p per unit, is estimated to have an approximately 40-50 times greater effect. © Brennan et al 2014.

  19. Potential benefits of minimum unit pricing for alcohol versus a ban on below cost selling in England 2014: modelling study

    PubMed Central

    Meng, Yang; Holmes, John; Hill-McManus, Daniel; Meier, Petra S

    2014-01-01

    Objective To evaluate the potential impact of two alcohol control policies under consideration in England: banning below cost selling of alcohol and minimum unit pricing. Design Modelling study using the Sheffield Alcohol Policy Model version 2.5. Setting England 2014-15. Population Adults and young people aged 16 or more, including subgroups of moderate, hazardous, and harmful drinkers. Interventions Policy to ban below cost selling, which means that the selling price to consumers could not be lower than tax payable on the product, compared with policies of minimum unit pricing at £0.40 (€0.57; $0.75), 45p, and 50p per unit (7.9 g/10 mL) of pure alcohol. Main outcome measures Changes in mean consumption in terms of units of alcohol, drinkers’ expenditure, and reductions in deaths, illnesses, admissions to hospital, and quality adjusted life years. Results The proportion of the market affected is a key driver of impact, with just 0.7% of all units estimated to be sold below the duty plus value added tax threshold implied by a ban on below cost selling, compared with 23.2% of units for a 45p minimum unit price. Below cost selling is estimated to reduce harmful drinkers’ mean annual consumption by just 0.08%, around 3 units per year, compared with 3.7% or 137 units per year for a 45p minimum unit price (an approximately 45 times greater effect). The ban on below cost selling has a small effect on population health—saving an estimated 14 deaths and 500 admissions to hospital per annum. In contrast, a 45p minimum unit price is estimated to save 624 deaths and 23 700 hospital admissions. Most of the harm reductions (for example, 89% of estimated deaths saved per annum) are estimated to occur in the 5.3% of people who are harmful drinkers. Conclusions The ban on below cost selling, implemented in England in May 2014, is estimated to have small effects on consumption and health harm. The previously announced policy of a minimum unit price, if set at expected levels between 40p and 50p per unit, is estimated to have an approximately 40-50 times greater effect. PMID:25270743

  20. Shuttle payload minimum cost vibroacoustic tests

    NASA Technical Reports Server (NTRS)

    Stahle, C. V.; Gongloff, H. R.; Young, J. P.; Keegan, W. B.

    1977-01-01

    This paper is directed toward the development of the methodology needed to evaluate cost effective vibroacoustic test plans for Shuttle Spacelab payloads. Statistical decision theory is used to quantitatively evaluate seven alternate test plans by deriving optimum test levels and the expected cost for each multiple mission payload considered. The results indicate that minimum costs can vary by as much as $6 million for the various test plans. The lowest cost approach eliminates component testing and maintains flight vibration reliability by performing subassembly tests at a relatively high acoustic level. Test plans using system testing or combinations of component and assembly level testing are attractive alternatives. Component testing alone is shown not to be cost effective.

  1. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure, with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  2. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable rate RVQ's are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQ's having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQ's (EC-RVQ's) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQ's) and practical entropy-constrained vector quantizers (EC-VQ's), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
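
    The Lagrangian formulation referred to here selects, for each input, the codeword minimizing distortion plus λ times code length; a minimal single-stage sketch with illustrative values (in an RVQ this rule is applied per stage over the direct-sum code books):

        import numpy as np

        def ecvq_encode(x, codebook, lengths, lam):
            """Entropy-constrained encoding: pick the codeword minimizing
            J = d(x, c_i) + lam * len_i, trading distortion against rate."""
            d = np.sum((codebook - x) ** 2, axis=1)   # squared-error distortion
            return int(np.argmin(d + lam * lengths))

        codebook = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
        lengths = np.array([1.0, 2.0, 4.0])           # code lengths in bits (-log2 p_i)
        idx = ecvq_encode(np.array([0.9, 1.2]), codebook, lengths, lam=0.5)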

  3. An economic analysis of selected strategies for dissolved-oxygen management; Chattahoochee River, Georgia

    USGS Publications Warehouse

    Schefter, John E.; Hirsch, Robert M.

    1980-01-01

    A method for evaluating the cost-effectiveness of alternative strategies for dissolved-oxygen (DO) management is demonstrated, using the Chattahoochee River, GA., as an example. The conceptual framework for the analysis is suggested by the economic theory of production. The minimum flow of the river and the percentage of the total waste inflow receiving nitrification are considered to be two variable inputs to be used in the production of a given minimum concentration of DO in the river. Each of the inputs has a cost: the loss of dependable peak hydroelectric generating capacity at Buford Dam associated with flow augmentation and the cost associated with nitrification of wastes. The least-cost combination of minimum flow and waste treatment necessary to achieve a prescribed minimum DO concentration is identified. Results of the study indicate that, in some instances, the waste-assimilation capacity of the Chattahoochee River can be substituted for increased waste treatment; the associated savings in waste-treatment costs more than offset the benefits foregone because of the loss of peak generating capacity at Buford Dam. The sensitivity of the results to the estimates of the cost of replacing peak generating capacity is examined. It is also demonstrated that a flexible approach to the management of DO in the Chattahoochee River may be much more cost effective than a more rigid, institutional approach wherein constraints are placed on the flow of the river and/or on waste-treatment practices. (USGS)

  4. Mobility based multicast routing in wireless mesh networks

    NASA Astrophysics Data System (ADS)

    Jain, Sanjeev; Tripathi, Vijay S.; Tiwari, Sudarshan

    2013-01-01

    There exist two fundamental approaches to multicast routing, namely minimum cost trees (MCTs) and shortest path trees (SPTs). A minimum cost tree is one which connects receivers and sources using a minimum number of transmissions (MNTs); the MNT approach is generally used for energy-constrained sensor and mobile ad hoc networks. In this paper we consider node mobility and present a simulation-based comparison of shortest path trees, minimum Steiner trees (MSTs) and minimum-number-of-transmissions trees in wireless mesh networks, using performance metrics such as end-to-end delay, average jitter, throughput, packet delivery ratio, and average unicast packet delivery ratio. We evaluate multicast performance in both small and large wireless mesh networks. For small networks, we find that when the traffic load is moderate or high the SPTs outperform the MSTs and MNTs in all cases; the SPTs have the lowest end-to-end delay and average jitter in almost all cases. For large networks, the MSTs provide the minimum total edge cost and the minimum number of transmissions. We also find one drawback of SPTs: when the group size is large and the multicast sending rate is high, SPTs cause more packet losses to other flows than MCTs.
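
    To make the two tree families concrete, a small sketch contrasting a shortest-path tree with an approximate minimum Steiner tree (using networkx; the three-node topology and weights are illustrative):

        import networkx as nx
        from networkx.algorithms.approximation import steiner_tree

        # Toy topology where the two tree types differ: the SPT minimizes each
        # receiver's path delay, the Steiner tree minimizes total edge cost.
        G = nx.Graph()
        G.add_weighted_edges_from([("s", "r1", 2), ("s", "r2", 2), ("r1", "r2", 1)])
        receivers = ["r1", "r2"]

        spt_edges = set()
        for r in receivers:                  # union of source-to-receiver shortest paths
            p = nx.shortest_path(G, "s", r, weight="weight")
            spt_edges.update(zip(p, p[1:]))  # -> {(s, r1), (s, r2)}, total cost 4

        st = steiner_tree(G, ["s"] + receivers, weight="weight")
        # -> edges {(s, r1), (r1, r2)}: total cost 3, but a longer path to r2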

  5. Constraining Basin Depth and Fault Displacement in the Malombe Basin Using Potential Field Methods

    NASA Astrophysics Data System (ADS)

    Beresh, S. C. M.; Elifritz, E. A.; Méndez, K.; Johnson, S.; Mynatt, W. G.; Mayle, M.; Atekwana, E. A.; Laó-Dávila, D. A.; Chindandali, P. R. N.; Chisenga, C.; Gondwe, S.; Mkumbwa, M.; Kalaguluka, D.; Kalindekafe, L.; Salima, J.

    2017-12-01

    The Malombe Basin is part of the Malawi Rift, which forms the southern part of the Western Branch of the East African Rift System. At its southern end, the Malawi Rift bifurcates into the Bilila-Mtakataka and Chirobwe-Ntcheu fault systems and the Lake Malombe Rift Basin around the Shire Horst, a competent block under the Nankumba Peninsula. The Malombe Basin is approximately 70 km from north to south and 35 km at its widest point from east to west, bounded by reversing-polarity border faults. We aim to constrain the depth of the basin to better understand the displacement of each border fault. Our work utilizes two east-west gravity profiles across the basin coupled with Source Parameter Imaging (SPI) derived from a high-resolution aeromagnetic survey. The first gravity profile was done across the northern portion of the basin and the second across the southern portion. Gravity and magnetic data will be used to constrain basement depths and the thickness of the sedimentary cover. Additionally, Shuttle Radar Topography Mission (SRTM) data is used to understand the topographic expression of the fault scarps. Estimates for the minimum displacement of the border faults on either side of the basin were made by adding the elevation of the scarps to the deepest SPI basement estimates at the basin borders. Our preliminary results using SPI and SRTM data show a minimum displacement of approximately 1.3 km for the western border fault; the minimum displacement for the eastern border fault is 740 m. However, SPI merely shows the depth to the first significantly magnetic layer in the subsurface, which may or may not be the actual basement layer. Gravimetric readings are based on subsurface density and thus circumvent issues arising from magnetic layers located above the basement; we therefore expect to constrain basin depth more accurately by integrating the gravity profiles. Through more accurate basement depth estimates we also gain more accurate displacement estimates for the basin's faults. Not only do the improved depth estimates serve as a proxy for the viability of hydrocarbon exploration efforts in the region, but the improved displacement estimates also provide a better understanding of extension accommodation within the Malawi Rift.

  6. Advanced electric propulsion system concept for electric vehicles. Addendum 1: Voltage considerations

    NASA Technical Reports Server (NTRS)

    Raynard, A. E.; Forbes, F. E.

    1980-01-01

    The two electric vehicle propulsion systems that best met cost and performance goals were examined to assess the effect of battery pack voltage on system performance and cost. A voltage range of 54 to 540 V was considered for a typical battery pack capacity of 24 kW-hr. The highest battery specific energy (W-hr/kg) and the lowest cost ($/kW-hr) were obtained at the minimum voltage level. The flywheel system traction motor is a dc, mechanically commutated motor with shunt field control, and due to the flywheel the traction motor and the battery are not subject to extreme peaks of power demand. The basic system uses a permanent-magnet motor with electronic commutation supplied by an ac power control unit. In both systems battery costs were the major factor in system voltage selection, and a battery pack with the minimum voltage of 54 V produced the lowest life-cycle cost. The minimum life-cycle cost for the basic system with lead-acid batteries was $0.057/km and for the flywheel system was $0.037/km.

  7. Research on configuration of railway self-equipped tanker based on minimum cost maximum flow model

    NASA Astrophysics Data System (ADS)

    Yang, Yuefang; Gan, Chunhui; Shen, Tingting

    2017-05-01

    In studying the configuration of tankers for a chemical logistics park, the minimum cost maximum flow model is adopted. Firstly, the transport capacity of the park's loading and unloading areas and the transportation demand for dangerous goods are taken as the constraint conditions of the model; then the transport arc capacity, the transport arc flow and the transport arc edge weight are determined in the transportation network diagram; finally, the model is solved by software. The calculation results show that the tanker configuration problem can be effectively solved by the minimum cost maximum flow model, which has theoretical and practical application value for tanker management in railway transportation of dangerous goods in chemical logistics parks.
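
    A minimal sketch of this model class (nodes, capacities and unit costs are illustrative, not the paper's network): arcs carry tanker flows, capacities encode the loading/unloading limits, and weights encode unit transport costs.

        import networkx as nx

        G = nx.DiGraph()
        # arc: (from, to), capacity = tankers the arc can carry, weight = unit cost
        G.add_edge("depot", "load_A", capacity=8, weight=2)
        G.add_edge("depot", "load_B", capacity=6, weight=3)
        G.add_edge("load_A", "unload", capacity=5, weight=4)
        G.add_edge("load_B", "unload", capacity=6, weight=1)

        flow = nx.max_flow_min_cost(G, "depot", "unload")  # max flow, then min cost
        cost = nx.cost_of_flow(G, flow)                    # total transport cost (54 here)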

  8. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  9. Interconnection-wide hour-ahead scheduling in the presence of intermittent renewables and demand response: A surplus maximizing approach

    DOE PAGES

    Behboodi, Sahand; Chassin, David P.; Djilali, Ned; ...

    2016-12-23

    This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus satisfying transmission, generation and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, both the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B, while increasing the annual surplus by 9.32B in comparison to the standalone case.

  10. Dendritic nonlinearities reduce network size requirements and mediate ON and OFF states of persistent activity in a PFC microcircuit model.

    PubMed

    Papoutsi, Athanasia; Sidiropoulou, Kyriaki; Poirazi, Panayiota

    2014-07-01

    Technological advances have unraveled the existence of small clusters of co-active neurons in the neocortex. The functional implications of these microcircuits are in large part unexplored. Using a heavily constrained biophysical model of a L5 PFC microcircuit, we recently showed that these structures act as tunable modules of persistent activity, the cellular correlate of working memory. Here, we investigate the mechanisms that underlie persistent activity emergence (ON) and termination (OFF) and search for the minimum network size required for expressing these states within physiological regimes. We show that (a) NMDA-mediated dendritic spikes gate the induction of persistent firing in the microcircuit. (b) The minimum network size required for persistent activity induction is inversely proportional to the synaptic drive of each excitatory neuron. (c) Relaxation of connectivity and synaptic delay constraints eliminates the gating effect of NMDA spikes, albeit at a cost of much larger networks. (d) Persistent activity termination by increased inhibition depends on the strength of the synaptic input and is negatively modulated by dADP. (e) Slow synaptic mechanisms and network activity contain predictive information regarding the ability of a given stimulus to turn ON and/or OFF persistent firing in the microcircuit model. Overall, this study zooms out from dendrites to cell assemblies and suggests a tight interaction between dendritic non-linearities and network properties (size/connectivity) that may facilitate the short-memory function of the PFC.

  11. SLR2000: a microlaser-based single photoelectron satellite laser ranging system

    NASA Technical Reports Server (NTRS)

    Degnan, John J.; McGarry, Jan F.

    1998-01-01

    SLR2000 is an autonomous and eyesafe satellite laser ranging (SLR) station with an expected single shot range precision of about one centimeter and a normal point (time-averaged) precision better than 3 mm. The system will provide continuous 24 hour tracking coverage for a constellation of over twenty artificial satellites. Replication costs are expected to be roughly an order of magnitude less than current operational systems, and the system will be about 75% less expensive to operate and maintain relative to manned systems. Computer simulations have predicted a daylight tracking capability to GPS and lower satellites with telescope apertures of 40 cm and have demonstrated the ability of our current autotracking algorithm to extract mean signal strengths below .001 photoelectrons per pulse from daytime background noise. The dominant cost driver in present SLR systems is the onsite and central infrastructure manpower required to operate the system, to service and maintain the complex subsystems, and to ensure that the transmitted laser beam is not a hazard to onsite personnel or to overflying aircraft. To keep development, fabrication, and maintenance costs at a minimum, we adopted the following design philosophies: (1) use off the shelf commercial components wherever possible; this allows rapid component replacement and "outsourcing" of engineering support; (2) use smaller telescopes (less than 50 cm) since this constrains the cost, size, and weight of the telescope and tracking mount; and (3) for low maintenance and failsafe reliability, choose simple versus complex technical approaches and, where possible, use passive techniques and components rather than active ones. Adherence to these philosophies has led to the SLR2000 design described here.

  12. On the Path to SunShot. Emerging Opportunities and Challenges in U.S. Solar Manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, Donald; Horowitz, Kelsey; Kurup, Parthiv

    This report provides insights into photovoltaic (PV) and concentrating solar power (CSP) manufacturing in the context of the U.S. Department of Energy's SunShot Initiative. Although global PV price reductions and deployment have been strong recently, PV manufacturing faces challenges. Slowing rates of manufacturing cost reductions, combined with the relatively low price of incumbent electricity generating sources in most large global PV markets, may constrain profit opportunities for firms and poses a potential challenge to the sustainable operation and growth of the global PV manufacturing base. In the United States, manufacturers also face a factors-of-production cost disadvantage compared with competing nations. However, the United States is one of the world's most competitive and innovative countries as well as one of the best locations for PV manufacturing. In conjunction with strong projected PV demand in the United States and across the Americas, these advantages could increase the share of PV technologies produced by U.S. manufacturers as the importance of innovation-driven PV cost reductions increases. Compared with PV, CSP systems are much more complex and require a much larger minimum effective scale, resulting in much higher total CAPEX requirements for system construction, lengthier development cycles, and ultimately higher costs of energy produced. The global lack of consistent CSP project development creates challenges for companies that manufacture specialty CSP components, and the potential lack of a near-term U.S. market could hinder domestic CSP manufacturers. However, global and U.S. CSP deployment is expected to expand beyond 2020, and U.S. CSP manufacturers could benefit from U.S. innovation advantages similar to those associated with PV. Expansion of PV and CSP manufacturing also presents U.S. job-growth opportunities.

  13. Dynamic remedial action scheme using online transient stability analysis

    NASA Astrophysics Data System (ADS)

    Shrestha, Arun

    Economic pressure and environmental factors have forced modern power systems to operate closer to their stability limits. However, maintaining transient stability is a fundamental requirement for the operation of interconnected power systems. In North America, power systems are planned and operated to withstand the loss of any single or multiple elements without violating North American Electric Reliability Corporation (NERC) system performance criteria. For a contingency resulting in the loss of multiple elements (Category C), emergency transient stability controls may be necessary to stabilize the power system. Emergency control is designed to sense abnormal conditions and subsequently take pre-determined remedial actions to prevent instability. Commonly known as either Remedial Action Schemes (RAS) or as Special/System Protection Schemes (SPS), these emergency control approaches have been extensively adopted by utilities. RAS are designed to address specific problems, e.g. to increase power transfer, to provide reactive support, to address generator instability, to limit thermal overloads, etc. Possible remedial actions include generator tripping, load shedding, capacitor and reactor switching, static VAR control, etc. Among various RAS types, generation shedding is the most effective and widely used emergency control means for maintaining system stability. In this dissertation, an optimal power flow (OPF)-based generation-shedding RAS is proposed. This scheme uses online transient stability calculation and generator cost functions to determine appropriate remedial actions. For transient stability calculation, the Single Machine Equivalent (SIME) technique is used, which reduces the multimachine power system model to a One-Machine Infinite Bus (OMIB) equivalent and identifies critical machines. Unlike conventional RAS, which are designed using offline simulations, online stability calculations make the proposed RAS dynamic and adaptive to any power system configuration and operating state. The generation-shedding cost is calculated using pre-RAS and post-RAS OPF costs. The criterion for selecting generators to trip is based on minimum cost rather than the minimum amount of generation to shed. For an unstable Category C contingency, the RAS control action that results in a stable system with minimum generation shedding cost is selected among possible candidate solutions. The RAS control actions update whenever there is a change in operating condition, system configuration, or cost functions. The effectiveness of the proposed technique is demonstrated by simulations on the IEEE 9-bus system, the IEEE 39-bus system, and the IEEE 145-bus system. This dissertation also proposes an improved, yet relatively simple, technique for solving the Transient Stability-Constrained Optimal Power Flow (TSC-OPF) problem. Using the SIME method, the sets of dynamic and transient stability constraints are reduced to a single stability constraint, decreasing the overall size of the optimization problem. The transient stability constraint is formulated using the critical machines' power at the initial time step, rather than using the machine rotor angles. This avoids the addition of machine steady state stator algebraic equations in the conventional OPF algorithm. A systematic approach to reach an optimal solution is developed by exploring the quasi-linear behavior of critical machine power and stability margin. The proposed method shifts critical machines' active power based on generator costs using an OPF algorithm.
Moreover, the transient stability limit is based on stability margin, and not on a heuristically set limit on OMIB rotor angle. As a result, the proposed TSC-OPF solution is more economical and transparent. The proposed technique enables the use of fast and robust commercial OPF tool and time-domain simulation software for solving large scale TSC-OPF problem, which makes the proposed method also suitable for real-time application.

  14. Using Structural Equation Modeling to Assess Functional Connectivity in the Brain: Power and Sample Size Considerations

    ERIC Educational Resources Information Center

    Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack

    2014-01-01

    The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…

  15. Employment Effects of Minimum and Subminimum Wages. Recent Evidence.

    ERIC Educational Resources Information Center

    Neumark, David

    Using a specially constructed panel data set on state minimum wage laws and labor market conditions, Neumark and Wascher (1992) presented evidence that countered the claim that minimum wages could be raised with no cost to employment. They concluded that estimates indicating that minimum wages reduced employment on the order of 1-2 percent for a…

  16. Fuzzy robust credibility-constrained programming for environmental management and planning.

    PubMed

    Zhang, Yimei; Hang, Guohe

    2010-06-01

    In this study, a fuzzy robust credibility-constrained programming (FRCCP) is developed and applied to the planning of waste management systems. It incorporates the concepts of credibility-based chance-constrained programming and robust programming within an optimization framework. The developed method can reflect uncertainties presented as possibility densities by fuzzy membership functions. Fuzzy credibility constraints are transformed into their crisp equivalents at different credibility levels, and ordinary fuzzy inclusion constraints are replaced by their robust deterministic constraints by setting α-cut levels. The FRCCP method can provide different system costs under different credibility levels (λ). From the results of sensitivity analyses, the operating cost of the landfill is a critical parameter. For managers, any factor that could induce cost fluctuations during landfill operation deserves serious observation and analysis. With FRCCP, useful solutions can be obtained to provide decision-making support for long-term planning of solid waste management systems. It could be further enhanced through incorporating methods of inexact analysis into its framework. It can also be applied to other environmental management problems.
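
    For context, credibility constraints of this kind admit closed-form crisp equivalents. For a triangular fuzzy variable \xi = (r_1, r_2, r_3) and a credibility level \lambda > 0.5, a standard result of credibility theory (not specific to this paper) gives

        \mathrm{Cr}\{\xi \le x\} \ge \lambda \iff x \ge (2 - 2\lambda)\, r_2 + (2\lambda - 1)\, r_3 ,

    so a larger \lambda pushes the deterministic bound toward the pessimistic end r_3 of the fuzzy number.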

  17. Military Occupational Speciality Training Cost Handbook (MOSB)

    DTIC Science & Technology

    1983-10-01

    [Record excerpt; most of the scanned abstract field is OCR residue from the report documentation page, Defense Finance and Accounting Center. Supersedes Volumes I and II, MOSB, dated 81.] Fixed costs include depreciation of equipment, minimum consumption of utilities, pay of minimum grounds staff, etc. Of course, per capita fixed cost will rise with a decreasing…

  18. Design optimization and probabilistic analysis of a hydrodynamic journal bearing

    NASA Technical Reports Server (NTRS)

    Liniecki, Alexander G.

    1990-01-01

    A nonlinear constrained optimization of a hydrodynamic bearing was performed, yielding three main variables: radial clearance, bearing length to diameter ratio, and lubricating oil viscosity. As an objective function, a combined model of temperature rise and oil supply was adopted. The optimized model of the bearing was simulated for a population of 1,000 cases using the Monte Carlo statistical method. It appeared that the so-called 'optimal solution' generated more than 50 percent failed bearings, because their minimum oil film thickness violated the stipulated minimum constraint value. As a remedy, a change of oil viscosity is suggested after the sensitivities of several variables have been investigated.

  19. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including circular-to-circular coplanar orbits, LEO to GEO transfers, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.

  20. Density and lithospheric structure at Tyrrhena Patera, Mars, from gravity and topography data

    NASA Astrophysics Data System (ADS)

    Grott, M.; Wieczorek, M. A.

    2012-09-01

    The Tyrrhena Patera highland volcano, Mars, is associated with a relatively well localized gravity anomaly and we have carried out a localized admittance analysis in the region to constrain the density of the volcanic load, the load thickness, and the elastic thickness at the time of load emplacement. The employed admittance model considers loading of an initially spherical surface, and surface as well as subsurface loading is taken into account. Our results indicate that the gravity and topography data available at Tyrrhena Patera are consistent with the absence of subsurface loading, but the presence of a small subsurface load cannot be ruled out. We obtain minimum load densities of 2960 kg m⁻³, minimum load thicknesses of 5 km, and minimum load volumes of 0.6 × 10⁶ km³. Photogeological evidence suggests that pyroclastic deposits make up at most 30% of this volume, such that the bulk of Tyrrhena Patera is likely composed of competent basalt. Best fitting model parameters are a load density of 3343 kg m⁻³, a load thickness of 10.8 km, and a load volume of 1.7 × 10⁶ km³. These relatively large load densities indicate that lava compositions are comparable to those at other martian volcanoes, and densities are comparable to those of the martian meteorites. The elastic thickness in the region is constrained to be smaller than 27.5 km at the time of loading, indicating surface heat flows in excess of 24 mW m⁻².

  1. CONSTRAINTS ON HYBRID METRIC-PALATINI GRAVITY FROM BACKGROUND EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, N. A.; Barreto, V. S., E-mail: ndal@roe.ac.uk, E-mail: vsm@roe.ac.uk

    2016-02-20

    In this work, we introduce two models of the hybrid metric-Palatini theory of gravitation. We explore their background evolution, showing explicitly that one recovers standard General Relativity with an effective cosmological constant at late times. This happens because the Palatini Ricci scalar evolves toward and asymptotically settles at the minimum of its effective potential during cosmological evolution. We then use a combination of cosmic microwave background, supernovae, and baryon acoustic oscillation background data to constrain the models’ free parameters. For both models, we are able to constrain the maximum deviation from the gravitational constant G one can have at early times to be around 1%.

  2. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading-order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.

  3. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and at the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  4. Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong

    One approach to guarantee the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place a minimum number of SGs at a subset of candidate locations such that every underwater sensor node (USN) is connected (or 2-connected) to the base station. We propose a polynomial-time approximation algorithm for each of the connected and survivable C-SGP problems. Simulations are conducted to verify our algorithms' efficiency.

  5. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.
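
    The core of the scheme is a shortest-path computation inside each filtering window. The sketch below illustrates that step on a grayscale patch with a plain absolute-intensity-difference transition cost; the paper's actual cost is defined in a hybrid spatial-color space, so this is an assumption-laden simplification.

```python
# Dijkstra from all boundary pixels of one filtering window to its center.
# The minimum total transition cost serves as an outlier score: impulses
# at the center make every incoming path expensive.
import heapq
import numpy as np

def min_path_cost_to_center(patch):
    h, w = patch.shape
    center = (h // 2, w // 2)
    dist = np.full((h, w), np.inf)
    pq = []
    # All boundary pixels are sources with zero starting cost.
    for i in range(h):
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):
                dist[i, j] = 0.0
                heapq.heappush(pq, (0.0, i, j))
    while pq:
        d, i, j = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        if (i, j) == center:
            return d
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + abs(float(patch[ni, nj]) - float(patch[i, j]))
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(pq, (nd, ni, nj))
    return dist[center]

patch = np.full((5, 5), 100.0)
patch[2, 2] = 255.0                    # impulse at the window center
print(min_path_cost_to_center(patch))  # high cost flags an outlier
```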

  6. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  7. Cost benefit analysis of the transfer of NASA remote sensing technology to the state of Georgia

    NASA Technical Reports Server (NTRS)

    Zimmer, R. P. (Principal Investigator); Wilkins, R. D.; Kelly, D. L.; Brown, D. M.

    1977-01-01

    The author has identified the following significant results. First order benefits can generally be quantified, thus allowing quantitative comparisons of candidate land cover data systems. A meaningful dollar evaluation of LANDSAT can be made by a cost comparison with equally effective data systems. Users of LANDSAT data can be usefully categorized as performing three general functions: planning, permitting, and enforcing. The value of LANDSAT data to the State of Georgia is most sensitive to the parameters: discount rate, digitization cost, and photo acquisition cost. Under a constrained budget, LANDSAT could provide digitized land cover information roughly seven times more frequently than could otherwise be obtained. Thus, the services derived from LANDSAT data have a positive net present value in comparison with the baseline system, and under a constrained budget the LANDSAT system could still provide more frequent information than could otherwise be obtained.

  8. Competition versus regulation: constraining hospital discharge costs.

    PubMed

    Weil, T P

    1996-01-01

    A fundamental choice many states now face when implementing their cost containment efforts for the health field is to weigh the extent to which they should rely on either competitive or regulatory strategies. To study the efficacy of America's current market-driven approaches to constrain health expenditures, an analysis was undertaken of 1993 hospital discharge costs and related data of the 15 states in the United States with the highest percent of health maintenance organization (HMO) market penetration. The study's major finding was that a facility operating with a lesser number of paid hours was more critical in reducing average expense per discharge than whether the hospital was located in a "competitive" or a "regulated" state. What is proposed herein to enhance hospital cost containment efforts is for a state to almost simultaneously use both market-driven and regulatory strategies similar to what was implemented in California over the last three decades and in Germany for the last 100 years.

  9. Prepositioning emergency supplies under uncertainty: a parametric optimization method

    NASA Astrophysics Data System (ADS)

    Bai, Xuejie; Gao, Jinwu; Liu, Yankui

    2018-07-01

    Prepositioning of emergency supplies is an effective method for increasing preparedness for disasters and has received much attention in recent years. In this article, the prepositioning problem is studied by a robust parametric optimization method. The transportation cost, supply, demand and capacity are unknown prior to the extraordinary event, which are represented as fuzzy parameters with variable possibility distributions. The variable possibility distributions are obtained through the credibility critical value reduction method for type-2 fuzzy variables. The prepositioning problem is formulated as a fuzzy value-at-risk model to achieve a minimum total cost incurred in the whole process. The key difficulty in solving the proposed optimization model is to evaluate the quantile of the fuzzy function in the objective and the credibility in the constraints. The objective function and constraints can be turned into their equivalent parametric forms through chance constrained programming under the different confidence levels. Taking advantage of the structural characteristics of the equivalent optimization model, a parameter-based domain decomposition method is developed to divide the original optimization problem into six mixed-integer parametric submodels, which can be solved by standard optimization solvers. Finally, to explore the viability of the developed model and the solution approach, some computational experiments are performed on realistic scale case problems. The computational results reported in the numerical example show the credibility and superiority of the proposed parametric optimization method.

  10. Electro-Optical Design for Efficient Visual Communication

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-Ur

    1995-01-01

    Visual communication, in the form of telephotography and television, for example, can be regarded as efficient only if the amount of information that it conveys about the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. Elsewhere we have addressed the problem of assessing the end-to-end performance of visual communication systems in terms of their efficiency in this sense by integrating the critical limiting factors that constrain image gathering into classical communications theory. We use this approach to assess the electro-optical design of image gathering devices as a function of the f number and apodization of the objective lens and the aperture size and sampling geometry of the photodetection mechanism. Results show that an image gathering device that is designed to optimize information capacity performs similarly to the human eye. For both, the performance approaches the maximum possible, in terms of the efficiency with which the acquired information can be transmitted as decorrelated data, and the fidelity, sharpness, and clarity with which fine detail can be restored.

  11. Electro-optical design for efficient visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur

    1995-03-01

    Visual communication, in the form of telephotography and television, for example, can be regarded as efficient only if the amount of information that it conveys about the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. Elsewhere we have addressed the problem of assessing the end-to-end performance of visual communication systems in terms of their efficiency in this sense by integrating the critical limiting factors that constrain image gathering into classical communication theory. We use this approach to assess the electro-optical design of image-gathering devices as a function of the f number and apodization of the objective lens and the aperture size and sampling geometry of the photodetection mechanism. Results show that an image-gathering device that is designed to optimize information capacity performs similarly to the human eye. For both, the performance approaches the maximum possible, in terms of the efficiency with which the acquired information can be transmitted as decorrelated data, and the fidelity, sharpness, and clarity with which fine detail can be restored.

  12. A bat algorithm with mutation for UCAV path planning.

    PubMed

    Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi

    2012-01-01

    Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route considering different kinds of constraints in complicated battlefield environments. The original bat algorithm (BA) is first used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed, in which a modification is applied to mutate between bats while new solutions are updated. The UCAV can then find the safe path by connecting the chosen coordinate nodes while avoiding the threat areas and incurring minimum fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and the improved metaheuristic approach BAM is also presented. To prove the performance of this proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiment shows that the proposed approach is more effective and feasible in UCAV path planning than the other models.
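
    For concreteness, a compact sketch of the basic BA loop with a simple mutation step between bats is given below. All parameter values and the quadratic stand-in cost are illustrative assumptions, not the route cost or tuning used in the paper.

```python
# Compact bat-algorithm sketch on a generic cost function. The line marked
# "mutation" recombines random bats, loosely in the spirit of BAM.
import numpy as np

rng = np.random.default_rng(0)

def cost(x):                      # stand-in for the UCAV route cost
    return np.sum(x ** 2)

n, dim, iters = 20, 5, 200
fmin_, fmax_ = 0.0, 2.0           # pulse-frequency range
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
fit = np.apply_along_axis(cost, 1, pos)
best = pos[fit.argmin()].copy()

for _ in range(iters):
    for i in range(n):
        f = fmin_ + (fmax_ - fmin_) * rng.random()
        vel[i] += (pos[i] - best) * f          # pull toward the best bat
        cand = pos[i] + vel[i]
        a, b = rng.choice(n, 2, replace=False)
        if rng.random() < 0.5:                 # mutation between bats
            cand = best + 0.5 * (pos[a] - pos[b])
        c = cost(cand)
        if c < fit[i]:                         # greedy acceptance
            pos[i], fit[i] = cand, c
        if c < cost(best):
            best = cand.copy()

print("best cost:", cost(best))
```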

  13. Statistical modelling as an aid to the design of retail sampling plans for mycotoxins in food.

    PubMed

    MacArthur, Roy; MacDonald, Susan; Brereton, Paul; Murray, Alistair

    2006-01-01

    A study has been carried out to assess appropriate statistical models for use in evaluating retail sampling plans for the determination of mycotoxins in food. A compound gamma model was found to be a suitable fit. A simulation model based on the compound gamma model was used to produce operating characteristic curves for a range of parameters relevant to retail sampling. The model was also used to estimate the minimum number of increments necessary to minimize the overall measurement uncertainty. Simulation results showed that measurements based on retail samples (for which the maximum number of increments is constrained by cost) may produce fit-for-purpose results for the measurement of ochratoxin A in dried fruit, but are unlikely to do so for the measurement of aflatoxin B1 in pistachio nuts. In order to produce a more accurate simulation, further work is required to determine the degree of heterogeneity associated with batches of food products. With appropriate parameterization in terms of physical and biological characteristics, the systems developed in this study could be applied to other analyte/matrix combinations.

  14. The Goal of Adequate Nutrition: Can It Be Made Affordable, Sustainable, and Universal?

    PubMed

    McFarlane, Ian

    2016-11-30

    Until about 1900, large proportions of the world population endured hunger and poverty. The 20th century saw world population increase from 1.6 to 6.1 billion, accompanied and to some extent made possible by rapid improvements in health standards and food supply, with associated advances in agricultural and nutrition sciences. In this paper, I use the application of linear programming (LP) in preparation of rations for farm animals to illustrate a method of calculating the lowest cost of a human diet selected from locally available food items, constrained to provide recommended levels of food energy and nutrients; then, to find a realistic minimum cost, I apply the further constraint that the main sources of food energy in the costed diet are weighted in proportion to the actual reported consumption of food items in that area. Worldwide variations in dietary preferences raise the issue as to the sustainability of popular dietary regimes, and the paper reviews the factors associated with satisfying requirements for adequate nutrition within those regimes. The ultimate physical constraints on food supply are described, together with the ways in which climate change may affect those constraints. During the 20th century, food supply increased sufficiently in most areas to keep pace with the rapid increase in world population. Many challenges will need to be overcome if food supply is to continue to meet demand, and those challenges are made more severe by rising expectations of quality of life in the developing world, as well as by the impacts of climate change on agriculture and aquaculture.
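
    The LP calculation described here is the classic least-cost diet problem. A minimal sketch using SciPy's linear-programming routine follows; the three foods and their nutrient contents are invented placeholder data, and only energy and protein floors are imposed rather than the paper's full set of nutrient and consumption-weighting constraints.

```python
# Least-cost diet LP: minimize cost subject to daily energy and protein
# minimums. Food data are made-up placeholders.
from scipy.optimize import linprog

# columns: rice, beans, oil (cost per 100 g)
cost = [0.10, 0.25, 0.30]
# nutrient content per 100 g, negated so A_ub @ x <= b_ub encodes floors
A = [[-130.0, -340.0, -880.0],   # kcal
     [-2.7,   -21.0,    0.0]]    # protein (g)
b = [-2500.0, -60.0]             # daily minimums, negated

res = linprog(cost, A_ub=A, b_ub=b, bounds=[(0, None)] * 3)
print("100 g units per food:", res.x, "daily cost:", res.fun)
```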

  15. Considering a Cost Analysis Project? A Planned Approach

    ERIC Educational Resources Information Center

    Parish, Mina; Teetor, Travis

    2006-01-01

    As resources become more constrained in the library community, many organizations are finding that they need to have a better understanding of their costs. To this end, this article will present one approach to conducting a cost analysis (including questions to ask yourself, project team makeup, organizational support, and data organization). We…

  16. Uncertainty, imprecision, and the precautionary principle in climate change assessment.

    PubMed

    Borsuk, M E; Tomassini, L

    2005-01-01

    Statistical decision theory can provide useful support for climate change decisions made under conditions of uncertainty. However, the probability distributions used to calculate expected costs in decision theory are themselves subject to uncertainty, disagreement, or ambiguity in their specification. This imprecision can be described using sets of probability measures, from which upper and lower bounds on expectations can be calculated. However, many representations, or classes, of probability measures are possible. We describe six of the more useful classes and demonstrate how each may be used to represent climate change uncertainties. When expected costs are specified by bounds, rather than precise values, the conventional decision criterion of minimum expected cost is insufficient to reach a unique decision. Alternative criteria are required, and the criterion of minimum upper expected cost may be desirable because it is consistent with the precautionary principle. Using simple climate and economics models as an example, we determine the carbon dioxide emissions levels that have minimum upper expected cost for each of the selected classes. There can be wide differences in these emissions levels and their associated costs, emphasizing the need for care when selecting an appropriate class.
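
    The minimum upper expected cost criterion is easy to state computationally: for each action, take the worst-case expectation over the set of candidate probability measures, then choose the action with the smallest such bound. The toy sketch below uses a crude finite set of distributions standing in for a class of probability measures; all numbers are illustrative.

```python
# Minimum upper expected cost over a finite "credal set" of distributions.
import numpy as np

costs = {                      # cost of each action under 3 scenarios
    "low_emissions":  np.array([8.0, 8.0, 8.0]),
    "high_emissions": np.array([2.0, 6.0, 20.0]),
}
dists = [np.array([0.5, 0.3, 0.2]),   # plausible scenario distributions
         np.array([0.2, 0.3, 0.5]),
         np.array([0.1, 0.2, 0.7])]

# Upper expectation = worst case over the set; then minimize over actions.
upper = {a: max(p @ c for p in dists) for a, c in costs.items()}
print(upper, "->", min(upper, key=upper.get))
```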

  17. 45 CFR 2521.60 - To what extent must my share of program costs increase over time?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    [Flattened table: minimum member support, as a percentage of program costs, listed by program year (Years 1 through 10); minimum member support begins at 15 percent. 45 CFR, Public Welfare, 2010-10-01 edition.]

  18. 45 CFR 2521.60 - To what extent must my share of program costs increase over time?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    [Flattened table: minimum member support, as a percentage of program costs, listed by program year (Years 1 through 10); minimum member support begins at 15 percent. 45 CFR, Public Welfare, 2011-10-01 edition.]

  19. U and Th isotope constraints on the duration of Heinrich events H0-H4 in the southeastern Labrador Sea

    NASA Astrophysics Data System (ADS)

    Veiga-Pires, C. C.; Hillaire-Marcel, C.

    1999-04-01

    The duration and sequence of events recorded in Heinrich layers at sites near the Hudson Strait source area for ice-rafted material are still poorly constrained, notably because of the limits and uncertainties of the 14C chronology. Here we use high-resolution 230Th-excess measurements, in a 6 m sequence raised from Orphan Knoll (southern Labrador Sea), to constrain the duration of the deposition of the five most recent Heinrich (H) layers. On the basis of maximum/minimum estimates for the mean glacial 230Th-excess flux at the studied site, a minimum/maximum duration of 1.0/0.6, 1.4/0.8, 1.3/0.8, 1.5/0.9, and 2.1/1.3 kyr is obtained for H0 (~Younger Dryas), H1, H2, H3, and H4, respectively. Thorium-230-excess inventories and other sedimentological features indicate a reduced but still significant lateral sedimentary supply by the Western Boundary Undercurrent during the glacial interval. U and Th series systematics also provide insights into source rocks of H layer sediments (i.e., into distal Irminger Basin/local Labrador Sea supplies).

  20. An Algorithmic, Pie-Crusting Medial Soft Tissue Release Reduces the Need for Constrained Inserts in Patients With Severe Varus Deformity Undergoing Total Knee Arthroplasty.

    PubMed

    Goudarz Mehdikhani, Kaveh; Morales Moreno, Beatriz; Reid, Jeremy J; de Paz Nieves, Ana; Lee, Yuo-Yu; González Della Valle, Alejandro

    2016-07-01

    We studied the need to use a constrained insert for residual intraoperative instability and the 1-year results of patients undergoing total knee arthroplasty (TKA) for a varus deformity. In a control group, a "classic" subperiosteal release of the medial soft tissue sleeve was performed as popularized by pioneers of TKA. In the study group, an algorithmic approach that selectively releases and pie-crusts posteromedial structures in extension and anteromedial structures in flexion was used. All surgeries were performed by a single surgeon using a measured resection technique and posterior-stabilized, cemented implants. There were 228 TKAs in the control group and 188 in the study group. Outcome variables included the use of a constrained insert, and the Knee Society Score at 6 weeks, 4 months, and 1 year postoperatively. The effects of the release technique on the use of constrained inserts and on clinical outcomes were analyzed in a multivariate model controlling for age, sex, body mass index, and severity of deformity. The use of constrained inserts was significantly lower in study than in control patients (8% vs 18%; P = .002). There was no difference in the Knee Society Score and range of motion between the groups at last follow-up. No patient developed postoperative medial instability. This algorithmic, pie-crusting release technique resulted in a significant reduction in the use of constrained inserts with no detrimental effects on clinical results, joint function, and stability. As constrained TKA implants are more costly than nonconstrained ones, if the adopted technique proves to be safe in the long term, it may cause a positive shift in value for hospitals and cost savings in the health care system. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Cyber warfare and electronic warfare integration in the operational environment of the future: cyber electronic warfare

    NASA Astrophysics Data System (ADS)

    Askin, Osman; Irmak, Riza; Avsever, Mustafa

    2015-05-01

    For states with advanced technology, effective use of electronic warfare and cyber warfare will be the main determining factor in winning a war in the future operational environment. Developed states will be able to conclude the conflicts they enter with minimum human casualties and minimum cost thanks to high technology. Considering the increasing number of world economic problems and the development of human rights and humanitarian law, it is easy to understand the importance of minimum cost and minimum loss of human life. In this paper, cyber warfare and electronic warfare concepts are examined together with their historical development, and the relationship between them is explained. Finally, assessments are offered about the use of cyber electronic warfare in the coming years.

  2. Comparative evaluation of distributed-collector solar thermal electric power plants

    NASA Technical Reports Server (NTRS)

    Fujita, T.; El Gabalawi, N.; Herrera, G. G.; Caputo, R. S.

    1978-01-01

    Distributed-collector solar thermal-electric power plants are compared by projecting power plant economics of selected systems to the 1990-2000 timeframe. The approach taken is to evaluate the performance of the selected systems under the same weather conditions. Capital and operational costs are estimated for each system. Energy costs are calculated for different plant sizes based on the plant performance and the corresponding capital and maintenance costs. Optimum systems are then determined as the systems with the minimum energy costs for a given load factor. The optimum system is comprised of the best combination of subsystems which give the minimum energy cost for every plant size. Sensitivity analysis is done around the optimum point for various plant parameters.

  3. Variable Cultural Acquisition Costs Constrain Cumulative Cultural Evolution

    PubMed Central

    Mesoudi, Alex

    2011-01-01

    One of the hallmarks of the human species is our capacity for cumulative culture, in which beneficial knowledge and technology is accumulated over successive generations. Yet previous analyses of cumulative cultural change have failed to consider the possibility that as cultural complexity accumulates, it becomes increasingly costly for each new generation to acquire from the previous generation. In principle this may result in an upper limit on the cultural complexity that can be accumulated, at which point accumulated knowledge is so costly and time-consuming to acquire that further innovation is not possible. In this paper I first review existing empirical analyses of the history of science and technology that support the possibility that cultural acquisition costs may constrain cumulative cultural evolution. I then present macroscopic and individual-based models of cumulative cultural evolution that explore the consequences of this assumption of variable cultural acquisition costs, showing that making acquisition costs vary with cultural complexity causes the latter to reach an upper limit above which no further innovation can occur. These models further explore the consequences of different cultural transmission rules (directly biased, indirectly biased and unbiased transmission), population size, and cultural innovations that themselves reduce innovation or acquisition costs. PMID:21479170

  4. The resource utilization group system: its effect on nursing home case mix and costs.

    PubMed

    Thorpe, K E; Gertler, P J; Goldman, P

    1991-01-01

    Using data from 1985 and 1986, we examine how New York state's prospective payment system affected nursing homes. The system, called Resource Utilization Group (RUG-II), aimed to limit nursing home cost growth and improve access to nursing homes by "heavy-care" patients. As in Medicare's prospective hospital reimbursement system, payments to nursing homes were based on a "price," rather than facility-specific rates. With respect to cost growth, we observed considerable diversity among homes. Specifically, those nursing homes most financially constrained by the RUG-II methodology exhibited the slowest rates of cost growth; we observed higher cost growth among the homes least constrained. This higher rate of cost growth raises a question about the desirability of using a pricing methodology to determine nursing home payment rates. In addition to moderating cost growth, we also observed a significant change in the mix of patients admitted to nursing homes. During the first year of the RUG-II program, nursing homes admitted more heavy-care patients and reduced days of care to lighter-care patients. Thus, through 1986, the RUG-II program appeared to satisfy at least one of its major policy objectives.

  5. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem, Omveer, Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed to design a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. In this technique, we propose a regulator based on a hybrid of the Genetic Algorithm (GA) and Simulated Annealing (SA), GASA. GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time. The feasible solution decreases the cost function at each step rather than fully minimizing it.

  6. Wavefield reconstruction inversion with a multiplicative cost function

    NASA Astrophysics Data System (ADS)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is controlled by a scaling factor that balances the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave equation is penalised so heavily that it effectively imposes a hard constraint on the inversion. If it is too small, the solution is poorly constrained, as the optimisation essentially penalises the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when, in the latter, the trade-off parameter is appropriately scaled and adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.

  7. Is the minimum enough? Affordability of a nutritious diet for minimum wage earners in Nova Scotia (2002-2012).

    PubMed

    Newell, Felicia D; Williams, Patricia L; Watt, Cynthia G

    2014-05-09

    This paper aims to assess the affordability of a nutritious diet for households earning minimum wage in Nova Scotia (NS) from 2002 to 2012 using an economic simulation that includes food costing and secondary data. The cost of the National Nutritious Food Basket (NNFB) was assessed with a stratified, random sample of grocery stores in NS during six time periods: 2002, 2004/2005, 2007, 2008, 2010 and 2012. The NNFB's cost was factored into affordability scenarios for three different household types relying on minimum wage earnings: a household of four; a lone mother with three children; and a lone man. Essential monthly living expenses were deducted from monthly net incomes using methods that were standardized from 2002 to 2012 to determine whether adequate funds remained to purchase a basic nutritious diet across the six time periods. A 79% increase to the minimum wage in NS has resulted in a decrease in the potential deficit faced by each household scenario in the period examined. However, the household of four and the lone mother with three children would still face monthly deficits ($44.89 and $496.77, respectively, in 2012) if they were to purchase a nutritiously sufficient diet. As a social determinant of health, risk of food insecurity is a critical public health issue for low wage earners. While it is essential to increase the minimum wage in the short term, adequately addressing income adequacy in NS and elsewhere requires a shift in thinking from a focus on minimum wage towards more comprehensive policies ensuring an adequate livable income for everyone.

  8. Mars Observer trajectory and orbit design

    NASA Technical Reports Server (NTRS)

    Beerer, Joseph G.; Roncoli, Ralph B.

    1991-01-01

    The Mars Observer launch, interplanetary, Mars orbit insertion, and mapping orbit designs are described. The design objective is to enable a near-maximum spacecraft mass to be placed in orbit about Mars. This is accomplished by keeping spacecraft propellant requirements to a minimum, selecting a minimum acceptable launch period, equalizing the spacecraft velocity change requirement at the beginning and end of the launch period, and constraining the orbit insertion maneuvers to be coplanar. The mapping orbit design objective is to provide the opportunity for global observation of the planet by the science instruments while facilitating the spacecraft design. This is realized with a sun-synchronous near-polar orbit whose ground-track pattern covers the planet at progressively finer resolution.

  9. Efficient Compressed Sensing Based MRI Reconstruction using Nonconvex Total Variation Penalties

    NASA Astrophysics Data System (ADS)

    Lazzaro, D.; Loli Piccolomini, E.; Zama, F.

    2016-10-01

    This work addresses the problem of Magnetic Resonance Image reconstruction from highly sub-sampled measurements in the Fourier domain. It is modeled as a constrained minimization problem, where the objective function is a non-convex function of the gradient of the unknown image and the constraints are given by the data fidelity term. We propose an algorithm, Fast Non-Convex Reweighted (FNCR), in which the constrained problem is solved by a reweighting scheme, as a strategy to overcome the non-convexity of the objective function, with an adaptive adjustment of the penalization parameter. The resulting iterative algorithm is fast, and we can prove that it converges to a local minimum because the constrained problem satisfies the Kurdyka-Lojasiewicz property. Moreover, the adaptation of the non-convex l0 approximation and penalization parameters, by means of a continuation technique, allows us to obtain good-quality solutions, avoiding getting stuck in unwanted local minima. Some numerical experiments performed on sub-sampled MRI data show the efficiency of the algorithm and the accuracy of the solution.
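
    The reweighting strategy can be illustrated in one dimension: a non-convex gradient penalty is handled by solving a sequence of weighted quadratic problems whose weights are refreshed from the current gradient magnitudes. The sketch below is a generic iteratively reweighted TV denoiser built on that idea, not the authors' FNCR solver, and all parameter values are assumptions.

```python
# Iteratively reweighted 1-D TV denoising: small gradients get large
# weights (strong smoothing), jumps get small weights (preserved edges).
import numpy as np

def reweighted_tv_denoise(y, lam=1.0, iters=10, eps=1e-3):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # discrete gradient operator
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)     # weights from current gradient
        # Solve the convex surrogate (I + lam * D^T W D) x = y
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        x = np.linalg.solve(A, y)
    return x

rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0, 0.3], 30)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
print(np.round(reweighted_tv_denoise(noisy)[::15], 2))
```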

  10. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to decouple the equations of motion, by time scale, into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point boundary-value problems obtained by application of the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is applied to two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time scale. Solutions for the first formulation are carried out only to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three locally optimum cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  11. Resource Constrained Planning of Multiple Projects with Separable Activities

    NASA Astrophysics Data System (ADS)

    Fujii, Susumu; Morita, Hiroshi; Kanawa, Takuya

    In this study we consider a resource-constrained planning problem for multiple projects with separable activities. The problem is to plan the processing of the activities subject to resource availability with time windows. We propose a solution algorithm based on the branch-and-bound method to obtain the optimal solution minimizing the completion time of all projects. We develop three methods to improve computational efficiency: obtaining an initial solution with a minimum-slack-time rule, estimating a lower bound that considers both time and resource constraints, and introducing an equivalence relation for the bounding operation. The effectiveness of the proposed methods is demonstrated by numerical examples. Especially as the number of planned projects increases, the average computational time and the number of searched nodes are reduced.

  12. Low-Cost Virtual Laboratory Workbench for Electronic Engineering

    ERIC Educational Resources Information Center

    Achumba, Ifeyinwa E.; Azzi, Djamel; Stocker, James

    2010-01-01

    The laboratory component of undergraduate engineering education poses challenges in resource constrained engineering faculties. The cost, time, space and physical presence requirements of the traditional (real) laboratory approach are the contributory factors. These resource constraints may mitigate the acquisition of meaningful laboratory…

  13. 40 CFR 310.16 - What kind of cost documentation is necessary?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS REIMBURSEMENT TO LOCAL GOVERNMENTS FOR... cost documentation is necessary? Cost documentation must be adequate for an audit. At a minimum, you...

  14. Experimental joint quantum measurements with minimum uncertainty.

    PubMed

    Ringbauer, Martin; Biggerstaff, Devon N; Broome, Matthew A; Fedrizzi, Alessandro; Branciard, Cyril; White, Andrew G

    2014-01-17

    Quantum physics constrains the accuracy of joint measurements of incompatible observables. Here we test tight measurement-uncertainty relations using single photons. We implement two independent, idealized uncertainty-estimation methods, the three-state method and the weak-measurement method, and adapt them to realistic experimental conditions. Exceptional quantum state fidelities of up to 0.999 98(6) allow us to verge upon the fundamental limits of measurement uncertainty.

  15. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units are uniformly subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE-simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times smaller than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and can therefore contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  16. 7 CFR 4288.10 - Applicant eligibility.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... least minimum points for cost-effectiveness under § 4288.21(b)(1). (4) Percentage of reduction of fossil fuel use. The application must be awarded at least minimum points for percentage of reduction of fossil...

  17. 7 CFR 4288.10 - Applicant eligibility.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... least minimum points for cost-effectiveness under § 4288.21(b)(1). (4) Percentage of reduction of fossil fuel use. The application must be awarded at least minimum points for percentage of reduction of fossil...

  18. 7 CFR 4288.10 - Applicant eligibility.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... least minimum points for cost-effectiveness under § 4288.21(b)(1). (4) Percentage of reduction of fossil fuel use. The application must be awarded at least minimum points for percentage of reduction of fossil...

  19. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  20. Future trends which will influence waste disposal.

    PubMed Central

    Wolman, A

    1978-01-01

    The disposal and management of solid wastes are ancient problems. The evolution of practices naturally changed as populations grew and sites for disposal became less acceptable. The central search was for easy disposal at minimum costs. The methods changed from indiscriminate dumping to sanitary landfill, feeding to swine, reduction, incineration, and various forms of re-use and recycling. Virtually all procedures have disabilities and rising costs. Many methods once abandoned are being rediscovered. Promises for so-called innovations outstrip accomplishments. Markets for salvage vary widely or disappear completely. The search for conserving materials and energy at minimum cost must go on forever. PMID:570105

  1. Multi-Objective Trajectory Optimization of a Hypersonic Reconnaissance Vehicle with Temperature Constraints

    NASA Astrophysics Data System (ADS)

    Masternak, Tadeusz J.

    This research determines temperature-constrained optimal trajectories for a scramjet-based hypersonic reconnaissance vehicle by developing an optimal control formulation and solving it using a variable order Gauss-Radau quadrature collocation method with a Non-Linear Programming (NLP) solver. The vehicle is assumed to be an air-breathing reconnaissance aircraft that has specified takeoff/landing locations, airborne refueling constraints, specified no-fly zones, and specified targets for sensor data collections. A three degree of freedom scramjet aircraft model is adapted from previous work and includes flight dynamics, aerodynamics, and thermal constraints. Vehicle control is accomplished by controlling angle of attack, roll angle, and propellant mass flow rate. This model is incorporated into an optimal control formulation that includes constraints on both the vehicle and mission parameters, such as avoidance of no-fly zones and coverage of high-value targets. To solve the optimal control formulation, a MATLAB-based package called General Pseudospectral Optimal Control Software (GPOPS-II) is used, which transcribes continuous time optimal control problems into an NLP problem. In addition, since a mission profile can have varying vehicle dynamics and en-route imposed constraints, the optimal control problem formulation can be broken up into several "phases" with differing dynamics and/or varying initial/final constraints. Optimal trajectories are developed using several different performance costs in the optimal control formulation: minimum time, minimum time with control penalties, and maximum range. The resulting analysis demonstrates that optimal trajectories that meet specified mission parameters and constraints can be quickly determined and used for larger-scale operational and campaign planning and execution.

  2. C-semiring Frameworks for Minimum Spanning Tree Problems

    NASA Astrophysics Data System (ADS)

    Bistarelli, Stefano; Santini, Francesco

    In this paper we define general algebraic frameworks for the Minimum Spanning Tree problem based on the structure of c-semirings. We propose general algorithms that can compute such trees by following different cost criteria, which must all be specific instantiations of c-semirings. Our algorithms are extensions of well-known procedures, such as Prim's or Kruskal's, and show the expressivity of these algebraic structures. They can also deal with partially ordered costs on the edges, as the sketch below indicates.
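
    This is a minimal sketch under the stated assumption that edge costs admit a total order usable for sorting; `combine` plays the role of the semiring product and `key` encodes the preference order induced by the additive operation. The (min, +) instantiation shown recovers the classic MST; a (max, min) bottleneck variant would only swap the two callables.

```python
# Kruskal's procedure parameterized by a c-semiring-style cost structure.
def kruskal(n, edges, combine, identity, key):
    parent = list(range(n))
    def find(u):                       # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree, total = [], identity
    for u, v, c in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components
            parent[ru] = rv
            tree.append((u, v, c))
            total = combine(total, c)  # semiring "product" of edge costs
    return tree, total

edges = [(0, 1, 4), (1, 2, 2), (0, 2, 5), (2, 3, 1)]
# Classic (min, +) instantiation: total cost is the sum of tree edges.
print(kruskal(4, edges, combine=lambda a, b: a + b,
              identity=0, key=lambda e: e[2]))
```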

  3. [Economic aspects of oncological esophageal surgery : Centralization is essential].

    PubMed

    von Dercks, N; Gockel, I; Mehdorn, M; Lorenz, D

    2017-01-01

    The incidence of esophageal carcinoma has increased in recent years in Germany. The aim of this article is a discussion of the economic aspects of oncological esophageal surgery within the German diagnosis-related groups (DRG) system focusing on the association between minimum caseload requirements and outcome quality as well as costs. The margins for the DRG classification G03A are low and quickly exhausted if complications determine the postoperative course. A current study using nationwide German hospital discharge data proved a significant difference in hospital mortality between clinics with and without achieving the minimum caseload requirements for esophagectomy. Data from the USA clearly showed that besides patient-relevant parameters, the caseload of a surgeon is relevant for the cost of treatment. Such cost-related analyses do not exist in Germany at present. Scientific validation of reliable minimum caseload numbers for oncological esophagectomy is desirable in the future.

  4. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism, which can decrease both the thread scheduling overhead and the number of thread stacks, is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is memory cost optimized, but energy cost is also optimized in LiveOS, and this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared to a single-core WSN system. Memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make the multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  5. Cost Study of Educational Media Systems and Their Equipment Components. Volume III, A Supplementary Report: Computer Assisted Instruction. Final Report.

    ERIC Educational Resources Information Center

    General Learning Corp., Washington, DC.

    The COST-ED model (Costs of Schools, Training, and Education) of the instructional process encourages the recognition of management alternatives and potential cost-savings. It is used to calculate the minimum cost of performing specified instructional tasks. COST-ED components are presented as cost modules in a flowchart format for manpower,…

  6. Value, Cost, and Sharing: Open Issues in Constrained Clustering

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.

    2006-01-01

    Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
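
    As a concrete example of how pairwise constraints enter a clustering loop, the sketch below implements a COP-KMeans-style assignment step that skips any cluster whose choice would violate a must-link or cannot-link constraint with an already-assigned point. It is a generic illustration of the constraint semantics, not a specific method from the paper; the data and constraints are invented.

```python
# Constraint-aware nearest-center assignment (COP-KMeans style).
import numpy as np

def violates(i, k, assign, must, cannot):
    for a, b in must:        # must-link: partner must share cluster k
        j = b if a == i else a if b == i else None
        if j is not None and assign.get(j, k) != k:
            return True
    for a, b in cannot:      # cannot-link: partner must not be in k
        j = b if a == i else a if b == i else None
        if j is not None and assign.get(j) == k:
            return True
    return False

X = np.array([[0.0], [0.1], [5.0], [5.1]])
centers = np.array([[0.0], [5.0]])
must, cannot = [(0, 1)], [(1, 2)]
assign = {}
for i, x in enumerate(X):
    # try clusters from nearest to farthest, skipping violations
    for k in np.argsort(np.linalg.norm(centers - x, axis=1)):
        if not violates(i, int(k), assign, must, cannot):
            assign[i] = int(k)
            break
print(assign)
```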

  7. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
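
    A first-order prediction-correction step can be sketched generically: predict how the minimizer drifts using the time derivative of the gradient, then correct with a gradient step at the new time. The toy below tracks the minimizer of f(x; t) = 0.5*(x - sin t)^2 and is only a schematic analogue of the methods developed in the article; step sizes are assumptions.

```python
# Prediction-correction tracking of a moving minimizer x*(t) = sin(t).
import numpy as np

grad = lambda x, t: x - np.sin(t)    # gradient of f(x; t)
dgrad_dt = lambda x, t: -np.cos(t)   # partial of the gradient w.r.t. time
hess = lambda x, t: 1.0              # Hessian of f(x; t)

x, dt, alpha = 0.0, 0.1, 0.5
for step in range(50):
    t = step * dt
    # prediction: first-order update accounting for the drift in time
    x = x - dt * dgrad_dt(x, t) / hess(x, t)
    # correction: gradient descent on the cost at time t + dt
    x = x - alpha * grad(x, t + dt)

print(x, np.sin(50 * dt))            # tracked point vs. true minimizer
```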

  8. The economic burden of meningitis to households in Kassena-Nankana district of Northern Ghana.

    PubMed

    Akweongo, Patricia; Dalaba, Maxwell A; Hayden, Mary H; Awine, Timothy; Nyaaba, Gertrude N; Anaseba, Dominic; Hodgson, Abraham; Forgor, Abdulai A; Pandya, Rajul

    2013-01-01

    To estimate the direct and indirect costs of meningitis to households in the Kassena-Nankana District of Ghana, a cost-of-illness (COI) survey was conducted between 2010 and 2011. The COI was computed from a retrospective review of 80 meningitis cases' answers to questions about direct medical costs, direct non-medical costs incurred, and productivity losses due to a recent meningitis incident. The average direct and indirect cost of treating meningitis in the district was GH¢152.55 (US$101.7) per household. This is equivalent to about two months' minimum wage earned by Ghanaians in unskilled paid jobs in 2009. Households lost 29 days of work per meningitis case, and thus those in minimum-wage paid jobs lost a monthly minimum wage of GH¢76.85 (US$51.23) due to the illness. Patients who were insured spent an average of GH¢38.5 (US$25.67) in direct medical costs, while uninsured patients spent as much as GH¢177.9 (US$118.6) per case. Patients with sequelae incurred additional costs of GH¢22.63 (US$15.08) per case. The least poor were more exposed to meningitis than the poorest. Meningitis is a debilitating but preventable disease that affects people living in the Sahel and in poorer conditions. The cost of meningitis treatment may further impoverish these households. Widespread mass vaccination would save households an equivalent of GH¢175.18 (US$117) and avert impairment due to meningitis.

  9. Minimum savings requirements in shared savings provider payment.

    PubMed

    Pope, Gregory C; Kautter, John

    2012-11-01

    Payer (insurer) sharing of savings is a way of motivating providers of medical services to reduce cost growth. A Medicare shared savings program is established for accountable care organizations in the 2010 Patient Protection and Affordable Care Act. However, savings created by providers cannot be distinguished from the normal (random) variation in medical claims costs, setting up a classic principal-agent problem. To lessen the likelihood of paying undeserved bonuses, payers may pay bonuses only if observed savings exceed minimum levels. We study the trade-off between two types of errors in setting minimum savings requirements: paying bonuses when providers do not create savings and not paying bonuses when providers create savings. Copyright © 2011 John Wiley & Sons, Ltd.
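
    The trade-off studied here can be illustrated with a simple normal model of claims-cost noise: raising the minimum savings requirement lowers the chance of paying an undeserved bonus but raises the chance of missing a deserved one. The noise level and true savings rate below are invented for illustration and are not the paper's calibration.

```python
# Error trade-off when setting a minimum savings rate (MSR).
from scipy.stats import norm

sigma = 0.02         # std. dev. of observed savings rate with no effect
true_savings = 0.03  # savings a genuinely efficient provider creates
for msr in (0.00, 0.02, 0.04):
    p_false_bonus = norm.sf(msr, loc=0.0, scale=sigma)
    p_missed_bonus = norm.cdf(msr, loc=true_savings, scale=sigma)
    print(f"MSR={msr:.2f}  P(undeserved bonus)={p_false_bonus:.3f}  "
          f"P(missed bonus)={p_missed_bonus:.3f}")
```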

  10. Forage Harvest and Transport Costs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, J.; Downing, M.; Turhollow, A.

    An engineering-economic approach is used to calculate harvest, in-field transport, and over-the-road transport costs for hay as bales and modules, silage, and crop residues as bales and modules. Costs included are equipment depreciation and interest; fuel, lube, and oil; repairs; insurance, housing, and taxes; and labor. Field preparation, pest control, fertilizer, land, and overhead are excluded from the calculated costs. Equipment is constrained by available power, throughput or carrying capacity, and field speed.
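
    A back-of-envelope version of this engineering-economic calculation is sketched below: annual ownership costs are spread over hours of use, hourly operating costs are added, and the total is divided by throughput. Every input value is an invented placeholder, not data from the report.

```python
# Machinery cost per tonne: ownership costs per hour + operating costs
# per hour, divided by field throughput. All figures are placeholders.
purchase, salvage = 120_000.0, 20_000.0   # $
life_yr, hours_yr = 10, 400
rate = 0.07                               # interest rate

ownership = ((purchase - salvage) / life_yr        # depreciation
             + rate * (purchase + salvage) / 2     # interest
             + 0.02 * purchase)                    # insurance/housing/tax

fuel_lube, repairs, labor = 35.0, 12.0, 18.0       # $/h
throughput = 15.0                                  # t/h

cost_per_t = (ownership / hours_yr + fuel_lube + repairs + labor) / throughput
print(f"${cost_per_t:.2f} per tonne")
```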

  11. Cost Efficiency in Public Higher Education.

    ERIC Educational Resources Information Center

    Robst, John

    This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…

  12. NEO Targets for Biological In Situ Resource Utilization

    NASA Astrophysics Data System (ADS)

    Grace, J. M.; Ernst, S. M.; Navarrete, J. U.; Gentry, D.

    2014-12-01

    We are investigating a mission architecture concept for low-cost pre-processing of materials on long synodic period asteroids using bioengineered microbes delivered by small spacecraft. Space exploration opportunities, particularly those requiring a human presence, are sharply constrained by the high cost of launching resources such as fuel, construction materials, oxygen, water, and foodstuffs. Near-Earth asteroids (NEAs) have been proposed for supporting a human space presence. However, the combination of high initial investment requirements, delayed potential return, and uncertainty in resource payoff currently prevents their effective utilization. Biomining is the process in which microorganisms perform useful material reduction, sequestration, or separation. It is commonly used in terrestrial copper extraction. Compared to physical and chemical methods of extraction it is slow, but very low cost, thus rendering economical even very poor ores. These advantages are potentially extensible to asteroid in situ resource utilization (ISRU). One of the first limiting factors for the use of biology in these environments is temperature. A survey of NEA data was conducted to identify those NEAs whose projected interior temperatures remained within both potential (-5 to 100 °C) and preferred (15 to 45 °C) ranges for the minimum projected time per synodic period without exceeding 100 °C at any point. Approximately 2800 of the 11,000 NEAs (25%) are predicted to remain within the potential range for at least 90 days, and 120 (1%) in the preferred range. A second major factor is water availability and stability. We have evaluated a design for a small-spacecraft-based injector which forces low-temperature fluid into the NEA interior, creating potentially habitable microniches. The fluid contains microbes genetically engineered to accelerate the degradation rates of a desired fraction of the native resources, allowing for more efficient material extraction upon a subsequent encounter.

  13. Guidance strategies and analysis for low thrust navigation

    NASA Technical Reports Server (NTRS)

    Jacobson, R. A.

    1973-01-01

    A low-thrust guidance algorithm suitable for operational use was formulated. A constrained linear feedback control law was obtained using a minimum terminal miss criterion and restricting control corrections to constant changes for specified time periods. Both fixed- and variable-time-of-arrival guidance were considered. The performance of the guidance law was evaluated by applying it to the approach phase of the 1980 rendezvous mission with the comet Encke.

  14. Concurrent schedules: Effects of time- and response-allocation constraints

    PubMed Central

    Davison, Michael

    1991-01-01

    Five pigeons were trained on concurrent variable-interval schedules arranged on two keys. In Part 1 of the experiment, the subjects responded under no constraints, and the ratios of reinforcers obtainable were varied over five levels. In Part 2, the conditions of the experiment were changed such that the time spent responding on the left key before a subsequent changeover to the right key determined the minimum time that must be spent responding on the right key before a changeover to the left key could occur. When the left key provided a higher reinforcer rate than the right key, this procedure ensured that the time allocated to the two keys was approximately equal. The data showed that such a time-allocation constraint only marginally constrained response allocation. In Part 3, the numbers of responses emitted on the left key before a changeover to the right key determined the minimum number of responses that had to be emitted on the right key before a changeover to the left key could occur. This response constraint completely constrained time allocation. These data are consistent with the view that response allocation is a fundamental process (and time allocation a derivative process), or that response and time allocation are independently controlled, in concurrent-schedule performance. PMID:16812632

  15. Finite element based stability-constrained weight minimization of sandwich composite ducts for airship applications

    NASA Astrophysics Data System (ADS)

    Khode, Urmi B.

    High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, which results in in-plane compressive wall stress that may cause duct buckling. An approach based upon finite element stability analysis, combined with a weight-minimization search algorithm that determines the ply layup and foam thickness, is utilized. Its goal is to achieve an optimized configuration of the sandwich composite as the solution to a constrained minimum-weight design problem in which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum-weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.

  16. Evidence for ultrafast outflows in radio-quiet AGNs - III. Location and energetics

    NASA Astrophysics Data System (ADS)

    Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.

    2012-05-01

    Using the results of a previous X-ray photoionization modelling of blueshifted Fe K absorption lines in a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this Letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ~0.0003-0.03 pc (~10^2-10^4 r_s) from the central black hole, consistent with what is expected for accretion disc winds/outflows. The mass outflow rates are constrained between ~0.01 and 1 M⊙ yr^-1, corresponding to ≳5-10 per cent of the accretion rates. The average lower/upper limits on the mechanical power are log(Ė_K) ≈ 42.6-44.6 erg s^-1. However, the minimum possible value of the ratio between the mechanical power and the bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.
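
    As a hedged back-of-the-envelope check, the quoted mechanical-power interval follows from the kinetic luminosity Ė_K = ½ Ṁ v², assuming outflow velocities of order 0.1c (the velocity value here is an illustrative assumption):

```python
import math

# Order-of-magnitude check: kinetic luminosity E_dot = 0.5 * M_dot * v^2,
# with M_dot in solar masses per year and v as a fraction of c.
M_SUN = 1.989e33   # g
YR = 3.156e7       # s
C = 3.0e10         # cm/s

def log_kinetic_power(mdot_msun_yr, v_over_c):
    mdot = mdot_msun_yr * M_SUN / YR          # g/s
    e_dot = 0.5 * mdot * (v_over_c * C) ** 2  # erg/s
    return math.log10(e_dot)

print(log_kinetic_power(0.01, 0.1))  # ~42.5, near the quoted lower limit
print(log_kinetic_power(1.0, 0.1))   # ~44.5, near the quoted upper limit
```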

  17. Quadrupole deformation (β,γ) of light Λ hypernuclei in a constrained relativistic mean field model: Shape evolution and shape polarization effect of the Λ hyperon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu Bingnan; Zhao Enguang; Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000

    2011-07-15

    The shapes of light normal nuclei and Λ hypernuclei are investigated in the (β,γ) deformation plane by using a newly developed constrained relativistic mean field (RMF) model. As examples, the results of some C, Mg, and Si nuclei are presented and discussed in detail. We found that for normal nuclei the present RMF calculations and previous Skyrme-Hartree-Fock models predict similar trends of the shape evolution with increasing neutron number. But some quantitative aspects from these two approaches, such as the depth of the minimum and the softness in the γ direction, differ a lot for several nuclei. For Λ hypernuclei, in most cases, the addition of a Λ hyperon slightly alters the location of the ground state minimum toward the direction of smaller β and softer γ in the potential energy surface E(β,γ). There are three exceptions, namely, ^13_Λ C, ^23_Λ C, and ^31_Λ Si, in which the polarization effect of the additional Λ is so strong that the shapes of these three hypernuclei are drastically different from those of their corresponding core nuclei.

  18. On a Minimum Problem in Smectic Elastomers

    NASA Astrophysics Data System (ADS)

    Buonsanti, Michele; Giovine, Pasquale

    2008-07-01

    Smectic elastomers are layered materials exhibiting a solid-like elastic response along the layer normal and a rubbery one in the plane. Balance equations for smectic elastomers are derived from the general theory of continua with constrained microstructure. In this work we investigate a very simple minimum problem based on multi-well potentials in which the microstructure is taken into account. The set of polymeric strains minimizing the elastic energy contains a one-parameter family of simple strains associated with a micro-variation of the degree of freedom. We build the energy functional from two terms, the first one nematic and the second one accounting for the tilting phenomenon; then, working in the rubber-elasticity framework, we minimize over the tilt rotation angle and extract the engineering stress.

  19. Uncertainty relation for the discrete Fourier transform.

    PubMed

    Massar, Serge; Spindel, Philippe

    2008-05-16

    We derive an uncertainty relation for two unitary operators which obey a commutation relation of the form UV = e^{iφ}VU. Its most important application is to constrain how much a quantum state can be localized simultaneously in two mutually unbiased bases related by a discrete Fourier transform. It provides an uncertainty relation which smoothly interpolates between the well-known cases of the Pauli operators in two dimensions and the continuous variables position and momentum. This work also provides an uncertainty relation for modular variables, and could find applications in signal processing. In the finite-dimensional case the minimum uncertainty states, discrete analogues of coherent and squeezed states, are minimum energy solutions of Harper's equation, a discrete version of the harmonic oscillator equation.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, G.D.

    The purpose of the dissertation is to examine the impact of rate-of-return regulation on the cost of transporting natural gas in interstate commerce. Of particular interest is the effect of the regulation on the input choice of a firm. Does regulation induce a regulated firm to produce its selected level of output at greater than minimum cost? The theoretical model is based on the work of Rolf Färe and James Logan, who investigate the duality relationship between the cost and production functions of a rate-of-return regulated firm. Färe and Logan derive the cost function for a regulated firm as the minimum cost of producing the firm's selected level of output, subject to the regulatory constraint. The regulated cost function is used to recover the unregulated cost function. A firm's unregulated cost function is the minimum cost of producing its selected level of output. Characteristics of the production technology are obtained from duality between the production and unregulated cost functions. Using data on 20 pipeline companies from 1977 to 1987, the author estimates a random effects model that consists of a regulated cost function and its associated input share equations. The model is estimated as a set of seemingly unrelated regressions. The empirical results are used to test the Färe and Logan theory and the traditional Averch-Johnson hypothesis of overcapitalization. Parameter estimates are used to recover the unregulated cost function and to calculate the amount by which transportation costs are increased by the regulation of the industry. Empirical results show that a firm's transportation cost decreases as the allowed rate of return increases and the regulatory constraint becomes less tight. Elimination of the regulatory constraint would lead to a reduction in costs of 5.278% on average. There is evidence that firms overcapitalize on pipeline capital. There is inconclusive evidence on whether firms overcapitalized on compressor station capital.

  1. Mitigating nonlinearity in full waveform inversion using scaled-Sobolev pre-conditioning

    NASA Astrophysics Data System (ADS)

    Zuberi, M. AH; Pratt, R. G.

    2018-04-01

    The Born approximation successfully linearizes seismic full waveform inversion if the background velocity is sufficiently accurate. When the background velocity is not known, it can be estimated by using model scale separation methods. A frequently used technique is to separate the spatial scales of the model according to the scattering angles present in the data, by using either first- or second-order terms in the Born series. For example, the well-known 'banana-donut' and 'rabbit ear' shaped kernels are, respectively, the first- and second-order Born terms in which at least one of the scattering events is associated with a large angle. Whichever term of the Born series is used, all such methods suffer from errors in the starting velocity model because all terms in the Born series assume that the background Green's function is known. An alternative approach to Born-based scale separation is to work in the model domain, for example by Gaussian smoothing of the update vectors, or by some other approach for separation by model wavenumbers. However, such model-domain methods are usually based on a strict separation in which only the low-wavenumber updates are retained. This implies that the scattered information in the data is not taken into account, which can lead to the inversion being trapped in a false (local) minimum when sharp features are updated incorrectly. In this study we propose a scaled-Sobolev pre-conditioning (SSP) of the updates to achieve a constrained scale separation in the model domain. The SSP is obtained by introducing a scaled Sobolev inner product (SSIP) into the measure of the gradient of the objective function with respect to the model parameters. This modified measure seeks reductions in the L2 norm of the spatial derivatives of the gradient without changing the objective function. The SSP does not rely on the Born prediction of scale based on scattering angles, and requires negligible extra computational cost per iteration. Synthetic examples from the Marmousi model show that the constrained scale separation using SSP is able to keep the background updates in the zone of attraction of the global minimum, in spite of using a poor starting model in which conventional methods fail.
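
    A minimal sketch of the generic Sobolev-gradient idea is given below: smoothing the gradient by solving (I - α∇²)g̃ = g in the Fourier domain damps high wavenumbers without altering the objective function. The published SSP uses a scaled Sobolev inner product whose details differ; this only illustrates the shared mechanism.

```python
import numpy as np

def sobolev_precondition(grad, alpha=25.0):
    """Solve (I - alpha * Laplacian) g_smooth = grad with periodic BCs.

    A generic Sobolev-gradient smoother applied in the Fourier domain;
    the published SSP uses a scaled Sobolev inner product whose details
    differ, but the high-wavenumber damping is the shared mechanism.
    """
    ny, nx = grad.shape
    ky = 2.0 * np.pi * np.fft.fftfreq(ny)
    kx = 2.0 * np.pi * np.fft.fftfreq(nx)
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    g_hat = np.fft.fft2(grad) / (1.0 + alpha * k2)  # damp high wavenumbers
    return np.real(np.fft.ifft2(g_hat))

# toy gradient: smooth background update contaminated by sharp artefacts
g = np.zeros((64, 64))
g[20:44, 20:44] = 1.0
g += 0.5 * np.random.default_rng(1).standard_normal(g.shape)
g_smooth = sobolev_precondition(g)   # background-scale update survives
```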

  2. Optimal short-range trajectories for helicopters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slater, G.L.; Erzberger, H.

    1982-12-01

    An optimal flight path algorithm using a simplified altitude state model and an a priori climb-cruise-descent flight profile was developed and applied to determine minimum-fuel and minimum-cost trajectories for a helicopter flying a fixed-range trajectory. In addition, a method was developed for obtaining a performance model in simplified form which is based on standard flight manual data and which is applicable to the computation of optimal trajectories. The entire performance optimization algorithm is simple enough that on-line trajectory optimization is feasible with a relatively small computer. The helicopter model used is the Sikorsky S-61N. The results show that for this vehicle the optimal flight path and optimal cruise altitude can represent a 10% fuel saving on a minimum-fuel trajectory. The optimal trajectories show considerable variability because of helicopter weight, ambient winds, and the relative cost trade-off between time and fuel. In general, reasonable variations from the optimal velocities and cruise altitudes do not significantly degrade the optimal cost. For fuel-optimal trajectories, the optimum cruise altitude varies from the maximum (12,000 ft) to the minimum (0 ft) depending on helicopter weight.

  3. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as being composed of a rigid disk mounted on a rigid shaft. Each of the blades was modelled with a single torsional degree of freedom.
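
    The augmented-Lagrangian technique named in the abstract can be sketched on a toy stand-in problem (the constraint and all parameters below are invented; BFGS plays the role of the variable metric method):

```python
import numpy as np
from scipy.optimize import minimize

# Toy augmented-Lagrangian loop: minimize the added mistuning "mass"
# ||m||^2 subject to a stand-in stability-margin constraint c(m) >= 0.
# BFGS serves as the variable metric method for the unconstrained phase.
def c(m):
    return m[0] + m[1] - 1.0        # stand-in stability-margin constraint

def aug_lagrangian(m, lam, rho):
    # standard shifted-penalty form for an inequality constraint
    s = max(0.0, lam - rho * c(m))
    return m @ m + (s**2 - lam**2) / (2.0 * rho)

lam, rho = 0.0, 10.0
m = np.zeros(2)
for _ in range(20):
    m = minimize(aug_lagrangian, m, args=(lam, rho), method="BFGS").x
    lam = max(0.0, lam - rho * c(m))  # multiplier update
print(m)  # -> approximately [0.5, 0.5]
```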

  4. Edgelist phase unwrapping algorithm for time series InSAR analysis.

    PubMed

    Shanker, A Piyush; Zebker, Howard

    2010-03-01

    We present here a new integer programming formulation for phase unwrapping of multidimensional data. Phase unwrapping is a key problem in many coherent imaging systems, including time series synthetic aperture radar interferometry (InSAR), with two spatial and one temporal data dimensions. The minimum cost flow (MCF) [IEEE Trans. Geosci. Remote Sens. 36, 813 (1998)] phase unwrapping algorithm describes a global cost minimization problem involving flow between phase residues computed over closed loops. Here we replace closed loops by reliable edges as the basic construct, thus leading to the name "edgelist." Our algorithm has several advantages over current methods: it simplifies the representation of multidimensional phase unwrapping, it incorporates data from external sources, such as GPS, where available to better constrain the unwrapped solution, and it treats regularly sampled and sparsely sampled data alike. It is thus particularly applicable to time series InSAR, where data are often irregularly spaced in time and individual interferograms can be corrupted with large decorrelated regions. We show that, similar to the MCF network problem, the edgelist formulation also exhibits total unimodularity, which enables us to solve the integer programming problem by using efficient linear programming tools. We apply our method to a persistent scatterer InSAR data set from the creeping section of the Central San Andreas Fault and find that the average creep rate of 22 mm/yr is constant within 3 mm/yr over 1992-2004 but varies systematically with ground location, with a slightly higher rate in 1992-1998 than in 1999-2003.
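
    Total unimodularity is what lets the integer program be solved with linear-programming tools: the LP relaxation of a min-cost-flow problem automatically returns integral flows. A minimal, self-contained illustration on a toy network (not the edgelist formulation itself):

```python
import numpy as np
from scipy.optimize import linprog

# Toy min-cost-flow LP on a 4-node network. The node-arc incidence matrix
# is totally unimodular, so the LP relaxation solved by a generic linear
# programming routine is automatically integral -- the property that makes
# both the MCF and the edgelist formulations tractable.
# arcs: (tail, head, cost, capacity); node 0 supplies 2 units, node 3 demands 2
arcs = [(0, 1, 1.0, 2), (0, 2, 3.0, 2), (1, 2, 1.0, 1),
        (1, 3, 4.0, 2), (2, 3, 1.0, 2)]
n_nodes = 4
A_eq = np.zeros((n_nodes, len(arcs)))
for j, (u, v, _, _) in enumerate(arcs):
    A_eq[u, j] = 1.0   # flow leaves u
    A_eq[v, j] = -1.0  # flow enters v
b_eq = [2.0, 0.0, 0.0, -2.0]          # net supply at each node
cost = [a[2] for a in arcs]
bounds = [(0, a[3]) for a in arcs]    # capacity constraints
res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # integral optimum, here [1. 1. 1. 0. 2.] with cost 7
```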

  5. 40 CFR 35.937-6 - Cost and price considerations.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Cost and price considerations. 35.937-6...) shall apply. (1) The candidate(s) selected for negotiation shall submit to the grantee for review...) Cost review. (1) The grantee shall review proposed subagreement costs. (2) As a minimum, proposed...

  6. 40 CFR 35.937-6 - Cost and price considerations.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Cost and price considerations. 35.937-6...) shall apply. (1) The candidate(s) selected for negotiation shall submit to the grantee for review...) Cost review. (1) The grantee shall review proposed subagreement costs. (2) As a minimum, proposed...

  7. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) requires maximizing the number of observable proposals and the overall scientific priority, and minimizing the overall slew cost generated by telescope shifting, while taking into account constraints including astronomical-object visibility, user-defined observable times, and avoidance of Radio Frequency Interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can then be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a detection method for maximally matchable edges to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available time with high scientific priority and reduce the slew cost significantly, in a very short time.
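
    A toy version of the MCMF formulation can be written directly with networkx; the proposals, priorities, and time slots below are invented placeholders, with negated priorities as arc costs so that the minimum-cost maximum flow schedules as many high-priority proposals as possible:

```python
import networkx as nx

# Toy MCMF scheduling: proposals on the left, time slots on the right.
# Maximizing flow maximizes the number of scheduled proposals, and arc
# weights carry negated priorities so the min-cost solution prefers
# high-priority proposals. All names and numbers are invented.
G = nx.DiGraph()
priority = {"P1": 3, "P2": 1, "P3": 2}
visibility = {"P1": ["T1", "T2"], "P2": ["T2"], "P3": ["T2", "T3"]}

for p, prio in priority.items():
    G.add_edge("src", p, capacity=1, weight=0)
    for slot in visibility[p]:                 # observable-time constraint
        G.add_edge(p, slot, capacity=1, weight=-prio)
for slot in ("T1", "T2", "T3"):
    G.add_edge(slot, "sink", capacity=1, weight=0)

flow = nx.max_flow_min_cost(G, "src", "sink")
schedule = {p: s for p in priority for s, f in flow[p].items() if f > 0}
print(schedule)  # -> {'P1': 'T1', 'P2': 'T2', 'P3': 'T3'}
```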

  8. A holistic framework for design of cost-effective minimum water utilization network.

    PubMed

    Wan Alwi, S R; Manan, Z A; Samingin, M H; Misran, N

    2008-07-01

    Water pinch analysis (WPA) is a well-established tool for the design of a maximum water recovery (MWR) network. MWR, which is primarily concerned with water recovery and regeneration, only partly addresses water minimization problem. Strictly speaking, WPA can only lead to maximum water recovery targets as opposed to the minimum water targets as widely claimed by researchers over the years. The minimum water targets can be achieved when all water minimization options including elimination, reduction, reuse/recycling, outsourcing and regeneration have been holistically applied. Even though WPA has been well established for synthesis of MWR network, research towards holistic water minimization has lagged behind. This paper describes a new holistic framework for designing a cost-effective minimum water network (CEMWN) for industry and urban systems. The framework consists of five key steps, i.e. (1) Specify the limiting water data, (2) Determine MWR targets, (3) Screen process changes using water management hierarchy (WMH), (4) Apply Systematic Hierarchical Approach for Resilient Process Screening (SHARPS) strategy, and (5) Design water network. Three key contributions have emerged from this work. First is a hierarchical approach for systematic screening of process changes guided by the WMH. Second is a set of four new heuristics for implementing process changes that considers the interactions among process changes options as well as among equipment and the implications of applying each process change on utility targets. Third is the SHARPS cost-screening technique to customize process changes and ultimately generate a minimum water utilization network that is cost-effective and affordable. The CEMWN holistic framework has been successfully implemented on semiconductor and mosque case studies and yielded results within the designer payback period criterion.

  9. 2D magnetotelluric inversion using reflection seismic images as constraints and application in the COSC project

    NASA Astrophysics Data System (ADS)

    Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.

    2017-04-01

    We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
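
    A sketch of the envelope-weighting idea is shown below; the weight form w = 1/(1 + s·|∇env|) is an illustrative choice rather than the paper's exact expression, and the input section is random stand-in data.

```python
import numpy as np
from scipy.signal import hilbert

# Envelope-derived regularization weights: where the seismic envelope
# varies rapidly (a reflector), the smoothness constraint is relaxed so
# the resistivity model may also change there. The weight form
# w = 1/(1 + s*|grad env|) is illustrative, not the paper's expression.
rng = np.random.default_rng(3)
section = rng.standard_normal((200, 300))      # stand-in migrated section
envelope = np.abs(hilbert(section, axis=0))    # instantaneous amplitude per trace

d_env_z, d_env_x = np.gradient(envelope)       # directional envelope gradients
s = 5.0 / envelope.max()                       # global stabilization parameter
w_vertical = 1.0 / (1.0 + s * np.abs(d_env_z))     # weights for vertical smoothing
w_horizontal = 1.0 / (1.0 + s * np.abs(d_env_x))   # weights for horizontal smoothing
```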

  10. Planning Minimum-Energy Paths in an Off-Road Environment with Anisotropic Traversal Costs and Motion Constraints

    DTIC Science & Technology

    1989-06-01

    problems, and (3) weighted-region problems. Since the minimum-energy path-planning problem addressed in this dissertation is a hybrid between the two... contains components that are strictly vehicle dependent, components that are strictly terrain dependent, and components representing a hybrid of... Single Segment Braking/Multiple Segment Hybrid: using Eq. (3.46), the traversal cost U_{0,p-1} can be rewritten as U_{0,p-1} = mgD |tan θ_1| (4.12a), and the

  11. Mars double-aeroflyby free returns

    NASA Astrophysics Data System (ADS)

    Jesick, Mark

    2017-09-01

    Mars double-flyby free-return trajectories that pass twice through the Martian atmosphere are documented. This class of trajectories is advantageous for potential Mars atmospheric sample return missions because of its low geocentric energy at departure and arrival, because it would enable two sample collections at unique locations during different Martian seasons, and because of its lack of deterministic maneuvers. Free return opportunities are documented over Earth departure dates ranging from 2015 through 2100, with viable missions available every Earth-Mars synodic period. After constraining the maximum lift-to-drag ratio to be less than one, the minimum observed Earth departure hyperbolic excess speed is 3.23 km/s, the minimum Earth atmospheric entry speed is 11.42 km/s, and the minimum round-trip flight time is 805 days. An algorithm using simplified dynamics is developed along with a method to derive an initial estimate for trajectories in a more realistic dynamic model. Multiple examples are presented, including free returns that pass outside and inside of Mars's appreciable atmosphere.

  12. Application of Climate Impact Metrics to Rotorcraft Design

    NASA Technical Reports Server (NTRS)

    Russell, Carl; Johnson, Wayne

    2013-01-01

    Multiple metrics are applied to the design of large civil rotorcraft, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.

  13. Application of Climate Impact Metrics to Civil Tiltrotor Design

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.; Johnson, Wayne

    2013-01-01

    Multiple metrics are applied to the design of a large civil tiltrotor, integrating minimum cost and minimum environmental impact. The design mission is passenger transport with similar range and capacity to a regional jet. Separate aircraft designs are generated for minimum empty weight, fuel burn, and environmental impact. A metric specifically developed for the design of aircraft is employed to evaluate emissions. The designs are generated using the NDARC rotorcraft sizing code, and rotor analysis is performed with the CAMRAD II aeromechanics code. Design and mission parameters such as wing loading, disk loading, and cruise altitude are varied to minimize both cost and environmental impact metrics. This paper presents the results of these parametric sweeps as well as the final aircraft designs.

  14. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade

    PubMed Central

    Price, Stephen F.; Payne, Antony J.; Howat, Ian M.; Smith, Benjamin E.

    2011-01-01

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland’s three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing. PMID:21576500

  15. Committed sea-level rise for the next century from Greenland ice sheet dynamics during the past decade.

    PubMed

    Price, Stephen F; Payne, Antony J; Howat, Ian M; Smith, Benjamin E

    2011-05-31

    We use a three-dimensional, higher-order ice flow model and a realistic initial condition to simulate dynamic perturbations to the Greenland ice sheet during the last decade and to assess their contribution to sea level by 2100. Starting from our initial condition, we apply a time series of observationally constrained dynamic perturbations at the marine termini of Greenland's three largest outlet glaciers, Jakobshavn Isbræ, Helheim Glacier, and Kangerdlugssuaq Glacier. The initial and long-term diffusive thinning within each glacier catchment is then integrated spatially and temporally to calculate a minimum sea-level contribution of approximately 1 ± 0.4 mm from these three glaciers by 2100. Based on scaling arguments, we extend our modeling to all of Greenland and estimate a minimum dynamic sea-level contribution of approximately 6 ± 2 mm by 2100. This estimate of committed sea-level rise is a minimum because it ignores mass loss due to future changes in ice sheet dynamics or surface mass balance. Importantly, > 75% of this value is from the long-term, diffusive response of the ice sheet, suggesting that the majority of sea-level rise from Greenland dynamics during the past decade is yet to come. Assuming similar and recurring forcing in future decades and a self-similar ice dynamical response, we estimate an upper bound of 45 mm of sea-level rise from Greenland dynamics by 2100. These estimates are constrained by recent observations of dynamic mass loss in Greenland and by realistic model behavior that accounts for both the long-term cumulative mass loss and its decay following episodic boundary forcing.

  16. Automated observatory in Antarctica: real-time data transfer on constrained networks in practice

    NASA Astrophysics Data System (ADS)

    Bracke, Stephan; Gonsette, Alexandre; Rasson, Jean; Poncelet, Antoine; Hendrickx, Olivier

    2017-08-01

    In 2013 a project was started by the geophysical centre in Dourbes to install a fully automated magnetic observatory in Antarctica. This isolated place comes with specific requirements: an unmanned station during 6 months, low temperatures with extreme values down to -50 °C, minimum power consumption, and satellite bandwidth limited to 56 kbit s-1. The ultimate aim is to transfer real-time magnetic data every second: vector data from a LEMI-25 vector magnetometer, absolute F measurements from a GEM Systems scalar proton magnetometer, and absolute magnetic inclination-declination (DI) measurements (five times a day) with an automated DI-fluxgate magnetometer. Traditional file transfer protocols (for instance File Transfer Protocol (FTP), email, rsync) show severe limitations when it comes to real-time capability. After evaluating the pros and cons of the available real-time Internet of Things (IoT) protocols and seismic software solutions, we chose to use Message Queuing Telemetry Transport (MQTT) and receive the 1 s data with a negligible latency cost and no loss of data. Each individual instrument sends the magnetic data immediately after capturing them, and the data arrive approximately 300 ms after being sent, which corresponds to the normal satellite latency.
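
    A minimal 1 Hz publisher in the spirit of this setup, using the paho-mqtt client (1.x API assumed); the broker address, topic, and payload format are invented for illustration:

```python
# Minimal 1 Hz publisher sketch using the paho-mqtt client (1.x API
# assumed); broker address, topic, and payload format are invented.
import json
import time

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="lemi25-vector")
client.connect("basestation.example.org", 1883, keepalive=60)
client.loop_start()   # background thread handles the network traffic

while True:
    sample = {"t": time.time(), "x": 21012.4, "y": -3411.7, "z": 45210.9}  # nT
    # QoS 1 gives at-least-once delivery, so a brief satellite dropout
    # delays rather than loses the 1 s samples.
    client.publish("obs/antarctica/lemi25", json.dumps(sample), qos=1)
    time.sleep(1.0)
```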

  17. A Bat Algorithm with Mutation for UCAV Path Planning

    PubMed Central

    Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi

    2012-01-01

    Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which mainly centers on optimizing the flight route considering different kinds of constraints under complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, and a modification is applied to mutate between bats during the process of updating new solutions. The UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding threat areas and minimizing fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and this improved metaheuristic approach, BAM, is also presented. To prove the performance of this proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiment shows that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518
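
    A compact sketch of a bat algorithm with a between-bat mutation step is given below; the published BAM operator differs in detail, and the sphere objective is only a stand-in for the UCAV threat/fuel path cost:

```python
import numpy as np

# Compact bat-algorithm sketch with a DE-style mutation between bats; the
# published BAM operator differs in detail. The sphere function stands in
# for the UCAV threat/fuel path cost.
rng = np.random.default_rng(0)

def cost(x):
    return float(np.sum(x**2))   # stand-in objective

n, dim, iters = 20, 5, 200
f_min, f_max, loudness, pulse = 0.0, 2.0, 0.5, 0.5
x = rng.uniform(-5.0, 5.0, (n, dim))
v = np.zeros((n, dim))
fit = np.array([cost(b) for b in x])
best = x[np.argmin(fit)].copy()

for _ in range(iters):
    freq = f_min + (f_max - f_min) * rng.random((n, 1))   # frequency tuning
    v += (x - best) * freq
    cand = x + v
    # mutation between bats: blend the global best with two random peers
    r1, r2 = rng.integers(0, n, n), rng.integers(0, n, n)
    mut = rng.random(n) < pulse
    cand[mut] = best + loudness * (x[r1[mut]] - x[r2[mut]])
    cand_fit = np.array([cost(b) for b in cand])
    better = cand_fit < fit                               # greedy acceptance
    x[better], fit[better] = cand[better], cand_fit[better]
    best = x[np.argmin(fit)].copy()

print(cost(best))  # approaches 0 on this toy objective
```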

  18. Electro-optical design for efficient visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Jobson, Daniel J.; Rahman, Zia-ur

    1994-06-01

    Visual communication can be regarded as efficient only if the amount of information that it conveys from the scene to the observer approaches the maximum possible and the associated cost approaches the minimum possible. To deal with this problem, Fales and Huck have integrated the critical limiting factors that constrain image gathering into classical concepts of communication theory. This paper uses this approach to assess the electro-optical design of the image gathering device. Design variables include the f-number and apodization of the objective lens, the aperture size and sampling geometry of the photodetection mechanism, and lateral inhibition and nonlinear radiance-to-signal conversion akin to the retinal processing in the human eye. It is an agreeable consequence of this approach that the image gathering device that is designed along the guidelines developed from communication theory behaves very much like the human eye. The performance approaches the maximum possible in terms of the information content of the acquired data, and thereby, the fidelity, sharpness and clarity with which fine detail can be restored, the efficiency with which the visual information can be transmitted in the form of decorrelated data, and the robustness of these two attributes to the temporal and spatial variations in scene illumination.

  19. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers to minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for a multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better-performing individual wins its competition, provides a slight selection pressure toward individuals, and maintains diversity in the population. Moreover, because the evaluation of individuals in each generation is completed in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake loading.

  20. Wise Investment? Modeling Industry Profitability and Risk of Targeted Chemotherapy for Incurable Solid Cancers

    PubMed Central

    Conter, Henry J.; Chu, Quincy S.C.

    2012-01-01

    Purpose: Pharmaceutical development involves substantial financial risk. This risk, rising development costs, and the promotion of continued research and development have been cited as major drivers of the progressive increase in drug prices. Currently, cost-effectiveness analyses are being used to determine the value of treatment. However, cost-effectiveness analyses practically function as a threshold for value and do not directly address the rationale for drug prices. We set out to create a functional model for industry price decisions and clarify the minimum acceptable profitability of new drugs. Methods: Assuming that industry should only invest in profitable ventures, we employed a linear cost-volume-profit breakeven analysis to equate initial capital investment and risk with post-drug-approval profits, where drug development represents the bulk of investment. A Markov decision analysis model was also used to define the relationships between investment events and risk. A systematic literature search was performed to determine event probabilities, clinical trial costs, and total expenses as inputs into the model. Disease-specific inputs, current market size across regions, and lengths of treatment for cancer types were also included. Results: With development of a single novel chemotherapy costing from $802 to $1,042 million (2002 US dollars), pharmaceutical profits should range from $4.3 to $5.2 billion, with an expected rate of return on investment of 11% annually. However, diversification across cancer types for chemotherapy can reduce the minimum required profit to less than $3 billion. For optimal diversification, industry should study four tumor types per drug; however, nonprofit organizations could tolerate eight parallel development tracks to minimize the risk of development failure. Assuming that pharmaceutical companies hold exclusive rights for drug sales for only 5 years after market approval, the minimum required profit per drug per month per patient ranges from $294 for end-stage lung cancer to $3,231 for end-stage renal cell carcinoma. Conclusion: Pharmaceutical development in oncology is costly, with substantial risk, but is also highly profitable. Minimum acceptable profits per drug per month of treatment per patient vary with the prevalence of disease, but they should be less than $5,000 per month of treatment in the developed world. Minimum acceptable profits may be lower for treatments with additional efficacy in the earlier stages of a tumor type. However, this type of event could not be statistically modeled. PMID:29447097
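
    The breakeven logic can be reproduced in a few lines; all inputs below are illustrative assumptions in the spirit of the paper's parameters, not its exact values:

```python
# Linear cost-volume-profit breakeven sketch: capital sunk in development
# must be recovered from post-approval profits within the exclusivity
# window. All inputs are illustrative assumptions.
def required_monthly_profit(dev_cost, annual_return, years_to_market,
                            exclusivity_yr, patients_per_yr, months_on_rx):
    # capitalize the development cost forward to launch at the required return
    capitalized = dev_cost * (1.0 + annual_return) ** years_to_market
    patient_months = patients_per_yr * months_on_rx * exclusivity_yr
    return capitalized / patient_months

p = required_monthly_profit(
    dev_cost=900e6,          # within the $802-1,042M range (2002 USD)
    annual_return=0.11,      # 11% expected annual return on investment
    years_to_market=8,       # assumed development duration
    exclusivity_yr=5,        # 5 years of exclusive sales
    patients_per_yr=40_000,  # assumed treated population per year
    months_on_rx=6)          # assumed months of treatment per patient
print(f"required profit: ${p:,.0f} per patient per month")
```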

  1. Assessing the prospective resource base for enhanced geothermal systems in Europe

    NASA Astrophysics Data System (ADS)

    Limberger, J.; Calcagno, P.; Manzella, A.; Trumpy, E.; Boxem, T.; Pluymaekers, M. P. D.; van Wees, J.-D.

    2014-12-01

    In this study the resource base for EGS (enhanced geothermal systems) in Europe was quantified and economically constrained, applying a discounted cash-flow model to different techno-economic scenarios for future EGS in 2020, 2030, and 2050. Temperature is a critical parameter that controls the amount of thermal energy available in the subsurface. Therefore, the first step in assessing the European resource base for EGS is the construction of a subsurface temperature model of onshore Europe. Subsurface temperatures were computed to a depth of 10 km below ground level for a regular 3-D hexahedral grid with a horizontal resolution of 10 km and a vertical resolution of 250 m. Vertical conductive heat transport was considered as the main heat transfer mechanism. Surface temperature and basal heat flow were used as boundary conditions for the top and bottom of the model, respectively. Where publicly available, the most recent and comprehensive regional temperature models, based on data from wells, were incorporated. With the modeled subsurface temperatures and future technical and economic scenarios, the technical potential and minimum levelized cost of energy (LCOE) were calculated for each grid cell of the temperature model. Calculations for a typical EGS scenario yield costs of EUR 215 MWh-1 in 2020, EUR 127 MWh-1 in 2030, and EUR 70 MWh-1 in 2050. Cutoff values of EUR 200 MWh-1 in 2020, EUR 150 MWh-1 in 2030, and EUR 100 MWh-1 in 2050 are imposed on the calculated LCOE values in each grid cell to limit the technical potential, resulting in an economic potential for Europe of 19 GWe in 2020, 22 GWe in 2030, and 522 GWe in 2050. The results of our approach not only provide an indication of prospective areas for future EGS in Europe, but also show a more realistic, cost-determined and depth-dependent distribution of the technical potential, obtained by applying different well cost models for 2020, 2030, and 2050.
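
    A simple levelized-cost-of-energy calculation of the kind applied per grid cell is sketched below; the doublet parameters are illustrative assumptions, not the study's techno-economic inputs:

```python
# Simple LCOE calculation: discounted lifetime cost divided by discounted
# lifetime energy. All inputs are illustrative assumptions.
def lcoe(capex, opex_per_yr, energy_mwh_per_yr, rate, lifetime_yr):
    cost = capex + sum(opex_per_yr / (1.0 + rate) ** t
                       for t in range(1, lifetime_yr + 1))
    energy = sum(energy_mwh_per_yr / (1.0 + rate) ** t
                 for t in range(1, lifetime_yr + 1))
    return cost / energy

# hypothetical EGS doublet: two deep wells dominate the capital cost
print(lcoe(capex=40e6, opex_per_yr=1.2e6,
           energy_mwh_per_yr=5 * 8760 * 0.9,   # 5 MWe at 90% capacity factor
           rate=0.07, lifetime_yr=30))         # -> roughly EUR 112 per MWh
```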

  2. Optimization of solar cell contacts by system cost-per-watt minimization

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
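
    The prediction-correction structure can be sketched on a toy time-varying quadratic; the paper's algorithm also handles constraints and differs in detail, but the predict-then-correct pattern below is the generic idea:

```python
import numpy as np

# Toy prediction-correction tracking of a time-varying cost
# f(x; t) = 0.5 * ||x - r(t)||^2 with moving optimizer r(t). The
# prediction uses only first-order information about how the gradient
# drifts in time (no Hessian inverse); the published algorithm differs
# in detail and handles constraints.
h, alpha = 0.1, 0.5   # sampling period, step size

def r(t):
    return np.array([np.cos(t), np.sin(t)])   # moving target

def grad(x, t):
    return x - r(t)                            # gradient of f(.; t)

x, t = np.zeros(2), 0.0
for _ in range(100):
    dgrad_dt = -(r(t + h) - r(t)) / h            # drift of the gradient in time
    x = x - alpha * (grad(x, t) + h * dgrad_dt)  # prediction step
    t += h
    x = x - alpha * grad(x, t)                   # correction step at the new time
print(np.linalg.norm(x - r(t)))  # small steady tracking error
```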

  4. Weapon Acquisition Program Outcomes and Efforts to Reform DOD’s Acquisition Process

    DTIC Science & Technology

    2016-05-09

    portfolio’s total estimated acquisition cost. • The equity prices of contractors delivering the ten costliest programs performed well relative to broad... cost growth. • In a constrained funding environment, unforeseen cost increases limit investment choices. • The equity prices of the contractors... remain profitable well into the future. • Five publicly traded defense contractors are developing and delivering the ten largest DOD programs in the 2015...

  5. Market frictions: A unified model of search costs and switching costs

    PubMed Central

    Wilson, Chris M.

    2012-01-01

    It is well known that search costs and switching costs can create market power by constraining the ability of consumers to change suppliers. While previous research has examined each cost in isolation, this paper demonstrates the benefits of examining the two types of friction in unison. The paper shows how subtle distinctions between the two costs can provide important differences in their effects upon consumer behaviour, competition and welfare. In addition, the paper also illustrates a simple empirical methodology for estimating separate measures of both costs, while demonstrating a potential bias that can arise if only one cost is considered. PMID:25550674

  6. Defining Continuous Improvement and Cost Minimization Possibilities through School Choice Experiments

    ERIC Educational Resources Information Center

    Merrifield, John

    2009-01-01

    Studies of existing best practices cannot determine whether the current "best" schooling practices could be even better, less costly, or more effective and/or improve at a faster rate, but we can discover a cost effective menu of schooling options and each item's minimum cost through market accountability experiments. This paper describes…

  7. X-1 to X-Wings: Developing a Parametric Cost Model

    NASA Technical Reports Server (NTRS)

    Sterk, Steve; McAtee, Aaron

    2015-01-01

    In today's cost-constrained environment, NASA needs an X-Plane database and parametric cost model that can quickly provide rough order-of-magnitude predictions of cost from initial concept to first flight of potential X-Plane aircraft. This paper takes a look at the steps taken in developing such a model and reports the results. The challenges encountered in the collection of historical data and recommendations for future database management are discussed. A step-by-step discussion of the development of Cost Estimating Relationships (CERs) is then covered.
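
    A typical starting point for such CERs is a power-law fit, cost = a·W^b, estimated by least squares in log space; the data points below are invented, not drawn from the X-Plane database:

```python
import numpy as np

# Power-law CER, cost = a * W^b, fitted by ordinary least squares in log
# space -- a common starting point for parametric cost models. The data
# points are invented placeholders.
weight = np.array([1200.0, 3400.0, 5100.0, 9800.0, 15000.0])  # empty weight, lb
cost = np.array([18.0, 41.0, 55.0, 90.0, 140.0])              # cost to first flight, $M

b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)  # slope, intercept
a = np.exp(log_a)
print(f"CER: cost = {a:.3f} * W^{b:.2f}")
print(f"ROM estimate for a 7,000 lb concept: ${a * 7000.0**b:.0f}M")
```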

  8. Improved superficial brain hemorrhage visualization in susceptibility weighted images by constrained minimum intensity projection

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Pham, Dzung L.; Butman, John

    2016-03-01

    Minimum intensity projection is a technique commonly used to display magnetic resonance susceptibility weighted images, allowing the observer to better visualize hemorrhages and vasculature. The technique displays the minimum intensity in a given projection within a thick slab, allowing different connectivity patterns to be easily revealed. Unfortunately, the low signal intensity of the skull within the thick slab can mask superficial tissues near the skull base and other regions. Because superficial microhemorrhages are a common feature of traumatic brain injury, this effect limits the ability to properly diagnose and follow up patients. In order to overcome this limitation, we developed a method that allows minimum intensity projection to properly display superficial tissues adjacent to the skull. Our approach is based on two brain masks, the larger of which includes extracerebral voxels. The analysis of the rind within both masks containing the actual brain boundary allows reclassification of those voxels initially missed in the smaller mask. Morphological operations are applied to guarantee accuracy and topological correctness, and the mean intensity within the mask is assigned to all outer voxels. This prevents bone from dominating superficial regions in the projection, enabling superior visualization of cortical hemorrhages and vessels.
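
    The core masking step can be sketched in a few lines of numpy: voxels outside a brain mask are replaced by the mean intra-mask intensity before the thick-slab minimum is taken, so low-signal skull no longer dominates the projection. Mask handling here is simplified relative to the two-mask procedure described above.

```python
import numpy as np

# Sketch of the constrained projection: out-of-mask voxels are overwritten
# with the mean intra-mask intensity, then the minimum is taken over thick
# slabs of `slab` consecutive slices along `axis`.
def constrained_minip(vol, mask, axis=0, slab=8):
    filled = np.where(mask, vol, vol[mask].mean())  # neutralize skull/background
    n = (filled.shape[axis] // slab) * slab         # drop any partial slab
    filled = np.take(filled, np.arange(n), axis=axis)
    shape = list(filled.shape)
    shape[axis:axis + 1] = [n // slab, slab]        # split axis into slabs
    return filled.reshape(shape).min(axis=axis + 1)

vol = np.random.default_rng(2).uniform(0.0, 1.0, (32, 64, 64))
mask = np.zeros(vol.shape, dtype=bool)
mask[:, 16:48, 16:48] = True
minip = constrained_minip(vol, mask)  # -> (4, 64, 64) stack of projections
```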

  9. Analysis of market penetration of renewable energy alternatives under uncertain and carbon constrained world

    EPA Science Inventory

    Future energy prices and supply, availability and costs can have a significant impact on how fast and cost effectively we could abate carbon emissions. Two-staged decision making methods embedded in U.S. EPA's MARKAL modeling system will be utilized to find the most robust mitig...

  10. Using the pallet costing system to determine costs and stay competitive in the pallet industry

    Treesearch

    A. Jefferson Jr. Palmer; Bruce G. Hansen; Bruce G. Hansen

    2002-01-01

    In order to stay competitive and keep production costs at a minimum, wood pallet manufacturers must plan, monitor, and control their various production activities. Cost information on pallet manufacturing operations must be gathered and analyzed so that the plant manager can determine whether certain activities are efficient and profitable. The Pallet Costing System (...

  11. Design and optimization of organic rankine cycle for low temperature geothermal power plant

    NASA Astrophysics Data System (ADS)

    Barse, Kirtipal A.

    Rising oil prices and environmental concerns have increased attention to renewable energy. Geothermal energy is a very attractive source of renewable energy. Although low temperature resources (90°C to 150°C) are the most common and most abundant source of geothermal energy, they have not been considered economically or technologically feasible for commercial power generation. Organic Rankine Cycle (ORC) technology makes it feasible to use low temperature resources to generate power by using low-boiling-temperature organic liquids. The first hypothesis for this research is that using an ORC is technologically and economically feasible for generating electricity from low temperature geothermal resources. The second hypothesis is that redesigning the ORC system for the given resource conditions will improve efficiency along with improving economics. An ORC model was developed using a process simulator and validated with data obtained from Chena Hot Springs, Alaska. A correlation was observed between the critical temperature of the working fluid and the cycle efficiency. Exergy analysis of the cycle revealed that the highest exergy destruction occurs in the evaporator, followed by the condenser, turbine, and working fluid pump for the base case scenarios. The performance of the ORC was studied using twelve working fluids in base, internal heat exchanger (IHX), and turbine-bleeding configurations, each in constrained and non-constrained forms. R601a, R245ca, and R600 showed the highest first- and second-law efficiencies in the non-constrained IHX configuration. The highest net power was observed for the R245ca, R601a, and R601 working fluids in the non-constrained base configuration. The combined heat exchanger area and the size parameter of the turbine showed an increasing trend as the critical temperature of the working fluid decreased. The lowest levelized cost of electricity (LCOE) was observed for R245ca, followed by R601a and R236ea, in the non-constrained base configuration. The next best candidates in terms of LCOE were R601a, R245ca, and R600 in the non-constrained IHX configuration. LCOE depends on net power, and higher net power lowers the cost of electricity. Overall, R245ca, R601, R601a, R600, and R236ea show the best performance among the fluids studied. Non-constrained configurations perform better than the constrained configurations; the non-constrained base configuration offered the highest net power and the lowest LCOE.

  12. Optimal Design and Operation of Permanent Irrigation Systems

    NASA Astrophysics Data System (ADS)

    Oron, Gideon; Walker, Wynn R.

    1981-01-01

    Solid-set pressurized irrigation system design and operation are studied with optimization techniques to determine the minimum cost distribution system. The principle of the analysis is to divide the irrigation system into subunits in such a manner that the trade-offs among energy, piping, and equipment costs are selected at the minimum cost point. The optimization procedure involves a nonlinear, mixed integer approach capable of achieving a variety of optimal solutions leading to significant conclusions with regard to the design and operation of the system. Factors investigated include field geometry, the effect of the pressure head, consumptive use rates, a smaller flow rate in the pipe system, and outlet (sprinkler or emitter) discharge.

  13. Achieving cost-neutrality with long-acting reversible contraceptive methods.

    PubMed

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2015-01-01

    This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it aimed to also quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking into consideration discontinuation. A three-state economic model was developed to estimate the relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20-29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were the annual average cost per method and the minimum duration of LARC method usage needed to achieve cost-savings compared to SARC methods. The two least expensive methods were the copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). The cost of SARC methods ranged between $432 (injection) and $730 (patch), per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. Copyright © 2014 Elsevier Inc. All rights reserved.
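
    The break-even duration can be reproduced from the quoted annual costs; the split of LARC costs into an insertion year and cheaper follow-up years is an assumed illustration, not the model's exact inputs:

```python
# Break-even duration of a LARC method against the cheapest SARC method,
# using the annual-cost figures quoted above; the split of LARC costs
# into an insertion year and cheaper follow-up years is assumed.
larc_year1 = 950.0    # assumed device + insertion cost in year 1 ($)
larc_later = 65.0     # assumed annual follow-up cost thereafter ($/yr)
sarc_annual = 432.0   # injection, cheapest SARC ($ per woman per year)

years = 0
larc_total = sarc_total = 0.0
while True:
    years += 1
    larc_total += larc_year1 if years == 1 else larc_later
    sarc_total += sarc_annual
    if larc_total <= sarc_total:
        break
print(f"LARC becomes cost-saving after {years} years")  # -> 3 years
```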

  14. Achieving cost-neutrality with long-acting reversible contraceptive methods⋆

    PubMed Central

    Trussell, James; Hassan, Fareen; Lowin, Julia; Law, Amy; Filonenko, Anna

    2014-01-01

    Objectives This analysis aimed to estimate the average annual cost of available reversible contraceptive methods in the United States. In line with literature suggesting long-acting reversible contraceptive (LARC) methods become increasingly cost-saving with extended duration of use, it also aimed to quantify the minimum duration of use required for LARC methods to achieve cost-neutrality relative to other reversible contraceptive methods while taking discontinuation into consideration. Study design A three-state economic model was developed to estimate relative costs of no method (chance), four short-acting reversible (SARC) methods (oral contraceptive, ring, patch and injection) and three LARC methods [implant, copper intrauterine device (IUD) and levonorgestrel intrauterine system (LNG-IUS) 20 mcg/24 h (total content 52 mg)]. The analysis was conducted over a 5-year time horizon in 1000 women aged 20–29 years. Method-specific failure and discontinuation rates were based on published literature. Costs associated with drug acquisition, administration and failure (defined as an unintended pregnancy) were considered. Key model outputs were annual average cost per method and minimum duration of LARC method usage to achieve cost-savings compared to SARC methods. Results The two least expensive methods were copper IUD ($304 per woman, per year) and LNG-IUS 20 mcg/24 h ($308). Cost of SARC methods ranged between $432 (injection) and $730 (patch), per woman, per year. A minimum of 2.1 years of LARC usage would result in cost-savings compared to SARC usage. Conclusions This analysis finds that even if LARC methods are not used for their full durations of efficacy, they become cost-saving relative to SARC methods within 3 years of use. Implications Previous economic arguments in support of using LARC methods have been criticized for not considering that LARC methods are not always used for their full duration of efficacy. This study calculated that cost-savings from LARC methods relative to SARC methods, with discontinuation rates considered, can be realized within 3 years. PMID:25282161

  15. [Maximization of economic yield, minimum cost optimal diets and cultivation diversification for small scale farmers of the highland region of Guatemala].

    PubMed

    Alarcón, J A; Immink, M D; Méndez, L F

    1989-12-01

    The present study was conducted as part of an evaluation of the economic and nutritional effects of a crop diversification program for small-scale farmers in the Western highlands of Guatemala. Linear programming models are employed to obtain optimal combinations of traditional and non-traditional food crops under different ecological conditions that: a) provide minimum-cost diets for auto-consumption, and b) maximize net income and market availability of dietary energy. The data used were generated by means of an agroeconomic survey conducted in 1983 among 726 farming households. Food prices were obtained from the Institute of Agrarian Marketing; data on production costs, from the National Bank of Agricultural Development in Guatemala. The gestation periods for each crop were obtained from three different sources and then averaged. The results indicated that the optimal cropping patterns for the minimum-cost diets for auto-consumption include traditional foods (corn, beans, broad bean, wheat, potato), non-traditional foods (carrots, broccoli, beets) and foods of animal origin (milk, eggs). A significant number of farmers included in the sample did not have sufficient land available to produce all foods included in the minimum-cost diet. Cropping patterns which maximize net income include only non-traditional foods: onions, carrots, broccoli and beets for farmers in the low highland areas, and radish, broccoli, cauliflower and carrots for farmers in the higher parts. Optimal cropping patterns which maximize market availability of dietary energy include traditional and non-traditional foods; for farmers in the lower areas: wheat, corn, beets, carrots and onions; for farmers in the higher areas: potato, wheat, radish, carrots and cabbage.
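
    A minimum-cost diet of the kind computed in the study is a classic linear program. The toy instance below, with three foods, invented prices, and two nutrient floors, shows the structure; the study's actual models covered many more foods, nutrients, and land constraints.

        import numpy as np
        from scipy.optimize import linprog

        costs = np.array([0.30, 0.90, 0.60])      # $ per 100 g: corn, beans, carrots (assumed)
        energy = np.array([365.0, 340.0, 41.0])   # kcal per 100 g (approximate)
        protein = np.array([9.0, 21.0, 0.9])      # g per 100 g (approximate)

        # linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the
        # nutrient floors (>=) are entered with negated signs.
        A_ub = -np.vstack([energy, protein])
        b_ub = -np.array([2200.0, 55.0])          # daily requirements

        res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
        print("amounts (100 g units):", np.round(res.x, 2))
        print(f"minimum daily cost: ${res.fun:.2f}")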

  16. Dopamine Manipulation Affects Response Vigor Independently of Opportunity Cost.

    PubMed

    Zénon, Alexandre; Devesse, Sophie; Olivier, Etienne

    2016-09-14

    Dopamine is known to be involved in regulating effort investment in relation to reward, and the disruption of this mechanism is thought to be central in some pathological situations such as Parkinson's disease, addiction, and depression. According to an influential model, dopamine plays this role by encoding the opportunity cost, i.e., the average value of forfeited actions, which is an important parameter to take into account when deciding which action to undertake and how fast to execute it. We tested this hypothesis by asking healthy human participants to perform two effort-based decision-making tasks, following either placebo or levodopa intake in a double-blind, within-subject protocol. In the effort-constrained task, there was a trade-off between the amount of force exerted and the time spent in executing the task, such that investing more effort decreased the opportunity cost. In the time-constrained task, the effort duration was constant, but exerting more force allowed the subject to earn a more substantial reward instead of saving time. Contrary to the model predictions, we found that levodopa caused an increase in the force exerted only in the time-constrained task, in which there was no trade-off between effort and opportunity cost. In addition, a computational model showed that dopamine manipulation left the opportunity cost factor unaffected but altered the ratio between the effort cost and reinforcement value. These findings suggest that dopamine does not represent the opportunity cost but rather modulates how much effort a given reward is worth. Dopamine has been proposed in a prevalent theory to signal the average reward rate, used to estimate the cost of investing time in an action, also referred to as opportunity cost. We contrasted the effect of dopamine manipulation in healthy participants in two tasks, in which increasing response vigor (i.e., the amount of effort invested in an action) allowed the subject either to save time or to earn more reward. We found that levodopa, a synthetic precursor of dopamine, increases response vigor only in the latter situation, demonstrating that, rather than the opportunity cost, dopamine is involved in computing the expected value of effort. Copyright © 2016 the authors.

  17. 7 CFR 1710.205 - Minimum approval requirements for all load forecasts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... computer software applications. RUS will evaluate borrower load forecasts for readability, understanding..., distribution costs, other systems costs, average revenue per kWh, and inflation. Also, a borrower's engineering...

  18. Good initialization model with constrained body structure for scene text recognition

    NASA Astrophysics Data System (ADS)

    Zhu, Anna; Wang, Guoyou; Dong, Yangbo

    2016-09-01

    Scene text recognition has gained significant attention in the computer vision community. Character detection and recognition underpin text recognition and affect the overall performance to a large extent. We propose a good initialization model for scene character recognition from cropped text regions. We use constrained character body structures with deformable part-based models to detect and recognize characters against various backgrounds. The character body structures are obtained by an unsupervised discriminative clustering approach followed by a statistical model and a self-built minimum spanning tree model. Our method utilizes part appearance and location information, and combines character detection and recognition in the cropped text region. The evaluation results on benchmark datasets demonstrate that our proposed scheme outperforms state-of-the-art methods on both scene character recognition and word recognition.

  19. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    NASA Astrophysics Data System (ADS)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing of HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: a good initialization scheme can improve convergence speed and can determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
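
    The sensitivity to initialization is easy to reproduce with any iterative nonnegative factorization. The sketch below uses scikit-learn's generic NMF on a synthetic mixture of four random "spectra" as a stand-in for the authors' cPMF solver; the data, component count, and initializers are all illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        endmembers = rng.random((4, 50))               # 4 spectra x 50 bands (synthetic)
        abundances = rng.dirichlet(np.ones(4), 500)    # 500 pixels, sum-to-one
        cube = abundances @ endmembers + 0.01 * rng.random((500, 50))

        for init in ("random", "nndsvda"):             # random vs. SVD-based start
            model = NMF(n_components=4, init=init, max_iter=500, random_state=0)
            model.fit(cube)
            print(f"init={init:8s} reconstruction error = {model.reconstruction_err_:.4f}")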

  20. Cost-of-illness study in a retrospective cohort of patients with dementia in Lima, Peru

    PubMed Central

    Custodio, Nilton; Lira, David; Herrera-Perez, Eder; del Prado, Liza Nuñez; Parodi, José; Guevara-Silva, Erik; Castro-Suarez, Sheila; Montesinos, Rosa

    2015-01-01

    Dementia is a major cause of dependency and disability among older persons, and imposes huge economic burdens. Only a few cost-of-illness studies for dementia have been carried out in middle- and low-income countries. Objective. The aim of this study was to analyze the costs of dementia in demented patients of a private clinic in Lima, Peru. Methods. We performed a retrospective, 3-month cohort study, extracting information from the medical records of demented patients to assess the use of both healthcare and non-healthcare resources. The total costs of the disease were broken down into direct (medical and social care) costs and indirect (informal care) costs. Results. In 136 outpatients, we observed that while half of non-demented patients had total care costs of less than US$ 23 over three months, demented patients had costs of US$ 1500 or more (and more than US$ 1860 for frontotemporal dementia). In our study, the monthly cost of a demented patient (US$ 570) was 2.5 times higher than the minimum wage (the legal minimum monthly wage in Peru for 2011 was US$ 222.22). Conclusion. Dementia constitutes a socioeconomic problem even in developing countries, since patients incur high healthcare and non-healthcare costs, which fall especially heavily on the patient's family. PMID:29213939

  1. Cost-of-illness study in a retrospective cohort of patients with dementia in Lima, Peru.

    PubMed

    Custodio, Nilton; Lira, David; Herrera-Perez, Eder; Del Prado, Liza Nuñez; Parodi, José; Guevara-Silva, Erik; Castro-Suarez, Sheila; Montesinos, Rosa

    2015-01-01

    Dementia is a major cause of dependency and disability among older persons, and imposes huge economic burdens. Only a few cost-of-illness studies for dementia have been carried out in middle- and low-income countries. The aim of this study was to analyze the costs of dementia in demented patients of a private clinic in Lima, Peru. We performed a retrospective, 3-month cohort study, extracting information from the medical records of demented patients to assess the use of both healthcare and non-healthcare resources. The total costs of the disease were broken down into direct (medical and social care) costs and indirect (informal care) costs. In 136 outpatients, we observed that while half of non-demented patients had total care costs of less than US$ 23 over three months, demented patients had costs of US$ 1500 or more (and more than US$ 1860 for frontotemporal dementia). In our study, the monthly cost of a demented patient (US$ 570) was 2.5 times higher than the minimum wage (the legal minimum monthly wage in Peru for 2011 was US$ 222.22). Dementia constitutes a socioeconomic problem even in developing countries, since patients incur high healthcare and non-healthcare costs, which fall especially heavily on the patient's family.

  2. Planning multiple movements within a fixed time limit: The cost of constrained time allocation in a visuo-motor task

    PubMed Central

    Zhang, Hang; Wu, Shih-Wei; Maloney, Laurence T.

    2010-01-01

    S.-W. Wu, M. F. Dal Martello, and L. T. Maloney (2009) evaluated subjects' performance in a visuo-motor task where subjects were asked to hit two targets in sequence within a fixed time limit. Hitting targets earned rewards and Wu et al. varied rewards associated with targets. They found that subjects failed to maximize expected gain; they failed to invest more time in the movement to the more valuable target. What could explain this lack of response to reward? We first considered the possibility that subjects require training in allocating time between two movements. In Experiment 1, we found that, after extensive training, subjects still failed: They did not vary time allocation with changes in payoff. However, their actual gains equaled or exceeded the expected gain of an ideal time allocator, indicating that constraining time itself has a cost for motor accuracy. In a second experiment, we found that movements made under externally imposed time limits were less accurate than movements made with the same timing freely selected by the mover. Constrained time allocation cost about 17% in expected gain. These results suggest that there is no single speed–accuracy tradeoff for movement in our task and that subjects pursued different motor strategies with distinct speed–accuracy tradeoffs in different conditions. PMID:20884550

  3. The Particle Size Distribution in Saturn’s C Ring from UVIS and VIMS Stellar Occultations and RSS Radio Occultations

    NASA Astrophysics Data System (ADS)

    Jerousek, Richard Gregory; Colwell, Josh; Hedman, Matthew M.; French, Richard G.; Marouf, Essam A.; Esposito, Larry; Nicholson, Philip D.

    2017-10-01

    The Cassini Ultraviolet Imaging Spectrograph (UVIS) and Visual and Infrared Mapping Spectrometer (VIMS) have measured ring optical depths over a wide range of viewing geometries at effective wavelengths of 0.15 μm and 2.9 μm, respectively. Using Voyager S- and X-band radio occultations and the direct inversion of the forward-scattered S-band signal, Marouf et al. (1982, 1983) and Zebker et al. (1985) determined the power-law size distribution parameters assuming a minimum particle radius of 1 mm, and many further studies have constrained aspects of the particle size distribution throughout the main rings. Marouf et al. (2008a) determined the smallest ring particles to have radii of 4-5 mm using Cassini RSS data. Harbison et al. (2013) used VIMS solar occultations and also found minimum particle sizes of 4-5 mm in the C ring with q ~ 3.1, where n(a) da = C a^(-q) da is the assumed differential power-law size distribution for particles of radius a. Recent studies of excess variance in the stellar signal by Colwell et al. (2017, submitted) constrain the cross-section-weighted effective particle radius to 1 m to several meters. Using the wide range of viewing geometries available to VIMS and UVIS stellar occultations, we find that normal optical depth does not strongly depend on viewing geometry at 10 km resolution (which would be the case if self-gravity wakes were present). Throughout the C ring, we fit power-law-derived optical depths to those measured by UVIS, VIMS, and the Cassini Radio Science Subsystem (RSS) at 0.94 and 3.6 cm wavelengths to constrain the four parameters of the size distribution at 10 km radial resolution. We find significant particle size sorting throughout the region, with a positive correlation between maximum particle size (amax) and normal optical depth and a mean value of amax ~ 3 m in the background C ring; this correlation is negative in the C ring plateaus. We find an inverse correlation between minimum particle radius and normal optical depth, with a mean value of amin ~ 4 mm in the background C ring and slightly larger minimum particle sizes in the C ring plateaus.
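
    For a truncated power law n(a) da = C a^(-q) da, the cross-section-weighted effective radius is the ratio of the third to the second moment of the distribution. The sketch below evaluates it numerically for round numbers like those quoted above; it is a toy calculation, not the occultation fit itself.

        import numpy as np

        def effective_radius(q, a_min, a_max, n=200_000):
            a = np.geomspace(a_min, a_max, n)
            w = a ** (-q)                              # relative number density n(a)
            trapz = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(a)))
            return trapz(w * a**3) / trapz(w * a**2)   # area-weighted mean radius

        # q ~ 3.1, a_min ~ 4 mm, a_max ~ 3 m: round numbers like the C ring values
        print(f"a_eff ~ {effective_radius(3.1, 0.004, 3.0):.2f} m")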

  4. Use of linkage mapping and centrality analysis across habitat gradients to conserve connectivity of gray wolf populations in western North America.

    PubMed

    Carroll, Carlos; McRae, Brad H; Brookes, Allen

    2012-02-01

    Centrality metrics evaluate paths between all possible pairwise combinations of sites on a landscape to rank the contribution of each site to facilitating ecological flows across the network of sites. Computational advances now allow application of centrality metrics to landscapes represented as continuous gradients of habitat quality. This avoids the binary classification of landscapes into patch and matrix required by patch-based graph analyses of connectivity. It also avoids the focus on delineating paths between individual pairs of core areas characteristic of most corridor- or linkage-mapping methods of connectivity analysis. Conservation of regional habitat connectivity has the potential to facilitate recovery of the gray wolf (Canis lupus), a species currently recolonizing portions of its historic range in the western United States. We applied 3 contrasting linkage-mapping methods (shortest path, current flow, and minimum-cost-maximum-flow) to spatial data representing wolf habitat to analyze connectivity between wolf populations in central Idaho and Yellowstone National Park (Wyoming). We then applied 3 analogous betweenness centrality metrics to analyze connectivity of wolf habitat throughout the northwestern United States and southwestern Canada to determine where it might be possible to facilitate range expansion and interpopulation dispersal. We developed software to facilitate application of centrality metrics. Shortest-path betweenness centrality identified a minimal network of linkages analogous to those identified by least-cost-path corridor mapping. Current flow and minimum-cost-maximum-flow betweenness centrality identified diffuse networks that included alternative linkages, which will allow greater flexibility in planning. Minimum-cost-maximum-flow betweenness centrality, by integrating both land cost and habitat capacity, allows connectivity to be considered within planning processes that seek to maximize species protection at minimum cost. Centrality analysis is relevant to conservation and landscape genetics at a range of spatial extents, but it may be most broadly applicable within single- and multispecies planning efforts to conserve regional habitat connectivity. ©2011 Society for Conservation Biology.
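
    The first of the three metrics, shortest-path betweenness, is straightforward to compute on a weighted habitat graph with off-the-shelf tools. The toy network below, with invented node names and resistance weights, is only meant to show the call pattern, not to reproduce the study's software.

        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([
            ("Idaho", "A", 2.0), ("A", "B", 1.0), ("B", "Yellowstone", 2.0),
            ("Idaho", "C", 4.0), ("C", "Yellowstone", 4.0), ("A", "C", 1.5),
        ])  # weights act as movement costs (resistances) for shortest paths

        bc = nx.betweenness_centrality(G, weight="weight")
        for node, score in sorted(bc.items(), key=lambda kv: -kv[1]):
            print(f"{node:12s} shortest-path betweenness = {score:.3f}")

        # current-flow betweenness, the circuit-theoretic analogue
        cf = nx.current_flow_betweenness_centrality(G)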

  5. The financial burden of surgical and endovascular treatment of diabetic foot wounds.

    PubMed

    Joret, Maximilian O; Dean, Anastasia; Cao, Colin; Stewart, Joanna; Bhamidipaty, Venu

    2016-09-01

    The cost of treating diabetes-related disease in New Zealand is increasing and is expected to reach New Zealand dollars (NZD) 1.8 billion in 2021. The financial burden attached to the treatment of diabetic foot wounds is difficult to quantify and reported costs of treatment vary greatly in the literature. As of yet, no study has captured the true total cost of treating a diabetic foot wound. In this study, we investigate the total minimum cost of treating a diabetic foot ulcer at a tertiary institution. A retrospective audit of hospital and interhospital records was performed to identify adult patients with diabetes who were treated operatively for a diabetic foot wound by the department of vascular surgery at Auckland Hospital between January 2009 and June 2014. Costs from the patients' admissions and outpatient clinics from their first meeting to the achievement of a final outcome were tallied to calculate the total cost of healing the wound. The hospital's expenses were calculated using a fully absorbed activity-based costing methodology and correlated with a variety of demographic and clinical factors extracted from patients' electronic records using a general linear mixed model. We identified 225 patients accounting for 265 wound episodes, 700 inpatient admissions, 815 outpatient consultations, 367 surgical procedures, and 248 endovascular procedures. The total minimum cost to the Auckland city hospital was NZD 10,217,115 (NZD 9,886,963 inpatient costs; NZD 330,152 outpatient costs). The median cost per wound episode was NZD 29,537 (NZD 28,491 inpatient costs; NZD 834 outpatient cost). Wound healing was achieved in 70% of wound episodes (average length of healing, 9 months); 19% of wounds had not healed before the patient's death. Of every 3.5 wound episodes, one required a major amputation. Wound treatment modality, particularly surgical management, was the strongest predictor of high resource utilization. Wounds treated with endovascular intervention and no surgical intervention cost less. Surgical management (indiscriminate of type) was associated with faster wound healing than wounds managed endovascularly (median duration, 140 vs 224 days). Clinical risk factors including smoking, ischemic heart disease, hypercholesterolemia, hypertension, and chronic kidney disease did not affect treatment cost significantly. We estimate the minimum median cost incurred by our department of vascular surgery in treating a diabetic foot wound to be NZD 30,000 and identify wound treatment modality to be a significant determinant of cost. While readily acknowledging our study's inherent limitations, we believe it provides a real-world representation of the minimum total cost involved in treating diabetic foot lesions in a tertiary center. Given the increasing rate of diabetes, we believe this high cost reinforces the need for the establishment of a multidisciplinary diabetic foot team in our region. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  6. The long-term impact of employment on mental health service use and costs for persons with severe mental illness.

    PubMed

    Bush, Philip W; Drake, Robert E; Xie, Haiyi; McHugo, Gregory J; Haslett, William R

    2009-08-01

    Stable employment promotes recovery for persons with severe mental illness by enhancing income and quality of life, but its impact on mental health costs has been unclear. This study examined service cost over ten years among participants in a co-occurring disorders study. Latent-class growth analysis of competitive employment identified trajectory groups. The authors calculated annual costs of outpatient services and institutional stays for 187 participants and examined group differences in ten-year utilization and cost. A steady-work group (N=51) included individuals whose work hours increased rapidly and then stabilized to average 5,060 hours per person over ten years. A late-work group (N=57) and a no-work group (N=79) did not differ significantly in utilization or cost outcomes, so they were combined into a minimum-work group (N=136). More education, a bipolar disorder diagnosis (versus schizophrenia or schizoaffective disorder), work in the past year, and lower scores on the expanded Brief Psychiatric Rating Scale predicted membership in the steady-work group. These variables were controlled for in the outcomes analysis. Use of outpatient services for the steady-work group declined at a significantly greater rate than it did for the minimum-work group, while institutional (hospital, jail, or prison) stays declined for both groups without a significant difference. The average cost per participant for outpatient services and institutional stays for the minimum-work group exceeded that of the steady-work group by $166,350 over ten years. Highly significant reductions in service use were associated with steady employment. Given supported employment's well-established contributions to recovery, evidence of long-term reductions in the cost of mental health services should lead policy makers and insurers to promote wider implementation.

  7. Changing paradigms: Manufacturing vs. fabricating a high volume Hold Down and Release Mechanism

    NASA Technical Reports Server (NTRS)

    Maus, Daryl; Monick, Doug

    1995-01-01

    A detailed description of the Hold Down and Release Mechanisms designed for a constellation of 70+ spacecraft is presented. The design is reviewed to understand the practical implications of severely constrained cost. Strategies for adapting the traditional aerospace design paradigm to a more commercial, cost-driven paradigm are discussed, and practical examples are cited.

  8. How Might the Ares V Change the Need for Future Mirror Technology

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2008-01-01

    More massive missions do not need to be more expensive: a simple, robust, low-risk, high-TRL mission is likely to be low cost, and it is also likely to be more massive than a complex, high-risk, low-TRL mission. The challenge will be to overcome human nature; launch-date-constrained missions cost less.

  9. Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools

    ERIC Educational Resources Information Center

    Amlie, Thomas T.

    2009-01-01

    A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
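
    The production-mix decision described here is a small linear program. The two products, their unit profits, and the resource capacities in the sketch below are invented for illustration.

        from scipy.optimize import linprog

        profit = [-40.0, -30.0]        # $ per unit, negated because linprog minimizes
        A_ub = [[2.0, 1.0],            # labor hours per unit of each product
                [1.0, 3.0]]            # machine hours per unit of each product
        b_ub = [100.0, 90.0]           # scarce resource capacities

        res = linprog(profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        x1, x2 = res.x
        print(f"make {x1:.0f} of product 1 and {x2:.0f} of product 2; "
              f"maximum profit = ${-res.fun:.0f}")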

  10. Trade-Offs Between Plant Growth and Defense Against Insect Herbivory: An Emerging Mechanistic Synthesis.

    PubMed

    Züst, Tobias; Agrawal, Anurag A

    2017-04-28

    Costs of defense are central to our understanding of interactions between organisms and their environment, and defensive phenotypes of plants have long been considered to be constrained by trade-offs that reflect the allocation of limiting resources. Recent advances in uncovering signal transduction networks have revealed that defense trade-offs are often the result of regulatory "decisions" by the plant, enabling it to fine-tune its phenotype in response to diverse environmental challenges. We place these results in the context of classic studies in ecology and evolutionary biology, and propose a unifying framework for growth-defense trade-offs as a means to study the plant's allocation of limiting resources. Pervasive physiological costs constrain the upper limit to growth and defense traits, but the diversity of selective pressures on plants often favors negative correlations at intermediate trait levels. Despite the ubiquity of underlying costs of defense, the current challenge is using physiological and molecular approaches to predict the conditions where they manifest as detectable trade-offs.

  11. CIDR

    Science.gov Websites

    Genotyping service requirements are specified as a table of minimum number of experimental samples, DNA volume (ul), genomic DNA concentration (ng/ul), and low-input DNA volume (ul). An additional cost applies for the low-input option. Whole-genome-amplified (WGA) samples may be submitted, but a higher non-random missing data rate should be anticipated, depending on sample quality.

  12. Optimization of Turkish Air Force SAR Units Forward Deployment Points for a Central Based SAR Force Structure

    DTIC Science & Technology

    2015-03-26

    Turkish Airborne Early Warning and Control (AEW&C) aircraft in the combat arena. He examines three combat scenarios Turkey might encounter to cover and...to limited SAR assets, constrained budgets, logistic-maintenance problems, and the high risk level of military flights. In recent years, the Turkish Air...model, the Set Covering Location Problem (SCLP), defines the minimum number of SAR DPs to cover all fighter aircraft training areas (TAs). The second

  13. The Three-Dimensional Power Spectrum Of Galaxies from the Sloan Digital Sky Survey

    DTIC Science & Technology

    2004-05-10

    aspects of the three-dimensional clustering of a much larger data set involving over 200,000 galaxies with redshifts. This paper is focused on measuring... papers, we will constrain galaxy bias empirically by using clustering measurements on smaller scales (e.g., I. Zehavi et al. 2004, in preparation...minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well

  14. Effects of Visual Complexity and Sublexical Information in the Occipitotemporal Cortex in the Reading of Chinese Phonograms: A Single-Trial Analysis with MEG

    ERIC Educational Resources Information Center

    Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec

    2011-01-01

    We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…

  15. Evidence for Ultra-Fast Outflows in Radio-Quiet AGNs: III - Location and Energetics

    NASA Technical Reports Server (NTRS)

    Tombesi, F.; Cappi, M.; Reeves, J. N.; Braito, V.

    2012-01-01

    Using the results of a previous X-ray photo-ionization modelling of blue-shifted Fe K absorption lines in a sample of 42 local radio-quiet AGNs observed with XMM-Newton, in this letter we estimate the location and energetics of the associated ultrafast outflows (UFOs). Due to significant uncertainties, we are essentially able to place only lower/upper limits. On average, their location is in the interval ~0.0003-0.03 pc (~10^2-10^4 r_s) from the central black hole, consistent with what is expected for accretion disk winds/outflows. The mass outflow rates are constrained between ~0.01 and 1 solar masses per year, corresponding to ~5-10% or more of the accretion rates. The average lower and upper limits on the mechanical power are log E_K ~ 42.6-44.6 erg/s. However, the minimum possible value of the ratio between the mechanical power and the bolometric luminosity is constrained to be comparable to or higher than the minimum required by simulations of feedback induced by winds/outflows. Therefore, this work demonstrates that UFOs are indeed capable of providing a significant contribution to AGN cosmological feedback, in agreement with theoretical expectations and the recent observation of interactions between AGN outflows and the interstellar medium in several Seyfert galaxies.

  16. The cost-effectiveness of quality improvement projects: a conceptual framework, checklist and online tool for considering the costs and consequences of implementation-based quality improvement.

    PubMed

    Thompson, Carl; Pulleyblank, Ryan; Parrott, Steve; Essex, Holly

    2016-02-01

    In resource-constrained systems, decision makers should be concerned with the efficiency of implementing improvement techniques and technologies. Accordingly, they should consider both the costs and effectiveness of implementation as well as the cost-effectiveness of the innovation to be implemented. This is encapsulated in the 'policy cost-effectiveness' approach. This paper outlines some of the theoretical and practical challenges to assessing policy cost-effectiveness (the cost-effectiveness of implementation projects). A checklist and an associated (freely available) online application are also presented to help services develop more cost-effective implementation strategies. © 2015 John Wiley & Sons, Ltd.

  17. Analysis of electric vehicle's trip cost allowing late arrival

    NASA Astrophysics Data System (ADS)

    Leng, Jun-Qiang; Liu, Wei-Yi; Zhao, Lin

    2017-05-01

    In this paper, we use a car-following model to study each electric vehicle's trip cost and the total trip cost when late arrival is allowed. The numerical results show that the electricity cost has great effects on each commuter's trip cost and the total trip cost, that these effects depend on each commuter's time headway at the origin, and that the electricity cost has no prominent impact on the minimum value of the total trip cost under different time headways at the origin.

  18. Analysis of electric vehicle's trip cost without late arrival

    NASA Astrophysics Data System (ADS)

    Leng, Jun-Qiang; Zhao, Lin

    2017-03-01

    In this paper, we use a car-following model to study each electric vehicle's trip cost and the corresponding total trip cost without late arrival. The numerical results show that the electricity cost has significant effects on each electric vehicle's trip cost and the corresponding total trip cost, that these effects depend on its time headway at the origin, and that the electricity cost has no prominent effect on the minimum value of the system's total trip cost.

  19. 49 CFR 1152.27 - Financial assistance procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... (including the cost of transporting removed materials to point of sale or point of storage for relay use... constitutional minimum value is computed without regard to labor protection costs. (7) Within 10 days of the...

  20. 49 CFR 1152.27 - Financial assistance procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... (including the cost of transporting removed materials to point of sale or point of storage for relay use... constitutional minimum value is computed without regard to labor protection costs. (7) Within 10 days of the...

  1. A cellular glass substrate solar concentrator

    NASA Technical Reports Server (NTRS)

    Bedard, R.; Bell, D.

    1980-01-01

    The design of a second-generation point-focusing solar concentrator is discussed. The design is based on reflective gores fabricated of thin glass mirror bonded continuously to a contoured substrate of cellular glass. The concentrator aperture and structural stiffness were optimized for minimum concentrator cost, given the performance requirement of delivering 56 kWth to a 22 cm diameter receiver aperture with a direct normal insolation of 845 watts per square meter and an operating wind of 50 km/h. The reflective panel, support structure, drives, foundation, and instrumentation and control subsystem designs, optimized for minimum cost, are summarized. The use of cellular glass as a reflective panel substrate material is shown to offer significant weight and cost advantages compared to existing-technology materials.

  2. Influence of transportation cost on long-term retention in clinic for HIV patients in rural Haiti.

    PubMed

    Sowah, Leonard A; Turenne, Franck V; Buchwald, Ulrike K; Delva, Guesly; Mesidor, Romaine N; Dessaigne, Camille G; Previl, Harold; Patel, Devang; Edozien, Anthony; Redfield, Robert R; Amoroso, Anthony

    2014-12-01

    With improved access to antiretroviral therapy in resource-constrained settings, long-term retention in HIV clinics has become an important means of reducing costs and improving outcomes. Published data on retention in HIV clinics beyond 24 months are, however, limited. In our clinic in rural Haiti, we hypothesized that individuals residing in locations with higher transportation costs to clinic would have poorer retention than those who had lower costs. We used a retrospective cohort design to evaluate potential predictors of HIV clinic retention. Patient information was abstracted from the electronic medical records. Cox proportional hazards regression was used to identify independent predictors of 4-year clinic retention. There were 410 patients in our cohort, 266 (64.9%) females and 144 (35.1%) males. Forty-five (11%) patients lived in locations with transportation costs >$2. Males were 1.5 times more likely to live in municipalities with transportation costs to clinic of >$2. Multivariate analysis suggested that age <30 years, male gender, and transportation cost were independent predictors of loss to follow-up (LTFU): risk ratio of 2.98, 95% confidence interval (CI): 1.73 to 4.96, P < 0.001; 1.71, CI: 1.08 to 2.70, P = 0.02; and 1.91, CI: 1.08 to 3.36, P = 0.02, respectively. Patients with transportation costs greater than $2 were 1.9 times more likely to be lost to care compared with those who paid less for transportation. HIV treatment programs in resource-constrained settings may need to pay closer attention to issues related to transportation cost to improve patient retention.

  3. Business, households, and governments: Health care costs, 1990

    PubMed Central

    Levit, Katharine R.; Cowan, Cathy A.

    1991-01-01

    This annual article presents information on health care costs by business, households, and government. Households funded 35 percent of expenditures in 1990, government 33 percent, and business, 29 percent. During the last decade, health care costs continued to grow at annual rates of 8 to 16 percent. Burden measures show that rapidly rising costs faced by each sponsor sector are exceeding increases in each sector's ability to fund them. Increased burden is particularly acute for business. The authors discuss the problems these rising costs pose for business, particularly small business, and some of the strategies businesses employ to constrain this cost growth. PMID:10122364

  4. The differential impact of low-carbon technologies on climate change mitigation cost under a range of socioeconomic and climate policy scenarios.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barron, Robert W.; McJeon, Haewon C.

    2015-05-01

    This paper considers the effect of several key parameters of low-carbon energy technologies on the cost of abatement. A methodology for determining the minimum level of performance required for a parameter to have a statistically significant impact on CO2 abatement cost is developed and used to evaluate the impact of eight key parameters of low-carbon energy supply technologies on the cost of CO2 abatement. The capital cost of nuclear technology is found to have the greatest impact of the parameters studied. The costs of biomass and CCS technologies also have impacts, while their efficiencies have little impact, if any. Sensitivity analysis of the results with respect to population, GDP, and the CO2 emission constraint shows that the minimum performance level and impact of nuclear technologies are consistent across the socioeconomic scenarios studied, while the other technology parameters perform differently under higher-population, lower-GDP scenarios. Solar technology was found to have a small impact, and then only at very low costs. These results indicate that the cost of nuclear is the single most important driver of abatement cost, and that trading efficiency for cost may make biomass and CCS technologies more competitive.

  5. Energetics of swimming by the ferret: consequences of forelimb paddling.

    PubMed

    Fish, Frank E; Baudinette, Russell V

    2008-06-01

    The domestic ferret (Mustela putorius furo) swims by alternate strokes of the forelimbs. This pectoral paddling is rare among semi-aquatic mammals. The energetic implications of swimming by pectoral paddling were examined by kinematic analysis and measurement of oxygen consumption. Ferrets maintained a constant stroke frequency but increased swimming speed by increasing stroke amplitude. The ratio of swimming velocity to foot stroke velocity was low, indicating a low propulsive efficiency. Metabolic rate increased linearly with increasing speed. The cost of transport decreased with increasing swimming speed to a minimum of 3.59 +/- 0.28 J N^-1 m^-1 at U = 0.44 m s^-1. The minimum cost of transport for the ferret was greater than values for semi-aquatic mammals using hind limb paddling, but lower than the minimum cost of transport for the closely related, quadrupedally paddling mink. Differences in energetic performance may be due to the amount of muscle recruited for propulsion and to the interplay between hydrodynamic drag and the interference between flow over the body surface and flow induced by the propulsive appendages.
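
    Given a metabolic-rate-versus-speed relation, the cost of transport is simply power divided by the product of body weight and speed. The sketch below uses an assumed linear rate relation and an assumed body weight, so its numbers are illustrative; with a linear relation the COT keeps falling with speed, so the minimum sits at the top of the sampled speed range, matching the decreasing trend reported above.

        import numpy as np

        speeds = np.linspace(0.2, 0.44, 25)   # swimming speeds, m/s (sampled range)
        met_rate = 5.0 + 18.0 * speeds        # metabolic power, W (assumed linear fit)
        weight_n = 10.0                       # body weight, N (assumed)

        cot = met_rate / (weight_n * speeds)  # J N^-1 m^-1
        i = int(np.argmin(cot))
        print(f"minimum COT = {cot[i]:.2f} J N^-1 m^-1 at U = {speeds[i]:.2f} m/s")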

  6. Cost-effectiveness analysis of oral anti-viral drugs used for treatment of chronic hepatitis B in Turkey.

    PubMed

    Kockaya, Guvenc; Kose, Akin; Yenilmez, Fatma Betul; Ozdemir, Oktay; Kucuksayrac, Ece

    2015-01-01

    All international guidelines suggest that tenofovir and entecavir are the primary drugs for first-line therapy in the treatment of chronic hepatitis B (CHB). In Turkey, however, these medications are reimbursed only at the second line of therapy according to the Healthcare Implementation Notification. The aim of this study is to compare the cost effectiveness of oral antiviral treatment strategies for CHB in Turkey using lamivudine, telbivudine, entecavir, and tenofovir. The analysis was conducted using Markov models. The analysis scenarios were based on first-line treatment options with lamivudine, telbivudine, entecavir, and tenofovir. In the analysis, inadequate response or resistance after 12 months of treatment with entecavir or telbivudine was modeled as switching from entecavir to tenofovir or from telbivudine to tenofovir; inadequate response or resistance after 6 months of treatment with lamivudine was modeled as switching from lamivudine to tenofovir. The study population included men and women 40 years of age. Patient compliance was assumed to be 100% for all therapy options. The model was constructed to evaluate a treatment strategy duration of up to 40 years. The costs of medications, examinations/follow-ups, and complications were included in the model. Years of potential life lost was used as the health outcome, and an incremental cost-effectiveness ratio analysis was conducted. Over 5 years, tenofovir treatment had the minimum years of life lost (0.22) at a cost of 12,169 TL; the corresponding values were 0.56 years and 7,727 TL for lamivudine and 0.37 years and 12,770 TL for telbivudine. Over 10 years, the maximum years of life lost were with lamivudine treatment (1.60 years, 18,813 TL), followed by lamivudine-tenofovir (0.89 years, 24,007 TL), while tenofovir had the minimum years of life lost (0.54 years) at a cost of 35,821 TL. Over 20 years, tenofovir had the minimum years of life lost and cost (1.21 years, 52,839 TL), and over 30 years tenofovir again had the minimum years of life lost (1.73 years) and minimum cost (84,149 TL). Over 40 years, the years of life lost and costs were 2.06 years and 119,604 TL for tenofovir, 2.13 years and 162,115 TL for entecavir, 2.13 years and 161,642 TL for entecavir-tenofovir, 6.52 years and 147,245 TL for lamivudine, 3.20 years and 132,157 TL for lamivudine-tenofovir, 4.10 years and 151,059 TL for telbivudine, and 3.05 years and 138,182 TL for telbivudine-tenofovir. In the model presented in this study, tenofovir was found to be cost effective in comparison with the other treatment strategies over different time intervals, and it reduced the cumulative 40-year cost of first-line CHB treatment.

  7. The cost of noise reduction in commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.

    1974-01-01

    The relationship between direct operating cost (DOC) and departure noise annoyance was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles were spaced across the spectrum of possible noise levels from completely unconstrained to the quietest vehicle that could be designed within the study ground rules. A group of optimization parameters were varied to find the minimum DOC while other inputs were held constant and some external constraints were met. This basic variation was then extended to different aircraft sizes and technology time frames. It was concluded that reducing noise annoyance by designing for lower rotor tip speeds is a very promising avenue for future research and development. It appears that the cost of halving the annoyance compared to an unconstrained design is insignificant and the cost of halving the annoyance again is small.

  8. Techno-economic assessment of pellets produced from steam pretreated biomass feedstock

    DOE PAGES

    Shahrukh, Hassan; Oyedun, Adetoyese Olajire; Kumar, Amit; ...

    2016-03-10

    Minimum production cost and optimum plant size are determined for pellet plants for three types of biomass feedstock: forest residue, agricultural residue, and energy crops. The life cycle cost from harvesting to the delivery of the pellets to the co-firing facility is evaluated. The cost varies from $95 to $105 t^-1 for regular pellets and $146 to $156 t^-1 for steam pretreated pellets. The difference in the cost of producing regular and steam pretreated pellets per unit energy is in the range of $2-3 GJ^-1. The economic optimum plant size (i.e., the size at which pellet production cost is minimum) is found to be 190 kt for regular pellet production and 250 kt for steam pretreated pellets. Furthermore, sensitivity and uncertainty analyses were carried out to identify sensitive parameters and the effects of model error.

  9. Value Added: The Costs and Benefits of College Preparatory Programs. American Higher Education Report Series

    ERIC Educational Resources Information Center

    Swail, Watson Scott

    2004-01-01

    Rarely do stakeholders ask about the effectiveness of outreach programs or whether they are an efficient use of tax dollars and philanthropic funds. As government budgets continue to be constrained and philanthropic investment gets more competitive, there is a growing acknowledgment of the need to look at the cost/benefit of these programs and…

  10. Targeted Radiation Therapy for Cancer Initiative

    DTIC Science & Technology

    2015-09-01

    costs and without financial incentive to treat patients with multiple fractions, will manage patients differently than a typical civilian practice...constrained by insurance billing practices. In addition, the increase in single fraction treatments represents a more cost effective use of...greater mean decrement in the urinary irritation and sexual domains, and a trend toward a greater mean decrement in the bowel/rectal domain, in

  11. Fuel and vehicle technology choices for passenger vehicles in achieving stringent CO2 targets: connections between transportation and other energy sectors.

    PubMed

    Grahn, M; Azar, C; Williander, M I; Anderson, J E; Mueller, S A; Wallington, T J

    2009-05-01

    The regionalized Global Energy Transition (GET-R 6.0) model has been modified to include a detailed description of light-duty vehicle options and used to investigate the potential impact of carbon capture and storage (CCS) and concentrating solar power (CSP) on cost-effective fuel/vehicle technologies in a carbon-constrained world. Total CO2 emissions were constrained to achieve stabilization at 400-550 ppm by 2100 at lowest total system cost. The dominant fuel/vehicle technologies varied significantly depending on the CO2 constraint, the future cost of vehicle technologies, and the availability of CCS and CSP. For many cases, no one technology dominated on a global scale. CCS provides relatively inexpensive low-CO2 electricity and heat, which prolongs the use of traditional ICEVs. CSP displaces fossil-fuel-derived electricity, prolongs the use of traditional ICEVs, and promotes electrification of passenger vehicles. In all cases considered, CCS and CSP availability had a major impact on the lowest-cost fuel/vehicle technologies, and alternative fuels are needed in response to the expected dwindling oil and natural gas supply potential by the end of the century.

  12. Balancing reliability and cost to choose the best power subsystem

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    A mathematical model is presented for computing total (spacecraft) subsystem cost including both the basic subsystem cost and the expected cost due to the failure of the subsystem. This model is then used to determine power subsystem cost as a function of reliability and redundancy. Minimum cost and maximum reliability and/or redundancy are not generally equivalent. Two example cases are presented. One is a small satellite, and the other is an interplanetary spacecraft.
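
    The model's central trade can be captured in a few lines: each added redundant unit raises the basic subsystem cost but shrinks the probability, and hence the expected cost, of losing the mission. The unit cost, reliability, and failure consequence below are hypothetical.

        unit_cost = 2.0e6          # $ per redundant power string (assumed)
        unit_rel = 0.95            # probability one string survives (assumed)
        failure_cost = 150.0e6     # $ consequence of subsystem failure (assumed)

        def expected_total_cost(n):
            p_fail = (1.0 - unit_rel) ** n     # subsystem fails only if all strings do
            return unit_cost * n + p_fail * failure_cost

        for n in range(1, 5):
            print(f"{n} string(s): expected cost = ${expected_total_cost(n) / 1e6:6.2f} M")
        # the minimum here falls at two strings: more redundancy is not always cheaper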

  13. 42 CFR 447.52 - Minimum and maximum income-related charges.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... agency imposes cost sharing under § 447.54, the process by which hospital emergency room services are... option, cost sharing imposed for any service (other than for drugs and non-emergency services furnished... group under § 447.56(a), and (iii) For cost sharing imposed for non-emergency services furnished in an...

  14. Green Energy in New Construction: Maximize Energy Savings and Minimize Cost

    ERIC Educational Resources Information Center

    Ventresca, Joseph

    2010-01-01

    People often use the term "green energy" to refer to alternative energy technologies. But green energy doesn't guarantee maximum energy savings at a minimum cost--a common misconception. For school business officials, green energy means getting the lowest energy bills for the lowest construction cost, which translates into maximizing green energy…

  15. A minimum income for healthy living

    PubMed Central

    Morris, J; Donkin, A; Wonderling, D; Wilkinson, P; Dowler, E

    2000-01-01

    BACKGROUND—Half a century of research has provided consensual evidence of major personal requisites of adult health in nutrition, physical activity and psychosocial relations. Their minimal money costs, together with those of a home and other basic necessities, indicate disposable income that is now essential for health.
METHODS—In a first application we identified such representative minimal costs for healthy, single, working men aged 18-30, in the UK. Costs were derived from ad hoc survey, relevant figures in the national Family Expenditure Survey, and by pragmatic decision for the few minor items where survey data were not available.
RESULTS—Minimum costs were assessed at £131.86 per week (UK April 1999 prices). Component costs, especially those of housing (which represents around 40% of this total), depend on region and on several assumptions. By varying these a range of totals from £106.47 to £163.86 per week was detailed. These figures compare, 1999, with the new UK national minimum wage, after statutory deductions, of £105.84 at 18-21 years and £121.12 at 22+ years for a 38 hour working week. Corresponding basic social security rates are £40.70-£51.40 per week.
INTERPRETATION—Accumulating science means that absolute standards of living, "poverty", minimal official incomes and the like, can now be assessed by objective measurement of the personal capacity to meet the costs of major requisites of healthy living. A realistic assessment of these costs is presented as an impetus to public discussion. It is a historical role of public health as social medicine to lead in public advocacy of such a national agenda.


Keywords: income; public health; lifestyle; nutrition; housing; exercise; social exclusion; inequalities PMID:11076983

  16. Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?

    PubMed Central

    Newberry, Mitchell G.; Savage, Van M.

    2016-01-01

    Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691
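
    A junction-level material-cost optimum of the kind tested here reduces, for blood volume plus pumping power, to Murray's law: the cubed parent radius equals the sum of the cubed child radii. The radii below are hypothetical, and real junctions, as the study shows, deviate systematically.

        # Murray's law check at a single (hypothetical) bifurcation
        r_parent, r_child1, r_child2 = 1.00, 0.85, 0.72   # radii, arbitrary units

        lhs = r_parent ** 3
        rhs = r_child1 ** 3 + r_child2 ** 3
        print(f"r_p^3 = {lhs:.3f}, r_1^3 + r_2^3 = {rhs:.3f}, "
              f"relative deviation = {abs(lhs - rhs) / lhs:.1%}")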

  17. A generalised optimal linear quadratic tracker with universal applications. Part 2: discrete-time systems

    NASA Astrophysics Data System (ADS)

    Ebrahimzadeh, Faezeh; Tsai, Jason Sheng-Hong; Chung, Min-Ching; Liao, Ying Ting; Guo, Shu-Mei; Shieh, Leang-San; Wang, Li

    2017-01-01

    In contrast to Part 1, Part 2 presents a generalised optimal linear quadratic digital tracker (LQDT) with universal applications for discrete-time (DT) systems. This includes (1) a generalised optimal LQDT design for the system with pre-specified trajectories of the output and the control input, and additionally with both the input-to-output direct-feedthrough term and known/estimated system disturbances or extra input/output signals; (2) a new optimal filter-shaped proportional plus integral state-feedback LQDT design for non-square non-minimum phase DT systems to achieve a minimum-phase-like tracking performance; (3) a new approach for computing the control zeros of given non-square DT systems; and (4) a one-learning-epoch input-constrained iterative learning LQDT design for repetitive DT systems.

  18. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
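
    The exterior penalty idea used here can be shown on a two-variable toy problem: the constrained minimum is approached from outside the feasible region as the penalty weight grows. The objective, constraint, and penalty schedule below are illustrative, not the stiffened-cylinder model.

        import numpy as np
        from scipy.optimize import minimize

        def g(x):                        # constraint in g(x) <= 0 form: x1*x2 >= 4
            return 4.0 - x[0] * x[1]

        def penalized(x, r):             # exterior penalty: zero when feasible
            return x[0] + x[1] + r * max(0.0, g(x)) ** 2

        x = np.array([0.5, 0.5])         # infeasible start
        for r in (1.0, 10.0, 100.0, 1000.0):
            x = minimize(penalized, x, args=(r,), method="Nelder-Mead").x
        print(f"x ~ {np.round(x, 3)} (exact constrained optimum is [2, 2])")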

  19. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists that guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems having only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
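
    For intuition about the QUBO target of such a mapping, a brute-force sketch: minimizing x'Qx over binary vectors, which is what an AQO searches physically. The 4-variable Q matrix is hypothetical, not one produced by the paper's PDECCO mapping.

    ```python
    import itertools
    import numpy as np

    def solve_qubo_brute_force(Q):
        """Minimize x' Q x over binary vectors x by full enumeration.

        Feasible only for tiny instances (2**n assignments); an AQO plays
        this role for the large QUBO problems the mapping produces.
        """
        n = Q.shape[0]
        best_x, best_e = None, np.inf
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits)
            e = x @ Q @ x
            if e < best_e:
                best_x, best_e = x, e
        return best_x, best_e

    # Hypothetical 4-variable QUBO with pairwise coupling penalties.
    Q = np.array([[-1.0, 2.0, 0.0, 0.0],
                  [0.0, -1.0, 2.0, 0.0],
                  [0.0, 0.0, -1.0, 2.0],
                  [0.0, 0.0, 0.0, -1.0]])
    x, e = solve_qubo_brute_force(Q)
    print("minimizer:", x, "energy:", e)
    ```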

  20. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  1. Optimization of Stability Constrained Geometrically Nonlinear Shallow Trusses Using an Arc Length Sparse Method with a Strain Energy Density Approach

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.; Nguyen, Duc T.

    2008-01-01

    A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in the literature are presented to verify the method.
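
    A hedged sketch of the recursive-resizing idea behind the uniform strain-energy-density condition: member areas are scaled by the ratio of member density to the mean until the densities equalize. The actual method drives this update with an arc-length-controlled nonlinear finite element analysis, which is not reproduced here; the constant-force response model (density falling as 1/A²) is an assumption for illustration only.

    ```python
    import numpy as np

    def resize_members(areas, energy_density, damping=0.5):
        """One optimality-criteria-style update toward uniform strain energy density.

        Members whose strain energy density exceeds the mean grow; under-worked
        members shrink. Iterating drives the design toward the uniform-density
        condition associated with minimum weight.
        """
        mean_e = np.average(energy_density, weights=areas)
        return areas * (energy_density / mean_e) ** damping

    # Illustrative three-member truss with assumed member "force constants".
    c = np.array([2.0, 1.0, 0.5])
    areas = np.ones(3)
    for _ in range(30):
        e = c / areas**2          # assumed response: density ~ 1/A^2 at fixed force
        areas = resize_members(areas, e)
    print(areas / areas.min())    # -> approx sqrt(c / c.min()) = [2.0, 1.41, 1.0]
    ```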

  2. Computational strategies in the dynamic simulation of constrained flexible MBS

    NASA Technical Reports Server (NTRS)

    Amirouche, F. M. L.; Xie, M.

    1993-01-01

    This research focuses on the computational dynamics of flexible constrained multibody systems. At first a recursive mapping formulation of the kinematical expressions in a minimum dimension as well as the matrix representation of the equations of motion are presented. The method employs Kane's equation, FEM, and concepts of continuum mechanics. The generalized active forces are extended to include the effects of high temperature conditions, such as creep, thermal stress, and elastic-plastic deformation. The time variant constraint relations for rolling/contact conditions between two flexible bodies are also studied. The constraints for validation of MBS simulation of gear meshing contact using a modified Timoshenko beam theory are also presented. The last part deals with minimization of vibration/deformation of the elastic beam in multibody systems making use of time variant boundary conditions. The above methodologies and computational procedures developed are being implemented in a program called DYAMUS.

  3. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design, in which the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem with a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. In practice [Vanderplaats, 1984], however, these methods are slow, and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points; their applicability to real-world problems is therefore severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules that associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function that expresses the tradeoff between the quality of the solution obtainable from the sub-problem and the time it takes to produce it. The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost, as sketched below. In addition to speeding up the optimization process, our use of learning methods relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems; in real engineering tasks it is usually too costly for engineers to derive such rules. This paper therefore also contributes a further step towards the solution of the knowledge acquisition bottleneck [Feigenbaum, 1977], which has somewhat impaired the construction of rule-based expert systems.
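
    To make the final selection step concrete, a small sketch: specialized sub-problems (here, hypothetical closed-form quadratics restricted to sub-regions of the search space) are solved in parallel and the cheapest optimum is returned. The sub-problem form is invented purely for illustration.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def solve_subproblem(spec):
        """Hypothetical specialized objective: a fast, symbolically
        differentiated 1-D quadratic a*(x-b)**2 + c restricted to [lo, hi]."""
        a, b, c, lo, hi = spec
        x = min(max(b, lo), hi)       # closed-form constrained minimizer
        return a * (x - b) ** 2 + c, x

    # Two specialized sub-problems covering adjacent regions of the design space.
    specs = [(1.0, 2.0, 3.0, 0.0, 1.5), (0.5, 1.0, 2.8, 1.5, 3.0)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(solve_subproblem, specs))
    print(min(results))  # overall solution = cheapest sub-problem optimum
    ```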

  4. Minimum-sized ideal reactor for continuous alcohol fermentation using immobilized microorganism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamane, T.; Shimizu, S.

    Recently, alcohol fermentation has gained considerable attention with the aim of lowering production costs for both fuel ethanol and alcoholic beverages. The over-all cost is a summation of the costs of various subsystems, such as raw material (sugar, starch, and cellulosic substances) treatment, the fermentation process, and alcohol separation from water solutions; lowering the cost of the fermentation process is very important in lowering the total cost. Several new techniques have been developed for economic continuous ethanol production: use of a continuous wine fermentor with no mechanical stirring, cell recycle combined with continuous removal of ethanol under vacuum, a technique involving a bed of yeast admixed with an inert carrier, and use of immobilized yeast reactors in a packed-bed column and in a three-stage double conical fluidized-bed bioreactor. All these techniques lead to increases, more or less, in reactor productivity, which in turn result in a reduction of the reactor size for a given production rate and a particular conversion. Since an improvement in the fermentation process often leads to a reduction of fermentor size and, hence, a lowering of the initial construction cost, it is important to theoretically determine the minimum-size setup of ideal reactors from the viewpoint of liquid backmixing. In this short communication, the minimum-sized ideal reactor for continuous alcohol fermentation using immobilized cells is discussed on the basis of a mathematical model. The solution will serve for designing an optimal bioreactor. (26 references)

  5. The cost of noise reduction for departure and arrival operations of commercial tilt rotor aircraft

    NASA Technical Reports Server (NTRS)

    Faulkner, H. B.; Swan, W. M.

    1976-01-01

    The relationship between direct operating cost (DOC) and noise annoyance due to a departure and an arrival operation was developed for commercial tilt rotor aircraft. This was accomplished by generating a series of tilt rotor aircraft designs to meet various noise goals at minimum DOC. These vehicles ranged across the spectrum of possible noise levels from completely unconstrained to the quietest vehicles that could be designed within the study ground rules. Optimization parameters were varied to find the minimum DOC. This basic variation was then extended to different aircraft sizes and technology time frames.

  6. Confronting Regulatory Cost and Quality Expectations. An Exploration of Technical Change in Minimum Efficiency Performance Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Margaret; Spurlock, C. Anna; Yang, Hung-Chia

    The dual purpose of this project was to contribute to basic knowledge about the interaction between regulation and innovation and to inform the cost and benefit expectations related to technical change which are embedded in the rulemaking process of an important area of national regulation. The area of regulation focused on here is minimum efficiency performance standards (MEPS) for appliances and other energy-using products. Relevant both to U.S. climate policy and energy policy for buildings, MEPS remove certain product models from the market that do not meet specified efficiency thresholds.

  7. The social cost of illegal drug consumption in Spain.

    PubMed

    García-Altés, Anna; Ollé, Josep Ma; Antoñanzas, Fernando; Colom, Joan

    2002-09-01

    The objective of this study was to estimate the social cost of the consumption of illegal drugs in Spain. We performed a cost-of-illness study, using a prevalence approximation and a societal perspective. The estimation of costs and consequences referred to 1997. As direct costs we included health-care costs, prevention, continuing education, research, administrative costs, non-governmental organizations and crime-related costs. As indirect costs we included lost productivity associated with mortality and the hospitalization of patients. Estimation of intangible costs was not included. The minimum cost of illegal drug consumption in Spain is 88,800 million pesetas (PTA) (467 million dollars). Seventy-seven per cent of the costs correspond to direct costs. Of those, crime-related costs represent 18%, while the largest part corresponds to the health-care costs (50% of direct costs). From the perspective of the health-care system, the minimum cost of illegal drug consumption is 44,000 million PTA (231 million dollars). The cost of illegal drug consumption represents 0.07% of the Spanish GDP. This gross figure compares with 2250 million PTA (12.5 million dollars) invested in prevention programmes during the same year, and with 12,300 million PTA (68.3 million dollars) spent on specific programmes and resources for the drug addict population. Although there are limitations intrinsic in this type of study and the estimations obtained in the present analysis are likely to be an underestimate of the real cost of this condition, we estimate that illegal drug consumption costs the Spanish economy at least 0.2% of GDP.

  8. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    PubMed

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

    State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived such that the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.
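
    A minimal sketch of linear minimum-variance fusion at a fusion center, assuming independent local estimation errors (information-weighted combination). The paper's compensation strategy for replay attacks and bandwidth-limited channels is not modeled here; the states and covariances are invented.

    ```python
    import numpy as np

    def fuse_estimates(estimates, covariances):
        """Linear minimum-variance fusion of independent local estimates.

        Information (inverse-covariance) weighting: the fusion center combines
        the local estimates it receives from the sink nodes.
        """
        infos = [np.linalg.inv(P) for P in covariances]
        P_fused = np.linalg.inv(sum(infos))
        x_fused = P_fused @ sum(I @ x for I, x in zip(infos, estimates))
        return x_fused, P_fused

    # Two local sensor estimates of the same 2-D state.
    x1, P1 = np.array([1.0, 0.2]), np.diag([0.5, 1.0])
    x2, P2 = np.array([1.2, 0.0]), np.diag([1.0, 0.4])
    x, P = fuse_estimates([x1, x2], [P1, P2])
    print("fused state:", x)
    ```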

  9. Feature-constrained surface reconstruction approach for point cloud data acquired with 3D laser scanner

    NASA Astrophysics Data System (ADS)

    Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai

    2008-04-01

    Surface reconstruction is an important task in the fields of 3D GIS, computer aided design and computer graphics (CAD & CG), virtual simulation, and so on. Based on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using rules based on curvature extremes and the minimum spanning tree. By projecting local sample points onto the fitted tangent planes and using the extracted features to guide and constrain the process of local triangulation and surface propagation, the topological relationships among sample points can be achieved. For the constructed models, a process named consistent normal adjustment and regularization is adopted to adjust the normal of each face so that the correct surface model is achieved. Experiments show that the presented approach inherits the convenient implementation and high efficiency of the traditional incremental surface reconstruction method while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood can help to recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional interference.

  10. Natural migration rates of trees: Global terrestrial carbon cycle implications. Book chapter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Solomon, A.M.

    The paper discusses the forest-ecological processes which constrain the rate of response by forests to rapid future environmental change. It establishes a minimum response time by natural tree populations which invade alien landscapes and reach the status of a mature, closed canopy forest when maximum carbon storage is realized. It considers rare long-distance and frequent short-distance seed transport, seedling and tree establishment, sequential tree and stand maturation, and spread between newly established colonies.

  11. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
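
    A sketch of the greedy selection loop described above: at each step, add the species whose inclusion minimizes the measured dimension reduction error. The reduction_error oracle below is a hypothetical stand-in for the PaSR-based error evaluation used in the paper.

    ```python
    def greedy_constrained_species(all_species, n_constraints, reduction_error):
        """Greedily build a set of constrained species.

        At each step, add the species whose inclusion yields the smallest
        dimension reduction error, as estimated by the user-supplied
        reduction_error(selected) function.
        """
        selected = []
        for _ in range(n_constraints):
            candidates = [s for s in all_species if s not in selected]
            best = min(candidates, key=lambda s: reduction_error(selected + [s]))
            selected.append(best)
        return selected

    # Hypothetical error oracle: pretend each species has an intrinsic value
    # and the error shrinks as informative species are added.
    value = {"CH4": 5, "O2": 4, "CO2": 3, "H2O": 3, "H": 8, "OH": 7, "CO": 6}
    err = lambda sel: 100.0 / (1 + sum(value[s] for s in sel))
    print(greedy_constrained_species(list(value), 4, err))
    ```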

  12. Constrained evolution in numerical relativity

    NASA Astrophysics Data System (ADS)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  13. Thermochronology, Uplift and Erosion at the Australian-Pacific Plate Boundary Alpine Fault restraining bend, New Zealand

    NASA Astrophysics Data System (ADS)

    Sagar, M. W.; Seward, D.; Norton, K. P.

    2016-12-01

    The 650 km-long Australian-Pacific plate boundary Alpine Fault is remarkably straight at a regional scale, except for a prominent S-shaped bend in the northern South Island. This is a restraining bend and has been referred to as the 'Big Bend' due to similarities with the Transverse Ranges section of the San Andreas Fault. The Alpine Fault is the main source of seismic hazard in the South Island, yet there are no constraints on slip rates at the Big Bend. Furthermore, the timing of Big Bend development is poorly constrained to the Miocene. To address these issues we are using the fission-track (FT) and 40Ar/39Ar thermochronometers, together with basin-averaged cosmogenic nuclide 10Be concentrations to constrain the onset and rate of Neogene-Quaternary exhumation of the Australian and Pacific plates at the Big Bend. Exhumation rates at the Big Bend are expected to be greater than those for adjoining sections of the Alpine Fault due to locally enhanced shortening. Apatite FT ages and modelled thermal histories indicate that exhumation of the Australian Plate had begun by 13 Ma and 3 km of exhumation has occurred since that time, requiring a minimum exhumation rate of 0.2 mm/year. In contrast, on the Pacific Plate, zircon FT cooling ages suggest ≥7 km of exhumation in the past 2-3 Ma, corresponding to a minimum exhumation rate of 2 mm/year. Preliminary assessment of stream channel gradients either side of the Big Bend suggests equilibrium between uplift and erosion. The implication of this is that Quaternary erosion rates estimated from 10Be concentrations will approximate uplift rates. These uplift rates will help to better constrain the dip-slip rate of the Alpine Fault, which will allow the National Seismic Hazard Model to be updated.

  14. Charge redistribution in QM:QM ONIOM model systems: a constrained density functional theory approach

    NASA Astrophysics Data System (ADS)

    Beckett, Daniel; Krukau, Aliaksandr; Raghavachari, Krishnan

    2017-11-01

    The ONIOM hybrid method has found considerable success in QM:QM studies designed to approximate a high level of theory at a significantly reduced cost. This cost reduction is achieved by treating only a small model system with the target level of theory and the rest of the system with a low, inexpensive level of theory. However, the choice of an appropriate model system is a limiting factor in ONIOM calculations, and effects such as charge redistribution across the model system boundary must be considered as a source of error. In an effort to increase the general applicability of the ONIOM model, a method to treat the charge redistribution effect is developed using constrained density functional theory (CDFT) to constrain the charge experienced by the model system in the full calculation to the link atoms in the truncated model system calculations. Two separate CDFT-ONIOM schemes are developed and tested on a set of 20 reactions with eight combinations of levels of theory. It is shown that a scheme using a scaled Lagrange multiplier term obtained from the low-level CDFT model calculation outperforms standard ONIOM at every combination of levels of theory, by margins ranging from 32% to 70%.
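
    For reference, the standard two-layer subtractive ONIOM combination that the scheme above builds on (a well-known formula; the energies below are hypothetical numbers):

    ```python
    def oniom2_energy(e_low_real, e_high_model, e_low_model):
        """Two-layer ONIOM extrapolation:
        E = E_low(real) + E_high(model) - E_low(model).

        The CDFT variant described above would additionally constrain the
        charge on the link atoms inside the two model-system calculations;
        that constraint lives in the individual energy evaluations, not in
        this combination step.
        """
        return e_low_real + e_high_model - e_low_model

    # Illustrative energies in hartree (hypothetical values).
    print(oniom2_energy(-155.032, -40.518, -40.240))  # -> -155.310
    ```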

  15. Forum on energy conservation in buildings : implications for transportation

    DOT National Transportation Integrated Search

    1981-12-01

    In a transportation-dependent society constrained by potential : shortages and the high costs of conventional fuels, as well as by : sometimes conflicting national objectives for increased energy : conservation, improved environmental quality, enhanc...

  16. Adverse Climatic Conditions and Impact on Construction Scheduling and Cost

    DTIC Science & Technology

    1988-01-01

    [Front-matter abbreviation list, partially recovered: ABS MAX MAX TEMP, absolute maximum maximum temperature; ABS MIN MIN TEMP, absolute minimum minimum temperature; BTU; °F, degrees Fahrenheit; MEAN MAX TEMP, mean maximum temperature; MEAN MIN TEMP, mean minimum temperature.] With these temperatures available, a determination had to be made as to whether forecasts were based on absolute, mean, or statistically derived temperatures.

  17. Hospital discharge costs in competitive and regulatory environments.

    PubMed

    Weil, T P

    1996-05-01

    To study the efficacy of America's current market-driven approaches to constraining health care expenditures, an analysis was undertaken of 1993 hospital discharge costs and related data for the 15 states in the United States with the highest percentage of HMO market penetration. To enhance hospital cost containment efforts, it is proposed that a state use both market-driven and regulatory strategies almost simultaneously, similar to what was implemented in California over the last three decades and in Germany for the last 100 years.

  18. Comparative evaluation of existing expendable upper stages for space shuttle

    NASA Technical Reports Server (NTRS)

    Weyers, V. J.; Sagerman, G. D.; Borsody, J.; Lubick, R. J.

    1974-01-01

    The use of existing expendable upper stages in the space shuttle during its early years of operation is evaluated. The Burner 2, Scout, Delta, Agena, Transtage, and Centaur were each studied under contract by their respective manufacturers to determine the extent and cost of the minimum modifications necessary to integrate the stage with the shuttle orbiter. A comparative economic analysis of thirty-five different families of these stages is discussed. Results show that the overall transportation system cost differences between many of the families are quite small. However, by considering several factors in addition to cost, it is possible to select one family as being representative of the capability of the minimum modification existing stage approach. The selected family meets all of the specified mission requirements during the early years of shuttle operation.

  19. A class of solution-invariant transformations of cost functions for minimum cost flow phase unwrapping.

    PubMed

    Hubig, Michael; Suchandt, Steffen; Adam, Nico

    2004-10-01

    Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.
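
    A small sketch of the Costantini-style MCF formulation on a toy grid, using networkx as a convenience solver (an assumption of this sketch; the paper concerns cost-function transformations, not any particular solver). Residues become supplies and demands, and the cheapest discharging flow locates the branch cuts for the chosen cost function.

    ```python
    import networkx as nx

    # Tiny 2x2 grid of residue nodes: one +1 residue must be discharged to one
    # -1 residue; edge weights play the role of the a priori cost function.
    G = nx.DiGraph()
    nodes = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): -1}
    for (i, j), residue in nodes.items():
        G.add_node((i, j), demand=-residue)   # sources have negative demand
    for (i, j) in nodes:
        for (di, dj) in [(0, 1), (1, 0)]:
            a, b = (i, j), (i + di, j + dj)
            if b in nodes:                    # bidirectional arcs, unit costs
                G.add_edge(a, b, weight=1, capacity=4)
                G.add_edge(b, a, weight=1, capacity=4)

    flow = nx.min_cost_flow(G)  # cheapest discharging flow for this cost choice
    for u, targets in flow.items():
        for v, f in targets.items():
            if f > 0:
                print(u, "->", v, "flow", f)
    ```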

  20. Rail vs truck transport of biomass.

    PubMed

    Mahmudi, Hamed; Flynn, Peter C

    2006-01-01

    This study analyzes the economics of transshipping biomass from truck to train in a North American setting. Transshipment will only be economic when the cost per unit distance of a second transportation mode is less than the original mode. There is an optimum number of transshipment terminals which is related to biomass yield. Transshipment incurs incremental fixed costs, and hence there is a minimum shipping distance for rail transport above which lower costs/km offset the incremental fixed costs. For transport by dedicated unit train with an optimum number of terminals, the minimum economic rail shipping distance for straw is 170 km, and for boreal forest harvest residue wood chips is 145 km. The minimum economic shipping distance for straw exceeds the biomass draw distance for economically sized centrally located power plants, and hence the prospects for rail transport are limited to cases in which traffic congestion from truck transport would otherwise preclude project development. Ideally, wood chip transport costs would be lowered by rail transshipment for an economically sized centrally located power plant, but in a specific case in Alberta, Canada, the layout of existing rail lines precludes a centrally located plant supplied by rail, whereas a more versatile road system enables it by truck. Hence for wood chips as well as straw the economic incentive for rail transport to centrally located processing plants is limited. Rail transshipment may still be preferred in cases in which road congestion precludes truck delivery, for example as result of community objections.

  1. Multi-Objective Differential Evolution for Voltage Security Constrained Optimal Power Flow in Deregulated Power Systems

    NASA Astrophysics Data System (ADS)

    Roselyn, J. Preetha; Devaraj, D.; Dash, Subhransu Sekhar

    2013-11-01

    Voltage stability is an important issue in the planning and operation of deregulated power systems. The voltage stability problem is a most challenging one for system operators in deregulated power systems because of the intense use of transmission line capabilities and poor regulation in the market environment. This article addresses the congestion management problem, avoiding offline transmission capacity limits related to voltage stability, by considering the Voltage Security Constrained Optimal Power Flow (VSCOPF) problem in a deregulated environment, and presents the application of the Multi-Objective Differential Evolution (MODE) algorithm to solve the VSCOPF problem in new competitive power systems. The maximum L-index of the load buses is taken as the indicator of voltage stability and is incorporated in the Optimal Power Flow (OPF) problem. The proposed method, applied in a hybrid power market, also addresses voltage stability by considering the generation rescheduling cost and the load shedding cost, which relieves congestion in the deregulated environment. The buses for load shedding are selected based on the minimum eigenvalue of the Jacobian with respect to the load shed. In the proposed approach, the real power settings of generators in the base case and contingency cases, generator bus voltage magnitudes, and the real and reactive power demands of load buses selected using sensitivity analysis are taken as the control variables and are represented as a combination of floating point numbers and integers. The DE/randSF/1/bin strategy of differential evolution with self-tuned parameters, which employs binomial crossover and difference-vector-based mutation, is used for the VSCOPF problem (a basic sketch of this variation scheme appears below). A fuzzy based mechanism is employed to obtain the best compromise solution from the Pareto front to aid the decision maker. The proposed VSCOPF planning model is implemented on the IEEE 30-bus system, the IEEE 57-bus practical system, and the IEEE 118-bus system. The Pareto optimal front obtained from MODE is compared with a reference Pareto front, and the best compromise solution for all cases is obtained from the fuzzy decision making strategy. The performance of the proposed MODE on two test systems is evaluated using suitable performance metrics. The simulation results show that the proposed approach provides considerable improvement in congestion management through generation rescheduling and load shedding while enhancing voltage stability in the deregulated power system.
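
    A minimal sketch of the basic DE/rand/1/bin variation step (difference-vector mutation plus binomial crossover). The paper's self-tuned DE/randSF/1/bin parameter scheme and the multi-objective Pareto machinery are not reproduced; the population values and parameters below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def de_rand_1_bin(pop, F=0.8, CR=0.9):
        """One DE/rand/1/bin generation step: mutation v = x_r1 + F*(x_r2 - x_r3)
        followed by binomial crossover. Selection against the objective
        function is left to the caller."""
        n_pop, dim = pop.shape
        trials = np.empty_like(pop)
        for i in range(n_pop):
            r1, r2, r3 = rng.choice([j for j in range(n_pop) if j != i], 3,
                                    replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True  # guarantee at least one mutant gene
            trials[i] = np.where(cross, mutant, pop[i])
        return trials

    pop = rng.uniform(-1, 1, size=(10, 4))  # e.g. encoded generator settings
    print(de_rand_1_bin(pop)[0])
    ```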

  2. Minimization of transmission cost in decentralized control systems

    NASA Technical Reports Server (NTRS)

    Wang, S.-H.; Davison, E. J.

    1978-01-01

    This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.

  3. 48 CFR 16.405-1 - Cost-plus-incentive-fee contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... provides for the initially negotiated fee to be adjusted later by a formula based on the relationship of... minimum fee that may be a zero fee or, in rare cases, a negative fee. (c) Limitations. No cost-plus...

  4. Minimum cost strategies for sequestering carbon in forests.

    Treesearch

    Darius M. Adams; Ralph J. Alig; Bruce A. McCarl; John M. Callaway; Steven M. Winnett

    1999-01-01

    This paper examines the costs of meeting explicit targets for increments of carbon sequestered in forests when both forest management decisions and the area of forests can be varied. Costs are estimated as welfare losses in markets for forest and agricultural products. Results show greatest change in management actions when targets require large near-term flux...

  5. The integrated business information system: using automation to monitor cost-effectiveness of park operations

    Treesearch

    Dick Stanley; Bruce Jackson

    1995-01-01

    The cost-effectiveness of park operations is often neglected because information is laborious to compile. The information, however, is critical if we are to derive maximum benefit from scarce resources. This paper describes an automated system for calculating cost-effectiveness ratios with minimum effort using data from existing data bases.

  6. Produce yellow-poplar furniture dimension at minimum cost by using YELLOPOP

    Treesearch

    David G. Marten

    1986-01-01

    Describes a computer program called YELLOPOP that determines the least-cost combination of lumber grades required to produce a given cutting order of furniture dimension parts. If the least-cost mix is not available, YELLOPOP can be used to determine the next best alternative. The steps involved in using the program are also described.
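
    The least-cost grade-mix problem is, in essence, a small linear program; a hedged sketch with scipy follows. The grade prices, part yields, and order size are invented numbers, and a real YELLOPOP run would add availability limits and grade-mix constraints.

    ```python
    from scipy.optimize import linprog

    # Hypothetical data: price and usable dimension-part yield per board foot
    # for three lumber grades, plus the parts required by a cutting order.
    cost = [1.10, 0.80, 0.55]            # $ per board foot
    yield_per_bf = [0.65, 0.45, 0.30]    # usable part yield per board foot
    parts_needed = 500.0                 # board feet of parts in the order

    # Minimize purchase cost subject to total usable yield covering the order.
    res = linprog(c=cost,
                  A_ub=[[-y for y in yield_per_bf]],  # -(yield . x) <= -needed
                  b_ub=[-parts_needed],
                  bounds=[(0, None)] * 3)
    print("board feet per grade:", res.x, "total cost:", res.fun)
    ```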

  7. Automatic aneurysm neck detection using surface Voronoi diagrams.

    PubMed

    Cárdenes, Rubén; Pozo, José María; Bogunovic, Hrvoje; Larrabide, Ignacio; Frangi, Alejandro F

    2011-10-01

    A new automatic approach for saccular intracranial aneurysm isolation is proposed in this work. Due to the inter- and intra-observer variability in manual delineation of the aneurysm neck, a definition based on a minimum cost path around the aneurysm sac is proposed that copes with this variability and is able to make consistent measurements across different data sets, as well as to automate and speed up the analysis of cerebral aneurysms. The method is based on the computation of a minimal path along a scalar field obtained on the vessel surface, to find the aneurysm neck in a robust and fast manner. The computation of the scalar field on the surface is obtained using a fast marching approach with a speed function based on the exponential of the distance from the centerline bifurcation between the aneurysm dome and the parent vessels. In order to assure a correct topology of the aneurysm sac, the neck computation is constrained to a region defined by a surface Voronoi diagram obtained from the branches of the vessel centerline. We validate this method by comparing our results in 26 real cases with manual aneurysm isolation obtained using a cut-plane, and also with results obtained using manual delineations from three different observers by comparing typical morphological measures. © 2011 IEEE
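
    A sketch of the minimal-path step on a weighted surface graph using Dijkstra's algorithm. In the method above, edge costs would come from the fast-marching scalar field (the exponential of the distance from the centerline bifurcation); here they are just given numbers on a toy graph.

    ```python
    import heapq

    def minimal_path(adj, start, goal):
        """Dijkstra shortest path on a weighted graph.

        adj maps vertex -> [(neighbor, cost)]. On a mesh, the cheapest closed
        loop around the sac built from such paths would delineate the neck.
        """
        dist, prev = {start: 0.0}, {}
        heap = [(0.0, start)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == goal:
                break
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj[u]:
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path = [goal]
        while path[-1] != start:
            path.append(prev[path[-1]])
        return path[::-1], dist[goal]

    adj = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.5), (3, 5.0)],
           2: [(3, 1.0)], 3: []}
    print(minimal_path(adj, 0, 3))  # -> ([0, 1, 2, 3], 3.5)
    ```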

  8. A Hybrid Metaheuristic DE/CS Algorithm for UCAV Three-Dimension Path Planning

    PubMed Central

    Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen

    2012-01-01

    Three-dimension path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centers on optimizing the flight route considering different kinds of constraints under complicated battlefield environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during the process of cuckoo updating in nest. The cuckoos act as agents searching for the optimal UCAV path. The UCAV can then find a safe path by connecting the chosen coordinate nodes while avoiding the threat areas and consuming minimum fuel. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic approach DE/CS is also presented. In order to make the optimized UCAV path more feasible, the B-Spline curve is adopted for smoothing the path. To prove the performance of this proposed hybrid metaheuristic method, it is compared with the basic CS algorithm. The experiment shows that the proposed approach is more effective and feasible in UCAV three-dimension path planning than the basic CS model. PMID:23193383
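
    One building block of the hybrid is the Lévy-flight move of cuckoo search; a sketch using Mantegna's algorithm follows. The step scale and nest values are illustrative, and the DE-based selection of cuckoos described above is not shown.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    rng = np.random.default_rng(7)

    def levy_step(dim, beta=1.5):
        """Mantegna's algorithm for a Levy-distributed step, the heavy-tailed
        move a cuckoo uses to explore new nests."""
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, dim)
        v = rng.normal(0, 1, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_update(nests, best, alpha=0.01):
        """Move each nest by a Levy flight scaled by its distance to the best
        nest; in the hybrid, DE would then select which cuckoos replace eggs."""
        return nests + alpha * levy_step(nests.shape[1]) * (nests - best)

    nests = rng.uniform(0, 1, (5, 3))  # e.g. waypoint coordinates of candidate paths
    print(cuckoo_update(nests, nests[0])[1])
    ```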

  9. Health, globalization and developing countries.

    PubMed

    Cilingiroglu, Nesrin

    2005-02-01

    In health care today, scientific and technological frontiers are expanding at unprecedented rates, even as economic and financial pressures shrink profit margins, intensify competition, and constrain the funds available for investment. The world today thus offers people more economic and social opportunities than 10 or 100 years ago, since globalization has created a new ground characterized by rapid economic transformation, deregulation of national markets by new trade regimes, remarkable transport and electronic communication possibilities, and a high turnover of foreign investment and capital flow as well as skilled labor. These trends can easily mask great inequalities in developing countries, such as the importation and spread of infectious and non-communicable diseases; the miniaturization and movement of medical technology; health-sector trade managed by economics without consideration of its social and health effects; increasing health inequalities and the economic and social burden they create; and multinational companies' promotion of cheap labor, widening income differentials; among others. As a matter of fact, all these factors are major determinants of ill health. Health authorities of developing countries have to strengthen their regulatory frameworks in order to ensure that national health systems derive maximum benefit in terms of equity, quality and efficiency, while reducing to a minimum the potential social costs generated by the risky side of globalization.

  10. Support to the Safe Motherhood Programme in Nepal: an integrated approach.

    PubMed

    Barker, Carol E; Bird, Cherry E; Pradhan, Ajit; Shakya, Ganga

    2007-11-01

    Evidence gathered from 1997 to 2006 indicates progress in reducing maternal mortality in Nepal, but public health services are still constrained by resource and staff shortages, especially in rural areas. The five-year Support to the Safe Motherhood Programme builds on the experience of the Nepal Safer Motherhood Project (1997-2004). It is working with the Government of Nepal to build capacity to institute a minimum package of essential maternity services, linking evidence-based policy development with health system strengthening. It has supported long-term planning, working towards skilled attendance at every birth, safe blood supplies, staff training, building management capacity, improving monitoring systems and use of process indicators, promoting dialogue between women and providers on quality of care, and increasing equity and access at district level. An incentives scheme finances transport costs to a health facility for all pregnant women and incentives to health workers attending deliveries, with free services and subsidies to facilities in the poorest 25 districts. Despite bureaucracy, frequent transfer of key government staff and political instability, there has been progress in policy development, and public health sector expenditure has increased. For the future, a human resources strategy with career paths that encourage skilled staff to stay in the government service is key.

  11. A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning.

    PubMed

    Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen

    2012-01-01

    Three-dimension path planning for uninhabited combat air vehicle (UCAV) is a complicated high-dimension optimization problem, which primarily centralizes on optimizing the flight route considering the different kinds of constrains under complicated battle field environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimension path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during the process of cuckoo updating in nest. The cuckoos can act as an agent in searching the optimal UCAV path. And then, the UCAV can find the safe path by connecting the chosen nodes of the coordinates while avoiding the threat areas and costing minimum fuel. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic approach DE/CS is also presented. In order to make the optimized UCAV path more feasible, the B-Spline curve is adopted for smoothing the path. To prove the performance of this proposed hybrid metaheuristic method, it is compared with basic CS algorithm. The experiment shows that the proposed approach is more effective and feasible in UCAV three-dimension path planning than the basic CS model.

  12. MODOPTIM: A general optimization program for ground-water flow model calibration and ground-water management with MODFLOW

    USGS Publications Warehouse

    Halford, Keith J.

    2006-01-01

    MODOPTIM is a non-linear ground-water model calibration and management tool that simulates flow with MODFLOW-96 as a subroutine. A weighted sum-of-squares objective function defines optimal solutions for calibration and management problems. Water levels, discharges, water quality, subsidence, and pumping-lift costs are the five direct observation types that can be compared in MODOPTIM. Differences between direct observations of the same type can be compared to fit temporal changes and spatial gradients. Water levels in pumping wells, wellbore storage in the observation wells, and rotational translation of observation wells also can be compared. Negative and positive residuals can be weighted unequally so inequality constraints such as maximum chloride concentrations or minimum water levels can be incorporated in the objective function. Optimization parameters are defined with zones and parameter-weight matrices. Parameter change is estimated iteratively with a quasi-Newton algorithm and is constrained to a user-defined maximum parameter change per iteration. Parameters that are less sensitive than a user-defined threshold are not estimated. MODOPTIM facilitates testing more conceptual models by expediting calibration of each conceptual model. Examples of applying MODOPTIM to aquifer-test analysis, ground-water management, and parameter estimation problems are presented.
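
    A sketch of the asymmetric residual weighting that lets a MODOPTIM-style objective encode inequality constraints such as a maximum chloride concentration or a minimum water level. The weights and concentration values below are invented for illustration.

    ```python
    import numpy as np

    def weighted_sse(simulated, observed, w_neg, w_pos):
        """Weighted sum-of-squares with unequal weights by residual sign.

        Penalizing positive residuals far more than negative ones (or vice
        versa) turns an observation into a soft inequality constraint within
        an otherwise ordinary least-squares objective.
        """
        r = np.asarray(simulated) - np.asarray(observed)
        w = np.where(r >= 0, w_pos, w_neg)
        return np.sum(w * r**2)

    # Simulated chloride vs. a 250 mg/L ceiling: exceedances cost 100x more.
    sim = np.array([180.0, 240.0, 265.0])
    ceiling = np.full(3, 250.0)
    print(weighted_sse(sim, ceiling, w_neg=0.01, w_pos=1.0))  # -> 275.0
    ```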

  13. News update.

    PubMed

    2012-11-28

    The RCN has raised concerns that many healthcare assistants are paid less than the ethical minimum. While the current UK-wide minimum wage is £6.19 per hour, the Living Wage Foundation - which campaigns for minimum wages that reflect the cost of living - claims the minimum wage should be £8.30 per hour in London and £7.20 per hour in other parts of the UK. Commenting during the recent Living Wage Week, RCN general secretary Peter Carter said: 'This affects college members, particularly healthcare assistants in the private sector, who are often paid less than the living wage. Some struggle to make ends meet, forcing them to claim benefits.'

  14. Optimum Repair Level Analysis (ORLA) for the Space Transportation System (STS)

    NASA Technical Reports Server (NTRS)

    Henry, W. R.

    1979-01-01

    A repair level analysis method applied to a space shuttle scenario is presented. The method determines the most cost-effective level of repair for reparable hardware and the location for that repair, defining a system that accrues minimum total support costs within operational and technical constraints over the system design life. The method includes cost equations for comparison of selected costs to completion for assumed repair alternates.

  15. Cost analysis in support of minimum energy standards for clothes washers and dryers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-02-02

    The results of the cost analysis of energy conservation design options for laundry products are presented. The analysis was conducted using two approaches. The first is directed toward the development of industrial engineering cost estimates of each energy conservation option; this approach results in the estimation of manufacturers' costs. The second approach is directed toward determining the market price differential of energy conservation features; the results of this approach are also presented. The market cost represents the cost to the consumer: it is the final cost, and therefore includes distribution costs as well as manufacturing costs.

  16. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
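
    A sketch of the dual update described above: given a primal_risk(λ) obtained by solving the dualized DP for a fixed multiplier, root-find the multiplier at which the failure probability meets the bound Δ. Bisection is used here as a simple stand-in for the paper's exponentially convergent root-finder, and the risk curve is a hypothetical placeholder for an actual DP solve.

    ```python
    import numpy as np

    def solve_dual(primal_risk, delta, lam_lo=0.0, lam_hi=1e6, tol=1e-6):
        """Root-find the dual variable of a joint chance constraint.

        primal_risk(lam) is the failure probability of the policy returned by
        an unconstrained DP whose stage cost adds lam * (violation indicator);
        it is non-increasing in lam, so bisection brackets the multiplier
        where risk == delta.
        """
        while lam_hi - lam_lo > tol * (1 + lam_hi):
            lam = 0.5 * (lam_lo + lam_hi)
            if primal_risk(lam) > delta:
                lam_lo = lam          # too risky: penalize violations more
            else:
                lam_hi = lam
        return lam_hi

    # Hypothetical stand-in risk curve for illustration.
    risk = lambda lam: np.exp(-0.05 * lam)
    print(solve_dual(risk, delta=0.01))  # -> about 92.1
    ```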

  17. Energy Requirements of Hydrogen-utilizing Microbes: A Boundary Condition for Subsurface Life

    NASA Technical Reports Server (NTRS)

    Hoehler, Tori M.; Alperin, Marc J.; Albert, Daniel B.; Martens, Christopher S.

    2003-01-01

    Microbial ecosystems based on the energy supplied by water-rock chemistry carry particular significance in the context of geo- and astrobiology. With no direct dependence on solar energy, lithotrophic microbes could conceivably penetrate a planetary crust to a depth limited only by temperature or pressure constraints (several kilometers or more). The deep lithospheric habitat is thereby potentially much greater in volume than its surface counterpart, and in addition offers a stable refuge against inhospitable surface conditions related to climatic or atmospheric evolution (e.g., Mars) or even high-energy impacts (e.g., early in Earth's history). The possibilities for a deep microbial biosphere are, however, greatly constrained by life's need to obtain energy at a certain minimum rate (the maintenance energy requirement) and of a certain minimum magnitude (the energy quantum requirement). The mere existence of these requirements implies that a significant fraction of the chemical free energy available in the subsurface environment cannot be exploited by life. Similar limits may also apply to the usefulness of light energy at very low intensities or long wavelengths. Quantification of these minimum energy requirements in terrestrial microbial ecosystems will help to establish a criterion of energetic habitability that can significantly constrain the prospects for life in Earth's subsurface, or on other bodies in the solar system. Our early work has focused on quantifying the biological energy quantum requirement for methanogenic archaea, as representatives of a plausible subsurface metabolism, in anoxic sediments (where energy availability is among the most limiting factors in microbial population growth). In both field and laboratory experiments utilizing these sediments, methanogens retain a remarkably consistent free energy intake, in the face of fluctuating environmental conditions that affect energy availability. The energy yields apparently required by methanogens in these sediment systems for sustained metabolism are about half that previously thought necessary. Lowered energy requirements would imply that a correspondingly greater proportion of the planetary subsurface could represent viable habitat for microorganisms.

  18. In the right place at the right time: habitat representation in protected areas of South American Nothofagus-dominated plants after a dispersal constrained climate change scenario.

    PubMed

    Alarcón, Diego; Cavieres, Lohengrin A

    2015-01-01

    In order to assess the effects of climate change in temperate rainforest plants in southern South America in terms of habitat size, representation in protected areas, considering also if the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration on 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase up to 66% the species representation level. Besides, 97% of the evaluated species may achieve the minimum representation target only if the proposed priority areas were included. With the climate change scenario representation levels slightly increase to 53%, 69%, and 99%, respectively, to the categories previously mentioned. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact in terms of the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to accomplish the present and future minimum representation. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions.

  19. In the Right Place at the Right Time: Habitat Representation in Protected Areas of South American Nothofagus-Dominated Plants after a Dispersal Constrained Climate Change Scenario

    PubMed Central

    Alarcón, Diego; Cavieres, Lohengrin A.

    2015-01-01

    In order to assess the effects of climate change in temperate rainforest plants in southern South America in terms of habitat size, representation in protected areas, considering also if the expected impacts are similar for dominant trees and understory plant species, we used niche modeling constrained by species migration on 118 plant species, considering two groups of dominant trees and two groups of understory ferns. Representation in protected areas included Chilean national protected areas, private protected areas, and priority areas planned for future reserves, with two thresholds for minimum representation at the country level: 10% and 17%. With a 10% representation threshold, national protected areas currently represent only 50% of the assessed species. Private reserves are important since they increase up to 66% the species representation level. Besides, 97% of the evaluated species may achieve the minimum representation target only if the proposed priority areas were included. With the climate change scenario representation levels slightly increase to 53%, 69%, and 99%, respectively, to the categories previously mentioned. Thus, the current location of all the representation categories is useful for overcoming climate change by 2050. Climate change impacts on habitat size and representation of dominant trees in protected areas are not applicable to understory plants, highlighting the importance of assessing these effects with a larger number of species. Although climate change will modify the habitat size of plant species in South American temperate rainforests, it will have no significant impact in terms of the number of species adequately represented in Chile, where the implementation of the proposed reserves is vital to accomplish the present and future minimum representation. Our results also show the importance of using migration dispersal constraints to develop more realistic future habitat maps from climate change predictions. PMID:25786226

  20. The cost-effectiveness of multi-purpose HIV and pregnancy prevention technologies in South Africa.

    PubMed

    Quaife, Matthew; Terris-Prestholt, Fern; Eakle, Robyn; Cabrera Escobar, Maria A; Kilbourne-Brook, Maggie; Mvundura, Mercy; Meyer-Rath, Gesine; Delany-Moretlwe, Sinead; Vickerman, Peter

    2018-03-01

    A number of antiretroviral HIV prevention products are efficacious in preventing HIV infection. However, the sexual and reproductive health needs of many women extend beyond HIV prevention, and research is ongoing to develop multi-purpose prevention technologies (MPTs) that offer dual HIV and pregnancy protection. We do not yet know if these products will be an efficient use of constrained health resources. In this paper, we estimate the cost-effectiveness of combinations of candidate MPTs in South Africa among general-population women and female sex workers (FSWs). We combined a cost model with a static model of product impact based on incidence data in South Africa to estimate the cost-effectiveness of five candidate co-formulated or co-provided MPTs: oral PrEP, intravaginal ring, injectable ARV, microbicide gel and SILCS diaphragm used in concert with gel. We accounted for the preferences of end-users by predicting uptake using a discrete choice experiment (DCE). Product availability and protection were systematically varied in five potential rollout scenarios. The impact model estimated the number of infections averted through decreased incidence due to product use over one year. The comparator for each scenario was current levels of male condom use, while a health system perspective was used to estimate discounted lifetime treatment costs averted per HIV infection. Product benefit was estimated in disability-adjusted life years (DALYs) averted. Benefits from contraception were incorporated through adjusting the uptake of these products based on the DCE and through estimating the costs averted from avoiding unwanted pregnancies. We explore the additional impact of STI protection through increased uptake in a sensitivity analysis. At central incidence rates, all single- and multi-purpose scenarios modelled were cost-effective among FSWs and women aged 16-24, at a governmental willingness-to-pay threshold of $1175/DALY averted (range: $214-$810/DALY averted among non-dominant scenarios); however, none were cost-effective among women aged 25-49 (minimum $1706/DALY averted). The cost-effectiveness of products improved with additional protection from pregnancy. Estimates were sensitive to variation in incidence assumptions, but robust to other parameters. To the best of our knowledge, this is the first study to estimate the cost-effectiveness of a range of potential MPTs, suggesting that MPTs will be cost-effective among higher incidence FSWs or young women, but not among lower incidence older women. More work is needed to make attractive MPTs available to potential users who could use them effectively. © 2018 The Authors. Journal of the International AIDS Society published by John Wiley & Sons Ltd on behalf of the International AIDS Society.

  1. Call Admission Control on Single Node Networks under Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) Scheduler

    NASA Astrophysics Data System (ADS)

    Hanada, Masaki; Nakazato, Hidenori; Watanabe, Hitoshi

    Multimedia applications such as music or video streaming, video teleconferencing and IP telephony are flourishing in packet-switched networks. Applications that generate such real-time data can have very diverse quality-of-service (QoS) requirements. In order to guarantee diverse QoS requirements, the combined use of a packet scheduling algorithm based on Generalized Processor Sharing (GPS) and a leaky-bucket traffic regulator is the most successful QoS mechanism. GPS can provide a minimum guaranteed service rate for each session and tight delay bounds for leaky-bucket constrained sessions. However, the delay bounds for leaky-bucket constrained sessions under GPS are unnecessarily large because each session is served according to its associated constant weight until the session buffer is empty. In order to solve this problem, a scheduling policy called Output Rate-Controlled Generalized Processor Sharing (ORC-GPS) was proposed in [17]. ORC-GPS is rate-based scheduling, like GPS, but controls the service rate so as to lower the delay bounds for leaky-bucket constrained sessions. In this paper, we propose a call admission control (CAC) algorithm for ORC-GPS for leaky-bucket constrained sessions with deterministic delay requirements. This CAC algorithm determines the optimal values of the ORC-GPS parameters from the deterministic delay requirements of the sessions. In numerical experiments, we compare the CAC algorithm for ORC-GPS with that for GPS in terms of schedulable region and computational complexity.
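
    The tight delay bounds mentioned above come from classic GPS analysis: for a leaky-bucket (sigma, rho) constrained session with guaranteed rate g >= rho, the worst-case single-node delay in the fluid model is at most sigma/g. The sketch below encodes that textbook bound, not the paper's ORC-GPS admission test:

    ```python
    # Textbook single-node GPS delay bound for a leaky-bucket (sigma, rho)
    # constrained session: if the guaranteed rate g satisfies g >= rho,
    # worst-case delay <= sigma / g (fluid model, packetization ignored).

    def gps_delay_bound(sigma_bits, rho_bps, g_bps):
        if g_bps < rho_bps:
            raise ValueError("guaranteed rate must be at least the token rate")
        return sigma_bits / g_bps  # seconds

    # Example: 100 kbit burst, 1 Mbps token rate, 2 Mbps guaranteed rate.
    print(gps_delay_bound(100e3, 1e6, 2e6))  # 0.05 s
    ```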

  2. Long-term performance of minimum-input oak restoration plantings

    Treesearch

    Elizabeth Bernhardt; Tedmund J. Swiecki

    2015-01-01

    Starting in 1989, we used minimum-input methods to restore native oaks to parts of their former ranges in Vacaville, California. Each restoration site was analyzed, and only those inputs deemed necessary to overcome expected limiting factors for oak establishment were used. We avoided unnecessary inputs that added to cost and could have unintended negative consequences...

  3. Automated chest-radiography as a triage for Xpert testing in resource-constrained settings: a prospective study of diagnostic accuracy and costs

    NASA Astrophysics Data System (ADS)

    Philipsen, R. H. H. M.; Sánchez, C. I.; Maduskar, P.; Melendez, J.; Peters-Bax, L.; Peter, J. G.; Dawson, R.; Theron, G.; Dheda, K.; van Ginneken, B.

    2015-07-01

    Molecular tests hold great potential for tuberculosis (TB) diagnosis, but they are costly and time-consuming, and HIV-infected patients are often sputum-scarce. Therefore, alternative approaches are needed. We evaluated automated digital chest radiography (ACR) as a rapid and cheap pre-screen test prior to Xpert MTB/RIF (Xpert). 388 suspected TB subjects underwent chest radiography, Xpert and sputum culture testing. Radiographs were analysed by computer software (CAD4TB) and specialist readers, and abnormality scores were allocated. A triage algorithm was simulated in which subjects with a score above a threshold underwent Xpert. We computed sensitivity, specificity, cost per screened subject (CSS), cost per notified TB case (CNTBC) and throughput for different diagnostic thresholds. 18.3% of subjects had culture-positive TB. For Xpert alone, sensitivity was 78.9%, specificity 98.1%, CSS $13.09 and CNTBC $90.70. In a pre-screening setting where 40% of subjects would undergo Xpert, CSS decreased to $6.72 and CNTBC to $54.34, with eight TB cases missed and throughput increased from 45 to 113 patients/day. Specialists, on average, read 57% of radiographs as abnormal, reducing CSS ($8.95) and CNTBC ($64.84). ACR pre-screening could substantially reduce costs and increase daily throughput with few TB cases missed. These data inform public health policy in resource-constrained settings.
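
    The CSS and CNTBC figures follow from simple cost accounting over the triage threshold. A hedged sketch with hypothetical unit costs (the study's numbers come from its own micro-costing) is:

    ```python
    # Illustrative cost accounting for a CAD-score triage in front of
    # Xpert, with hypothetical unit costs; the study's figures come from
    # its own micro-costing, not from these numbers.

    def triage_costs(n_subjects, frac_referred, sens, prevalence,
                     cost_cxr=2.0, cost_xpert=12.0):
        n_xpert = n_subjects * frac_referred          # referred above threshold
        total = n_subjects * cost_cxr + n_xpert * cost_xpert
        cases_found = n_subjects * prevalence * sens  # notified TB cases
        return total / n_subjects, total / cases_found  # CSS, CNTBC

    css, cntbc = triage_costs(388, 0.40, 0.75, 0.183)
    print(f"CSS ${css:.2f}, CNTBC ${cntbc:.2f}")
    ```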

  4. Stochastic evolutionary dynamics in minimum-effort coordination games

    NASA Astrophysics Data System (ADS)

    Li, Kun; Cong, Rui; Wang, Long

    2016-08-01

    The minimum-effort coordination game has recently drawn more attention because human behavior in this social dilemma is often inconsistent with the predictions of classical game theory. Here, we combine evolutionary game theory and coalescence theory to investigate this game in finite populations. Both analytic results and individual-based simulations show that effort costs play a key role in the evolution of contribution levels, in good agreement with experimental observations. Besides well-mixed populations, set-structured populations have also been taken into consideration. There we find that a large number of sets and a moderate migration rate greatly promote effort levels, especially for high effort costs.
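
    For readers unfamiliar with the game, the standard minimum-effort payoff gives each player a benefit proportional to the lowest effort in the group minus a cost proportional to their own effort; the parameter values below are generic, not the paper's:

    ```python
    # Standard minimum-effort coordination game payoff:
    # each player earns a * min(all efforts) - c * (own effort).
    # Parameters a (benefit) and c (effort cost) are generic choices.

    def payoff(efforts, i, a=1.0, c=0.5):
        return a * min(efforts) - c * efforts[i]

    efforts = [3, 5, 4]
    print([round(payoff(efforts, i), 2) for i in range(len(efforts))])
    # Any common effort level is a Nash equilibrium when c < a;
    # a higher effort cost c pushes play toward low-effort equilibria.
    ```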

  5. 3D CSEM data inversion using Newton and Halley class methods

    NASA Astrophysics Data System (ADS)

    Amaya, M.; Hansen, K. R.; Morten, J. P.

    2016-05-01

    For the first time in 3D controlled-source electromagnetic data inversion, we explore the use of the Newton and Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives of the residuals with respect to the model parameters up to second order. We show how Green's functions determine the high-order derivatives, and develop a diagrammatic representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion, so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and super-Halley schemes is similar or slightly superior to that of the GN scheme close to the minimum of the cost function. Given current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with further improvement of geophysical data acquisition, become an argument for more accurate higher-order methods like those applied in this paper.
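
    The extra ingredient in a Newton scheme relative to Gauss-Newton is the second-order residual term in the Hessian. A toy least-squares example (the residual functions are invented for illustration) contrasts the two steps:

    ```python
    import numpy as np

    # Toy comparison of Gauss-Newton vs full Newton steps for
    # f(m) = 0.5 * sum_i r_i(m)^2, illustrating the second-order
    # residual term that Newton/Halley schemes retain and GN drops.

    def residuals(m):
        return np.array([m[0] ** 2 - 1.0, m[0] * m[1] - 2.0])

    def jacobian(m):
        return np.array([[2 * m[0], 0.0], [m[1], m[0]]])

    def hessians(m):  # second derivatives of each residual
        return np.array([[[2.0, 0.0], [0.0, 0.0]],
                         [[0.0, 1.0], [1.0, 0.0]]])

    m = np.array([2.0, 0.5])
    r, J = residuals(m), jacobian(m)
    g = J.T @ r
    H_gn = J.T @ J                                      # Gauss-Newton Hessian
    H_newton = H_gn + np.tensordot(r, hessians(m), 1)   # + sum_i r_i * Hess(r_i)
    print(np.linalg.solve(H_gn, -g), np.linalg.solve(H_newton, -g))
    ```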

  6. Optimizing Teleportation Cost in Distributed Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

    This work presents a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because technology limitations do not yet allow large quantum computers to operate as a single processing element, distributed quantum computation is an appropriate way to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered, and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can serve as a basic measure of communication cost for future work on distributed quantum circuits.

  7. A cost-function approach to rival penalized competitive learning (RPCL).

    PubMed

    Ma, Jinwen; Wang, Taijun

    2006-08-01

    Rival penalized competitive learning (RPCL) has been shown to be a useful tool for clustering on a set of sample data in which the number of clusters is unknown. However, the RPCL algorithm was proposed heuristically and still lacks a mathematical theory describing its convergence behavior. In order to solve the convergence problem, we investigate it via a cost-function approach. By theoretical analysis, we prove that a general form of RPCL, called distance-sensitive RPCL (DSRPCL), is associated with the minimization of a cost function on the weight vectors of a competitive learning network. As a DSRPCL process decreases the cost to a local minimum, a number of weight vectors eventually fall into a hypersphere surrounding the sample data, while the other weight vectors diverge to infinity. Moreover, it is shown by theoretical analysis and simulation experiments that if the cost reduces to the global minimum, a correct number of weight vectors is automatically selected and located around the centers of the actual clusters. Finally, we apply the DSRPCL algorithms to unsupervised color image segmentation and classification of the wine data.
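
    A minimal sketch of the heuristic RPCL update the abstract builds on, attracting the winning unit and de-learning the rival (the learning rates and data are invented; DSRPCL generalizes this with a distance-sensitive cost), is:

    ```python
    import numpy as np

    # Minimal RPCL-style update (heuristic form; the paper's DSRPCL is a
    # distance-sensitive generalization derived from a cost function).
    def rpcl_step(weights, x, lr_win=0.05, lr_rival=0.002):
        d = np.linalg.norm(weights - x, axis=1)
        win, rival = np.argsort(d)[:2]
        weights[win] += lr_win * (x - weights[win])        # attract the winner
        weights[rival] -= lr_rival * (x - weights[rival])  # penalize the rival
        return weights

    rng = np.random.default_rng(0)
    W = rng.normal(size=(5, 2))          # more units than true clusters
    data = np.vstack([rng.normal(loc=m, scale=0.1, size=(200, 2))
                      for m in ([0, 0], [3, 3])])
    for x in data:
        W = rpcl_step(W, x)
    # Extra units are driven away; roughly two units settle on the clusters.
    ```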

  8. MM Algorithms for Geometric and Signomial Programming

    PubMed Central

    Lange, Kenneth; Zhou, Hua

    2013-01-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates. PMID:24634545
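
    A toy instance of the majorize-minimize idea described above: the AM-GM inequality xy <= (x_k y_k / 2)((x/x_k)^2 + (y/y_k)^2) separates a coupled posynomial term so each variable can be updated in closed form. The example function is ours, not the paper's:

    ```python
    # Toy MM iteration for the posynomial f(x, y) = x*y + 1/x + 1/y
    # over x, y > 0 (global minimum f = 3 at x = y = 1). The AM-GM
    # surrogate majorizes the coupled term x*y and separates variables.

    def mm_iterate(x, y, iters=30):
        for _ in range(iters):
            # Separated surrogate minimized in closed form per variable:
            # d/dx [ (y_k/(2 x_k)) x^2 + 1/x ] = 0  =>  x = (x_k/y_k)^(1/3)
            x, y = (x / y) ** (1 / 3), (y / x) ** (1 / 3)
        return x, y

    x, y = mm_iterate(2.0, 0.5)
    print(x, y, x * y + 1 / x + 1 / y)  # approaches (1, 1) and f = 3
    ```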

  9. A MATLAB implementation of the minimum relative entropy method for linear inverse problems

    NASA Astrophysics Data System (ADS)

    Neupauer, Roseanna M.; Borchers, Brian

    2001-08-01

    The minimum relative entropy (MRE) method can be used to solve linear inverse problems of the form Gm = d, where m is a vector of unknown model parameters and d is a vector of measured data. The MRE method treats the elements of m as random variables, and obtains a multivariate probability density function for m. The probability density function is constrained by prior information about the upper and lower bounds of m, a prior expected value of m, and the measured data. The solution of the inverse problem is the expected value of m, based on the derived probability density function. We present a MATLAB implementation of the MRE method. Several numerical issues arise in the implementation of the MRE method and are discussed here. We present the source history reconstruction problem from groundwater hydrology as an example of the MRE implementation.
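
    Full MRE derives a posterior density for m; as a loose illustration of entropy-style regularization for Gm = d, the sketch below (in Python rather than MATLAB, and a point estimate only) penalizes relative entropy to the prior expected model:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Loose sketch of an entropy-regularized linear inversion in the
    # spirit of MRE: fit G m = d while penalizing relative entropy to a
    # prior expected model s. Exact MRE derives a full posterior density;
    # this is only a regularized point estimate on invented data.

    rng = np.random.default_rng(1)
    G = rng.uniform(size=(20, 10))
    m_true = rng.uniform(0.5, 1.5, size=10)
    d = G @ m_true + rng.normal(scale=0.01, size=20)
    s = np.ones(10)  # prior expected value of m

    def objective(m, lam=1e-2):
        misfit = np.sum((G @ m - d) ** 2)
        entropy = np.sum(m * np.log(m / s) - m + s)
        return misfit + lam * entropy

    res = minimize(objective, x0=s, bounds=[(1e-6, 10)] * 10)
    print(np.round(res.x, 2))  # should approximate m_true
    ```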

  10. MM Algorithms for Geometric and Signomial Programming.

    PubMed

    Lange, Kenneth; Zhou, Hua

    2014-02-01

    This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the MM algorithm derived can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.

  11. The cultural moral right to a basic minimum of accessible health care.

    PubMed

    Menzel, Paul T

    2011-03-01

    (1) The conception of a cultural moral right is useful in capturing the social-moral realities that underlie debate about universal health care. In asserting such rights, individuals make claims above and beyond their legal rights, but those claims are based on the society's existing commitments and moral culture. In the United States such a right to accessible basic health care is generated by various empirical social facts, primarily the conjunction of the legal requirement of access to emergency care with widely held principles about unfair free riding and just sharing of costs between well and ill. The right can get expressed in social policy through either single-payer or mandated insurance. (2) The same elements that generate this right provide modest assistance in determining its content, the structure and scope of a basic minimum of care. They justify limits on patient cost sharing, require comparative effectiveness, and make cost considerations relevant. They shed light on the status of expensive, marginally life extending, last-chance therapies, as well as life support for PVS patients. They are of less assistance in settling contentious debates about screening for breast and prostate cancer and treatments for infertility and erectile dysfunction, but even there they establish a useful framework for discussion. Scarcity of resources need not be a leading conceptual consideration in discerning a basic minimum. More important are the societal elements that generate the cultural moral right to a basic minimum.

  12. The oncology pharmacy in cancer care delivery in a resource-constrained setting in western Kenya.

    PubMed

    Strother, R Matthew; Rao, Kamakshi V; Gregory, Kelly M; Jakait, Beatrice; Busakhala, Naftali; Schellhase, Ellen; Pastakia, Sonak; Krzyzanowska, Monika; Loehrer, Patrick J

    2012-12-01

    The movement to deliver cancer care in resource-limited settings is gaining momentum, with particular emphasis on the creation of cost-effective, rational algorithms utilizing affordable chemotherapeutics to treat curable disease. The delivery of cancer care in resource-replete settings is a concerted effort by a team of multidisciplinary care providers. The oncology pharmacy, which is now considered integral to cancer care in resourced medical practice, developed over the last several decades in an effort to limit healthcare provider exposure to workplace hazards and to limit risk to patients. In developing cancer care services in resource-constrained settings, creation of oncology pharmacies can help to both mitigate the risks to practitioners and patients, and also limit the costs of cancer care and the environmental impact of chemotherapeutics. This article describes the experience and lessons learned in establishing a chemotherapy pharmacy in western Kenya.

  13. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.

  14. 45 CFR 2553.44 - May cost reimbursements received by a RSVP volunteer be subject to any tax or charge, treated as...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... May cost reimbursements received by a RSVP... benefit payments or minimum wage laws. Cost reimbursements are not subject to garnishment, do not reduce... receive assistance from other programs? (Section 2553.44, Public Welfare.)

  15. A cost-sharing formula for online circulation and a union catalog through a regional, multitype library cooperative.

    PubMed Central

    Arcari, R D

    1987-01-01

    The experience of the Capitol Region Library Council and the University of Connecticut Health Center in developing a cost allocation formula for a circulation and online catalog shared by twenty-nine libraries is reviewed. The resulting formula identifies a basic unit cost as a minimum for each system participant. PMID:3676536

  16. Cost evaluation of cellulase enzyme for industrial-scale cellulosic ethanol production based on rigorous Aspen Plus modeling.

    PubMed

    Liu, Gang; Zhang, Jian; Bao, Jie

    2016-01-01

    Reducing the cost of cellulase enzyme usage has been the central effort in the commercialization of fuel ethanol production from lignocellulose biomass. Establishing an accurate method for evaluating cellulase enzyme cost is therefore crucially important to support the healthy development of the future biorefinery industry. Current cellulase cost evaluation methods are complicated, and various controversial or even conflicting results have been presented. To give a reliable evaluation of this important topic, a rigorous analysis based on Aspen Plus flowsheet simulation of a commercial-scale ethanol plant is proposed in this study. The minimum ethanol selling price (MESP) was used as the indicator to show the impacts of varying enzyme supply modes, enzyme prices, process parameters, and enzyme loading on the enzyme cost. The results reveal that the enzyme cost drives the cellulosic ethanol price below the minimum profit point when the enzyme is purchased on the current industrial enzyme market. Innovative approaches to cellulase production, such as on-site enzyme production, should be explored and tested at industrial scale to yield an economically sound enzyme supply for future cellulosic ethanol production.

  17. Economics of liquid hydrogen from water electrolysis

    NASA Technical Reports Server (NTRS)

    Lin, F. N.; Moore, W. I.; Walker, S. W.

    1985-01-01

    An economic model for preliminary analysis of LH2 cost from water electrolysis is presented. The model is based on data from vendors and the open literature, and is suitable for computer analysis of different scenarios for 'directional' purposes. Cost data associated with a production rate of 10,886 kg/day are presented. With minimum modification, the model can also be used to predict LH2 cost from any electrolyzer once the electrolyzer's cost data are available.
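
    For a sense of scale, the dominant electricity term in such a model is simple arithmetic; the energy intensities below are typical literature values, not the paper's data:

    ```python
    # Back-of-the-envelope electricity cost for liquid hydrogen, in the
    # "directional" spirit of such models; the energy intensities are
    # typical literature values, not the paper's data.

    ELECTROLYSIS_KWH_PER_KG = 53.0   # ~75% efficient vs ~39.4 kWh/kg ideal
    LIQUEFACTION_KWH_PER_KG = 11.0   # conventional liquefier

    def lh2_electricity_cost(price_per_kwh):
        return (ELECTROLYSIS_KWH_PER_KG + LIQUEFACTION_KWH_PER_KG) * price_per_kwh

    print(f"${lh2_electricity_cost(0.05):.2f}/kg at $0.05/kWh")  # $3.20/kg
    ```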

  18. Earth Observatory Satellite system definition study. Report no. 4: Management approach recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A management approach for the Earth Observatory Satellite (EOS) which will meet the challenge of a constrained cost environment is presented. Areas of consideration are contracting techniques, test philosophy, reliability and quality assurance requirements, commonality options, and documentation and control requirements. The various functional areas which were examined for cost reduction possibilities are identified. The recommended management approach is developed to show the primary and alternative methods.

  19. Discrete Fluctuations in Memory Erasure without Energy Cost

    NASA Astrophysics Data System (ADS)

    Croucher, Toshio; Bedkihal, Salil; Vaccaro, Joan A.

    2017-02-01

    According to Landauer's principle, erasing one bit of information incurs a minimum energy cost. Recently, Vaccaro and Barnett (VB) explored information erasure within the context of generalized Gibbs ensembles and demonstrated that for energy-degenerate spin reservoirs the cost of erasure can be solely in terms of a minimum amount of spin angular momentum and no energy. As opposed to the Landauer case, the cost of erasure in this case is associated with an intrinsically discrete degree of freedom. Here we study the discrete fluctuations in this cost and the probability of violation of the VB bound. We also obtain a Jarzynski-like equality for the VB erasure protocol. We find that the fluctuations below the VB bound are exponentially suppressed at a far greater rate and more tightly than for an equivalent Jarzynski expression for VB erasure. We expose a trade-off between the size of the fluctuations and the cost of erasure. We find that the discrete nature of the fluctuations is pronounced in the regime where reservoir spins are maximally polarized. We also state the first laws of thermodynamics corresponding to the conservation of spin angular momentum for this particular erasure protocol. Our work will be important for novel heat engines based on information erasure schemes that do not incur an energy cost.
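
    For comparison, the standard Landauer bound that the VB protocol sidesteps is easy to evaluate:

    ```python
    import math

    # Landauer's bound for erasing one bit: E >= k_B * T * ln 2.
    # The VB protocol discussed above replaces this energy cost with a
    # spin-angular-momentum cost; this figure is only the standard
    # energy bound for reference.

    k_B = 1.380649e-23  # J/K (exact, SI)
    T = 300.0           # room temperature, K
    print(k_B * T * math.log(2))  # ~2.87e-21 J per bit
    ```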

  20. Optimal cost for strengthening or destroying a given network

    NASA Astrophysics Data System (ADS)

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_c, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.

  1. Optimal cost for strengthening or destroying a given network.

    PubMed

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_{c}, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.
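
    A sketch of the cost accounting underlying both versions of this record: given a degree sequence and a cost function c(k), the price of a strategy is the summed cost of the treated nodes. The greedy ordering below is only illustrative; the paper derives the actual optimal priority list:

    ```python
    import numpy as np

    # Illustrative total-cost calculation for degree-targeted treatment:
    # given a degree sequence and a cost function c(k), treat a fraction
    # of nodes in a chosen degree order and sum the price. The greedy
    # order here is a sketch, not the paper's derived optimal priorities.

    def total_cost(degrees, c, frac, order="cheapest-first"):
        costs = np.asarray([c(k) for k in degrees], dtype=float)
        idx = np.argsort(costs if order == "cheapest-first" else -costs)
        n_treat = int(frac * len(degrees))
        return costs[idx[:n_treat]].sum()

    rng = np.random.default_rng(2)
    degrees = rng.poisson(4, size=1000)          # ER-like degree sequence
    print(total_cost(degrees, lambda k: k ** 1.5, frac=0.3))
    ```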

  2. 25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...

  3. 25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...

  4. 25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...

  5. 25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...

  6. 25 CFR 47.9 - What are the minimum requirements for the local educational financial plan?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... EDUCATION UNIFORM DIRECT FUNDING AND SUPPORT FOR BUREAU-OPERATED SCHOOLS § 47.9 What are the minimum..., including each program funded through the Indian School Equalization Program; (2) A budget showing the costs...) Certification by the chairman of the school board that the plan has been ratified in an action of record by the...

  7. Community air monitoring and the Village Green Project

    EPA Science Inventory

    Abstract: Cost and logistics are practical issues that have historically constrained the number of locations where long-term, active air pollution measurement is possible. In addition, traditional air monitoring approaches are generally conducted by technical experts with limite...

  8. Community air monitoring and the Village Green Project

    EPA Science Inventory

    Cost and logistics are practical issues that have historically constrained the number of locations where long-term, active air pollution measurement is possible. In addition, traditional air monitoring approaches are generally conducted by technical experts with limited engageme...

  9. Pneumatic Conveying of Seed Cotton: Minimum Velocity and Pressure Drop

    USDA-ARS?s Scientific Manuscript database

    Electricity is a major cost for cotton gins, representing approximately 20% of the industry’s variable costs. Fans used for pneumatic conveying consume the majority of electricity at cotton gins. Development of control systems to reduce the air velocity used for conveying seed cotton could significant...

  10. Pneumatic Conveying of Seed Cotton: Minimum Velocity and Pressure Drop

    USDA-ARS?s Scientific Manuscript database

    Electricity is a major cost for cotton gins, representing approximately 20% of variable costs. Fans used for pneumatic conveying consume the majority of electricity at cotton gins. Development of control systems to reduce the air velocity used for conveying seed cotton could significantly decrease e...

  11. Wing Configuration Impact on Design Optimums for a Subsonic Passenger Transport

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.

    2014-01-01

    This study compared four aircraft wing configurations at a conceptual level using a multi-disciplinary optimization (MDO) process. The MDO framework, created by Georgia Institute of Technology and Virginia Polytechnic Institute and State University, provides a multi-disciplinary design and optimization environment that can capture the unique features of the truss-braced wing (TBW) configuration. The four wing configurations selected for the study were a low wing cantilever installation, a high wing cantilever, a strut-braced wing, and a single-jury TBW. The mission used for this study was a 160-passenger transport aircraft with a design range of 2,875 nautical miles at the design payload, flown at a cruise Mach number of 0.78. This paper includes discussion and optimization results for multiple design objectives. Five design objectives were chosen to illustrate the impact of the selected objective on the optimization result: minimum takeoff gross weight (TOGW), minimum operating empty weight, minimum block fuel weight, maximum start-of-cruise lift-to-drag ratio, and minimum start-of-cruise drag coefficient. The results show that the design objective selected will impact the characteristics of the optimized aircraft. Although minimum life cycle cost was not one of the objectives, TOGW is often used as a proxy for life cycle cost. The low wing cantilever had the lowest TOGW, followed by the strut-braced wing.

  12. An Authentication and Key Management Mechanism for Resource Constrained Devices in IEEE 802.11-based IoT Access Networks.

    PubMed

    Kim, Ki-Wook; Han, Youn-Hee; Min, Sung-Gi

    2017-09-21

    Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource constrained IoT devices using IEEE 802.11ah has not been proposed as yet. We therefore propose a new AKM mechanism for an IoT access network, which is based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing IoT devices to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces computation costs, network costs, and memory usage of the resource-constrained IoT device as compared to the existing IEEE 802.11 Key Management with the IEEE 802.1X authentication mechanism.

  13. An Authentication and Key Management Mechanism for Resource Constrained Devices in IEEE 802.11-based IoT Access Networks

    PubMed Central

    Han, Youn-Hee; Min, Sung-Gi

    2017-01-01

    Many Internet of Things (IoT) services utilize an IoT access network to connect small devices with remote servers. They can share an access network with standard communication technology, such as IEEE 802.11ah. However, an authentication and key management (AKM) mechanism for resource constrained IoT devices using IEEE 802.11ah has not been proposed as yet. We therefore propose a new AKM mechanism for an IoT access network, which is based on IEEE 802.11 key management with the IEEE 802.1X authentication mechanism. The proposed AKM mechanism does not require any pre-configured security information between the access network domain and the IoT service domain. It considers the resource constraints of IoT devices, allowing IoT devices to delegate the burden of AKM processes to a powerful agent. The agent has sufficient power to support various authentication methods for the access point, and it performs cryptographic functions for the IoT devices. Performance analysis shows that the proposed mechanism greatly reduces computation costs, network costs, and memory usage of the resource-constrained IoT device as compared to the existing IEEE 802.11 Key Management with the IEEE 802.1X authentication mechanism. PMID:28934152

  14. A methodology based on reduced complexity algorithm for system applications using microprocessors

    NASA Technical Reports Server (NTRS)

    Yan, T. Y.; Yao, K.

    1988-01-01

    The paper considers a methodology for the analysis and design of a minimum mean-square-error criterion linear system incorporating a tapped delay line (TDL) in which all the full-precision multiplications in the TDL are constrained to be powers of two. A linear equalizer for a dispersive, additive-noise channel is presented. This microprocessor implementation with optimized power-of-two TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud.
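
    A hedged sketch of the coefficient constraint itself: rounding each tap weight to the nearest signed power of two (one simple rule, not the paper's optimization) turns every multiply into a bit shift:

    ```python
    import numpy as np

    # Constraining tapped-delay-line coefficients to signed powers of
    # two, so each multiply becomes a bit shift on a small processor.
    # Rounding in the log domain is one simple choice, not the paper's
    # optimized design.

    def to_power_of_two(w):
        sign = np.sign(w)
        mag = np.abs(w)
        exp = np.round(np.log2(np.where(mag > 0, mag, 1e-12)))
        q = sign * 2.0 ** exp
        return np.where(mag > 0, q, 0.0)

    w = np.array([0.7, -0.33, 0.12, -0.05])
    print(to_power_of_two(w))  # [ 0.5  -0.25  0.125 -0.0625]
    ```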

  15. Low authority-threshold control for large flexible structures

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. C.; Inman, D. J.; Juang, J.-N.

    1988-01-01

    An improved active control strategy for the vibration control of large flexible structures is presented. A minimum force, low authority-threshold controller is developed to bring a system with or without known external disturbances back into an 'allowable' state manifold over a finite time interval. The concept of a constrained, or allowable feedback form of the controller is introduced that reflects practical hardware implementation concerns. The robustness properties of the control strategy are then assessed. Finally, examples are presented which highlight the key points made within the paper.

  16. Detection of Ionospheric Alfven Resonator Signatures in the Equatorial Ionosphere

    NASA Technical Reports Server (NTRS)

    Simoes, Fernando; Klenzing, Jeffrey; Ivanov, Stoyan; Pfaff, Robert; Freudenreich, Henry; Bilitza, Dieter; Rowland, Douglas; Bromund, Kenneth; Liebrecht, Maria Carmen; Martin, Steven

    2012-01-01

    The ionospheric response resulting from the minimum solar activity during cycle 23/24 was unusual and offered unique opportunities for investigating space weather in the near-Earth environment. We report ultra-low-frequency electric field signatures related to the ionospheric Alfven resonator detected by the Communications/Navigation Outage Forecasting System (C/NOFS) satellite in the equatorial region. These signatures are used to constrain ionospheric empirical models and offer a new approach for monitoring ionosphere dynamics and space weather phenomena, namely aeronomy processes, Alfven wave propagation, and troposphere-ionosphere-magnetosphere coupling mechanisms.

  17. Ground Vibration Testing Options for Space Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Patterson, Alan; Smith, Robert K.; Goggin, David; Newsom, Jerry

    2011-01-01

    New NASA launch vehicles will require development of robust systems in a fiscally-constrained environment. NASA, Department of Defense (DoD), and commercial space companies routinely conduct ground vibration tests as an essential part of math model validation and launch vehicle certification. Although ground vibration testing must be a part of the integrated test planning process, more affordable approaches must also be considered. A study evaluated several ground vibration test options for the NASA Constellation Program flight test vehicles, Orion-1 and Orion-2, and concluded that more affordable ground vibration test options are available. The motivation for ground vibration testing is supported by historical examples from NASA and DoD. The study surveyed ground vibration test subject-matter experts, whose responses were used to qualitatively rank six test options. Twenty-five experts from NASA, DoD, and industry provided scoring and comments. The study determined that both element-level modal tests and integrated vehicle modal tests have technical merit: both have been successful in validating structural dynamic math models of launch vehicles. However, element-level testing has less overall cost and schedule risk than integrated vehicle testing. Future NASA launch vehicle development programs should anticipate that some structural dynamics testing will be necessary; analysis alone will be inadequate to certify a crew-capable launch vehicle. At a minimum, component and element structural dynamic tests are recommended for new vehicle elements. Three viable structural dynamic test options were identified. Modal testing of the new vehicle elements and an integrated vehicle test on the mobile launcher provided the optimal trade among technical, cost, and schedule considerations.

  18. High-Resolution Biogeochemical Simulation Identifies Practical Opportunities for Bioenergy Landscape Intensification Across Diverse US Agricultural Regions

    NASA Astrophysics Data System (ADS)

    Field, J.; Adler, P. R.; Evans, S.; Paustian, K.; Marx, E.; Easter, M.

    2015-12-01

    The sustainability of biofuel expansion is strongly dependent on the environmental footprint of feedstock production, including both direct impacts within feedstock-producing areas and potential leakage effects due to disruption of existing food, feed, or fiber production. Assessing and minimizing these impacts requires novel methods compared to traditional supply chain lifecycle assessment. When properly validated and applied at appropriate spatial resolutions, biogeochemical process models are useful for simulating how the productivity and soil greenhouse gas fluxes of cultivating both conventional crops and advanced feedstock crops respond across gradients of land quality and management intensity. In this work we use the DayCent model to assess the biogeochemical impacts of agricultural residue collection, establishment of perennial grasses on marginal cropland or conservation easements, and intensification of existing cropping at high spatial resolution across several real-world case study landscapes in diverse US agricultural regions. We integrate the resulting estimates of productivity, soil carbon changes, and nitrous oxide emissions with crop production budgets and lifecycle inventories, and perform a basic optimization to generate landscape cost/GHG frontiers and determine the most practical opportunities for low-impact feedstock provisioning. The optimization is constrained to assess the minimum combined impacts of residue collection, land use change, and intensification of existing agriculture necessary for the landscape to supply a commercial-scale biorefinery while maintaining existing food, feed, and fiber production levels. These techniques can be used to assess how different feedstock provisioning strategies perform on both economic and environmental criteria, and the sensitivity of performance to environmental and land use factors. The included figure shows an example feedstock cost-GHG mitigation tradeoff frontier for a commercial-scale cellulosic biofuel facility in Kansas.
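
    The cost/GHG frontier mentioned above is a standard Pareto construction; a minimal dominance filter over invented (cost, GHG) candidates looks like:

    ```python
    # Minimal Pareto-frontier filter over candidate provisioning
    # portfolios scored by (feedstock cost, GHG footprint); the
    # candidate list is invented, only the dominance logic is generic.

    def pareto_frontier(options):
        """Keep options not dominated in both cost and GHG."""
        frontier = []
        for cost, ghg in sorted(options):            # sweep by increasing cost
            if not frontier or ghg < frontier[-1][1]:
                frontier.append((cost, ghg))
        return frontier

    options = [(55, 40), (60, 22), (70, 25), (80, 10), (65, 18)]
    print(pareto_frontier(options))  # [(55, 40), (60, 22), (65, 18), (80, 10)]
    ```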

  19. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as these four stand-alone surrogate models was compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. GA was then used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  20. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as these four stand-alone surrogate models was compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. GA was then used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.

  1. Reducing the complexity of NASA's space communications infrastructure

    NASA Technical Reports Server (NTRS)

    Miller, Raymond E.; Liu, Hong; Song, Junehwa

    1995-01-01

    This report describes the range of activities performed during the annual reporting period in support of the NASA Code O Success Team - Lifecycle Effectiveness for Strategic Success (COST LESS) team. The overall goal of the COST LESS team is to redefine success in a constrained fiscal environment and reduce the cost of success for end-to-end mission operations. This goal is more encompassing than the original proposal made to NASA for reducing complexity of NASA's Space Communications Infrastructure. The COST LESS team approach for reengineering the space operations infrastructure has a focus on reversing the trend of engineering special solutions to similar problems.

  2. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
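
    The bias/variance bookkeeping described above is straightforward to reproduce: over repeated reconstructions of the same truth under random noise, image MSE decomposes into squared bias plus variance. The synthetic reconstructions below are invented:

    ```python
    import numpy as np

    # Pixel-wise bias/variance decomposition of image MSE over repeated
    # reconstructions of the same test image under random measurement
    # noise, as in the study's evaluation; the data here are synthetic.

    def mse_decomposition(reconstructions, truth):
        recs = np.asarray(reconstructions)      # shape: (n_repeats, n_pixels)
        bias = recs.mean(axis=0) - truth
        var = recs.var(axis=0)
        return (bias ** 2).mean(), var.mean()   # mean sq. bias, mean variance

    rng = np.random.default_rng(3)
    truth = np.ones(64)
    recs = truth + 0.1 + rng.normal(scale=0.05, size=(100, 64))  # biased, noisy
    b2, v = mse_decomposition(recs, truth)
    print(b2, v, b2 + v)  # total image MSE ~ bias^2 + variance
    ```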

  3. Trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Mease, Kenneth D.; Vanburen, Mark A.

    1989-01-01

    The first step in the approach to developing guidance laws for a horizontal take-off, air breathing single-stage-to-orbit vehicle is to characterize the minimum-fuel ascent trajectories. The capability to generate constrained, minimum-fuel ascent trajectories for a single-stage-to-orbit vehicle was developed. A key component of this capability is the general-purpose trajectory optimization program OTIS. The pre-production version, OTIS 0.96, was installed and run on a Convex C-1. A propulsion model was developed covering the entire flight envelope of a single-stage-to-orbit vehicle. Three separate propulsion modes, corresponding to an afterburning turbojet, a ramjet and a scramjet, are used in the air breathing propulsion phase. The Generic Hypersonic Aerodynamic Model Example aerodynamic model of a hypersonic air breathing single-stage-to-orbit vehicle was obtained and implemented. Preliminary results pertaining to the effects of variations in acceleration constraints, available thrust level and fuel specific impulse on the shape of the minimum-fuel ascent trajectories were obtained. The results show that, if the air breathing engines are sized for acceleration to orbital velocity, it is the acceleration constraint rather than the dynamic pressure constraint that is active during ascent.

  4. Some extemporaneous comments on our experiences with towers for wind generators

    NASA Technical Reports Server (NTRS)

    Hutter, U.

    1973-01-01

    A wind generator tower must be designed to withstand fatigue forces and gust wind loads. Optimum tower height depends on the energy cost to the customer, because an increase in height results in an increase in the cost of the plant. It is suggested that costs are minimized with the shortest possible tower and that the rotor should be as large as possible.

  5. When Benefits Are Difficult to Measure.

    ERIC Educational Resources Information Center

    Birdsall, William C.

    1987-01-01

    It is difficult to apply benefit cost analysis to human service programs. This paper explains "threshold benefit analysis," the derivation of the minimum dollar value which the benefits must attain in order for their value to equal the intervention costs. The method is applied to a mobility training program. (BS)

  6. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  7. 43 CFR 9239.1-3 - Measure of damages.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... MANAGEMENT, DEPARTMENT OF THE INTERIOR TECHNICAL SERVICES (9000) TRESPASS Kinds of Trespass § 9239.1-3... prevail, the following minimum damages apply to trespass of timber and other vegetative resources: (1) Administrative costs incurred by the United States as a consequence of the trespass. (2) Costs associated with...

  8. 43 CFR 9239.1-3 - Measure of damages.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... MANAGEMENT, DEPARTMENT OF THE INTERIOR TECHNICAL SERVICES (9000) TRESPASS Kinds of Trespass § 9239.1-3... prevail, the following minimum damages apply to trespass of timber and other vegetative resources: (1) Administrative costs incurred by the United States as a consequence of the trespass. (2) Costs associated with...

  9. 43 CFR 9239.1-3 - Measure of damages.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... MANAGEMENT, DEPARTMENT OF THE INTERIOR TECHNICAL SERVICES (9000) TRESPASS Kinds of Trespass § 9239.1-3... prevail, the following minimum damages apply to trespass of timber and other vegetative resources: (1) Administrative costs incurred by the United States as a consequence of the trespass. (2) Costs associated with...

  10. 43 CFR 9239.1-3 - Measure of damages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... MANAGEMENT, DEPARTMENT OF THE INTERIOR TECHNICAL SERVICES (9000) TRESPASS Kinds of Trespass § 9239.1-3... prevail, the following minimum damages apply to trespass of timber and other vegetative resources: (1) Administrative costs incurred by the United States as a consequence of the trespass. (2) Costs associated with...

  11. Prescriptions vary in ponderosa regeneration

    Treesearch

    Dale O. Hall

    1969-01-01

    Nonproducing acres and unproductive years are both costly to timberland owners. These costs are reduced by restoring timber stocking with minimum delay. The proper prescription for regeneration can insure fast restocking. But the silviculturist prescribes slash disposal, site preparation, seeding, and planting only after he has carefully examined the site environment...

  12. 20 CFR 632.253 - Special operating provisions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... NATIVE AMERICAN EMPLOYMENT AND TRAINING PROGRAMS Summer Youth Employment and Training Programs § 632.253... assistance from the summer program, and youth who remain in school but are likely to be confronted with... provided in the summer program at no cost, or at minimum cost, to the summer program; (d) Assure that...

  13. 45 CFR 156.215 - Advance payments of the premium tax credit and cost-sharing reduction standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... cost-sharing reduction standards. 156.215 Section 156.215 Public Welfare Department of Health and Human Services REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification...

  14. 45 CFR 156.215 - Advance payments of the premium tax credit and cost-sharing reduction standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... cost-sharing reduction standards. 156.215 Section 156.215 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES REQUIREMENTS RELATING TO HEALTH CARE ACCESS HEALTH INSURANCE ISSUER STANDARDS UNDER THE AFFORDABLE CARE ACT, INCLUDING STANDARDS RELATED TO EXCHANGES Qualified Health Plan Minimum Certification...

  15. Some Options for a Minimum Solar Probe Mission

    NASA Technical Reports Server (NTRS)

    Randolph, J. E.; Tsurutani, B. T.; Turner, P. R.; Miyake, R. M.; Ayon, J. A.

    1996-01-01

    Smaller and lower-cost options for NASA's Solar Probe mission have recently been studied. The difference between these options and the results of earlier studies is dramatic. The motivation for low cost has encouraged the JPL design team to accommodate a smaller scientific payload using innovative multi-functional subsystems.

  16. The Costs and Benefits of Increasing the Minimum Service Requirement for NROTC Graduates

    DTIC Science & Technology

    2008-12-01

    3. Cost Analysis ......................................................................................14 B. WOMEN AND MINORITIES IN THE NAVY...16 1. Motivations for Women and Minorities ..........................................16 2. The Present and...Another adverse effect of the MSR extension might be to change the propensity for women and minorities to accept NROTC scholarships. The history

  17. On the Cost of Engineering Education.

    ERIC Educational Resources Information Center

    Black, Guy

    This study examines how the cost of engineering education changes with the size and characteristics of programs, tries to establish whether there is some minimum scale at which engineering education becomes financially viable, and considers how financial viability is affected by program characteristics. Chapter I presents…

  18. Discovery Planetary Mission Operations Concepts

    NASA Technical Reports Server (NTRS)

    Coffin, R.

    1994-01-01

    The NASA Discovery Program of small planetary missions will provide opportunities to continue scientific exploration of the solar system in today's cost-constrained environment. Using a multidisciplinary team, JPL has developed plans to provide mission operations within the financial parameters established by the Discovery Program. This paper describes experiences and methods that show promise of allowing the Discovery missions to operate within the program cost constraints while maintaining low mission risk, high data quality, and responsive operations.

  19. C-5M Fuel Efficiency Through MFOQA Data Analysis

    DTIC Science & Technology

    2015-03-26

    [Reference-list excerpt:] ...deterioration of commercial high-bypass ratio turbofan engines (No. 801118). SAE Technical Paper. Mirtich, J. M. (2011). Cost index flying. (Unpublished...). D. L. (2010). Constrained Kalman filtering via density function truncation for turbofan engine health estimation. International Journal of Systems...

  20. Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.

    PubMed

    Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo

    2017-06-01

    Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross-validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution- and error-surfaces-based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing the K validation error surfaces, which yields the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that the proposed CV-SES has better generalization ability than CS-SVM with various hybrids of grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.
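
    The baseline that CV-SES improves on can be sketched directly: exhaustive K-fold cross-validation over a two-dimensional grid of the two cost parameters. The sketch below uses scikit-learn, with per-class costs expressed through class weights; the dataset, grid ranges, and the {1: cp/cm} weighting convention are illustrative assumptions, not details from the paper.

```python
# Brute-force 2-D grid search over the two regularization parameters of a
# cost-sensitive SVM; CV-SES computes the same error surface analytically.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=0)

c_plus = np.logspace(-2, 2, 9)    # misclassification cost, positive class
c_minus = np.logspace(-2, 2, 9)   # misclassification cost, negative class

# K = 5 fold validation-error surface over the 2-D parameter grid.
errors = np.empty((len(c_plus), len(c_minus)))
for i, cp in enumerate(c_plus):
    for j, cm in enumerate(c_minus):
        clf = SVC(kernel="rbf", C=cm, class_weight={0: 1.0, 1: cp / cm})
        errors[i, j] = 1.0 - cross_val_score(clf, X, y, cv=5).mean()

i, j = np.unravel_index(errors.argmin(), errors.shape)
print(f"grid-minimum CV error {errors[i, j]:.3f} "
      f"at C+ = {c_plus[i]:.2g}, C- = {c_minus[j]:.2g}")
```

    The grid approach costs one SVM fit per grid point per fold, which is exactly the expense the solution-surface construction is designed to remove.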

  1. Geochronological constraints on the evolution of El Hierro (Canary Islands)

    NASA Astrophysics Data System (ADS)

    Becerril, Laura; Ubide, Teresa; Sudo, Masafumi; Martí, Joan; Galindo, Inés; Galé, Carlos; Morales, Jose María; Yepes, Jorge; Lago, Marceliano

    2016-01-01

    New age data have been obtained to constrain the timing of the recent Quaternary volcanism of El Hierro (Canary Islands) and to estimate its recurrence rate. We have carried out 40Ar/39Ar geochronology on samples spanning the entire volcanostratigraphic sequence of the island and 14C geochronology on the most recent eruption on the northeast rift of the island: 2280 ± 30 yr BP. We combine the new absolute data with a revision of published ages onshore, some of which were identified through geomorphological criteria (relative data). We present a revised and updated chronology of volcanism for the last 33 ka that we use to estimate the maximum eruptive recurrence of the island. The number of events per year determined is 9.7 × 10^-4 for the emerged part of the island, which means that, as a minimum, one eruption has occurred approximately every 1000 years. This highlights the need for more geochronological data to better constrain the eruptive recurrence of El Hierro.
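
    The recurrence figure follows directly from the reported rate; a two-line check (the 33 ka window and the rate come from the abstract, while the implied event count is back-calculated here, not quoted):

```python
# Recurrence arithmetic: a rate of 9.7e-4 events/yr implies, at minimum,
# roughly one eruption per thousand years on the emerged part of the island.
rate_per_year = 9.7e-4
window_years = 33_000                     # revised chronology covers ~33 ka
recurrence = 1.0 / rate_per_year          # mean years between eruptions
implied_events = rate_per_year * window_years
print(f"recurrence ~ {recurrence:.0f} yr; ~{implied_events:.0f} events in 33 ka")
```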

  2. The ability of land owners and their cooperatives to leverage payments greater than opportunity costs from conservation contracts.

    PubMed

    Lennox, Gareth D; Armsworth, Paul R

    2013-06-01

    In negotiations over land-right acquisitions, landowners have an informational advantage over conservation groups because they know more about the opportunity costs of conservation measures on their sites. This advantage creates the possibility that landowners will demand payments greater than the required minimum, where this minimum required payment is known as the landowner’s willingness to accept (WTA). However, in recent studies of conservation costs, researchers have assumed landowners will accept conservation with minimum payments. We investigated the ability of landowners to demand payments above their WTA when a conservation group has identified multiple sites for protection. First, we estimated the maximum payment landowners could potentially demand, which is set when groups of landowners act as a cooperative. Next, through the simulation of conservation auctions, we explored the amount of money above landowners’ WTA (i.e., surplus) that conservation groups could cede to secure conservation agreements, again investigating the influence of landowner cooperatives. The simulations showed the informational advantage landowners held could make conservation investments up to 42% more expensive than suggested by the site WTAs. Moreover, all auctions resulted in landowners obtaining payments greater than their WTA; thus, it may be unrealistic to assume landowners will accept conservation contracts with minimum payments. Of particular significance for species conservation, objectives focused on overall species richness, which therefore recognize site complementarity, create an incentive for landowners to form cooperatives to capture surplus. By contrast, objectives in which sites are substitutes, such as the maximization of species occurrences, create a disincentive for cooperative formation.

  3. Total quality assurance

    NASA Astrophysics Data System (ADS)

    Louzon, E.

    1989-12-01

    Quality, cost, and schedule are three factors affecting the competitiveness of a company; they require balancing so that products of acceptable quality are delivered, on time and at a competitive cost. Quality costs comprise investment in quality maintenance and failure costs which arise from failure to maintain standards. The basic principle for achieving the required quality at minimum cost is that of prevention of failures, etc., through production control, attention to manufacturing practices, and appropriate management and training. Total quality control involves attention to the product throughout its life cycle, including in-service performance evaluation, servicing, and maintenance.

  4. Minimization of Food Cost on 2000-Calorie Diabetic Diet

    NASA Astrophysics Data System (ADS)

    Urrutia, J. D.; Mercado, J.; Tampis, R. L.

    2017-03-01

    This study focuses on minimizing the food cost of a 2000-calorie diet that satisfies the daily nutrient requirements of a diabetic person. The paper provides a food combination that meets those requirements at the lowest possible dietary cost. A linear programming diet model is used to determine the cheapest combination of food items that satisfies the recommended daily nutritional requirements of diabetic persons. According to the findings, a diabetic male aged 50 years and above needs to spend a minimum of 72.22 pesos for foods that satisfy the daily nutrients needed. To attain this minimum, the diet must consist of 60.49 grams of anchovy, 91.24 grams of carrot, 121.92 grams of durian, 121.41 grams of chicken egg, 70.82 grams of pork (lean), and 369.70 grams of rice (well-milled). For a diabetic female aged 50 years and above, the minimum spending is 64.65 pesos per day and the diet must consist of 75.87 grams of anchovy, 43.38 grams of carrot, 160.46 grams of durian, 69.66 grams of chicken egg, 23.16 grams of pork (lean), and 416.19 grams of rice (well-milled).
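
    The LP formulation behind these numbers can be sketched in a few lines: minimize price·x subject to N x ≥ d, x ≥ 0. The sketch below uses scipy's linprog with made-up prices and nutrient contents; the study's Philippine food-composition data are not reproduced here.

```python
# Minimal diet LP: choose food quantities (grams) minimizing cost subject
# to daily nutrient minimums. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

foods = ["anchovy", "carrot", "egg", "rice"]
cost = np.array([0.25, 0.05, 0.12, 0.04])      # pesos per gram (illustrative)

# Rows: protein, vitamin A, carbohydrate (units per gram, illustrative).
nutrients = np.array([
    [0.20, 0.01, 0.13, 0.03],   # protein
    [0.00, 0.08, 0.02, 0.00],   # vitamin A
    [0.00, 0.10, 0.01, 0.28],   # carbohydrate
])
daily_min = np.array([60.0, 5.0, 130.0])

# linprog takes <= constraints, so negate to express nutrients @ x >= daily_min.
res = linprog(c=cost, A_ub=-nutrients, b_ub=-daily_min,
              bounds=[(0, 500)] * len(foods), method="highs")
for name, grams in zip(foods, res.x):
    print(f"{name:8s} {grams:7.1f} g")
print(f"minimum daily cost: {res.fun:.2f} pesos")
```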

  5. SmallSats, Iodine Propulsion Technology, Applications to Low-Cost Lunar Missions, and the Iodine Satellite (iSAT) Project

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.

    2014-01-01

    Closing Remarks: (1) SmallSats hold significant potential for future low-cost, high-value missions; (2) Propulsion remains a key limiting capability for SmallSats that iodine can address: high Isp × density for volume-constrained spacecraft; indefinite quiescence, unpressurized and non-hazardous as a secondary payload; (3) Iodine enables MicroSat and SmallSat maneuverability: enables transfer into high-value orbits, constellation deployment and deorbit; (4) Iodine may enable a new class of planetary and exploration-class missions: enables GTO-launched secondary spacecraft to transit to the moon, asteroids, and other interplanetary destinations for approximately 150 million dollars full life-cycle cost including the launch; (5) ESPA-based OTVs are also volume constrained, and a shift from xenon to iodine can significantly increase the transfer vehicle's change-in-velocity capability, including transfers from GTO to a range of lunar orbits; (6) The iSAT project is a fast-paced, high-value iodine Hall technology demonstration mission: a partnership of NASA GRC and NASA MSFC with industry partner Busek; (7) The iSAT mission is an approved project with PDR in November of 2014 and is targeting a flight opportunity in FY17.

  6. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  7. The cost of performance - A comparison of the space transportation main engine and the Space Shuttle main engine

    NASA Technical Reports Server (NTRS)

    Barisa, B. B.; Flinchbaugh, G. D.; Zachary, A. T.

    1989-01-01

    This paper compares the cost of the Space Shuttle Main Engine (SSME) and the Space Transportation Main Engine (STME) proposed by the Advanced Launch System Program. A brief description of the SSME and STME engines is presented, followed by a comparison of these engines that illustrates the impact of focusing on acceptable performance at minimum cost (as for the STME) or on maximum performance (as for the SSME). Several examples of cost reduction methods are presented.

  8. Automated Reconstruction of Neural Trees Using Front Re-initialization

    PubMed Central

    Mukherjee, Amit; Stepanyants, Armen

    2013-01-01

    This paper proposes a greedy algorithm for automated reconstruction of neural arbors from light microscopy stacks of images. The algorithm is based on the minimum cost path method. While the minimum cost path, obtained using the Fast Marching Method, results in a trace with the least cumulative cost between the start and the end points, it is not sufficient for the reconstruction of neural trees. This is because sections of the minimum cost path can erroneously travel through the image background with undetectable detriment to the cumulative cost. To circumvent this problem, we propose an algorithm that grows a neural tree from a specified root by iteratively re-initializing the Fast Marching fronts. The speed image used in the Fast Marching Method is generated by computing the average outward flux of the gradient vector flow field. Each iteration of the algorithm produces a candidate extension by allowing the front to travel a specified distance and then tracking from the farthest point of the front back to the tree. A robust likelihood ratio test is used to evaluate the quality of the candidate extension by comparing voxel intensities along the extension to those in the foreground and the background. The qualified extensions are appended to the current tree, the front is re-initialized, and Fast Marching is continued until the stopping criterion is met. To evaluate the performance of the algorithm we reconstructed 6 stacks of two-photon microscopy images and compared the results to the ground truth reconstructions by using the DIADEM metric. The average comparison score was 0.82 out of 1.0, which is on par with the performance achieved by expert manual tracers. PMID:24386539
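
    The "least cumulative cost" step can be made concrete with a discrete stand-in: Dijkstra's algorithm on a pixel grid computes the same minimum cost path that Fast Marching approximates continuously. Everything below (the synthetic cost image, 4-connectivity, the start and goal pixels) is an illustrative assumption, not the paper's GVF-based speed image.

```python
# Discrete minimum-cost path on a 2-D grid via Dijkstra (stand-in for the
# Fast Marching step described above). Cost image is synthetic.
import heapq
import numpy as np

def min_cost_path(cost, start, goal):
    """Return (total_cost, path) across a 2-D array of per-pixel costs."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    dist[start] = cost[start]
    parent = {start: None}
    heap = [(cost[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue  # stale queue entry
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    node, path = goal, []
    while node is not None:       # walk parents back to the start
        path.append(node)
        node = parent[node]
    return dist[goal], path[::-1]

rng = np.random.default_rng(0)
image = rng.uniform(0.1, 1.0, size=(64, 64))    # bright = cheap to traverse
total, path = min_cost_path(1.0 / image, (0, 0), (63, 63))
print(f"cumulative cost {total:.2f} over {len(path)} pixels")
```

    Note the failure mode the abstract describes survives in this discrete version too: a path can thread cheap background pixels, which is what the front re-initialization and likelihood-ratio tests are there to catch.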

  9. A study of the relative effectiveness and cost of computerized information retrieval in the interactive mode

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Furniss, M. A.; Potter, T. R.

    1974-01-01

    Results of a number of experiments to illuminate the relative effectiveness and costs of computerized information retrieval in the interactive mode are reported. It was found that for equal time spent in preparing the search strategy, the batch and interactive modes gave approximately equal recall and relevance. The interactive mode however encourages the searcher to devote more time to the task and therefore usually yields improved output. Engineering costs as a result are higher in this mode. Estimates of associated hardware costs also indicate that operation in this mode is more expensive. Skilled RECON users like the rapid feedback and additional features offered by this mode if they are not constrained by considerations of cost.

  10. The 30/20 GHz mixed user architecture development study

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A mixed-user system is described which provides cost-effective communications services to a wide range of user terminal classes, ranging from one or two voice channels in a direct-to-user mode to multiple 500-Mbps trunking channels. Advanced satellite capabilities are utilized to minimize the cost of small terminals. In a system with thousands of small terminals, this approach results in minimum system cost.

  11. Integrating expert opinion with modelling for quantitative multi-hazard risk assessment in the Eastern Italian Alps

    NASA Astrophysics Data System (ADS)

    Chen, Lixia; van Westen, Cees J.; Hussin, Haydar; Ciurean, Roxana L.; Turkington, Thea; Chavarro-Rincon, Diana; Shrestha, Dhruba P.

    2016-11-01

    Extreme rainfall events are the main triggers of hydro-meteorological hazards in mountainous areas, where development is often constrained by the limited space suitable for construction. In these areas, hazard and risk assessments are fundamental for risk mitigation, especially for preventive planning, risk communication and emergency preparedness. Multi-hazard risk assessment in mountainous areas at local and regional scales remains a major challenge because of the lack of data on past events and causal factors, and because of the interactions between different types of hazards. The lack of data leads to a high level of uncertainty in the application of quantitative methods for hazard and risk assessment. Therefore, a systematic approach is required to combine these quantitative methods with expert-based assumptions and decisions. In this study, a quantitative multi-hazard risk assessment was carried out in the Fella River valley, prone to debris flows and floods, in the north-eastern Italian Alps. The main steps include data collection and development of inventory maps, definition of hazard scenarios, hazard assessment in terms of temporal and spatial probability calculation and intensity modelling, elements-at-risk mapping, estimation of asset values and the number of people, physical vulnerability assessment, the generation of risk curves and annual risk calculation. To compare the risk for each type of hazard, risk curves were generated for debris flows, river floods and flash floods. Uncertainties were expressed as minimum, average and maximum values of temporal and spatial probability, replacement costs of assets, population numbers, and physical vulnerability, resulting in minimum, average and maximum risk curves. To validate this approach, a back analysis was conducted using the extreme hydro-meteorological event that occurred in August 2003 in the Fella River valley. The results show a good performance when compared to the historical damage reports.

  12. Cost-effectiveness of alternative changes to a national blood collection service.

    PubMed

    Willis, S; De Corte, K; Cairns, J A; Zia Sadique, M; Hawkins, N; Pennington, M; Cho, G; Roberts, D J; Miflin, G; Grieve, R

    2018-05-16

    To evaluate the cost-effectiveness of changing opening times, introducing a donor health report and reducing the minimum inter-donation interval for donors attending static centres. Evidence is required about the effect of changes to the blood collection service on costs and the frequency of donation. This study estimated the effect of changes to the blood collection service in England on the annual number of whole-blood donations by current donors. We used donors' responses to a stated preference survey, donor registry data on donation frequency and deferral rates from the INTERVAL trial. Costs measured were those anticipated to differ between strategies. We reported the cost per additional unit of blood collected for each strategy versus current practice. Strategies with a cost per additional unit of whole blood less than £30 (an estimate of the current cost of collection) were judged likely to be cost-effective. In static donor centres, extending opening times to evenings and weekends provided an additional unit of whole blood at a cost of £23 and £29, respectively. Introducing a health report cost £130 per additional unit of blood collected. Although the strategy of reducing the minimum inter-donation interval had the lowest cost per additional unit of blood collected (£10), this increased the rate of deferrals due to low haemoglobin (Hb). The introduction of a donor health report is unlikely to provide a sufficient increase in donation frequency to justify the additional costs. A more cost-effective change is to extend opening hours for blood collection at static centres. © 2018 The Authors. Transfusion Medicine published by John Wiley & Sons Ltd on behalf of British Blood Transfusion Society.
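
    The decision rule in this abstract is a simple ratio test against the ~£30 collection-cost threshold. The incremental costs and unit gains below are placeholders chosen only so the ratios reproduce the reported £23, £29 and £130 figures:

```python
# Cost per additional unit of whole blood, judged against the ~ GBP 30
# current cost of collection. Inputs are illustrative placeholders.
THRESHOLD_GBP = 30.0

def cost_per_additional_unit(extra_cost_gbp, extra_units):
    return extra_cost_gbp / extra_units

strategies = {
    "evening opening":     (46_000, 2_000),   # (incremental cost, extra units)
    "weekend opening":     (58_000, 2_000),
    "donor health report": (260_000, 2_000),
}
for name, (cost, units) in strategies.items():
    ratio = cost_per_additional_unit(cost, units)
    verdict = "likely cost-effective" if ratio < THRESHOLD_GBP else "not cost-effective"
    print(f"{name:20s} GBP {ratio:5.0f}/unit -> {verdict}")
```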

  13. Cost-effectiveness of preventive interventions to reduce alcohol consumption in Denmark.

    PubMed

    Holm, Astrid Ledgaard; Veerman, Lennert; Cobiac, Linda; Ekholm, Ola; Diderichsen, Finn

    2014-01-01

    Excessive alcohol consumption increases the risk of many diseases and injuries, and the Global Burden of Disease 2010 study estimated that 6% of the burden of disease in Denmark is due to alcohol consumption. Alcohol consumption thus places a considerable economic burden on society. We analysed the cost-effectiveness of six interventions aimed at preventing alcohol abuse in the adult Danish population: 30% increased taxation, an increased minimum legal drinking age, advertisement bans, limited hours of retail sales, and brief and longer individual interventions. Potential health effects were evaluated as changes in incidence, prevalence and mortality of alcohol-related diseases and injuries. Net costs were calculated as the sum of intervention costs and cost offsets related to treatment of alcohol-related outcomes, based on health care costs from Danish national registers. Cost-effectiveness was evaluated by calculating incremental cost-effectiveness ratios (ICERs) for each intervention. We also created an intervention pathway to determine the optimal sequence of interventions and their combined effects. Three of the analysed interventions (advertising bans, limited hours of retail sales and taxation) were cost-saving, and the remaining three interventions were all cost-effective. Net costs varied from €-17 million per year for the advertisement ban to €8 million for the longer individual intervention. Effectiveness varied from 115 disability-adjusted life years (DALYs) per year for the minimum legal drinking age to 2,900 DALYs for the advertisement ban. The total annual effect if all interventions were implemented would be 7,300 DALYs, with a net cost of €-30 million. Our results show that interventions targeting the whole population were more effective than individual-focused interventions. A ban on alcohol advertising, limited hours of retail sale and increased taxation had the highest probability of being cost-saving and should thus be first priority for implementation.

  14. Study on Operation Optimization of Pumping Station's 24 Hours Operation under Influences of Tides and Peak-Valley Electricity Prices

    NASA Astrophysics Data System (ADS)

    Yi, Gong; Jilin, Cheng; Lihua, Zhang; Rentian, Zhang

    2010-06-01

    According to different processes of tides and peak-valley electricity prices, this paper determines the optimal start-up time for a pumping station's 24-hour operation, in both the rated state and the adjusted-blade-angle state, based on the optimization objective function and optimization model for a single pump unit's 24-hour operation, taking the JiangDu No. 4 Pumping Station as an example. The paper identifies the following regularities between the optimal start-up time and the processes of tides and peak-valley electricity prices over the days of a month: (1) In both the rated and adjusted-blade-angle states, the optimal start-up time, which depends on the tide generation of the same day, varies with the process of the tides; the two main optimal start-up times are the time of tide generation and 12 hours after it. (2) In the rated state, the optimal start-up time over the days of a month exhibits a symmetry from the 29th to the 28th of the next month in the lunar calendar. The time of tide generation usually falls in a period of peak or valley electricity price, and a higher electricity price corresponds to a higher minimum unit cost of water pumping; that is, the minimum unit cost depends on the peak-valley electricity price at the time of tide generation on the same day. (3) In the adjusted-blade-angle state, the minimum unit cost of water pumping in 24-hour operation depends on the process of peak-valley electricity prices, and 4.85%-5.37% of the minimum unit pumping cost is saved relative to the rated state.

  15. Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity: Applications in Typical Ageing and Schizophrenia.

    PubMed

    Langen, Carolyn D; White, Tonya; Ikram, M Arfan; Vernooij, Meike W; Niessen, Wiro J

    2015-01-01

    Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, "bi-modal comparison plots" show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using "worm plots". Group differences in connections are examined with an existing visualization, the "connectogram". These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the "Statistical Analysis of Minimum cost path based Structural Connectivity" method and the average fractional anisotropy along the fiber. The functional measures were Pearson's correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. Only fractional anisotropy and mean correlation showed regional differences. The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies.

  16. Theoretical calculation of reorganization energy for electron self-exchange reaction by constrained density functional theory and constrained equilibrium thermodynamics.

    PubMed

    Ren, Hai-Sheng; Ming, Mei-Jun; Ma, Jian-Yi; Li, Xiang-Yuan

    2013-08-22

    Within the framework of constrained density functional theory (CDFT), the diabatic or charge-localized states of electron transfer (ET) have been constructed. Based on the diabatic states, the inner reorganization energy λ_in has been directly calculated. For the solvent reorganization energy λ_s, a novel and reasonable nonequilibrium solvation model is established by introducing a constrained equilibrium manipulation, and a new expression for λ_s has been formulated. It is found that λ_s is actually the cost of maintaining the residual polarization, which equilibrates with the extra electric field. On the basis of diabatic states constructed by CDFT, a numerical algorithm using the new formulations with the dielectric polarizable continuum model (D-PCM) has been implemented. As typical test cases, self-exchange ET reactions between tetracyanoethylene (TCNE) and tetrathiafulvalene (TTF) and their corresponding ionic radicals in acetonitrile are investigated. The calculated reorganization energies λ are 7293 cm^-1 for the TCNE/TCNE^- reaction and 5939 cm^-1 for the TTF/TTF^+ reaction, agreeing well with the available experimental results of 7250 cm^-1 and 5810 cm^-1, respectively.

  17. Dynamical Analysis of the Circumprimary Planet in the Eccentric Binary System HD 59686

    NASA Astrophysics Data System (ADS)

    Trifonov, Trifon; Lee, Man Hoi; Reffert, Sabine; Quirrenbach, Andreas

    2018-04-01

    We present a detailed orbital and stability analysis of the HD 59686 binary-star planet system. HD 59686 is a single-lined, moderately close (a_B = 13.6 au), eccentric (e_B = 0.73) binary, where the primary is an evolved K giant with mass M = 1.9 M_⊙ and the secondary is a star with a minimum mass of m_B = 0.53 M_⊙. Additionally, on the basis of precise radial velocity (RV) data, a Jovian planet with a minimum mass of m_p = 7 M_Jup, orbiting the primary on a nearly circular S-type orbit with e_p = 0.05 and a_p = 1.09 au, has recently been announced. We investigate large sets of orbital fits consistent with HD 59686's RV data by applying bootstrap and systematic grid-search techniques coupled with self-consistent dynamical fitting. We perform long-term dynamical integrations of these fits to constrain the permitted orbital configurations. We find that if the binary and the planet in this system have prograde and aligned coplanar orbits, there are narrow regions of stable orbital solutions locked in a secular apsidal alignment with the angle between the periapses, Δω, librating about 0°. We also test a large number of mutually inclined dynamical models in an attempt to constrain the three-dimensional orbital architecture. We find that for nearly coplanar and retrograde orbits with mutual inclination 145° ≲ Δi ≤ 180°, the system is fully stable for a large range of orbital solutions.

  18. Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.

    PubMed

    Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X

    2018-01-01

    To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
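
    The sliding-window step can be sketched with a rolling correlation: sort accessions along the minimum-temperature gradient and correlate FT with DSDS50 in a moving window. The data below are synthetic stand-ins built to plateau near the 7.2 °C turning point reported above; they are not the 300 Iberian accessions.

```python
# Sliding-window correlation along a temperature gradient (synthetic data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 300
tmin = np.sort(rng.uniform(-2.0, 12.0, n))        # minimum temperature, deg C
coupling = np.clip(tmin / 7.2, 0.0, 1.0)          # plateaus past the turning point
dsds50 = rng.normal(size=n)                       # standardized seed dormancy
ft = -coupling * dsds50 + rng.normal(scale=0.8, size=n)   # standardized FT

df = pd.DataFrame({"tmin": tmin, "DSDS50": dsds50, "FT": ft})
window = 60
out = pd.DataFrame({
    "window_mean_tmin": df["tmin"].rolling(window).mean(),
    "FT_DSDS50_corr": df["FT"].rolling(window).corr(df["DSDS50"]),
}).dropna()
print(out.iloc[::40].to_string(index=False))
```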

  19. Bookending the Opportunity to Lower Wind’s LCOE by Reducing the Uncertainty Surrounding Annual Energy Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolinger, Mark

    Reducing the performance risk surrounding a wind project can potentially lead to a lower weighted-average cost of capital (WACC), and hence a lower levelized cost of energy (LCOE), through an advantageous shift in capital structure, and possibly also a reduction in the cost of capital. Specifically, a reduction in performance risk will move the 1-year P99 annual energy production (AEP) estimate closer to the P50 AEP estimate, which in turn reduces the minimum debt service coverage ratio (DSCR) required by lenders, thereby allowing the project to be financed with a greater proportion of low-cost debt. In addition, a reduction in performance risk might also reduce the cost of one or more of the three sources of capital that are commonly used to finance wind projects: sponsor or cash equity, tax equity, and/or debt. Preliminary internal LBNL analysis of the maximum possible LCOE reduction attainable from reducing the performance risk of a wind project found a potentially significant opportunity for LCOE reduction of ~$10/MWh, by reducing the P50 DSCR to its theoretical minimum value of 1.0 (Bolinger 2015b, 2014) and by reducing the cost of sponsor equity and debt by one-third to one-half each (Bolinger 2015a, 2015b). However, with FY17 funding from the U.S. Department of Energy’s Atmosphere to Electrons (A2e) Performance Risk, Uncertainty, and Finance (PRUF) initiative, LBNL has been revisiting this “bookending” exercise in more depth, and now believes that its earlier preliminary assessment of the LCOE reduction opportunity was overstated. This reassessment is based on two new-found understandings: (1) Due to ever-present and largely irreducible inter-annual variability (IAV) in the wind resource, the minimum required DSCR cannot possibly fall to 1.0 (on a P50 basis), and (2) A reduction in AEP uncertainty will not necessarily lead to a reduction in the cost of capital, meaning that a shift in capital structure is perhaps the best that can be expected (perhaps along with a modest decline in the cost of cash equity as new investors enter the market).
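
    The P99-to-debt mechanism is straightforward to illustrate. The sketch assumes normally distributed annual energy (so P99 ≈ P50 · (1 + z_0.01 · σ)) and a lender who sizes debt so that P99 revenue covers debt service at a minimum DSCR; both the revenue figure and the 1.3 coverage ratio are invented for illustration, not LBNL's numbers.

```python
# Lower AEP uncertainty moves P99 toward P50, raising debt capacity.
from scipy.stats import norm

p50_revenue = 10.0e6          # $/yr expected (P50) revenue, illustrative
dscr_min = 1.3                # assumed lender minimum coverage on P99 revenue

def debt_service_capacity(sigma_aep):
    """Max annual debt service given relative AEP uncertainty sigma."""
    p99 = p50_revenue * (1.0 + norm.ppf(0.01) * sigma_aep)  # one-sided 99%
    return p99 / dscr_min

for sigma in (0.15, 0.10, 0.05):
    print(f"sigma = {sigma:.0%}: debt service up to "
          f"${debt_service_capacity(sigma):,.0f}/yr")
```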

  20. 78 FR 54996 - Information Reporting by Applicable Large Employers on Health Insurance Coverage Offered Under...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-09

    ... employer-sponsored plan is affordable if the employee's required contribution for the lowest-cost self-only... the lowest-cost employer-sponsored self-only coverage that provides minimum value to verify the... the premium tax credit, the Exchanges will employ a verification process. Because the information...

  1. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  2. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  3. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  4. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  5. 7 CFR 1781.17 - Docket preparation and processing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., schedules, and estimated consumption of water should be made by the same methods as for loans for domestic... preliminary draft of the watershed plan or RCD area plan, together with an estimate of costs and benefits.... It should relate project costs to benefits of the WS or RCD loan or WS advance. Minimum and average...

  6. Principles of minimum cost refining for optimum linerboard strength

    Treesearch

    Thomas J. Urbanik; Jong Myoung Won

    2006-01-01

    The mechanical properties of paper at a single basis weight and a single targeted refining freeness level have traditionally been used to compare papers. Understanding the economics of corrugated fiberboard requires a more global characterization of the variation of mechanical properties and refining energy consumption with freeness. The cost of refining energy to...

  7. Power Extension Package (PEP) system definition extension, orbital service module systems analysis study. Volume 11: PEP, cost, schedules, and work breakdown structure dictionary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Cost scheduling and funding data are presented for the reference design of the power extension package. Major schedule milestones are correlated with current Spacelab flight dates. Funding distributions provide for minimum expenditure during the first year of the project.

  8. Robust, Adaptive Radar Detection and Estimation

    DTIC Science & Technology

    2015-07-21

    ...cost function is not a convex function in R, we apply a transformation of variables, i.e., let X = σ^2 R^-1 and S' = (1/σ^2) S. Then, the revised cost function in... Σ_i v_i v_i^H. We apply this inverse covariance matrix in computing the SINR as well as the estimator variance. • Rank-Constrained Maximum Likelihood: Our... even as almost all available training samples are corrupted. Probability of Detection vs. SNR: We apply three test statistics, the normalized matched...

  9. A novel minimum cost maximum power algorithm for future smart home energy management.

    PubMed

    Singaravelan, A; Kowsalya, M

    2017-11-01

    With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For efficient scheduling, the appliances are classified into two types: uninterruptible and interruptible. The problem formulation is built on practical constraints that let the proposed algorithm cope with real-time situations. The formulated problem is a Mixed Integer Linear Programming (MILP) problem, and it is solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with the input data available in the existing method. To validate the proposed MCMP algorithm, results were compared with the existing method. The comparison shows that the proposed algorithm reduces the consumer's electricity cost and peak demand to an optimal level, with 100% task completion and without sacrificing consumer comfort.
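
    The MCMP heuristic itself is not reproduced here, but the kind of MILP the paper formulates can be sketched: binary on/off variables per appliance and hour, a run-hours requirement for each interruptible appliance, and a shared peak-power cap. Prices, powers and requirements below are invented placeholders; the solver is scipy's milp.

```python
# Toy appliance-scheduling MILP: minimize energy cost over 24 hourly prices
# subject to required run-hours and a shared peak-power cap.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

T = 24
price = 0.10 + 0.08 * np.sin(np.linspace(0, 2 * np.pi, T, endpoint=False))
power = np.array([2.0, 1.5])   # kW draw of appliances A and B when on
hours = np.array([6, 8])       # required on-hours (interruptible loads)
peak_cap = 2.5                 # kW cap on combined draw in any hour

# Decision vector x = [a_0..a_23, b_0..b_23]: binary on/off per hour.
c = np.concatenate([price * power[0], price * power[1]])

A_hours = np.zeros((2, 2 * T))
A_hours[0, :T] = 1             # total on-hours of appliance A
A_hours[1, T:] = 1             # total on-hours of appliance B
run_req = LinearConstraint(A_hours, hours, hours)

# Combined draw per hour must stay under the peak cap.
A_peak = np.hstack([np.eye(T) * power[0], np.eye(T) * power[1]])
peak = LinearConstraint(A_peak, -np.inf, peak_cap)

res = milp(c=c, constraints=[run_req, peak],
           integrality=np.ones(2 * T), bounds=Bounds(0, 1))
print("A on at hours:", np.flatnonzero(res.x[:T].round()))
print("B on at hours:", np.flatnonzero(res.x[T:].round()))
print(f"daily cost: ${res.fun:.2f}")
```

    Since A and B together exceed the cap, the cap also acts as an implicit no-overlap constraint, which is the peak-demand effect the abstract describes.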

  10. Information dynamics in carcinogenesis and tumor growth.

    PubMed

    Gatenby, Robert A; Frieden, B Roy

    2004-12-21

    The storage and transmission of information is vital to the function of normal and transformed cells. We use methods from information theory and Monte Carlo theory to analyze the role of information in carcinogenesis. Our analysis demonstrates that, during somatic evolution of the malignant phenotype, the accumulation of genomic mutations degrades intracellular information. However, the degradation is constrained by the Darwinian somatic ecology in which mutant clones proliferate only when the mutation confers a selective growth advantage. In that environment, genes that normally decrease cellular proliferation, such as tumor suppressor or differentiation genes, suffer maximum information degradation. Conversely, those that increase proliferation, such as oncogenes, are conserved or exhibit only gain-of-function mutations. These constraints shield most cellular populations from catastrophic mutator-induced loss of the transmembrane entropy gradient and, therefore, cell death. The dynamics of constrained information degradation during carcinogenesis cause the tumor genome to asymptotically approach a minimum information state that is manifested clinically as dedifferentiation and unconstrained proliferation. Extreme physical information (EPI) theory demonstrates that altered information flow from cancer cells to their environment will manifest in vivo as power-law tumor growth with an exponent of 1.62. This prediction is based only on the assumption that tumor cells are at an absolute information minimum and are capable of "free field" growth, that is, growth unconstrained by external biological parameters. The prediction agrees remarkably well with several studies demonstrating power-law growth in small human breast cancers with an exponent of 1.72 ± 0.24. This successful derivation of an analytic expression for cancer growth from EPI alone supports the conceptual model that carcinogenesis is a process of constrained information degradation and that malignant cells are minimum information systems. EPI theory also predicts that the estimated age of a clinically observed tumor is subject to a root-mean-square error of about 30%. This is due to information loss and tissue disorganization and probably manifests as a randomly variable lag phase in the growth pattern that has been observed experimentally. This difference between tumor size and age may impose a fundamental limit on the efficacy of screening based on early detection of small tumors. Independent of the EPI analysis, Monte Carlo methods are applied to predict statistical tumor growth due to perturbed information flow from the environment into transformed cells. A "simplest" Monte Carlo model is suggested by the findings in the EPI approach that tumor growth arises out of a minimally complex mechanism. The outputs of large numbers of simulations show that (a) about 40% of the populations do not survive the first two generations, due to mutations in critical gene segments; but (b) those that do survive experience power-law growth identical to the predicted rate obtained from the independent EPI approach. The agreement between these two very different approaches to the problem strongly supports the idea that tumor cells regress to a state of minimum information during carcinogenesis, and that information dynamics are integrally related to tumor development and growth.

  11. Cost-effectiveness of population-level expansion of highly active antiretroviral treatment for HIV in British Columbia, Canada: a modelling study.

    PubMed

    Nosyk, Bohdan; Min, Jeong E; Lima, Viviane D; Hogg, Robert S; Montaner, Julio S G

    2015-09-01

    Widespread HIV screening and access to highly active antiretroviral treatment (ART) were cost-effective in mathematical models, but population-level implementation has led to questions about cost, value, and feasibility. In 1996, British Columbia, Canada, introduced universal coverage of drug and other health-care costs for people with HIV/AIDS and began an extensive scale-up in access to ART. We aimed to assess the cost-effectiveness of ART scale-up in British Columbia compared with hypothetical scenarios of constrained treatment access. Using comprehensive linked population-level data, we populated a dynamic, compartmental transmission model to simulate the HIV/AIDS epidemic in British Columbia from 1997 to 2010. We estimated HIV incidence, prevalence, mortality, costs (in 2010 CAN$), and quality-adjusted life-years (QALYs) for the study period, 1997-2010. We calculated incremental cost-effectiveness ratios from societal and third-party-payer perspectives to compare actual practice (true numbers of individuals accessing ART) with scenarios of constrained expansion (75% and 50% probability of accessing ART). We also investigated structural and parameter uncertainty. Actual practice resulted in 263 averted incident cases compared with 75% of observed access and 676 averted cases compared with 50% of observed access to ART. From a third-party-payer perspective, actual practice resulted in incremental cost-effectiveness ratios of $23,679 per QALY versus 75% access and $24,250 per QALY versus 50% access. From a societal perspective, actual practice was cost saving within the study period. When the model was extended to 2035, current observed access resulted in cumulative savings of $25.1 million compared with the 75% access scenario and $65.5 million compared with the 50% access scenario. ART scale-up in British Columbia has decreased HIV-related morbidity, mortality, and transmission. The resulting incremental cost-effectiveness ratios for actual practice, derived within a limited timeframe, were within established cost-effectiveness thresholds and were cost saving from a societal perspective. Funding: BC Ministry of Health; National Institute on Drug Abuse at the US National Institutes of Health. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mavros, Michael G.; Van Voorhis, Troy, E-mail: tvan@mit.edu

    2015-12-21

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI are likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.

  13. Constrained Low-Interference Relay Node Deployment for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Wenping

    An Underwater Acoustic Wireless Sensor Network (UA-WSN) consists of many resource-constrained Underwater Sensor Nodes (USNs), which are deployed to perform collaborative monitoring tasks over a given region. One way to preserve network connectivity while guaranteeing other network QoS is to deploy some Relay Nodes (RNs) in the network; RNs are more capable than USNs but also more expensive. This paper addresses the Constrained Low-interference Relay Node Deployment (C-LRND) problem for 3-D UA-WSNs, in which RNs are placed at a subset of candidate locations to ensure connectivity between the USNs, subject to constraints on both the number of RNs deployed and the total incremental interference. We first prove that the problem is NP-hard, then present a general approximation-algorithm framework and obtain two polynomial-time O(1)-approximation algorithms.

  14. Communication: CDFT-CI couplings can be unreliable when there is fractional charge transfer

    NASA Astrophysics Data System (ADS)

    Mavros, Michael G.; Van Voorhis, Troy

    2015-12-01

    Constrained density functional theory with configuration interaction (CDFT-CI) is a useful, low-cost tool for the computational prediction of electronic couplings between pseudo-diabatic constrained electronic states. Such couplings are of paramount importance in electron transfer theory and transition state theory, among other areas of chemistry. Unfortunately, CDFT-CI occasionally fails significantly, predicting a coupling that does not decay exponentially with distance and/or overestimating the expected coupling by an order of magnitude or more. In this communication, we show that the eigenvalues of the difference density matrix between the two constrained states can be used as an a priori metric to determine when CDFT-CI are likely to be reliable: when the eigenvalues are near 0 or ±1, transfer of a whole electron is occurring, and CDFT-CI can be trusted. We demonstrate the utility of this metric with several illustrative examples.

  15. Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Wisconsin

    USGS Publications Warehouse

    Walker, J.F.; Osen, L.L.; Hughes, P.E.

    1987-01-01

    A minimum budget of $510,000 is required to operate the program; a smaller budget does not permit proper service and maintenance of the gaging stations. At this minimum budget, the theoretical average standard error of instantaneous discharge is 14.4%. The maximum budget analyzed was $650,000 and resulted in an average standard error of instantaneous discharge of 7.2%.

  16. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?

  17. Information transfer satellite concept study. Volume 4: computer manual

    NASA Technical Reports Server (NTRS)

    Bergin, P.; Kincade, C.; Kurpiewski, D.; Leinhaupel, F.; Millican, F.; Onstad, R.

    1971-01-01

    The Satellite Telecommunications Analysis and Modeling Program (STAMP) provides the user with a flexible and comprehensive tool for the analysis of ITS system requirements. While obtaining minimum cost design points, the program enables the user to perform studies over a wide range of user requirements and parametric demands. The program utilizes a total system approach wherein the ground uplink and downlink, the spacecraft, and the launch vehicle are simultaneously synthesized. A steepest descent algorithm is employed to determine the minimum total system cost design subject to the fixed user requirements and imposed constraints. In the process of converging to the solution, the pertinent subsystem tradeoffs are resolved. This report documents STAMP through a technical analysis and a description of the principal techniques employed in the program.

  18. Cost-effectiveness of preventing dental caries and full mouth dental reconstructions among Alaska Native children in the Yukon–Kuskokwim delta region of Alaska

    PubMed Central

    Atkins, Charisma Y.; Thomas, Timothy K.; Lenaker, Dane; Day, Gretchen M.; Hennessy, Thomas W.; Meltzer, Martin I.

    2016-01-01

    Objective We conducted a cost-effectiveness analysis of five specific dental interventions to help guide resource allocation. Methods We developed a spreadsheet-based tool, from the healthcare payer perspective, to evaluate the cost-effectiveness of specific dental interventions that are currently used among Alaska Native children (6-60 months). Interventions included: water fluoridation, dental sealants, fluoride varnish, tooth brushing with fluoride toothpaste, and conducting initial dental exams on children <18 months of age. We calculated the cost-effectiveness ratio of implementing the proposed interventions to reduce the number of carious teeth and full mouth dental reconstructions (FMDRs) over 10 years. Results A total of 322 children received caries treatments completed by a dental provider in the dental chair, while 161 children received FMDRs completed by a dental surgeon in an operating room. The average cost of treating dental caries in the dental chair was $1,467 (~$258,000 per year), while the average cost of treating FMDRs was $9,349 (~$1.5 million per year). All interventions were shown to prevent caries and FMDRs; however, tooth brushing prevented the greatest number of caries at minimum and maximum effectiveness, with 1,433 and 1,910, respectively. Tooth brushing also prevented the greatest number of FMDRs (159 and 211) at minimum and maximum effectiveness. Conclusions All of the dental interventions evaluated were shown to produce cost savings. However, the level of that cost saving is dependent on the intervention chosen. PMID:26990678

  19. Thermo-Mechanical Modeling and Analysis for Turbopump Assemblies

    NASA Technical Reports Server (NTRS)

    Platt, Mike; Marsh, Matt

    2003-01-01

    Life, reliability, and cost are strongly impacted by steady and transient thermo-mechanical effects. The design cycle can suffer major setbacks when a transient stress/deflection issue has to be worked. Balancing objectives and constraints is always difficult, and requires assembly-level analysis early in the design cycle.

  20. Limit the Impact: Build Only What You Need

    ERIC Educational Resources Information Center

    Karp, Katie

    2012-01-01

    Economic downturn. Budget constraints. Cost of attendance. Campus infrastructure needs. Building to compete. For many colleges and universities, these phrases have become all too familiar over the last five years. With today's economic challenges, educational institutions are finding themselves in constrained financial positions. They are…

  1. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and solutions lacking geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source-encoding technique can dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and converges slowly because cross-talk must be suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent reduces computation time by choosing a subset of all shots at each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
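
    A toy version of the two ingredients, mini-batch shot selection and gradient smoothing, is sketched below. A linear forward operator stands in for wave-equation modelling, and a plain Gaussian filter stands in for the paper's structure-oriented smoothing; the batch size, step length and smoothing width are arbitrary choices, not values from the paper.

```python
# Mini-batch gradient descent with a smoothed gradient (FWI toy analogue).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
nx, n_shots, batch = 200, 64, 8
m_true = gaussian_filter(rng.normal(size=nx), 5)       # smooth "geology"
G = [rng.normal(size=(30, nx)) / np.sqrt(nx) for _ in range(n_shots)]
data = [Gs @ m_true for Gs in G]                       # observed data per shot

m = np.zeros(nx)
step = 0.5
for it in range(300):
    idx = rng.choice(n_shots, size=batch, replace=False)  # mini-batch of shots
    grad = sum(G[s].T @ (G[s] @ m - data[s]) for s in idx) / batch
    grad = gaussian_filter(grad, sigma=3)              # smoothing constraint
    m -= step * grad
rel_err = np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
print(f"relative model error: {rel_err:.3f}")
```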

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
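
    A minimal first-order prediction-correction loop, in the spirit of the abstract, can be written for the unconstrained quadratic f(x; t) = ½‖x − r(t)‖². Its Hessian is the identity, so the prediction reduces to extrapolating the drift of the optimum; the constrained machinery of the paper is not reproduced here, and the trajectory r(t) is a synthetic example.

```python
# Prediction-correction tracking of a drifting minimizer vs. correction only.
import numpy as np

def r(t):                                   # trajectory of the true optimizer
    return np.array([np.cos(t), np.sin(t)])

dt, steps, alpha = 0.05, 400, 0.8
x_pc = r(0.0)                               # prediction-correction iterate
x_c = r(0.0)                                # correction-only baseline
err_pc, err_c = [], []
for k in range(2, steps):
    t = k * dt
    # prediction: extrapolate along a finite-difference estimate of the drift
    drift = (r(t - dt) - r(t - 2 * dt)) / dt
    x_pc = x_pc + dt * drift
    # correction: one gradient step on f(.; t); here grad f = x - r(t)
    x_pc = x_pc - alpha * (x_pc - r(t))
    x_c = x_c - alpha * (x_c - r(t))
    err_pc.append(np.linalg.norm(x_pc - r(t)))
    err_c.append(np.linalg.norm(x_c - r(t)))
print(f"mean tracking error, prediction-correction: {np.mean(err_pc):.4f}")
print(f"mean tracking error, correction only:       {np.mean(err_c):.4f}")
```

    The prediction step pays for one extra extrapolation per sample and in return removes most of the lag a pure gradient correction leaves behind.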

  3. Molecular basis of endosomal-membrane association for the dengue virus envelope protein

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, David M.; Kent, Michael S.; Rempe, Susan B.

    Dengue virus is coated by an icosahedral shell of 90 envelope protein dimers that convert to trimers at low pH and promote fusion of its membrane with the membrane of the host endosome. We provide the first estimates for the free energy barrier and minimum for two key steps in this process: host membrane bending and protein–membrane binding. Both are studied using complementary membrane elastic, continuum electrostatics and all-atom molecular dynamics simulations. The predicted host membrane bending required to form an initial fusion stalk presents a 22–30 kcal/mol free energy barrier according to a constrained membrane elastic model. Combined continuum and molecular dynamics results predict a 15 kcal/mol free energy decrease on binding of each trimer of dengue envelope protein to a membrane with 30% anionic phosphatidylglycerol lipid. The bending cost depends on the preferred curvature of the lipids composing the host membrane leaflets, while the free energy gained for protein binding depends on the surface charge density of the host membrane. The fusion loop of the envelope protein inserts exactly at the level of the interface between the membrane's hydrophobic and head-group regions. As a result, the methods used in this work provide a means for further characterization of the structures and free energies of protein-assisted membrane fusion.

  5. An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.

    PubMed

    Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco

    2017-04-01

    In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
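
    The concave surrogate for the zero norm can be sketched generically: replace ||x||_0 with sum_i (1 - exp(-alpha*|x_i|)) and penalize a network-inversion residual. network_mismatch is a hypothetical stand-in for that residual, and a derivative-free solver is used only to keep the sketch short; it is not the authors' solver.

      import numpy as np
      from scipy.optimize import minimize

      def feature_scores(network_mismatch, x0, alpha=5.0, lam=10.0):
          # Smooth concave approximation of the zero norm of x.
          def surrogate(x):
              return np.sum(1.0 - np.exp(-alpha * np.abs(x)))
          # Penalized objective: sparsity surrogate plus inversion residual.
          def objective(x):
              return surrogate(x) + lam * network_mismatch(x)
          res = minimize(objective, x0, method="Nelder-Mead")
          return np.abs(res.x)  # larger magnitudes suggest more relevant features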

  6. Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells

    NASA Astrophysics Data System (ADS)

    Spivey, Benjamin James

    2011-07-01

    Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.

  7. RIGHT-SIZED OR DOWNSIZED: COMPARISON OF COSTS VERSUS CAPABILITY TO MAINTAIN INTRA-THEATER AIRLIFT

    DTIC Science & Technology

    2015-12-01

    AU/ACSC/2015 AIR COMMAND AND STAFF COLLEGE DISTANCE LEARNING AIR UNIVERSITY RIGHT-SIZED OR DOWNSIZED: COMPARISON OF COSTS VERSUS...aircraft and “right-sizing” units will allow the Air Force to realize the cost efficiencies it is hoping to gain. Balancing this with maintaining...minimum of two aircraft available for surge capabilities are necessary considerations when determining the “right-size” of a unit. Possessing too

  8. Design for minimum energy in interstellar communication

    NASA Astrophysics Data System (ADS)

    Messerschmitt, David G.

    2015-02-01

    Microwave digital communication at interstellar distances is the foundation of extraterrestrial-civilization (SETI and METI) communication of information-bearing signals. Large distances demand large transmitted power and/or large antennas, while the propagation is transparent over a wide bandwidth. Recognizing a fundamental tradeoff, reducing the energy delivered to the receiver at the expense of wide bandwidth (the opposite of terrestrial objectives) is advantageous. Wide bandwidth also results in simpler design and implementation, allowing circumvention of dispersion and scattering arising in the interstellar medium and motion effects and obviating any related processing. The minimum energy delivered to the receiver per bit of information is determined by the cosmic microwave background alone. By mapping a single bit onto a carrier burst, the Morse code invented for the telegraph in 1836 comes closer to this minimum energy than approaches used in modern terrestrial radio. Whereas the terrestrial approach of adding phases and amplitudes increases information capacity while minimizing bandwidth, adding multiple time-frequency locations for carrier bursts increases capacity while minimizing energy per information bit. The resulting location code is simple and yet can approach the minimum energy as bandwidth is expanded. It is consistent with easy discovery, since carrier bursts are energetic and straightforward modifications to post-detection pattern recognition can identify burst patterns. Time and frequency coherence constraints leading to simple signal discovery are addressed, and observations of the interstellar medium by transmitter and receiver constrain the burst parameters and limit the search scope.
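
    The cosmic-background limit invoked here follows from the unbounded-bandwidth capacity result: the received energy per bit must satisfy E_b >= N_0 ln 2, with noise spectral density N_0 = k_B * T. A quick evaluation at the CMB temperature (a back-of-the-envelope check, not a figure from the paper):

      import math

      k_B = 1.380649e-23   # Boltzmann constant, J/K
      T_cmb = 2.725        # cosmic microwave background temperature, K

      E_b_min = k_B * T_cmb * math.log(2)   # minimum received energy per bit
      print(f"E_b,min = {E_b_min:.2e} J")   # ~2.6e-23 J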

  9. Effect of load introduction on graphite epoxy compression specimens

    NASA Technical Reports Server (NTRS)

    Reiss, R.; Yao, T. M.

    1981-01-01

    Compression testing of modern composite materials is affected by the manner in which the compressive load is introduced. Two such effects are investigated: (1) the constrained edge effect which prevents transverse expansion and is common to all compression testing in which the specimen is gripped in the fixture; and (2) nonuniform gripping which induces bending into the specimen. An analytical model capable of quantifying these foregoing effects was developed which is based upon the principle of minimum complementary energy. For pure compression, the stresses are approximated by Fourier series. For pure bending, the stresses are approximated by Legendre polynomials.

  10. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the current process does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.
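
    A hedged sketch of the simultaneous formulation: minimize the summed burn magnitudes subject to all trajectory constraints at once, posed for a generic gradient-based solver. trajectory_residuals is a hypothetical function that propagates the state through every burn and returns the constraint violations; it stands in for the ISS trajectory model, which is not reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      def plan_burns(dv0, trajectory_residuals):
          # Total delta-v: sum of the magnitudes of all burn vectors.
          def total_dv(dv_flat):
              return np.linalg.norm(dv_flat.reshape(-1, 3), axis=1).sum()
          res = minimize(total_dv, dv0.ravel(), method="SLSQP",
                         constraints=[{"type": "eq", "fun": trajectory_residuals}])
          return res.x.reshape(-1, 3)   # one row per burn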

  11. Minimum impulse transfers to rotate the line of apsides

    NASA Technical Reports Server (NTRS)

    Phong, Connie; Sweetser, Theodore H.

    2005-01-01

    Transfer between two coplanar orbits can be accomplished via a single impulse if the two orbits intersect. Optimization of a single-impulse transfer, however, is not possible since the transfer orbit is completely constrained by the initial and final orbits. On the other hand, two-impulse transfers are possible between any two terminal orbits. While optimal scenarios are not known for the general two-impulse case, there are various approximate solutions to many special cases. We consider the problem of an in-plane rotation of the line of apsides, leaving the size and shape of the orbit unaffected.

  12. On the functional optimization of a certain class of nonstationary spatial functions

    USGS Publications Warehouse

    Christakos, G.; Paraskevopoulos, P.N.

    1987-01-01

    Procedures are developed in order to obtain optimal estimates of linear functionals for a wide class of nonstationary spatial functions. These procedures rely on well-established constrained minimum-norm criteria, and are applicable to multidimensional phenomena which are characterized by the so-called hypothesis of inherentity. The latter requires elimination of the polynomial, trend-related components of the spatial function leading to stationary quantities, and also it generates some interesting mathematics within the context of modelling and optimization in several dimensions. The arguments are illustrated using various examples, and a case study is computed in detail. © 1987 Plenum Publishing Corporation.

  13. Cosmic 21 cm delensing of microwave background polarization and the minimum detectable energy scale of inflation.

    PubMed

    Sigurdson, Kris; Cooray, Asantha

    2005-11-18

    We propose a new method for removing gravitational lensing from maps of cosmic microwave background (CMB) polarization anisotropies. Using observations of anisotropies or structures in the cosmic 21 cm radiation, emitted or absorbed by neutral hydrogen atoms at redshifts 10 to 200, the CMB can be delensed. We find this method could allow CMB experiments to have increased sensitivity to a background of inflationary gravitational waves (IGWs) compared to methods relying on the CMB alone and may constrain models of inflation which were heretofore considered to have undetectable IGW amplitudes.

  14. Beamforming approaches for untethered, ultrasonic neural dust motes for cortical recording: a simulation study.

    PubMed

    Bertrand, Alexander; Seo, Dongjin; Maksimovic, Filip; Carmena, Jose M; Maharbiz, Michel M; Alon, Elad; Rabaey, Jan M

    2014-01-01

    In this paper, we examine the use of beamforming techniques to interrogate a multitude of neural implants in a distributed, ultrasound-based intra-cortical recording platform known as Neural Dust. We propose a general framework to analyze system design tradeoffs in the ultrasonic beamformer that extracts neural signals from modulated ultrasound waves that are backscattered by free-floating neural dust (ND) motes. Simulations indicate that high-resolution linearly-constrained minimum variance beamforming sufficiently suppresses interference from unselected ND motes and can be incorporated into the ND-based cortical recording system.
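
    The linearly-constrained minimum variance (LCMV) beamformer has a standard closed form, sketched below; R is the (estimated) covariance of the received array data and C stacks the steering vectors of the selected and interfering motes. Variable names are illustrative, not from the paper.

      import numpy as np

      def lcmv_weights(R, C, f):
          # Solve: minimize w^H R w  subject to  C^H w = f.
          Ri_C = np.linalg.solve(R, C)                        # R^{-1} C
          return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)

      # Usage idea: f = [1, 0, ..., 0] places unit gain on the selected
      # ND mote and nulls on the interfering motes.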

  15. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC); the other is the power economic dispatch (ED) for each unit. An accurate solution of the ED is especially important for enhancing the efficiency of the solution to SCUC for fixed feasible unit states. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, based on linear programming and a piecewise linear approximation to the nonlinear convex fuel cost functions, are proposed for solving the ED. Numerical testing results show that the methods are effective and efficient.
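
    The two proposed methods are not specified here, but the underlying piecewise linearization can be illustrated with a generic LP: cut each unit's convex fuel-cost curve into segments and dispatch segment powers; because incremental costs increase along a convex curve, the LP fills segments in the correct order automatically. Data and names below are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      def piecewise_linear_ed(breakpoints, costs, demand):
          # breakpoints[u], costs[u]: sampled points of unit u's cost curve.
          slopes, caps = [], []
          for bp, c in zip(breakpoints, costs):
              seg = np.diff(bp)
              slopes.extend(np.diff(c) / seg)   # incremental cost per segment
              caps.extend(seg)                  # segment capacities
          p_min = sum(bp[0] for bp in breakpoints)
          # Segment powers must add up to the demand above minimum outputs.
          res = linprog(c=slopes, A_eq=np.ones((1, len(slopes))),
                        b_eq=[demand - p_min],
                        bounds=[(0.0, cap) for cap in caps], method="highs")
          return res   # res.x holds segment powers; sum per unit gives dispatch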

  16. Manpower Mix for Health Services

    PubMed Central

    Shuman, Larry J.; Young, John P.; Naddor, Eliezer

    1971-01-01

    A model is formulated to determine the mix of manpower and technology needed to provide health services of acceptable quality at a minimum total cost to the community. Total costs include both the direct costs associated with providing the services and with developing additional manpower and the indirect costs (shortage costs) resulting from not providing needed services. The model is applied to a hypothetical neighborhood health center, and its sensitivity to alternative policies is investigated by cost-benefit analyses. Possible extensions of the model to include dynamic elements in health delivery systems are discussed, as is its adaptation for use in hospital planning, with a changed objective function. PMID:5095652

  17. Energy conversion approaches and materials for high-efficiency photovoltaics.

    PubMed

    Green, Martin A; Bremner, Stephen P

    2016-12-20

    The past five years have seen significant cost reductions in photovoltaics and a correspondingly strong increase in uptake, with photovoltaics now positioned to provide one of the lowest-cost options for future electricity generation. What is becoming clear as the industry develops is that area-related costs, such as costs of encapsulation and field-installation, are increasingly important components of the total costs of photovoltaic electricity generation, with this trend expected to continue. Improved energy-conversion efficiency directly reduces such costs, with increased manufacturing volume likely to drive down the additional costs associated with implementing higher efficiencies. This suggests the industry will evolve beyond the standard single-junction solar cells that currently dominate commercial production, where energy-conversion efficiencies are fundamentally constrained by Shockley-Queisser limits to practical values below 30%. This Review assesses the overall prospects for a range of approaches that can potentially exceed these limits, based on ultimate efficiency prospects, material requirements and developmental outlook.

  18. Controlling Healthcare Costs: Just Cost Effectiveness or "Just" Cost Effectiveness?

    PubMed

    Fleck, Leonard M

    2018-04-01

    Meeting healthcare needs is a matter of social justice. Healthcare needs are virtually limitless; however, resources, such as money, for meeting those needs, are limited. How then should we (just and caring citizens and policymakers in such a society) decide which needs must be met as a matter of justice with those limited resources? One reasonable response would be that we should use cost effectiveness as our primary criterion for making those choices. This article argues instead that cost-effectiveness considerations must be constrained by considerations of healthcare justice. The goal of this article will be to provide a preliminary account of how we might distinguish just from unjust or insufficiently just applications of cost-effectiveness analysis to some healthcare rationing problems; specifically, problems related to extraordinarily expensive targeted cancer therapies. Unconstrained compassionate appeals for resources for the medically least well-off cancer patients will be neither just nor cost effective.

  19. Global solar wind variations over the last four centuries.

    PubMed

    Owens, M J; Lockwood, M; Riley, P

    2017-01-31

    The most recent "grand minimum" of solar activity, the Maunder minimum (MM, 1650-1710), is of great interest both for understanding the solar dynamo and providing insight into possible future heliospheric conditions. Here, we use nearly 30 years of output from a data-constrained magnetohydrodynamic model of the solar corona to calibrate heliospheric reconstructions based solely on sunspot observations. Using these empirical relations, we produce the first quantitative estimate of global solar wind variations over the last 400 years. Relative to the modern era, the MM shows a factor 2 reduction in near-Earth heliospheric magnetic field strength and solar wind speed, and up to a factor 4 increase in solar wind Mach number. Thus solar wind energy input into the Earth's magnetosphere was reduced, resulting in a more Jupiter-like system, in agreement with the dearth of auroral reports from the time. The global heliosphere was both smaller and more symmetric under MM conditions, which has implications for the interpretation of cosmogenic radionuclide data and resulting total solar irradiance estimates during grand minima.

  20. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points that guarantee the existence of minima, and finally, the computation of minima via bound-constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability, and validity are demonstrated on a variety of microwave applications.

  1. A closed-form trim solution yielding minimum trim drag for airplanes with multiple longitudinal-control effectors

    NASA Technical Reports Server (NTRS)

    Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.

    1989-01-01

    Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
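
    The Lagrange formulation behind such a closed-form trim solution can be sketched as a small equality-constrained quadratic program: with x the loads on the redundant effectors, minimize a quadratic drag model x^T D x subject to linear trim equations A x = b (total lift and zero pitching moment). This is a generic KKT solve under those assumptions, not the specific LOTS equations.

      import numpy as np

      def optimum_trim_loads(D, A, b):
          # KKT system for: min x^T D x  s.t.  A x = b.
          n, m = D.shape[0], A.shape[0]
          kkt = np.block([[2.0 * D, A.T],
                          [A, np.zeros((m, m))]])
          rhs = np.concatenate([np.zeros(n), b])
          sol = np.linalg.solve(kkt, rhs)
          return sol[:n]   # effector loads; sol[n:] are the Lagrange multipliers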

  2. Fitness costs of animal medication: antiparasitic plant chemicals reduce fitness of monarch butterfly hosts.

    PubMed

    Tao, Leiling; Hoang, Kevin M; Hunter, Mark D; de Roode, Jacobus C

    2016-09-01

    The emerging field of ecological immunology demonstrates that allocation by hosts to immune defence against parasites is constrained by the costs of those defences. However, the costs of non-immunological defences, which are important alternatives to canonical immune systems, are less well characterized. Estimating such costs is essential for our understanding of the ecology and evolution of alternative host defence strategies. Many animals have evolved medication behaviours, whereby they use antiparasitic compounds from their environment to protect themselves or their kin from parasitism. Documenting the costs of medication behaviours is complicated by natural variation in the medicinal components of diets and their covariance with other dietary components, such as macronutrients. In the current study, we explore the costs of the usage of antiparasitic compounds in monarch butterflies (Danaus plexippus), using natural variation in concentrations of antiparasitic compounds among plants. Upon infection by their specialist protozoan parasite Ophryocystis elektroscirrha, monarch butterflies can selectively oviposit on milkweed with high foliar concentrations of cardenolides, secondary chemicals that reduce parasite growth. Here, we show that these antiparasitic cardenolides can also impose significant costs on both uninfected and infected butterflies. Among eight milkweed species that vary substantially in their foliar cardenolide concentration and composition, we observed the opposing effects of cardenolides on monarch fitness traits. While high foliar cardenolide concentrations increased the tolerance of monarch butterflies to infection, they reduced the survival rate of caterpillars to adulthood. Additionally, although non-polar cardenolide compounds decreased the spore load of infected butterflies, they also reduced the life span of uninfected butterflies, resulting in a hump-shaped curve between cardenolide non-polarity and the life span of infected butterflies. Overall, our results suggest that the use of antiparasitic compounds carries substantial costs, which could constrain host investment in medication behaviours. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.

  3. Option pricing: a flexible tool to disseminate shared savings contracts.

    PubMed

    Friedberg, Mark W; Buendia, Anthony M; Lauderdale, Katherine E; Hussey, Peter S

    2013-08-01

    Due to volatility in healthcare costs, shared savings contracts can create systematic financial losses for payers, especially when contracting with smaller providers. To improve the business case for shared savings, we calculated the prices of financial options that payers can "sell" to providers to offset these losses. Using 2009 to 2010 member-level total cost of care data from a large commercial health plan, we calculated option prices by applying a bootstrap simulation procedure. We repeated these simulations for providers of sizes ranging from 500 to 60,000 patients and for shared savings contracts with and without key design features (minimum savings thresholds, bonus caps, cost outlier truncation, and downside risk) and under assumptions of zero, 1%, and 2% real cost reductions due to the shared savings contracts. Assuming no real cost reduction and a 50% shared savings rate, per patient option prices ranged from $225 (3.1% of overall costs) for 500-patient providers to $23 (0.3%) for 60,000-patient providers. Introducing minimum savings thresholds, bonus caps, cost outlier truncation, and downside risk reduced these option prices. Option prices were highly sensitive to the magnitude of real cost reductions. If shared savings contracts cause 2% reductions in total costs, option prices fall to zero for all but the smallest providers. Calculating the prices of financial options that protect payers and providers from downside risk can inject flexibility into shared savings contracts, extend such contracts to smaller providers, and clarify the tradeoffs between different contract designs, potentially speeding the dissemination of shared savings.
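
    The bootstrap pricing idea can be sketched generically: resample member-level annual costs into panels of a given size, compute the bonus a payer would owe whenever sampled costs fall below the spending target by chance alone, and average over simulations. All names and parameters are illustrative, not the study's inputs.

      import numpy as np

      def shared_savings_option_price(member_costs, panel_size, target,
                                      share=0.5, n_sims=10_000, seed=0):
          rng = np.random.default_rng(seed)
          payouts = np.empty(n_sims)
          for s in range(n_sims):
              panel = rng.choice(member_costs, size=panel_size, replace=True)
              savings = target - panel.mean()
              payouts[s] = share * max(savings, 0.0)  # bonus paid only on "savings"
          return payouts.mean()   # fair per-patient price of the payer's downside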

  4. Satellite power system (SPS) concept definition study. Volume 3: Experimental verification definition

    NASA Technical Reports Server (NTRS)

    Hanley, G. M.

    1980-01-01

    An evolutionary Satellite Power Systems development plan was prepared. Planning analysis was directed toward the evolution of a scenario that met the stated objectives, was technically possible and economically attractive, and took into account constraining considerations, such as requirements for very large scale end-to-end demonstration in a compressed time frame, the relative cost/technical merits of ground testing versus space testing, and the need for large mass flow capability to low Earth orbit and geosynchronous orbit at reasonable cost per pound.

  5. A Goal Programming/Constrained Regression Review of the Bell System Breakup.

    DTIC Science & Technology

    1985-05-01

    characteristically employ. 2. MULTI-PRODUCT COST MODEL AND DATA DETAILS When technical efficiency (i.e. zero waste) can be assumed...assuming, but we believe that it was probably technical (= zero waste) efficiency by virtue of the following reasons. Scale efficiency was a

  6. A PRIOR EVALUATION OF TWO-STAGE CLUSTER SAMPLING FOR ACCURACY ASSESSMENT OF LARGE-AREA LAND-COVER MAPS

    EPA Science Inventory

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...

  7. Blueberry producers' attitudes toward harvest mechanization for fresh market

    USDA-ARS?s Scientific Manuscript database

    The availability and cost of agricultural labor is constraining the specialty crop industry throughout the United States. Most soft fruits destined for the fresh market are fragile and must be hand harvested to maintain optimal quality and postharvest longevity. However, due to labor shortages, ma...

  8. APEX Model Simulation for Row Crop Watersheds with Agroforestry and Grass Buffers

    USDA-ARS?s Scientific Manuscript database

    Watershed model simulation has become an important tool in studying ways and means to reduce transport of agricultural pollutants. Conducting field experiments to assess buffer influences on water quality are constrained by the large-scale nature of watersheds, high experimental costs, private owner...

  9. Multidimensionally constrained relativistic mean-field study of triple-humped barriers in actinides

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; Lu, Bing-Nan; Vretenar, Dario; Zhao, En-Guang; Zhou, Shan-Gui

    2015-01-01

    Background: Potential energy surfaces (PES's) of actinide nuclei are characterized by a two-humped barrier structure. At large deformations beyond the second barrier, the occurrence of a third barrier was predicted by macroscopic-microscopic model calculations in the 1970s, but contradictory results were later reported by a number of studies that used different methods. Purpose: Triple-humped barriers in actinide nuclei are investigated in the framework of covariant density functional theory (CDFT). Methods: Calculations are performed using the multidimensionally constrained relativistic mean field (MDC-RMF) model, with the nonlinear point-coupling functional PC-PK1 and the density-dependent meson exchange functional DD-ME2 in the particle-hole channel. Pairing correlations are treated in the BCS approximation with a separable pairing force of finite range. Results: Two-dimensional PES's of 226,228,230,232Th and 232,235,236,238U are mapped and the third minima on these surfaces are located. Then one-dimensional potential energy curves along the fission path are analyzed in detail and the energies of the second barrier, the third minimum, and the third barrier are determined. The functional DD-ME2 predicts the occurrence of a third barrier in all Th nuclei and 238U. The third minima in 230,232Th are very shallow, whereas those in 226,228Th and 238U are quite prominent. With the functional PC-PK1 a third barrier is found only in 226,228,230Th. Single-nucleon levels around the Fermi surface are analyzed in 226Th, and it is found that the formation of the third minimum is mainly due to the Z = 90 proton energy gap at β20 ≈ 1.5 and β30 ≈ 0.7. Conclusions: The possible occurrence of a third barrier on the PES's of actinide nuclei depends on the effective interaction used in multidimensional CDFT calculations. More pronounced minima are predicted by the DD-ME2 functional, as compared to the functional PC-PK1. The depth of the third well in Th isotopes decreases with increasing neutron number. The origin of the third minimum is due to the proton Z = 90 shell gap at relevant deformations.

  10. Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.

    PubMed

    Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews

    2015-03-01

    This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.

  11. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to thermal hyperspectral data and eight pixel-based classifiers, i.e., constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter, are tested. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. The LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, pixels are grouped into objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.

  12. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models, a chance-constrained ground-water management model and an integer-programming sampling network design model, to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information, i.e., the projected reduction in management costs, with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.

  13. Theoretical study of network design methodologies for the aerial relay system. [energy consumption and air traffic control

    NASA Technical Reports Server (NTRS)

    Rivera, J. M.; Simpson, R. W.

    1980-01-01

    The aerial relay system network design problem is discussed. A generalized branch and bound based algorithm is developed which can consider a variety of optimization criteria, such as minimum passenger travel time and minimum liner and feeder operating costs. The algorithm, although efficient, is practical only for small networks, because its computation time increases exponentially with the number of variables.

  14. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268

  15. Experiment module concepts study. Volume 1: Management summary

    NASA Technical Reports Server (NTRS)

    1970-01-01

    The minimum number of standardized (common) module concepts that will satisfy the experiment program for manned space stations at least cost is investigated. The module interfaces with other elements such as the space shuttle, ground stations, and the experiments themselves are defined. The total experiment module program resource and test requirements are also considered. The minimum number of common module concepts that will satisfy the program at least cost is found to be three, plus a propulsion slice and certain experiment-peculiar integration hardware. The experiment modules rely on the space station for operational, maintenance, and logistic support. They are compatible with both expendable and shuttle launch vehicles, and with servicing by shuttle, tug, or directly from the space station. A total experiment module program cost of approximately $2319M under the study assumptions is indicated. This total is made up of $838M for experiment module development and production, $806M for experiment equipment, and $675M for interface hardware, experiment integration, launch and flight operations, and program management and support.

  16. Methodology for the optimal design of an integrated first and second generation ethanol production plant combined with power cogeneration.

    PubMed

    Bechara, Rami; Gomez, Adrien; Saint-Antonin, Valérie; Schweitzer, Jean-Marc; Maréchal, François

    2016-08-01

    The application of methodologies for the optimal design of integrated processes has seen increased interest in the literature. This article builds on previous works and applies a systematic methodology to an integrated first and second generation ethanol production plant with power cogeneration. The methodology comprises process simulation, heat integration, thermo-economic evaluation, exergy efficiency vs. capital cost analysis, multi-variable evolutionary optimization, and process selection via profitability maximization. Optimization generated Pareto solutions with exergy efficiency ranging between 39.2% and 44.4% and capital costs from 210M$ to 390M$. The Net Present Value was positive for only two scenarios, at low-efficiency, low-hydrolysis points. The minimum cellulosic ethanol selling price was sought to obtain a maximum NPV of zero for high-efficiency, high-hydrolysis alternatives. The obtained optimal configuration presented maximum exergy efficiency, hydrolyzed bagasse fraction, capital costs and ethanol production rate, and minimum cooling water consumption and power production rate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Use of the Collaborative Optimization Architecture for Launch Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Moore, A. A.; Kroo, I. M.

    1996-01-01

    Collaborative optimization is a new design architecture specifically created for large-scale distributed-analysis applications. In this approach, the problem is decomposed into a user-defined number of subspace optimization problems that are driven towards interdisciplinary compatibility and the appropriate solution by a system-level coordination process. This decentralized design strategy allows domain-specific issues to be accommodated by disciplinary analysts, while requiring interdisciplinary decisions to be reached by consensus. The present investigation focuses on application of the collaborative optimization architecture to the multidisciplinary design of a single-stage-to-orbit launch vehicle. Vehicle design, trajectory, and cost issues are directly modeled. Posed to suit the collaborative architecture, the design problem is characterized by 5 design variables and 16 constraints. Numerous collaborative solutions are obtained. Comparison of these solutions demonstrates the influence which an a priori ascent-abort criterion has on development cost. Similarly, objective-function selection is discussed, demonstrating the difference between minimum weight and minimum cost concepts. The operational advantages of the collaborative optimization

  18. Developing the concept of a geostationary platform. [for communication services

    NASA Technical Reports Server (NTRS)

    Carey, W. T.; Bowman, R. M.; Stone, G. R.

    1980-01-01

    A geostationary platform concept with a proliferation of low-cost earth stations is discussed. Candidate platform concepts, servicing, life, and Orbital Transfer Vehicle (OTV) options are considered. A Life Cycle Costing model is used to select the minimum cost concept meeting program criteria. It is concluded that the geostationary platform concept is a practical and economical approach to providing expanding communication services within the limitations imposed by the available frequency spectrum and orbital arc.

  19. An Observationally Constrained Evaluation of the Oxidative Capacity in the Tropical Western Pacific Troposphere

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Anderson, Daniel C.; Canty, Timothy P.; Salawitch, Ross J.; Wolfe, Glenn M.; Apel, Eric C.; Arnold, Steve R.; Atlas, Elliot L.; Blake, Nicola J.; Bresch, James F.

    2016-01-01

    Hydroxyl radical (OH) is the main daytime oxidant in the troposphere and determines the atmospheric lifetimes of many compounds. We use aircraft measurements of O3, H2O, NO, and other species from the Convective Transport of Active Species in the Tropics (CONTRAST) field campaign, which occurred in the tropical western Pacific (TWP) during January-February 2014, to constrain a photochemical box model and estimate concentrations of OH throughout the troposphere. We find that tropospheric column OH (OHCOL) inferred from CONTRAST observations is 12 to 40% higher than found in chemical transport models (CTMs), including CAM-chem-SD run with 2014 meteorology as well as eight models that participated in POLMIP (2008 meteorology). Part of this discrepancy is due to a clear-sky sampling bias that affects CONTRAST observations; accounting for this bias and also for a small difference in chemical mechanism results in our empirically based value of OHCOL being 0 to 20% larger than found within global models. While these global models simulate observed O3 reasonably well, they underestimate NOx (NO + NO2) by a factor of 2, resulting in OHCOL approx. 30% lower than box model simulations constrained by observed NO. Underestimations by CTMs of observed CH3CHO throughout the troposphere and of HCHO in the upper troposphere further contribute to differences between our constrained estimates of OH and those calculated by CTMs. Finally, our calculations do not support the prior suggestion of the existence of a tropospheric OH minimum in the TWP, because during January-February 2014 observed levels of O3 and NO were considerably larger than previously reported values in the TWP.

  20. Cost-Effectiveness of Helicopter Versus Ground Emergency Medical Services for Trauma Scene Transport in the United States

    PubMed Central

    Delgado, M. Kit; Staudenmayer, Kristan L.; Wang, N. Ewen; Spain, David A.; Weir, Sharada; Owens, Douglas K.; Goldhaber-Fiebert, Jeremy D.

    2014-01-01

    Objective We determined the minimum mortality reduction that helicopter emergency medical services (HEMS) should provide relative to ground EMS for the scene transport of trauma victims to offset higher costs, inherent transport risks, and inevitable overtriage of minor injury patients. Methods We developed a decision-analytic model to compare the costs and outcomes of helicopter versus ground EMS transport to a trauma center from a societal perspective over a patient's lifetime. We determined the mortality reduction needed to make helicopter transport cost less than $100,000 and $50,000 per quality adjusted life year (QALY) gained compared to ground EMS. Model inputs were derived from the National Study on the Costs and Outcomes of Trauma (NSCOT), National Trauma Data Bank, Medicare reimbursements, and literature. We assessed robustness with probabilistic sensitivity analyses. Results HEMS must provide a minimum of a 17% relative risk reduction in mortality (1.6 lives saved/100 patients with the mean characteristics of the NSCOT cohort) to cost less than $100,000 per QALY gained and a reduction of at least 33% (3.7 lives saved/100 patients) to cost less than $50,000 per QALY. HEMS becomes more cost-effective with significant reductions in minor injury patients triaged to air transport or if long-term disability outcomes are improved. Conclusions HEMS needs to provide at least a 17% mortality reduction or a measurable improvement in long-term disability to compare favorably to other interventions considered cost-effective. Given current evidence, it is not clear that HEMS achieves this mortality or disability reduction. Reducing overtriage of minor injury patients to HEMS would improve its cost-effectiveness. PMID:23582619
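
    The threshold logic can be reproduced in outline: at a willingness-to-pay (WTP) threshold, the incremental cost per transported patient must be covered by the QALYs gained from lives saved. The inputs below are illustrative placeholders, not the NSCOT-derived values used in the study.

      def min_relative_mortality_reduction(incremental_cost, qalys_per_life,
                                           baseline_mortality, wtp):
          # Lives that must be saved per patient so that
          # incremental_cost / (lives * qalys_per_life) <= wtp.
          lives_needed = incremental_cost / (wtp * qalys_per_life)
          return lives_needed / baseline_mortality

      # Illustrative: $10,000 extra per transport, 12 QALYs per life saved,
      # 6% baseline mortality, $100,000/QALY threshold -> ~14% reduction.
      print(min_relative_mortality_reduction(10_000, 12.0, 0.06, 100_000))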

  1. Rising cost of antidotes in the U.S.: cost comparison from 2010 to 2015.

    PubMed

    Heindel, Gregory A; Trella, Jeanette D; Osterhoudt, Kevin C

    2017-06-01

    Our poison control center observed a large increase in the cost of many antidotes over the past several years. The high cost of antidotes has previously been cited as a factor leading to inadequate antidote supply at some hospitals. Continued increases in the cost of antidotes may lead to further reductions in antidote supply and represent serious concerns. This research aims to quantify recent trends in the costs of antidotes in the U.S. Antidotes and minimum stocking recommendations were retrieved from published guidelines. RED BOOK Online® was used to identify the U.S. average wholesale price (AWP) of each antidote in 2010 and 2015. The AWP in 2010 was adjusted using the U.S. Consumer Price Index to adjust for inflation. The cost of minimum stocking levels for each antidote was calculated and compared between the year 2010 and 2015. The cost of stocking many antidotes demonstrated a large increase in AWP from 2010 to 2015. Of the antidotes evaluated, 15 out of 33 had greater than 50% increase in AWP and 8 out of 33 had greater than $1000 increase in AWP. Only four antidotes demonstrated decreases in AWP greater than 10% and only one antidote had its cost of stocking decrease in AWP by more than $1000. The price increase over the last 5 years may further hinder the willingness of hospitals to stock recommended antidotes at adequate quantities. This may impede timely treatment of patients, and negatively impact poisoning outcomes. The price of many antidotes substantially increased in the United States from 2010 to 2015. Strategies should be investigated to help decrease the cost associated with stocking and use of antidotes, including dose rounding, consignment, and regional sharing.

  2. Automated array assembly task, phase 1

    NASA Technical Reports Server (NTRS)

    Carbajal, B. G.

    1977-01-01

    An assessment of state-of-the-art technologies that are applicable to silicon solar cell and solar cell module fabrication is provided. The assessment consists of a technical feasibility evaluation and a cost projection for high-volume production of silicon solar cell modules. The cost projection was approached from two directions; a design-to-cost analysis assigned cost goals to each major process element in the fabrication scheme, and a cost analysis built up projected costs for alternate technologies for each process element. A technical evaluation was used in combination with the cost analysis to identify a baseline low cost process. A novel approach to metal pattern design based on minimum power loss was developed. These design equations were used as a tool in the evaluation of metallization technologies.

  3. Estimation of the full marginal costs of port related truck traffic.

    PubMed

    Berechman, Joseph

    2009-11-01

    The New York region is expected to grow by an additional 1 million people by 2020, which translates into roughly 70 million more tons of goods to be delivered annually. Due to a lack of rail capacity, trucks will haul most of this volume of freight, challenging an already much-constrained highway network. What are the total costs associated with this additional traffic, in particular congestion, safety, and emissions? Since a major source of this expected flow is the Port of New York-New Jersey, this paper focuses on the estimation of the full marginal costs of truck traffic resulting from the further expansion of the port's activities.

  4. Who Purchases Low-Cost Alcohol in Australia?

    PubMed

    Callinan, Sarah; Room, Robin; Livingston, Michael; Jiang, Heng

    2015-11-01

    Debates surrounding potential price-based policies aimed at reducing alcohol-related harms tend to focus on who would be most affected: harmful or low-income drinkers. This study investigates the characteristics of people who purchase low-cost alcohol, using data from the Australian arm of the International Alcohol Control study. 1681 Australians aged 16 and over who had consumed alcohol and purchased it in off-licence premises were asked detailed questions about both practices. Low-cost alcohol was defined using cut-points of 80¢, $1.00 or $1.25 per Australian standard drink. With a $1.00 cut-off, low-income (OR = 2.1) and heavy drinkers (OR = 1.7) were more likely to purchase any low-cost alcohol. Harmful drinkers purchased more, and low-income drinkers less, alcohol priced at less than $1.00 per drink than high-income and moderate drinkers respectively. The relationship between the proportion of units purchased at low cost and both drinker category and income is less clear, with hazardous, but not harmful, drinkers purchasing a lower proportion of units at low cost than moderate drinkers. The impact of minimum pricing on low-income and harmful drinkers will depend on whether the proportion or total quantity of all alcohol purchased at low cost is considered. Based on absolute units of alcohol, minimum unit pricing could be differentially effective for heavier drinkers compared to other drinkers, particularly for young males. © The Author 2015. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  5. Assessment of Energy Savings Potential from the Use of Demand Control Ventilation Systems in General Office Spaces in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Fisk, William J.

    2009-07-08

    Demand controlled ventilation (DCV) was evaluated for general office spaces in California. A medium size office building meeting the prescriptive requirements of the 2008 California building energy efficiency standards (CEC 2008) was assumed in the building energy simulations performed with the EnergyPlus program to calculate the DCV energy savings potential in five typical California climates. Three design occupancy densities and two minimum ventilation rates were used as model inputs to cover a broader range of design variations. The assumed values of minimum ventilation rates in offices without DCV, based on two different measurement methods, were 81 and 28 cfm per occupant. These rates are based on the co-author's unpublished analyses of data from EPA's survey of 100 U.S. office buildings. These minimum ventilation rates exceed the 15 to 20 cfm per person required in most ventilation standards for offices. The cost effectiveness of applying DCV in general office spaces was estimated via a life cycle cost analysis that considered system costs and energy cost reductions. The results of the energy modeling indicate that the energy savings potential of DCV is largest in the desert area of California (climate zone 14), followed by the Mountains (climate zone 16), Central Valley (climate zone 12), North Coast (climate zone 3), and South Coast (climate zone 6). The results of the life cycle cost analysis show DCV is cost effective for office spaces if the typical minimum ventilation rate without DCV is 81 cfm per person, except at the low design occupancy of 10 people per 1000 ft² in climate zones 3 and 6. At the low design occupancy of 10 people per 1000 ft², the greatest DCV life cycle cost savings is a net present value (NPV) of $0.52/ft² in climate zone 14, followed by $0.32/ft² in climate zone 16 and $0.19/ft² in climate zone 12. At the medium design occupancy of 15 people per 1000 ft², the DCV savings are higher, with an NPV of $0.93/ft² in climate zone 14, followed by $0.55/ft² in climate zone 16, $0.46/ft² in climate zone 12, $0.30/ft² in climate zone 3, and $0.16/ft² in climate zone 6. At the high design occupancy of 20 people per 1000 ft², the DCV savings are even higher, with an NPV of $1.37/ft² in climate zone 14, followed by $0.86/ft² in climate zone 16, $0.84/ft² in climate zone 3, $0.82/ft² in climate zone 12, and $0.65/ft² in climate zone 6. DCV was not found to be cost effective if the typical minimum ventilation rate without DCV is 28 cfm per occupant, except at the high design occupancy of 20 people per 1000 ft² in climate zones 14 and 16. Until the large uncertainties about the base-case ventilation rates in offices without DCV are reduced, the case for requiring DCV in general office spaces will remain weak.
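
    The life cycle test in this kind of analysis reduces to a net-present-value comparison of annual energy-cost savings against the added DCV first cost. A minimal sketch with illustrative inputs (the discount rate, horizon, and costs are assumptions, not values from the report):

      def dcv_npv(annual_saving_per_ft2, first_cost_per_ft2, years=15, rate=0.03):
          # Present value of the annual savings over the analysis horizon.
          pv = sum(annual_saving_per_ft2 / (1.0 + rate) ** t
                   for t in range(1, years + 1))
          return pv - first_cost_per_ft2   # positive NPV -> DCV is cost effective

      # Illustrative: $0.10/ft2-yr saved against a $0.60/ft2 first cost.
      print(round(dcv_npv(0.10, 0.60), 2))   # ~0.59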

  6. Microsystems and Nanoscience for Biomedical Applications: A View to the Future

    ERIC Educational Resources Information Center

    Pilarski, Linda M.; Mehta, Michael D.; Caulfield, Timothy; Kaler, Karan V. I. S.; Backhouse, Christopher J.

    2004-01-01

    At present there is an enormous discrepancy between our nanotechnological capabilities (particularly our nanobiotechnologies), our social wisdom, and consensus on how to apply them. To date, cost considerations have greatly constrained our application of nanotechnologies. However, novel advances in microsystem platform technologies are about to…

  7. Changing Public Perceptions of Higher Ed

    ERIC Educational Resources Information Center

    Harney, John O.

    2018-01-01

    The benefits of going to college and the importance of higher education institutions were once held to be a creed as American as apple pie. But recurring state budget challenges have constrained investment. Consistently rising tuitions--fueled by increasing college costs--have alarmed many. Politics and free-speech controversies have raised…

  8. A Better Budget Rule

    ERIC Educational Resources Information Center

    Dothan, Michael; Thompson, Fred

    2009-01-01

    Debt limits, interest coverage ratios, one-off balanced budget requirements, pay-as-you-go rules, and tax and expenditure limits are among the most important fiscal rules for constraining intertemporal transfers. There is considerable evidence that the least costly and most effective of such rules are those that focus directly on the rate of…

  9. Achieving an Effective National Security Posture in an Age of Austerity

    DTIC Science & Technology

    2014-05-14

    …then the packages move through 300 miles of conveyor sorting belts. Wal-Mart and Dell distinguish themselves based on their “sense and respond”… Require “cost” as a design/military “requirement” (because cost, in a resource-constrained environment, is numbers; and, per Lanchester, numbers are…

  10. Financial Management of Canadian Universities: Adaptive Strategies to Fiscal Constraints

    ERIC Educational Resources Information Center

    Deering, Darren; Sá, Creso M.

    2014-01-01

    Decreasing government funding and regulated tuition policies have created a financially constrained environment for Canada's universities. The conventional response to such conditions is to cut programme offerings and services in an attempt to lower costs throughout the institution. However, we argue that three Canadian universities have reacted…

  11. Comprehensive Benefit Platforms to Simplify Complex HR Processes

    ERIC Educational Resources Information Center

    Ehrsam, Hank

    2012-01-01

    Paying for employee turnover costs, data storage, and multiple layers of benefits can be difficult for fiscally constrained institutions, especially as budget cuts and finance-limiting legislation abound in school districts across the country. Many traditional paper-based systems have been replaced with automated, software-based services, helping…

  12. Pooled nucleic acid testing to identify antiretroviral treatment failure during HIV infection.

    PubMed

    May, Susanne; Gamst, Anthony; Haubrich, Richard; Benson, Constance; Smith, Davey M

    2010-02-01

    Pooling strategies have been used to reduce the costs of polymerase chain reaction-based screening for acute HIV infection in populations in which the prevalence of acute infection is low (less than 1%). Only limited research has been done for conditions in which the prevalence of screening positivity is higher (greater than 1%). We present data on a variety of pooling strategies that incorporate the use of polymerase chain reaction-based quantitative measures to monitor for virologic failure among HIV-infected patients receiving antiretroviral therapy. For a prevalence of virologic failure between 1% and 25%, we demonstrate relative efficiency and accuracy of various strategies. These results could be used to choose the best strategy based on the requirements of individual laboratory and clinical settings such as required turnaround time of results and availability of resources. Virologic monitoring during antiretroviral therapy is not currently being performed in many resource-constrained settings largely because of costs. The presented pooling strategies may be used to significantly reduce the cost compared with individual testing, make such monitoring feasible, and limit the development and transmission of HIV drug resistance in resource-constrained settings. They may also be used to design efficient pooling strategies for other settings with quantitative screening measures.
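
    As a rough, hedged illustration of why pooling pays off at low prevalence and loses ground as prevalence approaches 25%, the sketch below computes the expected number of tests per specimen under classic two-stage (Dorfman) pooling: test each pool once, then retest every member of a positive pool individually. The pool sizes and prevalences are illustrative only, and the paper's strategies additionally exploit quantitative viral-load measures, which this binary sketch does not model.

        def dorfman_tests_per_specimen(prevalence, pool_size):
            # One pooled test shared by the whole pool, plus individual
            # retests of every member whenever the pool screens positive.
            p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
            return 1.0 / pool_size + p_pool_positive

        for prev in (0.01, 0.05, 0.10, 0.25):
            best_k = min(range(2, 31),
                         key=lambda k: dorfman_tests_per_specimen(prev, k))
            e = dorfman_tests_per_specimen(prev, best_k)
            print(f"prevalence {prev:.0%}: best pool size {best_k}, "
                  f"{e:.3f} tests/specimen (vs 1.0 individually)")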

  13. Planning of water resources management and pollution control for Heshui River watershed, China: A full credibility-constrained programming approach.

    PubMed

    Zhang, Y M; Huang, G; Lu, H W; He, Li

    2015-08-15

    A key issue facing integrated water resources management and water pollution control is how to address vague parametric information. A full credibility-based chance-constrained programming (FCCP) method is thus developed by introducing the concept of credibility into the modeling framework. FCCP can not only deal with fuzzy parameters appearing concurrently in the objective and on both sides of the constraints of the model, but also provides a credibility level indicating how much confidence one can place in the optimal modeling solutions. The method is applied to the Heshui River watershed in south-central China for demonstration. Results from the case study show that groundwater would make up the water shortage caused by shrinking surface water supplies and rising water demand, and that the optimized total pumpage of groundwater from both alluvial and karst aquifers would exceed 90% of its maximum allowable level when the credibility level is 0.9 or higher. The results also indicate that an increase in credibility level would induce a reduction in cost for surface water acquisition, a rise in cost for groundwater withdrawal, and negligible variation in cost for water pollution control. Copyright © 2015 Elsevier B.V. All rights reserved.
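
    The abstract does not reproduce the FCCP formulation, but a flavor of how a credibility level enters the constraints can be given with the standard deterministic equivalent for a triangular fuzzy parameter from credibility theory; the triangular shape and the supply numbers below are assumptions for illustration, and the published model also handles fuzziness in the objective.

        def credibility_rhs(r1, r2, r3, alpha):
            # Deterministic equivalent of Cr{ a.x <= xi } >= alpha, where
            # xi is a triangular fuzzy number (r1, r2, r3) and
            # alpha in [0.5, 1].  (r3 enters constraints of the opposite
            # sense; it is kept here only to carry the full triple.)
            assert r1 <= r2 <= r3 and 0.5 <= alpha <= 1.0
            return (2 * alpha - 1) * r1 + 2 * (1 - alpha) * r2

        # Hypothetical fuzzy surface-water availability of (80, 100, 120):
        for alpha in (0.6, 0.8, 0.9, 0.99):
            print(alpha, credibility_rhs(80, 100, 120, alpha))

    Raising the credibility level shrinks the usable supply (96 units at 0.6 down to 80.4 at 0.99), which mirrors the reported shift from cheap surface water toward costlier groundwater at credibility levels of 0.9 and above.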

  14. 45 CFR 155.1210 - Maintenance of records.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) of this section include, at a minimum, the following: (1) Information concerning management and..., including cash flow statements, and accounts receivable and matters pertaining to the costs of operations...

  15. System design of the Pioneer Venus spacecraft. Volume 7: Communication subsystem studies

    NASA Technical Reports Server (NTRS)

    Newlands, D. M.

    1973-01-01

    Communications subsystem tradeoffs were undertaken to establish a low-cost and low-weight design consistent with the mission requirements. Because of the weight constraint of the Thor/Delta launched configuration, minimum weight was emphasized in determining the Thor/Delta design. In contrast, because of the greatly relaxed weight constraint of the Atlas/Centaur launched configuration, minimum cost and off-the-shelf hardware were emphasized and the attendant weight penalties accepted. Communication subsystem hardware elements identified for study included probe and bus antennas (CM-6, CM-17), power amplifiers (CM-10), and the large probe transponder and small probe stable oscillator required for doppler tracking (CM-11, CM-16). In addition, particular hardware problems associated with the probe high-temperature and high-g environment were investigated (CM-7).

  16. Cost of remembering a bit of information

    NASA Astrophysics Data System (ADS)

    Chiuchiù, D.; López-Suárez, M.; Neri, I.; Diamantini, M. C.; Gammaitoni, L.

    2018-05-01

    In 1961, Landauer [R. Landauer, IBM J. Res. Develop. 5, 183 (1961), 10.1147/rd.53.0183] pointed out that resetting a binary memory requires a minimum energy of k_B T ln(2). However, once written, any memory is doomed to lose its content if no action is taken. To avoid memory losses, a refresh procedure is periodically performed. We present a theoretical model and an experiment on a microelectromechanical system to evaluate the minimum energy required to preserve one bit of information over time. Two main conclusions are drawn: (i) in principle, the energetic cost to preserve information for a fixed time duration with a given error probability can be arbitrarily reduced if the refresh procedure is performed often enough, and (ii) the Heisenberg uncertainty principle sets an upper bound on the memory lifetime.
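
    For scale, a quick back-of-envelope evaluation of the Landauer bound at an assumed room temperature of 300 K:

        import math

        k_B = 1.380649e-23          # Boltzmann constant, J/K (exact SI value)
        T = 300.0                   # assumed temperature, K
        E = k_B * T * math.log(2)   # minimum energy to reset one bit
        print(f"{E:.3e} J = {E / 1.602176634e-19:.4f} eV")   # ~2.9e-21 J, ~18 meV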

  17. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. A random source with high entropy tends to produce uniformly distributed random values. Random number generators are among the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high-entropy random number generators are hard to achieve due to hardware limitations. For random number generation in constrained devices, this work proposes a solution based on the least-significant-bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for the temperature and humidity sensors. The results show that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
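
    A minimal sketch of the least-significant-bits concatenation method and the entropy estimate it is judged by, with simulated ADC readings standing in for real sensor output (the bit counts, sample values and sensor model are assumptions, not the paper's setup):

        import math
        import random
        from collections import Counter

        def harvest_lsb_bytes(readings, bits=1):
            # Concatenate the `bits` least-significant bits of each raw
            # reading, then pack the bit stream into bytes.
            stream = [(r >> i) & 1 for r in readings for i in range(bits)]
            return [sum(b << i for i, b in enumerate(stream[j:j + 8]))
                    for j in range(0, len(stream) - 7, 8)]

        def shannon_entropy_bits_per_byte(byte_vals):
            # Empirical Shannon entropy of the byte stream (maximum 8.0).
            n = len(byte_vals)
            return -sum(c / n * math.log2(c / n)
                        for c in Counter(byte_vals).values())

        # Stand-in for noisy sensor readings around a 10-bit ADC mid-scale.
        readings = [512 + random.randint(-16, 16) for _ in range(8192)]
        print(f"{shannon_entropy_bits_per_byte(harvest_lsb_bytes(readings)):.2f} bits/byte")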

  18. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

    A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object-beam and reference-beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. The results of this approach provide higher-resolution images than the conventional Fourier filtering approach and are 25% faster than the previously reported constrained optimization approach, owing to the subtraction of two DC terms in the cost function. We demonstrate this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object, retrieving a high-resolution image free of DC and twin-image interference. We also demonstrate the potential of the technique on a transparent microelectrode patterned on indium tin oxide-coated glass by reconstructing a high-resolution quantitative phase microscope image, and by imaging yeast cells.
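
    The back-propagation step mentioned above is standard; a generic transfer-function implementation of Fresnel propagation is sketched below (the wavelength, distance and pixel pitch in the usage comment are placeholders, and this is not the authors' full DC-subtraction-plus-optimization pipeline):

        import numpy as np

        def fresnel_propagate(field, wavelength, z, dx):
            # Fresnel transfer-function propagation of a sampled complex
            # field over distance z; back-propagation uses negative z.
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
            return np.fft.ifft2(np.fft.fft2(field) * H)

        # e.g. back-propagate a recovered hologram field to the image plane:
        # image = fresnel_propagate(object_field, 632.8e-9, -0.05, 3.45e-6)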

  20. An objective method to determine the probability distribution of the minimum apparent age of a sample of radio-isotopic dates

    NASA Astrophysics Data System (ADS)

    Ickert, R. B.; Mundil, R.

    2012-12-01

    Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (time between crystallization and eruption) can be long and variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of deposition of the host rock. However, neither the extraction of that age nor the assignment of a meaningful uncertainty is straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect, if any, of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For each resample, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show its advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age: as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, A geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5.
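
    The resampling scheme described above is short enough to state in full; a sketch with hypothetical zircon dates (the ages, uncertainties and replicate count are made up for illustration):

        import numpy as np

        def bootstrap_minimum_age(ages, sigmas, n_boot=10000, seed=0):
            # Resample the dates with replacement, perturb each draw by
            # its Gaussian (1-sigma) measurement uncertainty, and record
            # the minimum of every resample.
            rng = np.random.default_rng(seed)
            ages = np.asarray(ages, float)
            sigmas = np.asarray(sigmas, float)
            idx = rng.integers(0, ages.size, size=(n_boot, ages.size))
            draws = rng.normal(ages[idx], sigmas[idx])
            return draws.min(axis=1)

        mins = bootstrap_minimum_age([252.3, 252.6, 252.9, 253.4, 254.1],
                                     [0.3, 0.2, 0.4, 0.3, 0.5])
        lo, hi = np.percentile(mins, [2.5, 97.5])
        print(f"minimum apparent age: 95% interval [{lo:.2f}, {hi:.2f}] Ma")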

  1. Optimisation modelling to assess cost of dietary improvement in remote Aboriginal Australia.

    PubMed

    Brimblecombe, Julie; Ferguson, Megan; Liberato, Selma C; O'Dea, Kerin; Riley, Malcolm

    2013-01-01

    The cost and dietary choices required to fulfil nationally defined nutrient recommendations need investigation, particularly for disadvantaged populations. We used optimisation modelling to examine the dietary change required to achieve nutrient requirements at minimum cost for an Aboriginal population in remote Australia, using, where possible, minimally processed whole foods. A twelve-month cross-section of population-level purchased food, food price and nutrient content data was used as the baseline. Relative amounts from 34 food group categories were varied to achieve specific energy and nutrient density goals at minimum cost while meeting model constraints intended to minimise deviation from the purchased diet. Simultaneous achievement of all nutrient goals was not feasible. The two most successful models (A and B) met all nutrient targets except sodium (146.2% and 148.9% of the respective target) and saturated fat (12.0% and 11.7% of energy). Model A cost 3.2% less than the baseline diet (which cost approximately AUD$13.01/person/day) and Model B 7.8% less, but with a 4.4% reduction in energy. Both models required very large reductions in sugar-sweetened beverages (-90%) and refined cereals (-90%) and an approximately four-fold increase in vegetables, fruit, dairy foods, eggs, fish and seafood, and wholegrain cereals. This modelling approach suggested population-level dietary recommendations at minimal cost based on the baseline purchased diet. Large shifts in diet are needed in remote Aboriginal Australian populations to achieve national nutrient targets. The modelling approach used was not able to meet all nutrient targets at less than current food expenditure.
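
    The optimisation used here is, at heart, a linear program: choose food-group amounts to meet nutrient floors at minimum cost. The toy instance below (hypothetical costs, two nutrients, four food groups; the paper's model has 34 groups plus energy-density and deviation constraints) shows the mechanics:

        import numpy as np
        from scipy.optimize import linprog

        costs = np.array([0.45, 0.60, 0.30, 1.20])    # $ per 100 g serving
        # Rows: protein (g), fibre (g) per serving of each food group.
        nutrients = np.array([[2.0, 8.0, 3.0, 20.0],
                              [3.5, 1.0, 2.5, 0.0]])
        floors = np.array([60.0, 30.0])               # daily targets

        # linprog minimises c.x subject to A_ub.x <= b_ub, so negate to
        # express "at least" nutrient constraints.
        res = linprog(costs, A_ub=-nutrients, b_ub=-floors,
                      bounds=[(0, 15)] * 4)           # at most 15 servings
        print(res.x, f"minimum cost ${res.fun:.2f}/day")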

  2. Analytical investigations in aircraft and spacecraft trajectory optimization and optimal guidance

    NASA Technical Reports Server (NTRS)

    Markopoulos, Nikos; Calise, Anthony J.

    1995-01-01

    A collection of analytical studies is presented related to unconstrained and constrained aircraft (a/c) energy-state modeling and to spacecraft (s/c) motion under continuous thrust. With regard to unconstrained a/c energy-state modeling, the physical origin of the singular perturbation parameter that accounts for the observed two-time-scale behavior of aircraft during energy climbs is identified and explained. With regard to constrained energy-state modeling, optimal control problems are studied involving active state-variable inequality constraints. Departing from the practical deficiencies of the control programs that result from the traditional formulations of such problems, a complete reformulation is proposed which, in contrast to the old formulation, should lead to practically useful controllers that can track an inequality constraint boundary asymptotically, even in the presence of two-sided perturbations about it. Finally, with regard to s/c motion under continuous thrust, a thrust program is proposed for which the equations of two-dimensional motion of a space vehicle in orbit, viewed as a point mass, afford an exact analytic solution. The thrust program arises, under the assumption of tangential thrust, from the costate system corresponding to minimum-fuel, power-limited, coplanar transfers between two arbitrary conics. It can be used not only with power-limited propulsion systems, but with any propulsion system capable of generating continuous thrust of controllable magnitude. For propulsion types and classes of transfers for which it is sufficiently optimal, the results of this report suggest a method of maneuvering during planetocentric or heliocentric orbital operations that requires a minimum amount of computation, making it uniquely suitable for real-time feedback guidance implementations.

  3. Pleistocene Thermocline Reconstruction and Oxygen Minimum Zone Evolution in the Maldives

    NASA Astrophysics Data System (ADS)

    Yu, S. M.; Wright, J.

    2017-12-01

    Drift deposits on the southern flank of the Kardiva Channel in the eastern Inner Sea of the Maldives provide a complete record of Pleistocene water column changes in conjunction with monsoon cyclicity and fluctuations in the current system. We sampled IODP Site 359-U1467 to reconstruct water column structure using foraminiferal stable isotope records. This unlithified lithostratigraphic unit is rich in well-preserved microfossils and has an average sedimentation rate of 3.4 cm/kyr. Marine Isotope Stages 1-6 were identified and show higher sedimentation rates, approaching 6 cm/kyr, in the interglacial sections. We present δ13C and δ18O records of planktonic and benthic foraminiferal species sampled at 3 cm intervals. Globigerinoides ruber was used to constrain surface conditions. The thermocline-dwelling species Globorotalia menardii was chosen to monitor fluctuations in the thermocline relative to the mixed layer. Lastly, the δ13C of the benthic species Cibicidoides subhaidingerii and Planulina renzi reveals changes in bottom water ventilation and the expansion of oxygen minimum zones over time. All three taxa recorded similar changes in δ18O over the glacial/interglacial cycles, which is remarkable given the large sea level change (~120 m) and the relatively shallow water depth (~450 m). There is a small increase in the δ13C gradient during the glacial intervals, which might reflect less ventilated bottom waters in the Inner Sea. This multispecies approach allows us to better constrain thermocline hydrography and suggests that changes in OMZ thickness are driven by the intensification of monsoon cycles, while painting a more cohesive picture of changes in water column structure.

  4. Intraclass reliability for assessing how well Taiwan constrained hospital-provided medical services using statistical process control chart techniques

    PubMed Central

    2012-01-01

    Background Few studies discuss the indicators used to assess the effect on cost containment in healthcare across hospitals in a single-payer national healthcare system with constrained medical resources. We present the intraclass correlation coefficient (ICC) to assess how well Taiwan constrained hospital-provided medical services in such a system. Methods A custom Excel-VBA routine to record the distances of standard deviations (SDs) from the central line (the mean over the previous 12 months) of a control chart was used to construct and scale annual medical expenditures sequentially from 2000 to 2009 for 421 hospitals in Taiwan to generate the ICC. The ICC was then used to evaluate Taiwan’s year-based convergent power to remain unchanged in hospital-provided constrained medical services. A bubble chart of SDs for a specific month was generated to present the effects of using control charts in a national healthcare system. Results ICCs were generated for Taiwan’s year-based convergent power to constrain its medical services from 2000 to 2009. All hospital groups showed a gradually well-controlled supply of services that decreased from 0.772 to 0.415. The bubble chart identified outlier hospitals that required investigation of possible excessive reimbursements in a specific time period. Conclusion We recommend using the ICC to annually assess a nation’s year-based convergent power to constrain medical services across hospitals. Using sequential control charts to regularly monitor hospital reimbursements is required to achieve financial control in a single-payer nationwide healthcare system. PMID:22587736
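
    A hedged sketch of the two computational pieces the study combines: per-hospital control-chart scores feeding a one-way intraclass correlation. The score matrix below is simulated; the real study scaled annual expenditures by their distance in SDs from the mean over the previous 12 months.

        import numpy as np

        def icc_oneway(scores):
            # ICC(1,1) from a one-way random-effects ANOVA on a
            # (hospitals x years) score matrix.
            m = np.asarray(scores, float)
            n, k = m.shape
            grand = m.mean()
            msb = k * ((m.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            msw = ((m - m.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        rng = np.random.default_rng(0)
        # Simulated control-chart scores for 421 hospitals over 10 years.
        scores = rng.normal(0, 0.8, (421, 1)) + rng.normal(0, 1.0, (421, 10))
        print(f"ICC = {icc_oneway(scores):.3f}")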

  6. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    NASA Astrophysics Data System (ADS)

    Prasitmeeboon, Pitcha

    Repetitive control (RC) is a control method that specifically aims to converge to zero tracking error in control systems that execute a periodic command or have periodic disturbances of known period. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from a modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to Nyquist. A zero-phase cutoff filter is normally used to robustify against high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods that use data to make real-time updates of the frequency response model, allowing one to raise or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and the use of a real-time projection algorithm from adaptive control for each frequency. The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of iterative learning control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the repetitive controller pole at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented, which can be reformulated as a quadratically constrained linear programming problem. This approach is shown to be an RC design method that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. It also considers limiting the Min-Max optimization to some frequency interval and applying an FIR zero-phase low-pass filter to cut off the learning for frequencies above that interval.
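
    For readers new to RC, the core update law referred to throughout is compact. The sketch below is the generic textbook form (unit gain, one-sample phase lead, a toy gain-plus-delay plant treated period-wise), not any of the dissertation's specific designs, and it omits the zero-phase cutoff filter discussed above.

        import numpy as np

        def rc_update(u_prev, e_prev, k_r=1.0, lead=1):
            # u_{k+1}[t] = u_k[t] + k_r * e_k[t + lead]: repeat last
            # period's command, corrected by last period's error with a
            # small time advance as crude phase compensation.
            return u_prev + k_r * np.roll(e_prev, -lead)

        N = 50
        ref = np.sin(2 * np.pi * np.arange(N) / N)    # periodic command
        u = np.zeros(N)
        for cycle in range(6):
            y = 0.8 * np.roll(u, 1)    # toy plant: gain 0.8, 1-step delay
            e = ref - y
            if cycle in (0, 2, 5):
                print(f"cycle {cycle}: max |e| = {np.abs(e).max():.5f}")
            u = rc_update(u, e)        # error shrinks each period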

  7. Optimization of aircraft seat cushion fire blocking layers

    NASA Technical Reports Server (NTRS)

    Kourtides, D. A.; Parker, J. A.; Ling, A. C.; Hovatter, W. R.

    1983-01-01

    This report describes work completed by the National Aeronautics and Space Administration for the Federal Aviation Administration Technical Center. The purpose of this work was to examine the potential of fire-blocking mechanisms for aircraft seat cushions in order to provide an optimized seat configuration with adequate fire protection and minimum weight. Aluminized thermally stable fabrics were found to provide adequate fire protection when used in conjunction with urethane foams, while maintaining minimum weight and cost penalty.

  8. 45 CFR 2551.47 - May the cost reimbursements of a Senior Companion be subject to any tax or charge, be treated as...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 4 2010-10-01 2010-10-01 false May the cost reimbursements of a Senior Companion... compensation, temporary disability, retirement, public assistance, or similar benefit payments or minimum wage... receive assistance from other programs? 2551.47 Section 2551.47 Public Welfare Regulations Relating to...

  9. Assessment of Adaptive Guidance for Responsive Launch Vehicles and Spacecraft

    DTIC Science & Technology

    2009-04-29

    …[List of figures: Earth-centered inertial and launch plumbline coordinate systems; geodetic and geocentric latitude]… Dramatically reduced recurring costs related to guidance: the same features of the closed-loop ascent guidance that provide operational flexibility also result in a greatly reduced need for human intervention. Thus the operational costs related to ascent guidance could be reduced to a minimum…

  10. The Weak Link HP-41C hand-held calculator program

    Treesearch

    Ross A. Phillips; Penn A. Peters; Gary D. Falk

    1982-01-01

    The Weak Link hand-held calculator program (HP-41C) quickly analyzes a logging system's production and costs. The production equations model conventional chain saw, skidder, loader, and tandem-axle truck operations in eastern mountain areas. The production of each function of the logging system may be determined so that the system may be balanced for minimum cost. The…

  11. Quality analysis, micellar behavior, and environmental impact of some laundry detergents available in Bangladesh.

    PubMed

    Nur-E-Alam, M; Islam, M Monirul; Islam, M Nazrul; Rima, Farhana Rahman; Islam, M Nurul

    2016-03-01

    The cleansing efficiencies of laundry detergents depend on the composition and variation of ingredients such as surfactants, phosphate, and co-builders. Among these ingredients, surfactants and phosphate are considered hazardous materials. Knowledge of composition and micellar behavior is very useful for understanding cleansing efficiency and environmental impact. With this in view, the composition, critical micelle concentration, and dissolved oxygen level in aqueous solution were determined for some laundry detergents available in Bangladesh: Keya, Wheel Power White, Tibet, Surf Excel, and Chaka. Surfactant and phosphate contents were found to be highest in Surf Excel and Wheel Power White, respectively, while both ingredients were lowest in Tibet. The critical micelle concentration decreased with increasing surfactant content. The amount of laundry detergent required for efficient cleansing was found to be lowest for Surf Excel and highest for Chaka; however, cleansing cost was highest for Surf Excel and lowest for Tibet. The largest amounts of surfactants and phosphate were discharged by Surf Excel and Wheel Power White, respectively, while discharges of both ingredients were lowest for Tibet. The largest decrease in dissolved oxygen level was caused by Surf Excel and the smallest by Tibet. Therefore, it can be concluded that Tibet is cost-effective and environmentally friendly, whereas Surf Excel and Wheel Power White are expensive and pose a threat to the water environment.

  12. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray computed tomography (CT), ultrasonic and eddy current flaw characterization, and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L² norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared L² norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized-support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase the speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks that might otherwise be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computational intensiveness. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the nonlinear inversion. (Abstract shortened by UMI.)
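
    One common way to implement a minimum-support stabiliser in a linearised setting is iteratively reweighted least squares on the functional sum_i x_i^2/(x_i^2 + beta^2), which penalises the support (number of nonzero cells) rather than amplitude. The sketch below is generic and linear, with made-up problem sizes and tuning constants; it is not the dissertation's nonlinear acoustic formulation.

        import numpy as np

        def min_support_inversion(A, b, lam=1e-2, beta=0.05, iters=30):
            # Minimise ||Ax - b||^2 + lam * sum_i x_i^2/(x_i^2 + beta^2)
            # by freezing IRLS weights w_i = beta^2/(x_i^2 + beta^2)^2
            # at each iterate and solving the resulting ridge system.
            x = np.linalg.lstsq(A, b, rcond=None)[0]
            for _ in range(iters):
                w = beta**2 / (x**2 + beta**2) ** 2
                x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(40, 100))                # underdetermined
        x_true = np.zeros(100)
        x_true[[20, 21, 60]] = [1.0, 0.8, -0.5]       # compact 'scatterers'
        x_hat = min_support_inversion(A, A @ x_true + 0.01 * rng.normal(size=40))
        print("largest-magnitude cells:", np.sort(np.argsort(np.abs(x_hat))[-3:]))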

  13. The 1987-88 Accountable Costs Study: A Report to the Governor, the Lieutenant Governor, and Members of the Seventy-First Legislature from the State Board of Education.

    ERIC Educational Resources Information Center

    Texas State Board of Education, Austin.

    The State Board of Education is required by the Texas Education Code Section 16.201 to make recommendations to the legislature concerning the cost of education. This report is a summation of the State Board of Education findings. After more than a year of study, the board has determined the minimum basic program costs to be $2,197 per student in…

  14. Energy Savings Measure Packages. Existing Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casey, Sean; Booten, Chuck

    2011-11-01

    This document presents the most cost effective Energy Savings Measure Packages (ESMP) for existing mixed-fuel and all electric homes to achieve 15% and 30% savings for each BetterBuildings grantee location across the United States. These packages are optimized for minimum cost to homeowners for source energy savings given the local climate and prevalent building characteristics (i.e. foundation types). Maximum cost savings are typically found between 30% and 50% energy savings over the reference home; this typically amounts to $300 - $700/year.

  15. Reducing maintenance costs in agreement with CNC machine tools reliability

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.; Butunoi, P. A.

    2016-08-01

    Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called the Consequence of Failure Analysis (CFA) is based on technical and economic optimization, aimed at obtaining a level of required performance with minimum investment and maintenance costs.

  16. Cost-Effectiveness of Preventive Interventions to Reduce Alcohol Consumption in Denmark

    PubMed Central

    Holm, Astrid Ledgaard; Veerman, Lennert; Cobiac, Linda; Ekholm, Ola; Diderichsen, Finn

    2014-01-01

    Introduction Excessive alcohol consumption increases the risk of many diseases and injuries, and the Global Burden of Disease 2010 study estimated that 6% of the burden of disease in Denmark is due to alcohol consumption. Alcohol consumption thus places a considerable economic burden on society. Methods We analysed the cost-effectiveness of six interventions aimed at preventing alcohol abuse in the adult Danish population: 30% increased taxation, increased minimum legal drinking age, advertisement bans, limited hours of retail sales, and brief and longer individual interventions. Potential health effects were evaluated as changes in incidence, prevalence and mortality of alcohol-related diseases and injuries. Net costs were calculated as the sum of intervention costs and cost offsets related to treatment of alcohol-related outcomes, based on health care costs from Danish national registers. Cost-effectiveness was evaluated by calculating incremental cost-effectiveness ratios (ICERs) for each intervention. We also created an intervention pathway to determine the optimal sequence of interventions and their combined effects. Results Three of the analysed interventions (advertising bans, limited hours of retail sales and taxation) were cost-saving, and the remaining three interventions were all cost-effective. Net costs varied from € -17 million per year for advertisement ban to € 8 million for longer individual intervention. Effectiveness varied from 115 disability-adjusted life years (DALY) per year for minimum legal drinking age to 2,900 DALY for advertisement ban. The total annual effect if all interventions were implemented would be 7,300 DALY, with a net cost of € -30 million. Conclusion Our results show that interventions targeting the whole population were more effective than individual-focused interventions. A ban on alcohol advertising, limited hours of retail sale and increased taxation had the highest probability of being cost-saving and should thus be first priority for implementation. PMID:24505370
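
    The summary statistic used above is simple enough to state directly; plugging in the abstract's own figures for the advertising ban (net cost about -17 million EUR per year, about 2,900 DALY averted per year) shows what a cost-saving result looks like:

        def icer(net_cost, daly_averted):
            # Incremental cost-effectiveness ratio: net cost (intervention
            # cost minus health-care cost offsets) per DALY averted; a
            # negative value means the intervention saves money outright.
            return net_cost / daly_averted

        print(f"{icer(-17e6, 2900):,.0f} EUR per DALY averted")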

  17. Setting new constrains on the age of crustal-scale extensional shear zone (Vivero fault): implications for the evolution of Variscan orogeny in the Iberian massif

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Marcos, Alberto; Martínez, Francisco J.; Iriondo, Alexander; Llana-Fúnez, Sergio

    2015-06-01

    The Vivero fault is a crustal-scale extensional shear zone parallel to the Variscan orogen in the Iberian massif, with an associated dip-slip movement toward the hinterland. To constrain the timing of the extension accommodated by this structure, we performed zircon U-Pb LA-ICP-MS geochronology on several deformed plutons, some of them emplaced syntectonically. The different crystallization ages obtained indicate that the fault was active at least between 303 ± 2 and 287 ± 3 Ma, implying a minimum of 16 ± 5 Ma of tectonic activity along the fault. The onset of faulting is established to have occurred later than 314 ± 2 Ma. The geochronological data confirm that the Vivero fault postdates the main Variscan deformation events in the NW Iberian massif and that the extension direction of the Late Carboniferous-Early Permian crustal-scale extensional shear zones along the Ibero-Armorican Arc was consistently perpendicular to the general arcuate trend of the belt in SW Europe.

  18. Scheduling Aircraft Landings under Constrained Position Shifting

    NASA Technical Reports Server (NTRS)

    Balakrishnan, Hamsa; Chandran, Bala

    2006-01-01

    Optimal scheduling of airport runway operations can play an important role in improving the safety and efficiency of the National Airspace System (NAS). Methods that compute the optimal landing sequence and landing times of aircraft must accommodate practical issues that affect the implementation of the schedule. One such practical consideration, known as Constrained Position Shifting (CPS), is the restriction that each aircraft must land within a pre-specified number of positions of its place in the First-Come-First-Served (FCFS) sequence. We consider the problem of scheduling landings of aircraft in a CPS environment in order to maximize runway throughput (minimize the completion time of the landing sequence), subject to operational constraints such as FAA-specified minimum inter-arrival spacing restrictions, precedence relationships among aircraft that arise either from airline preferences or air traffic control procedures that prevent overtaking, and time windows (representing possible control actions) during which each aircraft landing can occur. We present a Dynamic Programming-based approach that scales linearly in the number of aircraft, and describe our computational experience with a prototype implementation on realistic data for Denver International Airport.
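
    The paper's dynamic program is beyond a snippet, but the CPS constraint itself is easy to make concrete. The brute-force sketch below (viable only for tiny instances; the times and separation matrix are invented) minimises the completion time of the landing sequence while keeping every aircraft within max_shift positions of its FCFS slot; note how widening the allowed shift shortens the schedule.

        from itertools import permutations

        def best_cps_sequence(eta, sep, max_shift):
            # eta[i]: earliest landing time; sep[i][j]: minimum spacing
            # when j lands immediately after i.
            n = len(eta)
            fcfs_pos = {i: p for p, i in
                        enumerate(sorted(range(n), key=lambda i: eta[i]))}
            best = None
            for order in permutations(range(n)):
                if any(abs(p - fcfs_pos[i]) > max_shift
                       for p, i in enumerate(order)):
                    continue                      # violates CPS
                t = 0.0
                for p, i in enumerate(order):
                    t = max(eta[i], t + (sep[order[p - 1]][i] if p else 0.0))
                if best is None or t < best[0]:
                    best = (t, order)
            return best

        eta = [0.0, 0.1, 0.2, 0.3]                # earliest landing times (min)
        sep = [[0, 1, 1, 1], [2, 0, 2, 2],        # aircraft 1 is 'heavy':
               [1, 1, 0, 1], [1, 1, 1, 0]]        # following it needs 2 min
        for s in (0, 1, 2):
            print(f"max shift {s}: makespan, order = {best_cps_sequence(eta, sep, s)}")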

  19. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations of frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency, and perhaps subject to other constraints such as stress, displacement, and gage size as well. A reason for constraining natural frequencies during design might be to avoid potential resonant frequencies due to machinery or actuators on the structure. Another reason might be to satisfy the requirements of an aircraft or spacecraft's control law. Whatever the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations or others may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
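
    The idea behind a Rayleigh-quotient-style frequency approximation can be shown on a toy two-degree-of-freedom system: freeze the mode shape at the current design and evaluate omega^2 = (phi^T K phi)/(phi^T M phi) as the design changes. This is a sketch in the spirit of the RQA, with an invented stiffness parameterisation, not Canfield's exact formulation.

        import numpy as np
        from scipy.linalg import eigh

        def rayleigh_quotient(K, M, phi):
            # Explicit frequency approximation with a frozen mode shape.
            return (phi @ K @ phi) / (phi @ M @ phi)

        def K_of(a):                           # design variable a scales
            return np.array([[a + 2.0, -2.0],  # one spring's stiffness
                             [-2.0, 3.0]])
        M = np.diag([1.0, 1.5])

        _, vecs = eigh(K_of(1.0), M)           # exact modes at current design
        phi0 = vecs[:, 0]                      # fundamental mode shape
        for a in (1.0, 1.5, 2.0):              # perturb the design variable
            exact = eigh(K_of(a), M, eigvals_only=True)[0]
            approx = rayleigh_quotient(K_of(a), M, phi0)
            print(f"a={a}: exact w^2={exact:.4f}, approx={approx:.4f}")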

  20. Reduced probability of ice-free summers for 1.5 °C compared to 2 °C warming

    NASA Astrophysics Data System (ADS)

    Jahn, Alexandra

    2018-05-01

    Arctic sea ice has declined rapidly with increasing global temperatures. However, it is largely unknown how Arctic summer sea-ice impacts would vary under the 1.5 °C Paris target compared to scenarios with greater warming. Using the Community Earth System Model, I show that constraining warming to 1.5 °C rather than 2.0 °C reduces the probability of any summer ice-free conditions by 2100 from 100% to 30%. It also reduces the late-century probability of an ice cover below the 2012 record minimum from 98% to 55%. For warming above 2 °C, frequent ice-free conditions can be expected, potentially for several months per year. Although sea-ice loss is generally reversible for decreasing temperatures, sea ice will only recover to current conditions if atmospheric CO2 is reduced below present-day concentrations. Due to model biases, these results provide a lower bound on summer sea-ice impacts, but clearly demonstrate the benefits of constraining warming to 1.5 °C.

  1. Constrained dictionary learning and probabilistic hypergraph ranking for person re-identification

    NASA Astrophysics Data System (ADS)

    He, You; Wu, Song; Pu, Nan; Qian, Li; Xiao, Guoqiang

    2018-04-01

    Person re-identification is a fundamental and inevitable task in public security. In this paper, we propose a novel framework to improve the performance of this task. First, two different types of descriptors are extracted to represent a pedestrian: (1) appearance-based superpixel features, constituted mainly by conventional color features and extracted from superpixels rather than the whole picture, and (2) deep features extracted by a feature fusion network, used to overcome the limited discrimination of appearance features. Second, a view-invariant subspace is learned by dictionary learning constrained by the minimum negative sample (termed DL-cMN) to reduce the noise in the appearance-based superpixel feature domain. Then, we use the deep features and the sparse codes transformed from the appearance-based features to establish hyperedges by k-nearest neighbors, rather than simply concatenating the different features. Finally, ranking is performed by a probabilistic hypergraph ranking algorithm. Extensive experiments on three challenging datasets (VIPeR, PRID450S and CUHK01) demonstrate the advantages and effectiveness of our proposed algorithm.

  2. Contracting by managed care systems for pharmaceutical products and services.

    PubMed

    Sharp, W T; Strandberg, L R

    1990-11-01

    The health care delivery system has received criticism because of its rapidly increasing costs. In an attempt to control costs, the administrators of managed care organizations are searching for cost control mechanisms. Thus, the administrators of managed care organizations appear to be searching carefully for any alternative method to lower the cost of delivering medical care to plan members. In this environment pharmacists must be extremely careful to study the cost of providing prescription services to managed care organizations, because they will be constrained by the obligations indicated in the contractual relationship. Any decisions to provide pharmaceutical services should be studied in detail after careful discussion with administrators of a managed care organization. Only after a careful analysis should a pharmacist make a decision to offer or not offer pharmaceutical services to a managed care organization.

  3. Financial Impact of Direct-Acting Oral Anticoagulants in Medicaid: Budgetary Assessment Based on Number Needed to Treat.

    PubMed

    Fairman, Kathleen A; Davis, Lindsay E; Kruse, Courtney R; Sclar, David A

    2017-04-01

    Faced with rising healthcare costs, state Medicaid programs need short-term, easily calculated budgetary estimates for new drugs, accounting for medical cost offsets due to clinical advantages. To estimate the budgetary impact of direct-acting oral anticoagulants (DOACs) compared with warfarin, an older, lower-cost vitamin K antagonist, on 12-month Medicaid expenditures for nonvalvular atrial fibrillation (NVAF) using number needed to treat (NNT). Medicaid utilization files, 2009 through second quarter 2015, were used to estimate OAC cost accounting for generic/brand statutory minimum (13/23%) and assumed maximum (13/50%) manufacturer rebates. NNTs were calculated from clinical trial reports to estimate avoided medical events for a hypothetical population of 500,000 enrollees (approximate NVAF prevalence × Medicaid enrollment) under two DOAC market share scenarios: 2015 actual and 50% increase. Medical service costs were based on published sources. Costs were inflation-adjusted (2015 US$). From 2009-2015, OAC reimbursement per claim increased by 173 and 279% under maximum and minimum rebate scenarios, respectively, while DOAC market share increased from 0 to 21%. Compared with a warfarin-only counterfactual, counts of ischemic strokes, intracranial hemorrhages, and systemic embolisms declined by 36, 280, and 111, respectively; counts of gastrointestinal hemorrhages increased by 794. Avoided events and reduced monitoring, respectively, offset 3-5% and 15-24% of increased drug cost. Net of offsets, DOAC-related cost increases were US$258-US$464 per patient per year (PPPY) in 2015 and US$309-US$579 PPPY after market share increase. Avoided medical events offset a small portion of DOAC-related drug cost increase. NNT-based calculations provide a transparent source of budgetary-impact information for new medications.
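
    The NNT arithmetic behind such estimates reduces to a few lines: each avoided (or caused) event contributes its cost divided by its NNT to the per-patient offset. All numbers below are invented placeholders, not the paper's inputs.

        def net_cost_ppy(drug_cost_delta_ppy, event_costs_and_nnts):
            # Net per-patient-per-year budget impact: incremental drug
            # spend minus NNT-based medical cost offsets.  A negative NNT
            # denotes a number needed to harm, which adds cost.
            offset = sum(cost / nnt for cost, nnt in event_costs_and_nnts)
            return drug_cost_delta_ppy - offset

        events = [(15000.0, 385),    # stroke avoided: cost, NNT
                  (20000.0, 91),     # intracranial haemorrhage avoided
                  (5000.0, -45)]     # GI bleed added (NNH = 45)
        print(f"${net_cost_ppy(700.0, events):.0f} per patient per year")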

  4. [Direct hospitalization costs associated with chronic Hepatitis C in the Valencian Community in 2013].

    PubMed

    Barrachina Martínez, Isabel; Giner Durán, Remedios; Vivas-Consuelo, David; López Rodado, Antonio; Maldonado Segura, José Alberto

    2018-04-23

    Hospital costs associated with chronic hepatitis C (CHC) arise in the final stages of the disease. Quantifying them is very helpful for estimating and tracking the burden of the disease and for making financing decisions about new antivirals. The highest costs are due to decompensation of cirrhosis. We performed a cross-sectional observational study of the hospital costs of CHC diagnoses in the Valencian Community in 2013 (n = 4,486 hospital discharges). Information source: the Minimum Basic Data Set. Costs were assigned according to the rates established for the DRGs (diagnosis-related groups) associated with episodes carrying a diagnosis of hepatitis C. The average survival of patients from the onset of cirrhosis decompensation was estimated with a Markov model, using disease-progression probabilities from the literature. There were 4,486 hospital episodes, 1,108 due to complications of CHC, which generated 6,713 stays, a readmission rate of 28.2% and mortality of 10.2%. The hospital cost amounted to 8,788,593 EUR: 3,306,333 EUR corresponded to cirrhosis (5,273 EUR/patient), 1,060,521 EUR to carcinoma (6,350 EUR/patient) and 2,962,873 EUR to transplantation (70,544 EUR/patient). Comorbidity accounted for 1,458,866 EUR. These costs are maintained for an average of 4 years once cirrhosis decompensation begins. Cirrhosis due to hepatitis C generates very high hospitalization costs. The methodology used to estimate these costs from DRGs can be very useful for evaluating the trend and economic impact of this disease.

  5. Robustness of mission plans for unmanned aircraft

    NASA Astrophysics Data System (ADS)

    Niendorf, Moritz

    This thesis studies the robustness of optimal mission plans for unmanned aircraft. Mission planning typically involves tactical planning and path planning. Tactical planning refers to task scheduling and, in multi-aircraft scenarios, also includes establishing a communication topology. Path planning refers to computing a feasible and collision-free trajectory. For a prototypical mission planning problem, the traveling salesman problem on a weighted graph, the robustness of an optimal tour is analyzed with respect to changes to the edge costs. Specifically, the stability region of an optimal tour is obtained, i.e., the set of all edge cost perturbations for which that tour is optimal. The exact stability region of solutions to variants of the traveling salesman problem is obtained from a linear programming relaxation of an auxiliary problem. Edge cost tolerances and edge criticalities are derived from the stability region. For Euclidean traveling salesman problems, robustness with respect to perturbations of vertex locations is considered, and safe radii and vertex criticalities are introduced. For weighted-sum multi-objective problems, stability regions with respect to changes in the objectives, the weights, and simultaneous changes are given. Most-critical weight perturbations are derived. Computing exact stability regions is intractable for large instances, so tractable approximations are desirable. The stability regions of solutions to relaxations of the traveling salesman problem give under-approximations, and sets of tours give over-approximations. The application of these results to the two-neighborhood and the minimum 1-tree relaxation is discussed. Bounds on edge cost tolerances and approximate criticalities are obtainable likewise. A minimum spanning tree is an optimal communication topology for minimizing the cumulative transmission power in multi-aircraft missions. The stability region of a minimum spanning tree is given, and tolerances, stability balls, and criticalities are derived. This analysis is extended to Euclidean minimum spanning trees. This thesis aims at enabling increased mission performance by providing means of assessing the robustness and optimality of a mission and methods for identifying critical elements. Examples are given of the application to mission planning in contested environments, cargo aircraft mission planning, multi-objective mission planning, and planning optimal communication topologies for teams of unmanned aircraft.
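
    One of the simplest of these robustness quantities, the upper tolerance of a minimum-spanning-tree edge, is easy to compute directly: it is the cost of the cheapest non-tree edge reconnecting the two components of the tree minus that edge's own cost. The sketch below uses an invented four-node graph and plain Kruskal, standing in for the thesis's more general stability-region machinery.

        def kruskal(n, edges):
            # Minimum spanning tree; edges are (weight, u, v) triples.
            parent = list(range(n))
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            tree = []
            for w, u, v in sorted(edges):
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    tree.append((w, u, v))
            return tree

        def upper_tolerance(edges, tree, tree_edge):
            # How much tree_edge's cost may grow before the MST changes:
            # cheapest replacement edge across the cut, minus its cost.
            w0, u0, v0 = tree_edge
            rest = [e for e in tree if e != tree_edge]
            comp, frontier = {u0}, [u0]
            while frontier:                   # component containing u0
                x = frontier.pop()
                for w, u, v in rest:
                    for a, b in ((u, v), (v, u)):
                        if a == x and b not in comp:
                            comp.add(b)
                            frontier.append(b)
            best = min((w for w, u, v in edges
                        if (u in comp) != (v in comp) and (w, u, v) != tree_edge),
                       default=float("inf"))
            return best - w0

        edges = [(1.0, 0, 1), (2.0, 1, 2), (2.5, 0, 2), (3.0, 2, 3), (4.0, 1, 3)]
        tree = kruskal(4, edges)
        for e in tree:
            print(e, "tolerance:", upper_tolerance(edges, tree, e))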

  6. Terrapin technologies manned Mars mission proposal

    NASA Technical Reports Server (NTRS)

    Amato, Michael; Bryant, Heather; Coleman, Rodney; Compy, Chris; Crouse, Patrick; Crunkleton, Joe; Hurtado, Edgar; Iverson, Eirik; Kamosa, Mike; Kraft, Lauri (Editor)

    1990-01-01

    A Manned Mars Mission (M3) design study is proposed. The purpose of M3 is to transport 10 personnel and a habitat with all required support systems and supplies from low Earth orbit (LEO) to the surface of Mars and, after an eight-man surface expedition of 3 months, to return the personnel safely to LEO. The proposed hardware design is based on systems and components of demonstrated high capability and reliability. The mission design builds on past mission experience, but incorporates innovative design approaches to achieve mission priorities. Those priorities, in decreasing order of importance, are safety, reliability, minimum personnel transfer time, minimum weight, and minimum cost. The design demonstrates the feasibility and flexibility of a Waverider transfer module.

  7. Multiple R&D projects scheduling optimization with improved particle swarm algorithm.

    PubMed

    Liu, Mengqi; Shan, Miyuan; Wu, Juan

    2014-01-01

    For most enterprises, a key step in winning the initiative in fierce market competition is to improve their R&D ability so as to meet the various demands of customers more quickly and at lower cost. This paper discusses the features of multiple R&D project environments in large make-to-order enterprises under constrained human resources and budget, and puts forward a multi-project scheduling model for a given period. Furthermore, we make some improvements to the existing particle swarm algorithm and apply the version developed here to the resource-constrained multi-project scheduling model in a simulation experiment. The feasibility of the model and the validity of the algorithm are demonstrated in the experiment.
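
    The abstract does not detail the scheduling encoding, but the baseline algorithm being improved is standard; a minimal global-best particle swarm optimiser is sketched below on a toy continuous objective (all hyperparameters are generic defaults, and the authors' modifications are not represented).

        import numpy as np

        def pso(f, dim, n_particles=30, iters=200, seed=0):
            # Global-best PSO: inertia plus attraction toward personal
            # and swarm-best positions.
            rng = np.random.default_rng(seed)
            x = rng.uniform(-5, 5, (n_particles, dim))
            v = np.zeros_like(x)
            pbest = x.copy()
            pbest_val = np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_val)]
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
                x = x + v
                val = np.apply_along_axis(f, 1, x)
                improved = val < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], val[improved]
                g = pbest[np.argmin(pbest_val)]
            return g, pbest_val.min()

        best_x, best_f = pso(lambda z: np.sum(z**2), dim=5)
        print(f"best objective: {best_f:.2e}")   # approaches 0 (sphere test)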

  8. The importance of operations, risk, and cost assessment to space transfer systems design

    NASA Technical Reports Server (NTRS)

    Ball, J. M.; Komerska, R. J.; Rowell, L. F.

    1992-01-01

    This paper examines several methodologies which contribute to comprehensive subsystem cost estimation. The example of a space-based lunar space transfer vehicle (STV) design is used to illustrate how including both primary and secondary factors into cost affects the decision of whether to use aerobraking or propulsion for earth orbit capture upon lunar return. The expected dominant cost factor in this decision is earth-to-orbit launch cost driven by STV mass. However, to quantify other significant cost factors, this cost comparison included a risk analysis to identify development and testing costs, a Taguchi design of experiments to determine a minimum mass aerobrake design, and a detailed operations analysis. As a result, the predicted cost advantage of aerobraking, while still positive, was subsequently reduced by about 30 percent compared to the simpler mass-based cost estimates.

  9. Optimum testing intervals of building emergency power supply systems in tall buildings in the Hong Kong Special Administrative Region

    NASA Astrophysics Data System (ADS)

    Kwok, Yu Fat

    The main objective of this study is to develop a model for determining the optimum testing interval (OTI) of non-redundant standby plants. The study focuses on emergency power generators in tall buildings in Hong Kong, although the reliability model developed is applicable to any non-duplicated standby plant. In a tall building, the mobilisation of occupants is constrained by the building's height and internal layout. Occupants' safety, amongst other safety considerations, depends heavily on the reliability of the fire detection and protection systems, which in turn depends on the reliability of the emergency power generation plant. A thorough literature survey shows that the practice used to determine OTIs in nuclear plants is generally applicable. Historically, the OTI in such plants has been determined by balancing testing downtime against the reliability gained from frequent testing. However, testing downtime does not exist in plants such as emergency power generators; subsequently, more sophisticated models have taken repair downtime into consideration. In this study, the algorithms for determining the OTI, and hence the reliability of standby plants, are reconsidered, and a new concept is introduced. A new model is developed that embraces more realistic factors found in practice: system aging and the finite life cycle of the standby plant are considered, and, more pragmatically, the optimum overhauling interval can also be determined from the model. System unavailability grows with time but can be reset by a test or an overhaul. In contrast to fixed testing intervals, the OTI is determined whenever the system point unavailability exceeds a certain level, which depends on the reliability requirement of the standby system. An optimum testing plan for lowering this level to the 'minimum useful unavailability' level (see section 9.1 for more elaboration) can be determined by the new model. Cost effectiveness is accounted for by a new parameter 'tau min', the minimum testing interval (MTI); the MTI optimises the total number of tests and overhauls when the costs of each are available. The model establishes criteria for testing and overhauling and for declaring the end of system life. Its usefulness is validated by a detailed analysis of operating parameters from 8,500 maintenance records collected for emergency power generation plants in high-rise buildings in Hong Kong. (Abstract shortened by UMI.)
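    The sketch below illustrates the threshold idea described above: an action is scheduled whenever point unavailability exceeds an allowed level, so testing intervals shrink as the plant ages, until an overhaul restores it. The exponential failure model, the aging factor, the overhaul criterion, and all numerical values are assumptions for illustration, not the thesis's model.

```python
# Threshold-driven testing/overhaul schedule for a standby plant (toy model).
import math

lam0 = 1e-4                  # base standby failure rate per hour (assumed)
aging = 1.15                 # failure-rate growth per test cycle (assumed)
u_max = 0.05                 # allowed point unavailability (reliability target)
life_hours = 3 * 365 * 24

t, lam, last_reset = 0.0, lam0, 0.0
tests, overhauls = [], []
while t < life_hours:
    u = 1.0 - math.exp(-lam * (t - last_reset))   # point unavailability since reset
    if u >= u_max:
        last_reset = t
        if lam > 3 * lam0:   # degradation too fast: overhaul restores the plant
            overhauls.append(t)
            lam = lam0
        else:                # otherwise an ordinary test suffices
            tests.append(t)
            lam *= aging     # the aging plant degrades a little faster each cycle
    t += 1.0

print(f"{len(tests)} tests and {len(overhauls)} overhauls over {life_hours} h")
```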

  10. Main propulsion system design recommendations for an advanced Orbit Transfer Vehicle

    NASA Technical Reports Server (NTRS)

    Redd, L.

    1985-01-01

    Various main propulsion system configurations of an advanced OTV are evaluated with respect to the probability of nonindependent failures, i.e., engine failures that disable the entire main propulsion system. Analysis of the life-cycle cost (LCC) indicates that LCC is sensitive to main propulsion system reliability, vehicle dry weight, and propellant cost; it is relatively insensitive to the number of missions per overhaul, failures per mission, and EVA and IVA costs. In conclusion, two or three engines are recommended in view of their high reliability, minimum life-cycle cost, and fail-operational/fail-safe capability.

  11. Cost-Based Optimization of a Papermaking Wastewater Regeneration Recycling System

    NASA Astrophysics Data System (ADS)

    Huang, Long; Feng, Xiao; Chu, Khim H.

    2010-11-01

    Wastewater can be regenerated for recycling in an industrial process to reduce freshwater consumption and wastewater discharge. Such an environmentally friendly approach also leads to cost savings that accrue from reduced freshwater usage and wastewater discharge. However, these savings are offset to varying degrees by the costs incurred in regenerating the wastewater for recycling. Therefore, systematic procedures should be used to determine the true economic benefit of any water-using system involving wastewater regeneration recycling. In this paper, a total cost accounting procedure is employed to construct a comprehensive cost model for a paper mill. The resulting cost model is optimized by means of mathematical programming to determine the optimal regeneration flowrate and regeneration efficiency that yield the minimum total cost.
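    Since the paper's cost model itself is not reproduced in this record, the sketch below assumes a deliberately simple cost structure to show how the two decision variables, regeneration flowrate and efficiency, can be optimized jointly with off-the-shelf mathematical programming. All coefficients and the cost form are invented for illustration.

```python
# Toy regeneration-recycling cost model minimized with scipy.
from scipy.optimize import minimize

DEMAND = 100.0                     # process water demand, t/h (assumed)
c_fresh, c_discharge = 1.2, 0.8    # freshwater and discharge prices, $/t (assumed)

def total_cost(x):
    F, e = x                         # regeneration flowrate (t/h) and efficiency
    fresh = DEMAND - F * e           # effective recycled water displaces freshwater
    regen = F * (0.3 + 1.5 * e**2)   # regeneration cost grows steeply with efficiency
    return (c_fresh + c_discharge) * fresh + regen

res = minimize(total_cost, x0=[50.0, 0.5],
               bounds=[(0.0, DEMAND), (0.3, 0.99)])
F_opt, e_opt = res.x
print(f"optimal flowrate {F_opt:.1f} t/h, efficiency {e_opt:.2f}, cost ${res.fun:.1f}/h")
```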

  12. Multiple effects of sentential constraint on word processing

    PubMed Central

    Federmeier, Kara D.; Wlotko, Edward W.; De Ochoa-Dewald, Esmeralda; Kutas, Marta

    2009-01-01

    Behavioral and electrophysiological studies have uncovered different patterns of constraint effects on the processing of words in sentences. Whereas response time measures have indicated a reduced scope of facilitation from strongly constraining contexts, event-related brain potential (ERP) measures have instead revealed enhanced facilitation for semantically related endings in such sentences. Given this disparity, and the concomitant possibility of functionally separable stages of context effects, the current study jointly examined expectancy (cloze probability) and constraint effects on the ERP response to words. Expected and unexpected (but plausible) words completed strongly and weakly constraining sentences; unexpected items were matched for contextual fit across the two levels of constraint and were semantically unrelated to the most expected endings. N400 amplitudes were graded by expectancy but unaffected by constraint and seemed to index the benefit of contextual information. However, a later effect, in the form of increased frontal positivity from 500 to 900 ms post-stimulus-onset, indicated a possible cost associated with the processing of unexpected words in strongly constraining contexts. PMID:16901469

  13. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    Distributing goods requires a strategy that minimizes the total cost of operational activities, but several constraints must be satisfied, namely the capacity of the vehicles and the service times of the customers. The resulting Vehicle Routing Problem with Time Windows (VRPTW) is a heavily constrained problem. This paper proposes nature-inspired algorithms for handling the constraints of the VRPTW, namely the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst Bat Algorithm solution is replaced by the solution found by Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behaviour of cats, is improved using the Crow Search Algorithm to achieve simpler and faster convergence. Computational results show that both algorithms perform well in minimizing the total distance, and that larger populations yield better performance. The improved Cat Swarm Optimization with Crow Search outperforms the hybrid of the Bat Algorithm and Simulated Annealing on large instances.
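    As a hedged illustration of the two hard constraints mentioned above, vehicle capacity and customer time windows, the sketch below evaluates a single candidate route the way any of these metaheuristics could. The instance data and the travel-time-equals-distance assumption are invented; this is not the paper's code.

```python
# VRPTW route evaluation: total distance plus capacity/time-window checks.
import math

# (x, y, demand, earliest, latest, service_time) per node; index 0 = depot.
CUSTOMERS = [(0, 0, 0, 0, 1000, 0), (2, 3, 10, 5, 40, 3),
             (5, 1, 15, 10, 60, 3), (1, 7, 8, 0, 30, 3)]
CAPACITY = 35

def dist(i, j):
    (x1, y1), (x2, y2) = CUSTOMERS[i][:2], CUSTOMERS[j][:2]
    return math.hypot(x1 - x2, y1 - y2)

def evaluate(route):
    """Return (total_distance, feasible) for a depot-to-depot route."""
    load = sum(CUSTOMERS[c][2] for c in route)
    t, total, prev, feasible = 0.0, 0.0, 0, load <= CAPACITY
    for c in route + [0]:
        total += dist(prev, c)
        t += dist(prev, c)             # travel time = distance (assumed)
        t = max(t, CUSTOMERS[c][3])    # wait if arriving before the window opens
        if t > CUSTOMERS[c][4]:        # arriving after the window closes
            feasible = False
        t += CUSTOMERS[c][5]
        prev = c
    return total, feasible

print(evaluate([1, 3, 2]))  # one candidate route for a single vehicle
```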

  14. Energy Corner: Heat Reclamation Rescues Wasted Heat.

    ERIC Educational Resources Information Center

    Daugherty, Thomas

    1982-01-01

    Heat reclamation systems added to pre-existing central heating systems provide maximum savings at minimum cost. The benefits of a particular appliance marketed under the brand name "Energizer" are discussed. (Author/MLF)

  15. Nutritional supplementation: the additional costs of managing children infected with HIV in resource-constrained settings.

    PubMed

    Cobb, G; Bland, R M

    2013-01-01

    To explore the financial implications of applying the WHO guidelines for the nutritional management of HIV-infected children in a rural South African HIV programme. WHO guidelines describe Nutritional Care Plans (NCPs) for three categories of HIV-infected children: NCP-A: growing adequately; NCP-B: weight-for-age z-score (WAZ) ≤-2 but no evidence of severe acute malnutrition (SAM), confirmed weight loss/growth curve flattening, or a condition with increased nutritional needs (e.g. tuberculosis); NCP-C: SAM. In resource-constrained settings, children requiring NCP-B or NCP-C usually need supplementation to achieve the additional energy recommendation. We estimated the proportion of children initiating antiretroviral treatment (ART) in the Hlabisa HIV Programme who would have been eligible for supplementation in 2010, and calculated the cost of supplying 26 weeks of supplementation as a proportion of the cost of supplying ART to the same group. A total of 251 children aged 6 months to 14 years initiated ART. Eighty-eight required 6 months of NCP-B, including 41 with a WAZ ≤-2 (no evidence of SAM) and 47 with a WAZ >-2 with co-existing morbidities including tuberculosis. Additionally, 25 children had SAM and required 10 weeks of NCP-C followed by 16 weeks of NCP-B. Thus, 113 of 251 (45%) children were eligible for nutritional supplementation, at an estimated overall cost of $11,136 using 2010 exchange rates. These costs are an estimated 11.6% addition to the cost of supplying 26 weeks of ART to the 251 children initiated. Addressing the nutritional needs of HIV-infected children is essential to optimise their health outcomes, and nutritional supplementation should be integral to, and budgeted for in, HIV programmes. © 2012 Blackwell Publishing Ltd.
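    The headline figures above can be checked with a few lines of arithmetic (values are taken from the abstract; the cohort ART cost is implied by the 11.6% figure rather than stated):

```python
# Sanity-check the abstract's figures.
initiated = 251
eligible = 88 + 25          # NCP-B children plus SAM children (NCP-C then NCP-B)
print(f"eligible: {eligible} ({eligible / initiated:.0%})")   # 113 (45%)

supplement_cost = 11_136    # USD, 2010 exchange rates
art_cost = supplement_cost / 0.116
print(f"implied 26-week ART cost for the cohort: ${art_cost:,.0f}")
```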

  16. Nuclear Weapons: NNSA Has a New Approach to Managing the B61-12 Life Extension, but a Constrained Schedule and Other Risks Remain

    DTIC Science & Technology

    2016-02-01

    In 2010, NNSA began a life extension program (LEP) to consolidate four versions of a legacy nuclear weapon, the B61 bomb, into a single bomb called the B61-12. The B61 is an aircraft-delivered weapon that is a key component of the United States' commitments to the North Atlantic Treaty Organization.

  17. Generation of optimum vertical profiles for an advanced flight management system

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Waters, M. H.

    1981-01-01

    Algorithms for generating minimum-fuel or minimum-cost vertical profiles are derived and examined; the concepts developed include the option of fixing the time of flight. These algorithms form the basis for the design of an advanced on-board flight management system. The variations in the resulting optimum vertical profiles due to variations in wind, takeoff mass, and range to destination are presented, and fuel savings due to optimum climb, free cruise altitude, and absorbing delays en route are examined.
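    The sketch below illustrates the kind of trade-off such algorithms resolve: cruise cost per unit distance combines fuel cost with time-related cost, and the speed minimizing it depends on wind. The parabolic fuel-flow model and all prices are invented for illustration and are not the paper's data.

```python
# Minimum-cost cruise speed under a toy fuel-flow model.
import numpy as np

fuel_price = 0.8        # $/kg (assumed)
time_cost = 30.0        # $/min crew and maintenance cost (assumed)
wind = -20.0            # kt headwind (assumed)

v = np.linspace(250, 500, 251)                # true airspeed, kt
fuel_flow = 20.0 + 0.002 * (v - 300.0) ** 2   # kg/min, toy drag-based model
ground_speed = v + wind                       # kt

# Cost per nautical mile: ($/min) divided by (nm/min).
cost_per_nm = (fuel_flow * fuel_price + time_cost) / (ground_speed / 60.0)
print(f"minimum-cost speed ≈ {v[np.argmin(cost_per_nm)]:.0f} kt TAS")
```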

  18. The Preventive Control of a Dengue Disease Using the Pontryagin Minimum Principle

    NASA Astrophysics Data System (ADS)

    Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi

    2017-06-01

    The behaviour of a host-vector model of dengue disease without control is analyzed on the basis of the basic reproduction number, obtained using next-generation matrices. The model is then extended with a preventive control that minimizes contact between host and vector, the purpose being to obtain an optimal preventive strategy at minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically, and the derived optimality system is solved numerically to investigate the control effort required to reduce the infected class.
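    For reference, the machinery invoked has the following generic form (a sketch with placeholder dynamics f and cost weights A, c, not the paper's specific host-vector model):

```latex
% Generic preventive-control problem with a bounded control u(t) \in [0,1]:
\min_{0 \le u(t) \le 1} \; J(u) = \int_0^T \Big( A\, I(t) + \tfrac{c}{2}\, u^2(t) \Big)\, dt,
\qquad \dot{x} = f(x, u), \quad x(0) = x_0.

% Pontryagin's principle: with Hamiltonian
H = A\, I + \tfrac{c}{2}\, u^2 + \lambda^{\top} f(x, u),
% the optimal control satisfies the adjoint and minimization conditions
\dot{\lambda} = -\frac{\partial H}{\partial x}, \quad \lambda(T) = 0, \qquad
u^*(t) = \arg\min_{0 \le u \le 1} H\big(x^*(t), u, \lambda(t)\big).
```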

  19. Report of the Defense Science Board/Air Force Scientific Advisory Board Joint Task Force on Acquisition of National Security Space Programs

    DTIC Science & Technology

    2003-05-01

    Assured access to space requires both contractors, at least until sustainable performance is demonstrated. The EELV program has operated in a highly cost-constrained environment; the necessary actions should be taken to ensure that both contractors remain viable, at least until sustainable performance is demonstrated.

  20. The Salience of Alcohol-Related Issues across the Adult Lifespan

    ERIC Educational Resources Information Center

    Pettigrew, Simone; Pescud, Melanie

    2016-01-01

    Objective: The growing costs to the community of excessive alcohol consumption have resulted in pressure for governments and non-governmental organisations (NGOs) to develop strategies to address this problem, but they do so in a highly constrained resource environment. To provide evidence of health education approaches that may be effective…
