Sample records for cost function minimization

  1. On a cost functional for H2/H(infinity) minimization

    NASA Technical Reports Server (NTRS)

    Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis

    1990-01-01

    A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.

  2. Carpal tunnel syndrome, the search for a cost-effective surgical intervention: a randomised controlled trial.

    PubMed Central

    Lorgelly, Paula K.; Dias, Joseph J.; Bradley, Mary J.; Burke, Frank D.

    2005-01-01

    OBJECTIVE: There is insufficient evidence regarding the clinical and cost-effectiveness of surgical interventions for carpal tunnel syndrome. This study evaluates the cost, effectiveness and cost-effectiveness of minimally invasive surgery compared with conventional open surgery. PATIENTS AND METHODS: 194 patients (208 hands) with carpal tunnel syndrome were randomly assigned to one of the two treatment options. A self-administered questionnaire assessed the severity of patients' symptoms and functional status pre- and postoperatively. Treatment costs were estimated from resource use and hospital financial data. RESULTS: Minimally invasive carpal tunnel decompression is marginally more effective than open surgery in terms of functional status, but not significantly so. Little improvement in symptom severity was recorded for either intervention. Minimally invasive surgery was found to be significantly more costly than open surgery. The incremental cost-effectiveness ratio for functional status was estimated to be 197 UK pounds, such that a one percentage point improvement in functioning costs 197 UK pounds when using the minimally invasive technique. CONCLUSIONS: Minimally invasive carpal tunnel decompression appears to be more effective but more costly. Initial analysis suggests that the additional expense for such a small improvement in function and no improvement in symptoms would not be regarded as value-for-money, such that minimally invasive carpal tunnel release is unlikely to be considered a cost-effective alternative to the traditional open surgery procedure. PMID:15720906

  3. Exact solution for the optimal neuronal layout problem.

    PubMed

    Chklovskii, Dmitri B

    2004-10-01

    Evolution perfected brain design by maximizing its functionality while minimizing the costs associated with building and maintaining it. The assumption that brain functionality is specified by neuronal connectivity, implemented by costly biological wiring, leads to the following optimal design problem: for a given neuronal connectivity, find a spatial layout of neurons that minimizes the wiring cost. Unfortunately, this problem is difficult to solve because the number of possible layouts is often astronomically large. We argue that the wiring cost may scale as wire length squared, reducing the optimal layout problem to a constrained minimization of a quadratic form. For biologically plausible constraints, this problem has exact analytical solutions, which give reasonable approximations to actual layouts in the brain. These solutions make the inverse problem of inferring neuronal connectivity from neuronal layout more tractable.
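
    The reduction described here maps directly onto the graph Laplacian. A minimal sketch of the simplest version of the problem: minimize the quadratic wiring cost sum_ij W_ij (x_i - x_j)^2 for a one-dimensional layout, with zero-mean and unit-norm constraints chosen here for illustration (not necessarily the paper's exact constraints):

    ```python
    import numpy as np

    def optimal_layout_1d(W):
        """Minimize sum_ij W_ij (x_i - x_j)^2 subject to zero mean and unit norm.
        The minimizer is the graph-Laplacian eigenvector with the smallest
        nonzero eigenvalue (the Fiedler vector)."""
        L = np.diag(W.sum(axis=1)) - W        # graph Laplacian of the connectivity
        vals, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
        return vecs[:, 1]                     # skip the trivial constant mode

    # Toy connectivity: a chain of 5 neurons plus one weaker long-range contact.
    W = np.zeros((5, 5))
    for i in range(4):
        W[i, i + 1] = W[i + 1, i] = 1.0
    W[0, 4] = W[4, 0] = 0.5
    print(optimal_layout_1d(W))               # 1D positions, up to sign and scale
    ```

    Higher Laplacian eigenvectors extend the same construction to layouts in more dimensions.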

  4. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.

  5. The analytic solution of the firm's cost-minimization problem with box constraints and the Cobb-Douglas model

    NASA Astrophysics Data System (ADS)

    Bayón, L.; Grau, J. M.; Ruiz, M. M.; Suárez, P. M.

    2012-12-01

    One of the most well-known problems in the field of Microeconomics is the Firm's Cost-Minimization Problem. In this paper we establish the analytical expression for the cost function using the Cobb-Douglas model and considering maximum constraints for the inputs. Moreover we prove that it belongs to the class C1.

  6. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  7. Life cycle costing with a discount rate

    NASA Technical Reports Server (NTRS)

    Posner, E. C.

    1978-01-01

    This article studies life cycle costing for a capability needed for the indefinite future, and specifically investigates the dependence of optimal policies on the discount rate chosen. The two costs considered are reprocurement cost and maintenance and operations (M and O) cost. The procurement price is assumed known, and the M and O costs are assumed to be a known function, in fact, a non-decreasing function, of the time since last reprocurement. The problem is to choose the optimum reprocurement time so as to minimize the quotient of the total cost over a reprocurement period divided by the period. Or one could assume a discount rate and try to minimize the total discounted costs into the indefinite future. It is shown that the optimum policy in the presence of a small discount rate hardly depends on the discount rate at all, and leads to essentially the same policy as in the case in which discounting is not considered.
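
    A minimal sketch of the undiscounted criterion (total cost over one reprocurement period divided by the period length), with a hypothetical procurement price and a non-decreasing M&O cost rate:

    ```python
    import numpy as np

    P = 100.0                            # known procurement price (hypothetical)
    m = lambda t: 1.0 + 0.5 * t          # non-decreasing M&O cost rate (hypothetical)

    def avg_cost_rate(T, n=400):
        """(Procurement + M&O cost over one period) divided by the period."""
        t = np.linspace(0.0, T, n)
        mo = np.sum(0.5 * (m(t[:-1]) + m(t[1:])) * np.diff(t))  # trapezoid rule
        return (P + mo) / T

    periods = np.linspace(0.5, 50.0, 500)
    rates = [avg_cost_rate(T) for T in periods]
    print("optimal reprocurement period:", periods[int(np.argmin(rates))])
    ```

    For this linear M&O rate the average rate is P/T + 1 + 0.25*T, so the scan recovers the analytic optimum T = sqrt(P/0.25) = 20.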

  8. Improving the quantum cost of reversible Boolean functions using reorder algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf

    2018-05-01

    This paper introduces a novel algorithm to synthesize low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies predefined rules to reorder the terms in the function so as to minimize repeated calculation of common parts of the Boolean function, thereby decreasing the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.

  9. Contributions of metabolic and temporal costs to human gait selection.

    PubMed

    Summerside, Erik M; Kram, Rodger; Ahmed, Alaa A

    2018-06-01

    Humans naturally select several parameters within a gait that correspond with minimizing metabolic cost. Much less is understood about the role of metabolic cost in selecting between gaits. Here, we asked participants to decide between walking or running out and back to different gait specific markers. The distance of the walking marker was adjusted after each decision to identify relative distances where individuals switched gait preferences. We found that neither minimizing solely metabolic energy nor minimizing solely movement time could predict how the group decided between gaits. Of our twenty participants, six behaved in a way that tended towards minimizing metabolic energy, while eight favoured strategies that tended more towards minimizing movement time. The remaining six participants could not be explained by minimizing a single cost. We provide evidence that humans consider not just a single movement cost, but instead a weighted combination of these conflicting costs with their relative contributions varying across participants. Individuals who placed a higher relative value on time ran faster than individuals who placed a higher relative value on metabolic energy. Sensitivity to temporal costs also explained variability in an individual's preferred velocity as a function of increasing running distance. Interestingly, these differences in velocity both within and across participants were absent in walking, possibly due to a steeper metabolic cost of transport curve. We conclude that metabolic cost plays an essential, but not exclusive role in gait decisions. © 2018 The Author(s).

  10. Massively parallel GPU-accelerated minimization of classical density functional theory

    NASA Astrophysics Data System (ADS)

    Stopper, Daniel; Roth, Roland

    2017-08-01

    In this paper, we discuss the ability to numerically minimize the grand potential of hard disks in two-dimensional and of hard spheres in three-dimensional space within the framework of classical density functional and fundamental measure theory on modern graphics cards. Our main finding is that a massively parallel minimization leads to an enormous performance gain in comparison to standard sequential minimization schemes. Furthermore, the results indicate that in complex multi-dimensional situations, a heavily parallel minimization of the grand potential seems to be mandatory in order to reach a reasonable balance between accuracy and computational cost.

  11. Mitigation of epidemics in contact networks through optimal contact adaptation

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are most effectively applied to the contact weights. PMID:23906209

  12. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimization of total infection cases and minimization of contact weight reduction. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamical weighted contact network. To find the near-optimal solution in a decentralized way, we propose two heuristics based on a bang-bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known bang-bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results indicate the infection level at which the mitigation strategies are most effectively applied to the contact weights.
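
    A minimal sketch of the bang-bang heuristic on a toy mean-field network SIR model. All rates, the threshold rule, and the contact floor below are illustrative assumptions, not the paper's Pontryagin-derived solution:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50
    W0 = (rng.uniform(size=(n, n)) < 0.1).astype(float)  # random contact weights
    W0 = np.triu(W0, 1)
    W0 = W0 + W0.T                                       # symmetric contact matrix

    beta, gamma, dt, steps = 0.3, 0.1, 0.1, 400
    w_min, threshold = 0.2, 0.05          # global contact floor, switching level

    S = np.ones(n); I = np.zeros(n); R = np.zeros(n)
    I[:3] = 1.0; S[:3] = 0.0              # seed infections
    cum_cases, effort = I.sum(), 0.0

    for _ in range(steps):
        # Bang-bang rule: cut all contacts to the floor while the mean infection
        # level exceeds the threshold, otherwise restore full contact weights.
        u = w_min if I.mean() > threshold else 1.0
        new_inf = beta * dt * S * ((u * W0) @ I)   # mean-field infection pressure
        rec = gamma * dt * I
        S -= new_inf; I += new_inf - rec; R += rec
        cum_cases += new_inf.sum()
        effort += (1.0 - u) * W0.sum() * dt        # accumulated contact reduction

    print("total infections:", round(cum_cases, 2), "reduction effort:", round(effort, 1))
    ```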

  13. Resilience-based optimal design of water distribution network

    NASA Astrophysics Data System (ADS)

    Suribabu, C. R.

    2017-11-01

    Optimal design of a water distribution network generally aims to minimize the capital cost of investments in tanks, pipes, pumps, and other appurtenances. Minimizing the cost of pipes is usually considered the prime objective, as its proportion of the capital cost of a water distribution system project is very high. However, minimizing the capital cost of the pipeline alone may result in an economical network configuration, but not necessarily a promising one from a resilience point of view. Resilience of the water distribution network has been considered one of the popular surrogate measures of the ability of a network to withstand failure scenarios. To improve the resiliency of the network, the pipe network optimization can be performed with two objectives, namely minimizing the capital cost as the first objective and maximizing a resilience measure of the configuration as the second objective. In the present work, these two objectives are combined into a single objective and the optimization problem is solved by the differential evolution technique. The paper illustrates the procedure for normalizing objective functions having distinct metrics. Two of the existing resilience indices and power efficiency are considered for optimal design of the water distribution network. The proposed normalized objective function is found to be efficient under the weighted method of handling the multi-objective water distribution design problem. The numerical results of the design indicate the importance of sizing pipes telescopically along the shortest path of flow to obtain enhanced resiliency indices.
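
    A minimal sketch of the normalization step for combining two objectives with distinct metrics into one weighted objective (the bounds, weights, and design values below are hypothetical):

    ```python
    import numpy as np

    def combined_objective(cost, resilience, cost_bounds, res_bounds, w=0.5):
        """Normalize two objectives with distinct metrics to [0, 1] before
        weighting: cost is minimized, resilience is maximized (hence 1 - r)."""
        c_lo, c_hi = cost_bounds
        r_lo, r_hi = res_bounds
        c_norm = (cost - c_lo) / (c_hi - c_lo)
        r_norm = (resilience - r_lo) / (r_hi - r_lo)
        return w * c_norm + (1.0 - w) * (1.0 - r_norm)

    # Two hypothetical designs: cheap but fragile vs. costly but resilient.
    print(combined_objective(2.1e6, 0.35, cost_bounds=(1e6, 5e6), res_bounds=(0, 1)))
    print(combined_objective(3.4e6, 0.80, cost_bounds=(1e6, 5e6), res_bounds=(0, 1)))
    ```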

  14. Optimizing Segmental Bone Regeneration Using Functionally Graded Scaffolds

    DTIC Science & Technology

    2012-10-01

    Such a model system would allow more realistic assessment of different clinical treatment options in a rapid, cost-efficient, and safe manner...along with Michaelis-Menten kinetics. A genetic algorithm [37] was adopted to minimize the cost function in Equation (14). Fig. 3 shows that simulated...associated with autografts, such as high cost, requirement of additional surgeries, donor-site morbidity, and limiting autografts for the treatment

  15. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.
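
    A minimal sketch of the incremental rerouting idea on a toy network, with a convex load-dependent link cost standing in for the cut-through-probability-based cost derived in the paper:

    ```python
    import heapq

    def dijkstra(adj, cost, src, dst):
        """Shortest path from src to dst under the current link costs."""
        dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v in adj[u]:
                nd = d + cost[(u, v)]
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return path[::-1]

    # Convex load-dependent link cost: congested links are penalized, a stand-in
    # for the blocking-probability-derived cost in the paper.
    link_cost = lambda load: 1.0 + load ** 2

    adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}    # toy 4-node network
    demands = [(0, 3, 1.0), (0, 3, 1.0), (1, 2, 1.0)]     # (src, dst, volume)

    load = {(u, v): 0.0 for u in adj for v in adj[u]}
    routes = {}
    for sweep in range(5):                    # incremental cost-reducing reroutes
        for i, (s, t, vol) in enumerate(demands):
            for e in routes.get(i, []):       # remove this demand's old load
                load[e] -= vol
            cost = {e: link_cost(load[e]) for e in load}
            p = dijkstra(adj, cost, s, t)
            routes[i] = list(zip(p, p[1:]))
            for e in routes[i]:
                load[e] += vol
    print(routes)
    ```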

  16. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
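
    A minimal sketch contrasting the two decision rules on a toy posterior over candidate strings (the probabilities are hypothetical): the 0-1 cost yields the MAP string, while the metric cost yields the string with minimum expected Levenshtein distance. The two decisions may or may not coincide, which is exactly the relation the paper analyzes.

    ```python
    def levenshtein(a, b):
        """Standard dynamic-programming edit distance."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
        return d[m][n]

    # Hypothetical posterior over candidate output strings.
    posterior = {"cat": 0.4, "cart": 0.35, "care": 0.25}

    # Bayes decision under 0-1 cost: the MAP string.
    map_decision = max(posterior, key=posterior.get)

    # Bayes decision under edit-distance cost: minimum expected Levenshtein distance.
    expected_cost = lambda s: sum(p * levenshtein(s, t) for t, p in posterior.items())
    risk_decision = min(posterior, key=expected_cost)

    print("0-1 cost decision:", map_decision, "| edit-cost decision:", risk_decision)
    ```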

  17. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE PAGES

    Paszyńska, A.; Paszyński, M.; Jopek, K.; ...

    2015-01-01

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  18. Quasi-Optimal Elimination Trees for 2D Grids with Singularities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paszyńska, A.; Paszyński, M.; Jopek, K.

    We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.

  19. Optimality Principles for Model-Based Prediction of Human Gait

    PubMed Central

    Ackermann, Marko; van den Bogert, Antonie J.

    2010-01-01

    Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736

  20. Sensitivity of Optimal Solutions to Control Problems for Second Order Evolution Subdifferential Inclusions.

    PubMed

    Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr

    In this paper the sensitivity of optimal solutions to control problems described by second order evolution subdifferential inclusions under perturbations of state relations and of cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.

  1. Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother

    NASA Astrophysics Data System (ADS)

    Fillion, Anthony; Bocquet, Marc; Gratton, Serge

    2018-04-01

    The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
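
    A minimal sketch of quasi-static minimization on a toy chaotic model (a logistic map standing in for the forecast model; the observation count and noise level are illustrative): observations are injected one at a time and each minimization is warm-started from the previous solution, here with a generic derivative-free optimizer rather than Gauss-Newton for simplicity.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def model(x0, nsteps, r=3.9):
        """Chaotic logistic map trajectory, standing in for the forecast model."""
        traj = [x0]
        for _ in range(nsteps):
            traj.append(r * traj[-1] * (1.0 - traj[-1]))
        return np.array(traj)

    rng = np.random.default_rng(0)
    truth, nobs = 0.42, 12
    obs = model(truth, nobs) + 0.01 * rng.standard_normal(nobs + 1)

    def cost(x0, k):
        """4D-Var-like cost using only the first k observations of the window."""
        x0 = np.asarray(x0).ravel()[0]
        return np.sum((model(x0, k) - obs[: k + 1]) ** 2)

    x = 0.5                                  # first guess, far from the truth
    for k in range(1, nobs + 1):             # quasi-static: inject obs gradually,
        x = minimize(cost, x, args=(k,),     # warm-starting each minimization
                     method="Nelder-Mead").x[0]
    print("QS estimate:", x, "truth:", truth)
    ```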

  2. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe

    2016-07-28

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  3. Adopting epidemic model to optimize medication and surgical intervention of excess weight

    NASA Astrophysics Data System (ADS)

    Sun, Ruoyan

    2017-01-01

    We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases that of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and an empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
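
    A minimal sketch of scanning for the optimal fixed participation rate in a toy three-compartment weight model (all rates and cost weights are illustrative placeholders, not the paper's calibrated values):

    ```python
    import numpy as np

    def simulate(u, T=20.0, dt=0.01):
        """Toy 3-compartment weight model: N normal, S overweight, O obese.
        u is the fixed intervention participation rate; returns the accumulated
        weighted sum of excess-weight prevalence plus intervention cost."""
        N, S, O = 0.5, 0.3, 0.2
        J = 0.0
        w_S, w_O, c = 1.0, 2.0, 0.8       # prevalence weights, intervention cost
        for _ in range(int(T / dt)):
            gain_S = 0.05 * N             # progression normal -> overweight
            gain_O = 0.04 * S             # progression overweight -> obese
            treat_S = 0.3 * u * S         # intervention moves people down a class
            treat_O = 0.3 * u * O
            N += dt * (-gain_S + treat_S)
            S += dt * (gain_S - gain_O - treat_S + treat_O)
            O += dt * (gain_O - treat_O)
            J += dt * (w_S * S + w_O * O + c * u * (S + O))
        return J

    rates = np.linspace(0.0, 1.0, 101)
    costs = [simulate(u) for u in rates]
    print("optimal fixed participation rate:", rates[int(np.argmin(costs))])
    ```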

  4. Wavelength routing beyond the standard graph coloring approach

    NASA Astrophysics Data System (ADS)

    Blankenhorn, Thomas

    2004-04-01

    When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.

  5. Toxicity Minimized Cryoprotectant Addition and Removal Procedures for Adherent Endothelial Cells

    PubMed Central

    Davidson, Allyson Fry; Glasscock, Cameron; McClanahan, Danielle R.; Benson, James D.; Higgins, Adam Z.

    2015-01-01

    Ice-free cryopreservation, known as vitrification, is an appealing approach for banking of adherent cells and tissues because it prevents dissociation and morphological damage that may result from ice crystal formation. However, current vitrification methods are often limited by the cytotoxicity of the concentrated cryoprotective agent (CPA) solutions that are required to suppress ice formation. Recently, we described a mathematical strategy for identifying minimally toxic CPA equilibration procedures based on the minimization of a toxicity cost function. Here we provide direct experimental support for the feasibility of these methods when applied to adherent endothelial cells. We first developed a concentration- and temperature-dependent toxicity cost function by exposing the cells to a range of glycerol concentrations at 21°C and 37°C, and fitting the resulting viability data to a first order cell death model. This cost function was then numerically minimized in our state constrained optimization routine to determine addition and removal procedures for 17 molal (mol/kg water) glycerol solutions. Using these predicted optimal procedures, we obtained 81% recovery after exposure to vitrification solutions, as well as successful vitrification with the relatively slow cooling and warming rates of 50°C/min and 130°C/min. In comparison, conventional multistep CPA equilibration procedures resulted in much lower cell yields of about 10%. Our results demonstrate the potential for rational design of minimally toxic vitrification procedures and pave the way for extension of our optimization approach to other adherent cell types as well as more complex systems such as tissues and organs. PMID:26605546
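
    A minimal sketch of the first step, fitting a first-order cell death model to viability data to obtain a concentration-dependent toxicity rate k(c); all numbers are hypothetical, and a power-law form for k(c) is assumed here purely for illustration:

    ```python
    import numpy as np

    # Hypothetical viability data: fraction surviving after time t (min) in
    # glycerol concentration c (molal); illustrative numbers, not the paper's.
    c = np.array([2.0, 2.0, 6.0, 6.0, 10.0, 10.0])
    t = np.array([10.0, 30.0, 10.0, 30.0, 10.0, 30.0])
    v = np.array([0.98, 0.95, 0.90, 0.75, 0.70, 0.35])

    # First-order death model: v = exp(-k(c) * t) with k(c) = k0 * c**alpha.
    # Linearize: log(-log(v) / t) = log(k0) + alpha * log(c), then least squares.
    y = np.log(-np.log(v) / t)
    A = np.column_stack([np.ones_like(c), np.log(c)])
    (log_k0, alpha), *_ = np.linalg.lstsq(A, y, rcond=None)
    print("k0 =", np.exp(log_k0), "alpha =", alpha)

    # The fitted k(c) then serves as the integrand of a toxicity cost functional
    # J = integral of k(c(t)) dt, minimized over addition/removal trajectories.
    ```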

  6. The Inactivation Principle: Mathematical Solutions Minimizing the Absolute Work and Biological Implications for the Planning of Arm Movements

    PubMed Central

    Berret, Bastien; Darlot, Christian; Jean, Frédéric; Pozzo, Thierry; Papaxanthis, Charalambos; Gauthier, Jean Paul

    2008-01-01

    An important question in the literature focusing on motor control is to determine which laws drive biological limb movements. This question has prompted numerous investigations analyzing arm movements in both humans and monkeys. Many theories assume that among all possible movements the one actually performed satisfies an optimality criterion. In the framework of optimal control theory, a first approach is to choose a cost function and test whether the proposed model fits with experimental data. A second approach (generally considered as the more difficult) is to infer the cost function from behavioral data. The cost proposed here includes a term called the absolute work of forces, reflecting the mechanical energy expenditure. Contrary to most investigations studying optimality principles of arm movements, this model has the particularity of using a cost function that is not smooth. First, a mathematical theory related to both direct and inverse optimal control approaches is presented. The first theoretical result is the Inactivation Principle, according to which minimizing a term similar to the absolute work implies simultaneous inactivation of agonistic and antagonistic muscles acting on a single joint, near the time of peak velocity. The second theoretical result is that, conversely, the presence of non-smoothness in the cost function is a necessary condition for the existence of such inactivation. Second, during an experimental study, participants were asked to perform fast vertical arm movements with one, two, and three degrees of freedom. Observed trajectories, velocity profiles, and final postures were accurately simulated by the model. In accordance, electromyographic signals showed brief simultaneous inactivation of opposing muscles during movements. Thus, assuming that human movements are optimal with respect to a certain integral cost, the minimization of an absolute-work-like cost is supported by experimental observations. Such types of optimality criteria may be applied to a large range of biological movements. PMID:18949023

  7. Production Functions for Water Delivery Systems: Analysis and Estimation Using Dual Cost Function and Implicit Price Specifications

    NASA Astrophysics Data System (ADS)

    Teeples, Ronald; Glyer, David

    1987-05-01

    Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.

  8. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
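
    A minimal sketch of the output-error approach for a scalar linear system: for Gaussian measurement noise with fixed covariance, maximizing the likelihood reduces to minimizing a sum-of-squared-residuals cost. The system, input, and noise level below are toy assumptions:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    dt, n = 0.05, 200
    u = np.sin(0.5 * dt * np.arange(n))          # known control input history

    def simulate(a, b):
        """Forward-Euler response of the model x' = a*x + b*u."""
        x = np.zeros(n)
        for k in range(n - 1):
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    rng = np.random.default_rng(1)
    z = simulate(-1.2, 0.8) + 0.02 * rng.standard_normal(n)  # noisy "flight data"

    def cost(theta):
        """Output-error cost: sum of squared residuals between data and model."""
        return np.sum((z - simulate(*theta)) ** 2)

    est = minimize(cost, x0=[-0.5, 0.5], method="Nelder-Mead")
    print("estimated (a, b):", est.x)             # should approach (-1.2, 0.8)
    ```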

  9. Optimal feedback control of turbulent channel flow

    NASA Technical Reports Server (NTRS)

    Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz

    1993-01-01

    Feedback control equations were developed and tested for computing wall normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional' which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional some time shortly in the future based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.

  10. A multi-objective genetic algorithm for a mixed-model assembly U-line balancing type-I problem considering human-related issues, training, and learning

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Montazeri, Mona; Farrokhi-Asl, Hamed; Rafiei, Hamed

    2016-12-01

    Mixed-model assembly lines are increasingly accepted in many industrial environments to meet the growing trend of greater product variability, diversification of customer demands, and shorter life cycles. In this research, a new mathematical model is presented that simultaneously considers balancing a mixed-model U-line and human-related issues. The objective function consists of two separate components. The first part of the objective function is related to the balance problem; its objectives are minimizing the cycle time, minimizing the number of workstations, and maximizing line efficiency. The second part is related to human issues and consists of hiring cost, firing cost, training cost, and salary. To solve the presented model, two well-known multi-objective evolutionary algorithms, namely the non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, have been used. A simple solution representation is provided in this paper to encode the solutions. Finally, the computational results are compared and analyzed.

  11. Routing and Scheduling Optimization Model of Sea Transportation

    NASA Astrophysics Data System (ADS)

    Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of ships carrying crude oil (tankers) which is distributed to many islands. The consideration is the cost of transportation, which consists of travel costs and the cost of layover at the port. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance visited by the tanker and to minimize the cost of the ports. To make the model more realistic and the calculated cost more accurate, a parameter is added that expresses the multiplier factor by which cost increases as the tanker is loaded with crude oil.

  12. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low-activity prelinear region, the moderate-activity linear region, and the high-activity power-law region. The overall form of the cost analysis function appears to robustly fit the power-law form.

  13. Losses from effluent taxes and quotas under uncertainty

    USGS Publications Warehouse

    Watson, W.D.; Ridker, R.G.

    1984-01-01

    Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.

  14. Solar electricity supply isolines of generation capacity and storage.

    PubMed

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W

    2015-03-24

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G-S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G-S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity.
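
    A minimal sketch of tracing a G-S isoline: for each generation capacity G, find the smallest storage S that keeps a lossless energy balance non-negative over a year of stylized insolation. The solar profile and the constant demand are synthetic assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    hours = 24 * 365
    t = np.arange(hours)
    # Stylized per-unit-capacity solar output: diurnal cycle times random cloudiness.
    profile = (np.clip(np.sin((t % 24 - 6) / 12 * np.pi), 0, None)
               * rng.uniform(0.5, 1.0, hours))
    demand = 1.0                              # constant load to be dispatched

    def min_storage(G):
        """Smallest storage keeping a lossless battery non-negative all year."""
        net = G * profile - demand            # hourly surplus (+) or deficit (-)
        level = np.concatenate([[0.0], np.cumsum(net)])
        # Required capacity equals the largest drawdown below the running maximum.
        return np.max(np.maximum.accumulate(level) - level)

    for G in [5.0, 6.0, 7.0, 8.0]:            # sweep G to trace the isoline
        print(f"G = {G:.0f} -> minimal S = {min_storage(G):.1f} demand-hours")
    ```

    The printed (G, S) pairs exhibit the mutual substitutability the paper describes: larger generation capacity buys smaller required storage.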

  15. Solar electricity supply isolines of generation capacity and storage

    PubMed Central

    Grossmann, Wolf; Grossmann, Iris; Steininger, Karl W.

    2015-01-01

    The recent sharp drop in the cost of photovoltaic (PV) electricity generation accompanied by globally rapidly increasing investment in PV plants calls for new planning and management tools for large-scale distributed solar networks. Of major importance are methods to overcome intermittency of solar electricity, i.e., to provide dispatchable electricity at minimal costs. We find that pairs of electricity generation capacity G and storage S that give dispatchable electricity and are minimal with respect to S for a given G exhibit a smooth relationship of mutual substitutability between G and S. These isolines between G and S support the solving of several tasks, including the optimal sizing of generation capacity and storage, optimal siting of solar parks, optimal connections of solar parks across time zones for minimizing intermittency, and management of storage in situations of far below average insolation to provide dispatchable electricity. G−S isolines allow determining the cost-optimal pair (G,S) as a function of the cost ratio of G and S. G−S isolines provide a method for evaluating the effect of geographic spread and time zone coverage on costs of solar electricity. PMID:25755261

  16. A novel edge-preserving nonnegative matrix factorization method for spectral unmixing

    NASA Astrophysics Data System (ADS)

    Bao, Wenxing; Ma, Ruishi

    2015-12-01

    Spectral unmixing is one of the key techniques to identify and classify materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. The paper uses an edge-preserving function as a hypersurface cost function for the nonnegative matrix factorization. To minimize the hypersurface cost function, we construct updating functions for the signature matrix of the end-members and the abundance fractions, respectively; the two functions are updated alternately. For evaluation purposes, synthetic data and real data have been used in this paper. The synthetic data are based on end-members from the USGS digital spectral library. The AVIRIS Cuprite dataset has been used as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) have been used in this research to assess the performance of the proposed method. The experimental results show that this method obtains better accuracy for spectral unmixing than existing methods.
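
    For orientation, a minimal sketch of the alternating multiplicative updates for standard NMF under the quadratic (Frobenius) cost; the paper's method keeps this alternating structure but replaces the cost with an edge-preserving hypersurface function:

    ```python
    import numpy as np

    def nmf(X, r, iters=500, eps=1e-9):
        """Lee-Seung multiplicative updates minimizing ||X - W @ H||_F^2."""
        rng = np.random.default_rng(3)
        W = rng.uniform(size=(X.shape[0], r))  # end-member signatures
        H = rng.uniform(size=(r, X.shape[1]))  # abundance fractions
        for _ in range(iters):
            H *= (W.T @ X) / (W.T @ W @ H + eps)   # update abundances
            W *= (X @ H.T) / (W @ H @ H.T + eps)   # update signatures
        return W, H

    X = np.abs(np.random.default_rng(4).normal(size=(20, 50)))  # toy "spectra"
    W, H = nmf(X, r=3)
    print("Frobenius residual:", np.linalg.norm(X - W @ H))
    ```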

  17. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    NASA Astrophysics Data System (ADS)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of Greenhouse Gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields a better result compared to others.

  18. Airport and Airway Costs: Allocation and Recovery in the 1980’s.

    DTIC Science & Technology

    1987-02-01

    1997 [8]. Volume 4, FAA Cost Recovery Options [9]. Volume 5, Econometric Cost Functions for FAA Cost Allocation Model [10]. Volume 6, Users...and relative price elasticities (Ramsey pricing technique). User fees based on the Ramsey pricing tend to be less burdensome on users and minimize...full discussion of the Ramsey pricing techniques is provided in Allocation of Federal Airport and Airway Costs for FY 1985 [6]. In step 5...

  19. Stochastic multi-objective model for optimal energy exchange optimization of networked microgrids with presence of renewable generation under risk-based strategies.

    PubMed

    Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad

    2018-02-01

    The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three different risk-based strategies are distinguished by using the conditional value at risk (CVaR) approach. The proposed model is specified with two distinct objective functions. The first function minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and MGs, as well as the power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, the stochastic scenario-based approach is incorporated into the approach in order to handle the uncertainty. Also, the Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system, and the obtained results show that the presented approach can be considered an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
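
    A minimal sketch of the CVaR computation that drives the risk-based strategies (scenario costs are synthetic and assumed equiprobable):

    ```python
    import numpy as np

    def cvar(costs, alpha=0.95):
        """Conditional value at risk: mean of the worst (1 - alpha) share of
        scenario costs (scenarios assumed equally probable)."""
        costs = np.sort(np.asarray(costs))
        return costs[int(np.ceil(alpha * len(costs))):].mean()

    # Synthetic per-scenario operation costs of a candidate exchange schedule.
    scenario_costs = np.random.default_rng(5).gamma(4.0, 25.0, size=1000)
    print("expected cost:", scenario_costs.mean())
    print("CVaR(0.95):   ", cvar(scenario_costs))  # the tail quantity that a
                                                   # risk-averse strategy penalizes
    ```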

  20. Climate Intervention as an Optimization Problem

    NASA Astrophysics Data System (ADS)

    Caldeira, Ken; Ban-Weiss, George A.

    2010-05-01

    Typically, climate models simulations of intentional intervention in the climate system have taken the approach of imposing a change (eg, in solar flux, aerosol concentrations, aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (ie, aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate response to each basis function taken individually. We then developed several cost functions (eg, relative to the 1xCO2 climate, minimize rms difference in zonal and annual mean land temperature, minimize rms difference in zonal and annual mean runoff, minimize rms difference in a combination of these temperature and runoff indices) and then predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results. Obviously, our climate model is much simpler than reality and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high CO2 world markedly more similar to that of a low CO2 world simultaneously for both temperature and hydrological indices, where degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentrations distributions that can be defined in terms of a positive-definite quadratic equation on the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of rms difference in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (eg, socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics) so we do not argue strongly for our specific climate model results, however, we do argue strongly in favor of our methodological approach. 
The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise. Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
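
    A minimal sketch of the optimization step, assuming (as tested in the study) that climate responses to the forcing basis functions combine approximately linearly, so minimizing an rms-difference cost function reduces to least squares over the basis weights. All response patterns below are synthetic placeholders, not model output:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    nlat, nbasis = 32, 3
    # Synthetic zonal-mean responses: column i stands for the (precomputed)
    # climate response to aerosol basis function i in the 2xCO2 world.
    R = rng.normal(size=(nlat, nbasis))
    # Synthetic target: the 1xCO2-minus-2xCO2 zonal-mean anomaly to recreate.
    target = R @ np.array([0.8, 0.3, 0.1]) + 0.05 * rng.normal(size=nlat)

    # If responses add linearly, the cost J(w) = ||R @ w - target||^2 is
    # minimized by ordinary least squares over the basis weights w.
    w, *_ = np.linalg.lstsq(R, target, rcond=None)
    print("optimal basis weights:", w)
    print("rms residual:", np.sqrt(np.mean((R @ w - target) ** 2)))
    ```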

  1. Probabilistic distance-based quantizer design for distributed estimation

    NASA Astrophysics Data System (ADS)

    Kim, Yoon Hak

    2016-12-01

    We consider an iterative design of independently operating local quantizers at nodes that should cooperate without interaction to achieve application objectives for distributed estimation systems. We suggest as a new cost function a probabilistic distance between the posterior distribution and its quantized one expressed as the Kullback Leibler (KL) divergence. We first present the analysis that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the logarithmic quantized posterior distribution on the average which can be further computationally reduced in our iterative design. We propose an iterative design algorithm that seeks to maximize the simplified version of the posterior quantized distribution and discuss that our algorithm converges to a global optimum due to the convexity of the cost function and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for a practical use of power-constrained nodes. We finally demonstrate through extensive experiments an obvious advantage of improved estimation performance as compared with the typical designs and the novel design techniques previously published.

  2. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary: The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
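
    A minimal sketch of the two rules under a hypothetical cost model (a convex total-cost curve is assumed so that the average cost per subject has an interior minimum):

    ```python
    import numpy as np

    # Hypothetical cost model: fixed overhead plus per-subject costs that rise
    # with n (e.g., recruitment gets harder as the pool is exhausted).
    fixed, c1, c2 = 50_000.0, 300.0, 0.5
    total = lambda n: fixed + c1 * n + c2 * n ** 2

    n = np.arange(10, 3001)
    n_avg  = n[np.argmin(total(n) / n)]           # minimize average cost per subject
    n_root = n[np.argmin(total(n) / np.sqrt(n))]  # minimize total cost / sqrt(n)
    print("n minimizing cost per subject:", n_avg)    # ~ sqrt(fixed / c2)
    print("n minimizing cost / sqrt(n): ", n_root)
    ```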

  3. OPTIM: Computer program to generate a vertical profile which minimizes aircraft fuel burn or direct operating cost. User's guide

    NASA Technical Reports Server (NTRS)

    1983-01-01

    A profile of altitude, airspeed, and flight path angle as a function of range between a given set of origin and destination points for particular models of transport aircraft provided by NASA is generated. Inputs to the program include the vertical wind profile, the aircraft takeoff weight, the costs of time and fuel, certain constraint parameters and control flags. The profile can be near optimum in the sense of minimizing: (1) fuel, (2) time, or (3) a combination of fuel and time (direct operating cost (DOC)). The user can also, as an option, specify the length of time the flight is to span. The theory behind the technical details of this program is also presented.

  4. Constant-Elasticity-of-Substitution Simulation

    NASA Technical Reports Server (NTRS)

    Reiter, G.

    1986-01-01

    Program simulates constant elasticity-of-substitution (CES) production function. CES function used by economic analysts to examine production costs as well as uncertainties in production. User provides such input parameters as price of labor, price of capital, and dispersion levels. CES minimizes expected cost to produce capital-uncertainty pair. By varying capital-value input, one obtains series of capital-uncertainty pairs. Capital-uncertainty pairs then used to generate several cost curves. CES program menu driven and features specific print menu for examining selected output curves. Program written in BASIC for interactive execution and implemented on IBM PC-series computer.
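
    A minimal sketch of the cost-minimization calculation underlying a CES production function, solved numerically; parameter values and prices are illustrative, and the program's treatment of uncertainty is omitted here:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    a, rho, q_target = 0.4, 0.5, 10.0   # CES share, substitution parameter, output
    r, w = 2.0, 1.0                     # price of capital, price of labor

    def total_cost(x):
        K, L = np.exp(x)                # log parametrization keeps K, L > 0
        return r * K + w * L

    def output_gap(x):
        """CES production constraint: q(K, L) must equal the output target."""
        K, L = np.exp(x)
        return (a * K**rho + (1 - a) * L**rho) ** (1 / rho) - q_target

    res = minimize(total_cost, x0=[1.0, 1.0], method="SLSQP",
                   constraints=[{"type": "eq", "fun": output_gap}])
    K, L = np.exp(res.x)
    print(f"cost-minimizing inputs: K = {K:.2f}, L = {L:.2f}, cost = {r*K + w*L:.2f}")
    ```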

  5. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
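
    The abstract's central result is easy to reproduce numerically: to first order, a fractional power loss at any fabrication step raises the system cost per watt by the same fraction. A minimal sketch, with hypothetical cost and power numbers:

```python
# Illustrates the stated result: a fractional power loss at any cell-
# or array-fabrication step raises the system cost per watt by (to
# first order) the same fraction. All numbers are hypothetical.

def cost_per_watt(total_cost, rated_power, step_losses):
    power = rated_power
    for loss in step_losses:          # each loss is a fraction, e.g. 0.02
        power *= (1.0 - loss)
    return total_cost / power

base = cost_per_watt(10_000.0, 5_000.0, [0.02, 0.03])
worse = cost_per_watt(10_000.0, 5_000.0, [0.02, 0.03, 0.05])
print(base, worse, worse / base)  # ratio = 1 / (1 - 0.05) ~ 1.053
```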

  6. Active Control of the Forced and Transient Response of a Finite Beam. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Post, John Theodore

    1989-01-01

    When studying structural vibrations resulting from a concentrated source, many structures may be modelled as a finite beam excited by a point source. The theoretical limit on cancelling the resulting beam vibrations by utilizing another point source as an active controller is explored. Three different types of excitation are considered: harmonic, random, and transient. In each case, a cost function is defined and minimized for numerous parameter variations. For the case of harmonic excitation, the cost function is obtained by integrating the mean squared displacement over a region of the beam in which control is desired. A controller is then found to minimize this cost function in the control interval. The control interval and controller location are continuously varied for several frequencies of excitation. The results show that control over the entire beam length is possible only when the excitation frequency is near a resonant frequency of the beam, but control over a subregion may be obtained even between resonant frequencies at the cost of increasing the vibration outside the control region. For random excitation, the cost function is realized by integrating the expected value of the displacement squared over the interval of the beam in which control is desired. This is shown to yield the identical cost function as obtained by integrating the cost function for harmonic excitation over all excitation frequencies. As a result, it is always possible to reduce the cost function for random excitation, whether controlling the entire beam or just a subregion, without ever increasing the vibration outside the region in which control is desired. The last type of excitation considered is a single, transient pulse. A cost function representative of the beam vibration is obtained by integrating the transient displacement squared over a region of the beam and over all time. The form of the controller is chosen a priori as either one or two delayed pulses. Delays constrain the controller to be causal. The best possible control is then examined while varying the region of control and the controller location. It is found that control is always possible using either one or two control pulses. The two-pulse controller gives better performance than a single-pulse controller, but the effort of finding the optimal delay times for the additional controllers increases as the square of the number of control pulses.
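
    For the harmonic case, minimizing an integrated mean-square displacement over a control region is a linear least-squares problem in the complex controller amplitude: with primary response p(x) and unit-drive controller response s(x) sampled over the region, the optimum is a* = -(s^H p)/(s^H s). The two-mode beam model below is purely illustrative, not the thesis model:

```python
import numpy as np

# Least-squares optimum for harmonic active control: choose the complex
# controller amplitude a minimizing sum |p + a*s|^2 over the control
# region. The two-mode beam responses below are invented.

x = np.linspace(0.2, 0.5, 200)            # control region on a unit beam
mode = lambda n, x: np.sin(n * np.pi * x)

# Complex modal responses at the excitation frequency (hypothetical).
p = 1.0 * mode(1, x) + (0.3 - 0.2j) * mode(2, x)   # primary source
s = 0.8 * mode(1, x) + (0.5 + 0.1j) * mode(2, x)   # controller, unit drive

a_opt = -np.vdot(s, p) / np.vdot(s, s)    # a* = -(s^H p) / (s^H s)
residual = p + a_opt * s
print("cost before:", np.sum(np.abs(p) ** 2))
print("cost after :", np.sum(np.abs(residual) ** 2))
```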

  7. A real-space approach to the X-ray phase problem

    NASA Astrophysics Data System (ADS)

    Liu, Xiangan

    Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so-called direct methods. Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search for the global minimum of the cost function, which can be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual-space costs. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real-space cost function representing the discrepancy between measured and calculated intensities. Compared to the single-space cost functions, the dual-space cost function has a greatly improved landscape and therefore can prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H49N7O10 · 3CH3OH), isoleucinomycin (C60H102N6O18) and hexadecaisoleucinomycin (HEXIL) (C80H136N8O24) can now be solved, whereas this would not be possible using a single cost function. When a molecule gets larger, the configurational space becomes larger, and the required CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high-density regions in space determined by an approximate density map, which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to considerable reduction of the configurational space. It greatly improves the algorithm compared to the previous uniform sampling. Hence, for instance, 90% of computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragments is proposed to reduce the number of degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.
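
    A generic simulated-annealing skeleton of the kind the author employs looks as follows; the toy two-dimensional cost stands in for the dual-space crystallographic cost, which would require measured intensities and phase-triplet statistics to evaluate:

```python
import math, random

# Generic simulated-annealing skeleton: propose random moves, accept
# with the Metropolis criterion, cool slowly. The toy cost below is a
# stand-in with many local minima; it is not the crystallographic cost.

def anneal(cost, state, step, t0=1.0, cooling=0.999, n_iter=50_000):
    t = t0
    cur, cur_c = state, cost(state)
    best, best_c = cur, cur_c
    for _ in range(n_iter):
        cand = step(cur)
        cand_c = cost(cand)
        if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
            cur, cur_c = cand, cand_c
            if cur_c < best_c:
                best, best_c = cur, cur_c
        t *= cooling
    return best, best_c

cost = lambda s: (s[0] ** 2 + s[1] ** 2) + 2.0 * math.sin(5 * s[0]) * math.sin(5 * s[1])
step = lambda s: (s[0] + random.gauss(0, 0.1), s[1] + random.gauss(0, 0.1))
print(anneal(cost, (3.0, -2.0), step))
```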

  8. A minimal multiconfigurational technique.

    PubMed

    Fernández Rico, J; Paniagua, M; GarcíA De La Vega, J M; Fernández-Alonso, J I; Fantucci, P

    1986-04-01

    A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increase in time per iteration with respect to the one-determinant calculation and good convergence properties have been found. Thus, qualitatively correct studies on singlet systems with strong biradical character can be performed at a cost similar to that required by Hartree-Fock calculations. Copyright © 1986 John Wiley & Sons, Inc.

  9. Asteroid Crew Segment Mission Lean Development

    NASA Technical Reports Server (NTRS)

    Gard, Joseph; McDonald, Mark

    2014-01-01

    The Asteroid Retrieval Crewed Mission (ARCM) requires a minimum set of key capabilities, compared here in the context of the baseline EM-1/2 Orion and SLS capabilities. These include life support and human systems capabilities and mission kit capabilities, while minimizing the impact on the Orion and SLS development schedules and funding. Leveraging existing technology development efforts to develop the kits adds functionality to Orion while minimizing cost and mass impact.

  10. Getting Down to Business.

    ERIC Educational Resources Information Center

    Wood, Lonnie

    1998-01-01

    A dozen schools in Colorado opened their doors to professional performance auditors to evaluate their effectiveness and efficiency. The audit reports recommended finding precise costs of functions, programs, and operations; minimizing duplication; and increasing accountability. (MLF)

  11. Navigable networks as Nash equilibria of navigation games.

    PubMed

    Gulyás, András; Bíró, József J; Kőrösi, Attila; Rétvári, Gábor; Krioukov, Dmitri

    2015-07-03

    Common sense suggests that networks are not random mazes of purposeless connections, but that these connections are organized so that networks can perform their functions well. One function common to many networks is targeted transport or navigation. Here, using game theory, we show that minimalistic networks designed to maximize the navigation efficiency at minimal cost share basic structural properties with real networks. These idealistic networks are Nash equilibria of a network construction game whose purpose is to find an optimal trade-off between the network cost and navigability. We show that these skeletons are present in the Internet, metabolic, English word, US airport, Hungarian road networks, and in a structural network of the human brain. The knowledge of these skeletons allows one to identify the minimal number of edges, by altering which one can efficiently improve or paralyse navigation in the network.

  12. A cost minimization analysis of early correction of anterior crossbite-a randomized controlled trial.

    PubMed

    Wiedel, Anna-Paulina; Norlund, Anders; Petrén, Sofia; Bondemark, Lars

    2016-04-01

    Economic evaluations provide an important basis for allocation of resources and health services planning. The aim of this study was to evaluate and compare the costs of correcting anterior crossbite with functional shift, using fixed or removable appliances (FA or RA), and to relate the costs to the effects, using cost-minimization analysis. Sixty-two patients with anterior crossbite and functional shift were randomized in blocks of 10. Thirty-one patients were randomized to be treated with brackets and arch wire (FA) and 31 with an acrylic plate (RA). Duration of treatment and the number and estimated length of appointments and cancellations were registered. Direct costs (premises, staff salaries, material, and laboratory costs) and indirect costs (the accompanying parents' loss of income while absent from work) were calculated and evaluated with reference to successful outcome alone, to successful and unsuccessful outcomes, and to re-treatment when required. Societal costs were defined as the sum of direct and indirect costs. The interventions were treatment with FA or RA. There were no significant differences between FA and RA with respect to direct costs for treatment time, but both indirect costs and direct costs for material were significantly lower for FA. The total societal costs were lower for FA than for RA. Costs depend on local factors and should not be directly extrapolated to other locations. The analysis disclosed significant economic benefits for FA over RA. Even when only successful outcomes were assessed, treatment with RA was more expensive. This trial was not registered. The protocol was not published before trial commencement. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  13. A cost minimization analysis of early correction of anterior crossbite—a randomized controlled trial

    PubMed Central

    Norlund, Anders; Petrén, Sofia; Bondemark, Lars

    2016-01-01

    Summary Objective: Economic evaluations provide an important basis for allocation of resources and health services planning. The aim of this study was to evaluate and compare the costs of correcting anterior crossbite with functional shift, using fixed or removable appliances (FA or RA) and to relate the costs to the effects, using cost-minimization analysis. Design, Setting, and Participants: Sixty-two patients with anterior crossbite and functional shift were randomized in blocks of 10. Thirty-one patients were randomized to be treated with brackets and arch wire (FA) and 31 with an acrylic plate (RA). Duration of treatment and number and estimated length of appointments and cancellations were registered. Direct costs (premises, staff salaries, material, and laboratory costs) and indirect costs (the accompanying parents’ loss of income while absent from work) were calculated and evaluated with reference to successful outcome alone, to successful and unsuccessful outcomes and to re-treatment when required. Societal costs were defined as the sum of direct and indirect costs. Interventions: Treatment with FA or RA. Results: There were no significant differences between FA and RA with respect to direct costs for treatment time, but both indirect costs and direct costs for material were significantly lower for FA. The total societal costs were lower for FA than for RA. Limitations: Costs depend on local factors and should not be directly extrapolated to other locations. Conclusion: The analysis disclosed significant economic benefits for FA over RA. Even when only successful outcomes were assessed, treatment with RA was more expensive. Trial registration: This trial was not registered. Protocol: The protocol was not published before trial commencement. PMID:25940585

  14. A grid layout algorithm for automatic drawing of biochemical networks.

    PubMed

    Li, Weijiang; Kurata, Hiroyuki

    2005-05-01

    Visualization is indispensable in the research of complex biochemical networks. Available graph layout algorithms are not adequate for satisfactorily drawing such networks. New methods are required to visualize automatically the topological architectures and facilitate the understanding of the functions of the networks. We propose a novel layout algorithm to draw complex biochemical networks. A network is modeled as a system of interacting nodes on squared grids. A discrete cost function between each node pair is designed based on the topological relation and the geometric positions of the two nodes. The layouts are produced by minimizing the total cost. We design a fast algorithm to minimize the discrete cost function, by which candidate layouts can be produced efficiently. A simulated annealing procedure is used to choose better candidates. Our algorithm demonstrates its ability to exhibit cluster structures clearly in relatively compact layout areas without any prior knowledge. We developed Windows software to implement the algorithm for CADLIVE. All materials can be freely downloaded from http://kurata21.bio.kyutech.ac.jp/grid/grid_layout.htm; http://www.cadlive.jp/

  15. The economics of data acquisition computers for ST and MST radars

    NASA Technical Reports Server (NTRS)

    Watkins, B. J.

    1983-01-01

    Some low-cost options for data acquisition computers for ST (stratosphere, troposphere) and MST (mesosphere, stratosphere, troposphere) radars are presented. The particular equipment discussed reflects choices made by the University of Alaska group, but of course many other options exist. The low-cost microprocessor and array processor approach presented here has several advantages because of its modularity. An inexpensive system may be configured for a minimum-performance ST radar, whereas a multiprocessor and/or multi-array-processor system may be used for a higher performance MST radar. This modularity is important for a network of radars because the initial cost is minimized while future upgrades remain possible at minimal expense. This modularity also aids in lowering the cost of software development because system expansions should require few software changes. The functions of the radar computer will be to obtain Doppler spectra in near real time with some minor analysis such as vector wind determination.

  16. Design of a magnetic-tunnel-junction-oriented nonvolatile lookup table circuit with write-operation-minimized data shifting

    NASA Astrophysics Data System (ADS)

    Suzuki, Daisuke; Hanyu, Takahiro

    2018-04-01

    A magnetic-tunnel-junction (MTJ)-oriented nonvolatile lookup table (LUT) circuit is proposed, in which a low-power data-shift function is performed by minimizing the number of write operations in the MTJ devices. The permutation of the configuration memory cells for read/write access is performed instead of conventional direct data shifting to minimize the number of write operations, which results in significant write energy savings in the data-shift function. Moreover, the hardware cost of the proposed LUT circuit is small, since the selector is shared between read access and write access. In fact, the power consumption in the data-shift function and the transistor count are reduced by 82% and 52%, respectively, compared with those of a conventional static random-access-memory-based implementation using a 90 nm CMOS technology.

  17. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.

  18. Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Rodriguez, Miguel A.

    The high electricity rates charged to consumers by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. To aid in the transition from fossil-fuel-based electricity to electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Model for Electric Renewables (HOMER) software, developed by NREL, is utilized as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, and any excess energy sold to the utility on a yearly basis. A term for the social cost of carbon is also included in the cost function. Once the microgrid settings from HOMER are obtained, they are evaluated via the optimized function C(t), which in turn assesses the true optimality of the microgrid configuration. A microgrid supplying 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C(t) is minimized, and the net present value (NPV) and cost of electricity are computed for each configuration in order to assess true feasibility. Results show that the greater the penetration of components into the microgrid, the greater the energy produced by the renewable sources, and also the greater the energy not supplied due to outages. The proposed method demonstrates that adding large amounts of renewable components to a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configuration that further increases microgrid components will result in increased NPV and increased cost of electricity, rendering the configuration unfeasible.
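
    A yearly cost function of the general form described (grid energy, renewable energy, energy not supplied, energy sold back, plus a social-cost-of-carbon term) can be sketched directly; every price and the carbon intensity below are hypothetical placeholders, not values from the thesis:

```python
# Sketch of a yearly cost function of the form described in the
# abstract. All prices and the carbon intensity are hypothetical.

def yearly_cost(e_grid, e_renew, e_not_supplied, e_sold,
                price_grid=0.22,      # $/kWh bought from the utility
                lcoe_renew=0.10,      # $/kWh from local renewables
                voll=1.50,            # value of lost load, $/kWh
                price_sell=0.08,      # $/kWh sold back to the utility
                scc=0.05,             # social cost of carbon, $/kg CO2
                grid_co2=0.7):        # kg CO2 per grid kWh
    return (price_grid * e_grid
            + lcoe_renew * e_renew
            + voll * e_not_supplied
            - price_sell * e_sold
            + scc * grid_co2 * e_grid)

print(yearly_cost(e_grid=6_000, e_renew=4_000, e_not_supplied=50, e_sold=800))
```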

  19. Balancing building and maintenance costs in growing transport networks

    NASA Astrophysics Data System (ADS)

    Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco

    2017-09-01

    The costs associated with the length of links impose unavoidable constraints on the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.

  20. Experimental demonstration of using divergence cost-function in SPGD algorithm for coherent beam combining with tip/tilt control.

    PubMed

    Geng, Chao; Luo, Wen; Tan, Yi; Liu, Hongmei; Mu, Jinbo; Li, Xinyang

    2013-10-21

    A novel approach to tip/tilt control using a divergence cost function in the stochastic parallel gradient descent (SPGD) algorithm for coherent beam combining (CBC) is proposed and demonstrated experimentally in a seven-channel 2-W fiber amplifier array with both phase locking and tip/tilt control, for the first time to our best knowledge. Compared with the conventional power-in-the-bucket (PIB) cost function for SPGD optimization, tip/tilt control using the divergence cost function ensures a wider correction range, automatic switching control of the program, and freedom from camera intensity saturation. A homemade piezoelectric-ring phase modulator (PZT PM) and an adaptive fiber-optics collimator (AFOC) are developed to correct piston- and tip/tilt-type aberrations, respectively. The PIB cost function is employed for phase locking via maximization in the SPGD optimization, while the divergence cost function is used for tip/tilt control via minimization. The divergence metric decreased from an average of 432 μrad in open loop to 89 μrad when tip/tilt control was implemented. In CBC, the power in the full width at half maximum (FWHM) of the main lobe increases by a factor of 32, and the phase residual error is less than λ/15.
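
    The SPGD update itself is compact: apply paired random perturbations to the control vector, measure the resulting metric difference, and step along the stochastic gradient estimate. The sketch below uses an invented quadratic metric in place of a measured power-in-the-bucket or divergence signal:

```python
import numpy as np

# Minimal SPGD loop: apply paired +/- random perturbations to the
# control vector, measure the metric difference, and step along the
# stochastic gradient estimate. The quadratic metric stands in for a
# measured cost such as power-in-the-bucket (maximize) or divergence
# (minimize); `target` is an invented optimum.

rng = np.random.default_rng(0)
target = np.array([0.3, -0.7, 1.1])            # unknown optimal controls
metric = lambda u: -np.sum((u - target) ** 2)  # higher is better here

u = np.zeros(3)
gain, amp = 2.0, 0.05                          # update gain, dither size
for _ in range(500):
    du = amp * rng.choice([-1.0, 1.0], size=u.shape)  # Bernoulli dither
    dJ = metric(u + du) - metric(u - du)
    u += gain * dJ * du        # use "-=" instead to minimize a cost
print(u)                        # approaches `target`
```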

  1. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.

  2. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is twice as small as that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori showed recently that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  3. Early play may predict later dominance relationships in yellow-bellied marmots (Marmota flaviventris).

    PubMed

    Blumstein, Daniel T; Chung, Lawrance K; Smith, Jennifer E

    2013-05-22

    Play has been defined as apparently functionless behaviour, yet since play is costly, models of adaptive evolution predict that it should have some beneficial function (or functions) that outweigh its costs. We provide strong evidence for a long-standing, but poorly supported hypothesis: that early social play is practice for later dominance relationships. We calculated the relative dominance rank by observing the directional outcome of playful interactions in juvenile and yearling yellow-bellied marmots (Marmota flaviventris) and found that these rank relationships were correlated with later dominance ranks calculated from agonistic interactions; however, the strength of this relationship attenuated over time. While play may have multiple functions, one of them may be to establish later dominance relationships in a minimally costly way.

  4. Strategy of arm movement control is determined by minimization of neural effort for joint coordination.

    PubMed

    Dounskaia, Natalia; Shimansky, Yury

    2016-06-01

    Optimality criteria underlying organization of arm movements are often validated by testing their ability to adequately predict hand trajectories. However, kinematic redundancy of the arm allows production of the same hand trajectory through different joint coordination patterns. We therefore consider movement optimality at the level of joint coordination patterns. A review of studies of multi-joint movement control suggests that a 'trailing' pattern of joint control is consistently observed, during which a single ('leading') joint is rotated actively and interaction torque produced by this joint is the primary contributor to the motion of the other ('trailing') joints. A tendency to use the trailing pattern whenever the kinematic redundancy is sufficient, and increased utilization of this pattern during skillful movements, suggests optimality of the trailing pattern. The goal of this study is to determine the cost function whose minimization predicts the trailing pattern. We show that extensive experimental testing of many known cost functions cannot successfully explain optimality of the trailing pattern. We therefore propose a novel cost function that represents neural effort for joint coordination. That effort is quantified as the cost of neural information processing required for joint coordination. We show that a tendency to reduce this 'neurocomputational' cost predicts the trailing pattern and that the theoretically developed predictions fully agree with the experimental findings on control of multi-joint movements. Implications for future research of the suggested interpretation of the trailing joint control pattern and the theory of joint coordination underlying it are discussed.

  5. A variation reduction allocation model for quality improvement to minimize investment and quality costs by considering suppliers’ learning curve

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Jauhari, WA; Suhardi, B.; Hamada, K.

    2016-02-01

    Quality improvement must be performed in a company to maintain its product competitiveness in the market. The goal of such improvement is to increase customer satisfaction and the profitability of the company. In current practice, a company needs several suppliers to provide the components used in the assembly of a final product; hence, quality improvement of the final product must involve the suppliers. In this paper, an optimization model to allocate variance reduction is developed. Variance reduction is an important element of quality improvement for both the manufacturer and the suppliers. To improve the quality of the suppliers' components, the manufacturer must invest an amount of its financial resources in the suppliers' learning processes. The objective function of the model minimizes the total cost, which consists of the investment cost and the quality costs, both internal and external. The learning curve determines how the suppliers' employees respond to the learning processes in reducing the variance of the component.

  6. An Interactive Life Cycle Cost Forecasting Tool

    DTIC Science & Technology

    1990-03-01

    A tool was developed for Monte Carlo ... and B. Note that this is for a given configuration. The E represents effectiveness and is equated to some function of the quantity of systems A and B ... purchased. Either strategy, maximizing effectiveness or minimizing cost, leads to some type of cost comparison among the proposed systems.

  7. Comparison of joint space versus task force load distribution optimization for a multiarm manipulator system

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Alberts, Thomas E.

    1989-01-01

    It is often proposed that the redundancy in choosing a force distribution for multiple arms grasping a single object should be handled by minimizing a quadratic performance index. The performance index may be formulated in terms of joint torques or in terms of the Cartesian space force/torque applied to the body by the grippers. The former seeks to minimize power consumption while the latter minimizes body stresses. Because the cost functions are related to each other by a joint angle dependent transformation on the weight matrix, it might be argued that either method tends to reduce power consumption, but clearly the joint space minimization is optimal. A comparison of these two options is presented with consideration given to computational cost and power consumption. Simulation results using a two arm robot system are presented to show the savings realized by employing the joint space optimization. These savings are offset by additional complexity, computation time and in some cases processor power consumption.

  8. Investigating the performance of wavelet neural networks in ionospheric tomography using IGS data over Europe

    NASA Astrophysics Data System (ADS)

    Ghaffari Razin, Mir Reza; Voosoghi, Behzad

    2017-04-01

    Ionospheric tomography is a very cost-effective method used frequently for modeling electron density distributions. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with the back-propagation (BP) algorithm is used within the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: a total and a vertical cost function. Through minimization of these cost functions, temporal and spatial ionospheric variations are studied. GPS measurements of the International GNSS Service (IGS) in central Europe have been used to construct a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20, and 2013.06.01) with different solar activity indices are used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. The results of MRMTNN have also been compared with those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012), and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional methods.

  9. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.
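
    In the linear-Gaussian case the paper highlights, the minimization-based mapping is exact and can be checked in a few lines. The sketch below implements randomized maximum likelihood for an invented 2-D toy problem: each posterior sample is the closed-form minimizer of a quadratic cost built from a perturbed observation and a prior draw:

```python
import numpy as np

# Randomized maximum likelihood in the linear-Gaussian case, where the
# minimization-based mapping of prior samples to posterior samples is
# exact. Each sample minimizes
#   J(m) = (d* - G m)^T C_D^-1 (d* - G m) + (m - m*)^T C_M^-1 (m - m*)
# with a perturbed observation d* and a prior draw m*. Toy 2-D problem.

rng = np.random.default_rng(1)
G = np.array([[1.0, 0.5]])                 # linear observation operator
C_M = np.array([[1.0, 0.3], [0.3, 1.0]])   # prior covariance
C_D = np.array([[0.1]])                    # observation-noise covariance
m_prior = np.zeros(2)
d_obs = np.array([1.2])

H = np.linalg.inv(np.linalg.inv(C_M) + G.T @ np.linalg.inv(C_D) @ G)

samples = []
for _ in range(2000):
    m_star = rng.multivariate_normal(m_prior, C_M)        # prior draw
    d_star = d_obs + rng.multivariate_normal([0.0], C_D)  # perturbed data
    # Minimizer of the quadratic cost J(m), in closed form:
    m_map = H @ (np.linalg.inv(C_M) @ m_star + G.T @ np.linalg.inv(C_D) @ d_star)
    samples.append(m_map)

samples = np.array(samples)
print(samples.mean(axis=0))   # matches the analytic posterior mean
print(np.cov(samples.T))      # matches the analytic posterior covariance
```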

  10. High quality 4D cone-beam CT reconstruction using motion-compensated total variation regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Ma, Jianhua; Bian, Zhaoying; Zeng, Dong; Feng, Qianjin; Chen, Wufan

    2017-04-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) has great potential clinical value because of its ability to describe tumor and organ motion. The challenge in 4D-CBCT reconstruction is the limited number of projections at each phase, which results in reconstructions full of noise and streak artifacts when conventional analytical algorithms are used. To address this problem, we propose a motion-compensated total variation regularization approach that tries to fully exploit the temporal coherence of the spatial structures among the 4D-CBCT phases. In this work, we additionally conduct motion estimation/motion compensation (ME/MC) on the 4D-CBCT volume by using inter-phase deformation vector fields (DVFs). The motion-compensated 4D-CBCT volume is then viewed as a pseudo-static sequence, on which the regularization function is imposed. The regularization used in this work is 3D spatial total variation minimization combined with 1D temporal total variation minimization. We subsequently construct a cost function for a reconstruction pass and minimize this cost function using a variable splitting algorithm. Simulation and real patient data were used to evaluate the proposed algorithm. Results show that the introduction of additional temporal correlation along the phase direction can improve the 4D-CBCT image quality.
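
    The regularizer described, 3D spatial total variation within each phase plus 1D temporal total variation across phases, is straightforward to write down for a (phases, z, y, x) array. The sketch below uses an anisotropic TV and invented weights; the full reconstruction would add the data-fidelity term and a variable-splitting solver:

```python
import numpy as np

# Anisotropic sketch of the regularizer described: 3-D spatial total
# variation within each phase plus 1-D temporal total variation across
# phases, on a (motion-compensated) 4D volume of shape (phases, z, y, x).
# Weights are invented.

def tv_cost(vol4d, lam_spatial=1.0, lam_temporal=0.5):
    dz = np.diff(vol4d, axis=1)
    dy = np.diff(vol4d, axis=2)
    dx = np.diff(vol4d, axis=3)
    spatial = np.abs(dz).sum() + np.abs(dy).sum() + np.abs(dx).sum()
    temporal = np.abs(np.diff(vol4d, axis=0)).sum()   # across phases
    return lam_spatial * spatial + lam_temporal * temporal

vol = np.random.rand(8, 16, 16, 16)   # 8 phases of a 16^3 toy volume
print(tv_cost(vol))
```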

  11. Modeling the lowest-cost splitting of a herd of cows by optimizing a cost function

    NASA Astrophysics Data System (ADS)

    Gajamannage, Kelum; Bollt, Erik M.; Porter, Mason A.; Dawkins, Marian S.

    2017-06-01

    Animals live in groups to defend against predation and to obtain food. However, for some animals—especially ones that spend long periods of time feeding—there are costs if a group chooses to move on before their nutritional needs are satisfied. If the conflict between feeding and keeping up with a group becomes too large, it may be advantageous for some groups of animals to split into subgroups with similar nutritional needs. We model the costs and benefits of splitting in a herd of cows using a cost function that quantifies individual variation in hunger, desire to lie down, and predation risk. We model the costs associated with hunger and lying desire as the standard deviations of individuals within a group, and we model predation risk as an inverse exponential function of the group size. We minimize the cost function over all plausible groups that can arise from a given herd and study the dynamics of group splitting. We examine how the cow dynamics and cost function depend on the parameters in the model and consider two biologically-motivated examples: (1) group switching and group fission in a herd of relatively homogeneous cows, and (2) a herd with an equal number of adult males (larger animals) and adult females (smaller animals).
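
    A cost function of the stated form is simple to evaluate over candidate splits. The sketch below uses within-group standard deviations of hunger and lying desire plus an exponentially decaying predation term, with invented weights and random data, and enumerates all two-way splits of a small herd:

```python
import itertools
import numpy as np

# Cost function of the form described: within-group standard deviations
# of hunger and lying desire, plus a predation-risk term that decays
# exponentially with group size. Weights and data are invented.

def group_cost(hunger, lying, k_pred=2.0, r=0.5):
    return np.std(hunger) + np.std(lying) + k_pred * np.exp(-r * len(hunger))

def herd_cost(hunger, lying, groups):
    return sum(group_cost(hunger[list(g)], lying[list(g)]) for g in groups)

rng = np.random.default_rng(2)
n = 8
hunger = rng.random(n)
lying = rng.random(n)

# Compare keeping one herd against every two-way split.
cows = set(range(n))
best = (herd_cost(hunger, lying, [tuple(cows)]), (tuple(cows),))
for size in range(1, n // 2 + 1):
    for g1 in itertools.combinations(cows, size):
        g2 = tuple(cows - set(g1))
        c = herd_cost(hunger, lying, [g1, g2])
        if c < best[0]:
            best = (c, (g1, g2))
print(best)   # lowest-cost grouping found
```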

  12. Variational stereo imaging of oceanic waves with statistical constraints.

    PubMed

    Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise

    2013-11-01

    An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.

  13. Navigable networks as Nash equilibria of navigation games

    PubMed Central

    Gulyás, András; Bíró, József J.; Kőrösi, Attila; Rétvári, Gábor; Krioukov, Dmitri

    2015-01-01

    Common sense suggests that networks are not random mazes of purposeless connections, but that these connections are organized so that networks can perform their functions well. One function common to many networks is targeted transport or navigation. Here, using game theory, we show that minimalistic networks designed to maximize the navigation efficiency at minimal cost share basic structural properties with real networks. These idealistic networks are Nash equilibria of a network construction game whose purpose is to find an optimal trade-off between the network cost and navigability. We show that these skeletons are present in the Internet, metabolic, English word, US airport, Hungarian road networks, and in a structural network of the human brain. The knowledge of these skeletons allows one to identify the minimal number of edges, by altering which one can efficiently improve or paralyse navigation in the network. PMID:26138277

  14. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for the development of logic designs, together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and the development of algorithms and heuristics for minimizing the computation of tests; and (2) a method of logic design for ultra LSI (large-scale integration). It was discovered that the so-called quantum calculus can be extended to make it possible (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures in the mechanism using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented, and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  15. Transient Dissipation and Structural Costs of Physical Information Transduction

    NASA Astrophysics Data System (ADS)

    Boyd, Alexander B.; Mandal, Dibyendu; Riechers, Paul M.; Crutchfield, James P.

    2017-06-01

    A central result that arose in applying information theory to the stochastic thermodynamics of nonlinear dynamical systems is the information-processing second law (IPSL): the physical entropy of the Universe can decrease if compensated by the Shannon-Kolmogorov-Sinai entropy change of appropriate information-carrying degrees of freedom. In particular, the asymptotic-rate IPSL precisely delineates the thermodynamic functioning of autonomous Maxwellian demons and information engines. How do these systems begin to function as engines, Landauer erasers, and error correctors? We identify a minimal, and thus inescapable, transient dissipation of physical information processing, which is not captured by asymptotic rates but is critical to adaptive thermodynamic processes such as those found in biological systems. As a component of transient dissipation, we also identify an implementation-dependent cost that varies from one physical substrate to another for the same information-processing task. Applying these results to producing structured patterns from a structureless information reservoir, we show that "retrodictive" generators achieve the minimal costs. The results establish the thermodynamic toll imposed by a physical system's structure as it comes to optimally transduce information.

  16. "Cost creep due to age creep" phenomenon: pattern analyses of in-patient hospitalization costs for various age brackets in the United States.

    PubMed

    Chinta, Ravi; Burns, David J; Manolis, Chris; Nighswander, Tristan

    2013-01-01

    The expectation that aging leads to a progressive deterioration of biological functions leading to higher healthcare costs is known as the healthcare cost creep due to age creep phenomenon. The authors empirically test the validity of this phenomenon in the context of hospitalization costs based on more than 8 million hospital inpatient records from 1,056 hospitals in the United States. The results question the existence of cost creep due to age creep after the age of 65 years as far as average hospitalization costs are concerned. The authors discuss implications for potential knowledge transfer for cost minimization and medical tourism.

  17. A Study to Develop a Methodology to Enable Direct Cost UCA (Uniform Chart of Accounts) Data to Be Expressed in Terms of Cost per Admission for Specific Diagnosis Related Groups

    DTIC Science & Technology

    1985-07-01

    ... patient conditions might run from heart attacks, which often receive intensive care and extensive follow-up support, to episodes of acute respiratory ... to include the following features: a. No disruption of existing IAS and UCA functions as defined by appropriate regulations; b. Minimized additional

  18. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
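
    The reason quadratic vision costs map onto resistive networks is that the minimizer of a quadratic form satisfies a linear system, which is exactly the node (Kirchhoff current law) equation of a resistor grid with data injected as currents. A one-dimensional smoothing analogue, invented for illustration:

```python
import numpy as np

# Why quadratic vision cost functions map onto resistive networks: the
# minimizer of a quadratic cost satisfies a linear system, which is the
# node equation of a resistor grid with data injected as currents.
# 1-D smoothing toy example (invented data):
#   E(u) = sum_i (u_i - d_i)^2 + lam * sum_i (u_{i+1} - u_i)^2

n, lam = 50, 5.0
d = np.sin(np.linspace(0, 3, n)) + 0.2 * np.random.default_rng(3).normal(size=n)

# Setting dE/du = 0 gives (I + lam * L) u = d, with L the 1-D graph
# Laplacian (the conductance matrix of a resistor chain).
L = np.zeros((n, n))
for i in range(n - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1
u = np.linalg.solve(np.eye(n) + lam * L, d)
print(np.round(u[:5], 3))   # smoothed estimate of the noisy data
```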

  19. Systems and methods for energy cost optimization in a building system

    DOEpatents

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.
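
    A stripped-down version of this kind of optimization can be posed as a linear program: choose power inputs over a horizon to minimize energy cost under time-varying prices, subject to linear thermal dynamics and comfort bounds. The sketch below omits the patent's cascaded controller structure and demand-charge masking, and every number is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Toy horizon optimization: choose heating power p_t to minimize energy
# cost under time-varying prices, subject to linear thermal dynamics
# and comfort (temperature) bounds. All numbers are hypothetical; the
# cascaded structure and demand-charge masking are omitted.

H = 24
price = np.where((np.arange(H) >= 8) & (np.arange(H) < 20), 0.30, 0.10)
a, b, T_out, T0 = 0.9, 0.5, 5.0, 20.0   # T_next = a*T + b*p + (1-a)*T_out
T_min, T_max, p_max = 19.0, 23.0, 8.0

# Express the temperatures as an affine function of p: T = A @ p + c.
A = np.zeros((H, H))
c = np.zeros(H)
for t in range(1, H + 1):
    c[t - 1] = a**t * T0 + sum(a**(t - 1 - k) * (1 - a) * T_out for k in range(t))
    for k in range(t):
        A[t - 1, k] = a**(t - 1 - k) * b

# Comfort constraints T_min <= A p + c <= T_max, as A_ub x <= b_ub.
A_ub = np.vstack([A, -A])
b_ub = np.concatenate([T_max - c, c - T_min])
res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * H)
print(res.status, np.round(res.x, 2))   # power shifts into cheap hours
```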

  20. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    The relationships between field length and cruise speed and aircraft direct operating cost were determined. A gradient optimizing computer program was developed to minimize direct operating cost (DOC) as a function of airplane geometry. In this way, the best airplane operating under one set of constraints can be compared with the best operating under another. A constant 30-passenger fuselage and rubberized engines based on the General Electric CT-7 were used as a baseline. All aircraft had to have a 600 nautical mile maximum range and were designed to FAR part 25 structural integrity and climb gradient regulations. Direct operating cost was minimized for a typical design mission of 150 nautical miles. For purposes of calculating the maximum lift coefficient (CL,max), all aircraft had double-slotted flaps but with no Fowler action.

  1. Building Security into Schools.

    ERIC Educational Resources Information Center

    Kosar, John E.; Ahmed, Faruq

    2000-01-01

    Offers tips for redesigning safer school sites; installing and implementing security technologies (closed-circuit television cameras, door security hardware, electronic security panels, identification cards, metal detectors, and panic buttons); educating students and staff about security functions; and minimizing costs via a comprehensive campus…

  2. Knee point search using cascading top-k sorting with minimized time complexity.

    PubMed

    Wang, Zheng; Tseng, Shian-Shyong

    2013-01-01

    Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost in one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
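
    The paper's cascading top-k design depends on its probability model, so it is not reproduced here; as a simple stand-in, the sketch below locates a knee on a sorted curve as the point farthest from the chord joining the endpoints, a common baseline definition:

```python
import numpy as np

# Simple stand-in for locating a knee on a descending sorted curve: the
# point farthest from the chord joining the endpoints. This is a common
# baseline definition, not the paper's cascading top-k method.

def knee_index(y):
    y = np.sort(np.asarray(y, dtype=float))[::-1]
    x = np.arange(len(y), dtype=float)
    p0 = np.array([x[0], y[0]])
    p1 = np.array([x[-1], y[-1]])
    chord = p1 - p0
    chord /= np.linalg.norm(chord)
    rel = np.stack([x, y], axis=1) - p0
    # Perpendicular distance from each point to the chord.
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])
    return int(np.argmax(dist))

scores = np.concatenate([np.random.default_rng(4).pareto(2.0, 1000), [50.0]])
print(knee_index(scores))
```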

  3. Early play may predict later dominance relationships in yellow-bellied marmots (Marmota flaviventris)

    PubMed Central

    Blumstein, Daniel T.; Chung, Lawrance K.; Smith, Jennifer E.

    2013-01-01

    Play has been defined as apparently functionless behaviour, yet since play is costly, models of adaptive evolution predict that it should have some beneficial function (or functions) that outweigh its costs. We provide strong evidence for a long-standing, but poorly supported hypothesis: that early social play is practice for later dominance relationships. We calculated the relative dominance rank by observing the directional outcome of playful interactions in juvenile and yearling yellow-bellied marmots (Marmota flaviventris) and found that these rank relationships were correlated with later dominance ranks calculated from agonistic interactions; however, the strength of this relationship attenuated over time. While play may have multiple functions, one of them may be to establish later dominance relationships in a minimally costly way. PMID:23536602

  4. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization.

    PubMed

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-11-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell's capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified.
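
    The core quantity is easy to state in toy form: the enzyme needed to carry flux v is v divided by kcat times a saturation factor, so enzyme demand depends on metabolite levels. The sketch below, with entirely invented constants, shows the resulting trade-off over a single shared intermediate and minimizes it numerically; the actual ECM method handles whole pathways and detailed rate laws:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy form of enzyme cost minimization: enzyme demand is
# flux / (kcat * saturation), so it depends on metabolite levels. Here
# two reactions share one intermediate m: the upstream enzyme suffers
# product inhibition at high m, the downstream enzyme is starved at low
# m, giving a cost trade-off over m. All constants are invented.

v = 1.0                     # pathway flux (mM/s)
k1, k2 = 10.0, 8.0          # kcat values (1/s)
KI, KM = 0.5, 0.2           # inhibition and Michaelis constants (mM)
w1, w2 = 1.0, 1.5           # per-enzyme cost weights (e.g. protein size)

def enzyme_cost(m):
    e1 = v * (1.0 + m / KI) / k1        # upstream: product-inhibited
    e2 = v * (m + KM) / (k2 * m)        # downstream: substrate-limited
    return w1 * e1 + w2 * e2

res = minimize_scalar(enzyme_cost, bounds=(1e-3, 10.0), method="bounded")
print(res.x, res.fun)   # cost-optimal intermediate level and total cost
```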

  5. Cost and performance model for redox flow batteries

    NASA Astrophysics Data System (ADS)

    Viswanathan, Vilayanur; Crawford, Alasdair; Stephenson, David; Kim, Soowhan; Wang, Wei; Li, Bin; Coffey, Greg; Thomsen, Ed; Graff, Gordon; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2014-02-01

    A cost model is developed for all vanadium and iron-vanadium redox flow batteries. Electrochemical performance modeling is done to estimate stack performance at various power densities as a function of state of charge and operating conditions. This is supplemented with a shunt current model and a pumping loss model to estimate actual system efficiency. The operating parameters such as power density, flow rates and design parameters such as electrode aspect ratio and flow frame channel dimensions are adjusted to maximize efficiency and minimize capital costs. Detailed cost estimates are obtained from various vendors to calculate cost estimates for present, near-term and optimistic scenarios. The most cost-effective chemistries with optimum operating conditions for power or energy intensive applications are determined, providing a roadmap for battery management systems development for redox flow batteries. The main drivers for cost reduction for various chemistries are identified as a function of the energy to power ratio of the storage system. Levelized cost analysis further guide suitability of various chemistries for different applications.

  6. Hybrid optimal online-overnight charging coordination of plug-in electric vehicles in smart grid

    NASA Astrophysics Data System (ADS)

    Masoum, Mohammad A. S.; Nabavi, Seyed M. H.

    2016-10-01

    Optimal coordinated charging of plug-in electric vehicles (PEVs) in a smart grid (SG) can be beneficial for both consumers and utilities. This paper proposes a hybrid optimal online followed by overnight charging coordination of high- and low-priority PEVs using discrete particle swarm optimization (DPSO) that considers the benefits of both consumers and electric utilities. The objective functions are online minimization of total cost (associated with grid losses and energy generation) and overnight valley filling through minimization of the total load levels. The constraints include substation transformer loading, node voltage regulation, and the requested final battery state-of-charge levels (SOCreq). The main challenge is optimal selection of the overnight starting time (t_optimal-overnight,start) to guarantee charging of all vehicle batteries to the SOCreq levels before the requested plug-out times (t_req), which is done by simultaneously solving the online and overnight objective functions. The online-overnight PEV coordination approach is implemented on a 449-node SG; results are compared for uncoordinated and coordinated battery charging, as well as for a modified strategy using cost minimization for both online and overnight coordination. The impact of t_optimal-overnight,start on the performance of the proposed PEV coordination is investigated.

  7. Functionalized graphene-based cathode for highly reversible lithium-sulfur batteries.

    PubMed

    Kim, Jin Won; Ocon, Joey D; Park, Dong-Won; Lee, Jaeyoung

    2014-05-01

    In this article, we highlight the salient issues in the development of lithium-sulfur battery (LSB) cathodes, present different points of view in solving them, and argue, why in the future, functionalized graphene or graphene oxide might be the ultimate solution towards LSB commercialization. As shown by previous studies and also in our recent work, functionalized graphene and graphene oxide enhance the reversibility of the charge-discharge process by trapping polysulfides in the oxygen functional groups on the graphene surface, thus minimizing polysulfide dissolution. This will be helpful for the rational design of new cathode structures based on graphene for LSBs with minimal capacity fading, low extra cost, and without the unnecessary weight increase caused by metal/metal oxide additives. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Critical analysis of common canister programs: a review of cross-functional considerations and health system economics.

    PubMed

    Larson, Trent; Gudavalli, Ravindra; Prater, Dean; Sutton, Scott

    2015-04-01

    Respiratory inhalers constitute a large percentage of hospital pharmacy expenditures. Metered-dose inhaler (MDI) canisters usually contain enough medication to last 2 to 4 weeks, while the average hospital stay for acute hospitalizations of respiratory illnesses is only 4-5 days. Hospital pharmacies are often unable to operationalize relabeling of inhalers at discharge to meet regulatory requirements. This dilemma produces drug wastage. The common canister (CC) approach is a method some hospitals implemented in an effort to minimize the costs associated with this issue. The CC program uses a shared inhaler, an individual one-way valve holding chamber, and a cleaning protocol. This approach has been the subject of considerable controversy. Proponents of the CC approach reported considerable cost savings to their institutions. Opponents of the CC approach are not convinced the benefits outweigh even a minimal risk of cross-contamination since adherence to protocols for hand washing and disinfection of the MDI device cannot be guaranteed to be 100% (pathogens from contaminated devices can enter the respiratory tract through inhalation). Other cost containment strategies, such as unit dose nebulizers, may be useful to realize similar reductions in pharmacy drug costs while minimizing the risks of nosocomial infections and their associated medical costs. The CC strategy may be appropriate for some hospital pharmacies that face budget constraints, but a full evaluation of the risks, benefits, and potential costs should guide those who make hospital policy decisions.

  9. Cellular Manufacturing System with Dynamic Lot Size Material Handling

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.

    2016-02-01

    Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or to use a constant lot size. In real industrial practice, lot size may change over the rolling period to cope with demand changes. This study develops a CMS model with dynamic lot-size material handling. Integer linear programming is used to solve the problem. The objective function of this model minimizes the total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and the optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.

  10. Bio-inspired secure data mules for medical sensor network

    NASA Astrophysics Data System (ADS)

    Muraleedharan, Rajani; Gao, Weihua; Osadciw, Lisa A.

    2010-04-01

    Medical sensor networks consist of heterogeneous nodes (wireless, mobile and wired) with varied functionality. The resources at each sensor must be used minimally while sensitive information is sensed and communicated to its access points using secure data mules. In this paper, we analyze the flat architecture, where information of different functionality and priority requiring varied resources forms a non-deterministic polynomial-time hard problem. Hence, a bio-inspired data mule that helps to obtain a dynamic multi-objective solution with minimal resources and a secure path is applied. The performance of the proposed approach is evaluated in terms of reduced latency, data delivery rate and resource cost.

  11. TH-A-9A-04: Incorporating Liver Functionality in Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, V; Epelman, M; Feng, M

    2014-06-15

    Purpose: Liver SBRT patients have both variable pretreatment liver function (e.g., due to degree of cirrhosis and/or prior treatments) and sensitivity to radiation, leading to high variability in potential liver toxicity with similar doses. This work aims to explicitly incorporate liver perfusion into treatment planning to redistribute dose to preserve well-functioning areas without compromising target coverage. Methods: Voxel-based liver perfusion, a measure of functionality, was computed from dynamic contrast-enhanced MRI. Two optimization models with different cost functions subject to the same dose constraints (e.g., minimum target EUD and maximum critical structure EUDs) were compared. The cost functions minimized were EUD (standard model) and functionality-weighted EUD (functional model) to the liver. The resulting treatment plans delivering the same target EUD were compared with respect to their DVHs, their dose wash difference, the average dose delivered to voxels of a particular perfusion level, and the change in the number of high-/low-functioning voxels receiving a particular dose. Two-dimensional synthetic and three-dimensional clinical examples were studied. Results: The DVHs of all structures of plans from each model were comparable. In contrast, in plans obtained with the functional model, the average dose delivered to high-/low-functioning voxels was lower/higher than in plans obtained with its standard counterpart. The number of high-/low-functioning voxels receiving high/low dose was lower in the plans that considered perfusion in the cost function than in the plans that did not. Redistribution of dose can be observed in the dose wash differences. Conclusion: Liver perfusion can be used during treatment planning potentially to minimize the risk of toxicity during liver SBRT, resulting in better global liver function. The functional model redistributes dose in the standard model from higher to lower functioning voxels, while achieving the same target EUD and satisfying dose limits to critical structures. This project is funded by MCubed and grant R01-CA132834.
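
    The two cost functions being compared can be written down compactly. The sketch below contrasts a generalized EUD with a perfusion-weighted variant; the exponent, the weighting scheme and the synthetic voxel data are assumptions for illustration, not the authors' exact formulation.

        # Generalized EUD vs. a perfusion-weighted ("functional") EUD over liver
        # voxels. Exponent a and the weighting are illustrative assumptions.
        import numpy as np

        def geud(dose, a):
            """Generalized equivalent uniform dose: (mean d_i^a)^(1/a)."""
            return np.mean(dose ** a) ** (1.0 / a)

        def functional_geud(dose, perfusion, a):
            """Perfusion-weighted gEUD: well-functioning voxels count more."""
            w = perfusion / perfusion.sum()
            return np.sum(w * dose ** a) ** (1.0 / a)

        rng = np.random.default_rng(0)
        dose = rng.uniform(5, 40, size=1000)          # Gy, synthetic liver voxels
        perfusion = rng.uniform(0.1, 1.0, size=1000)  # relative functionality
        print(geud(dose, a=1.0), functional_geud(dose, perfusion, a=1.0))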

  12. Improved minimum cost and maximum power two stage genome-wide association study designs.

    PubMed

    Stanhope, Stephen A; Skol, Andrew D

    2012-01-01

    In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.

  13. FACTOR - FACTOR II. Departmental Program and Model Documentation 71-3.

    ERIC Educational Resources Information Center

    Wilson, Stanley; Billingsley, Ray

    This computer program is designed to optimize a Cobb-Douglas type of production function. The user of this program may choose isoquants and/or the expansion path for a Cobb-Douglas type of production function with up to nine resources. An expansion path is the combination of quantities of each resource that minimizes the cost at each production…
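
    A numerical sketch of what such a program computes: the least-cost input bundle for a Cobb-Douglas technology at a target output, traced over several outputs to give points on the expansion path. The technology and price parameters below are invented, not FACTOR's data.

        # Least-cost input bundles along the expansion path of a Cobb-Douglas
        # production function Q = A * x1^a1 * x2^a2; parameters are illustrative.
        import numpy as np
        from scipy.optimize import minimize

        A, a = 2.0, np.array([0.3, 0.5])
        w = np.array([4.0, 9.0])           # input prices

        def least_cost_bundle(Q):
            cons = {"type": "eq", "fun": lambda x: A * np.prod(x ** a) - Q}
            res = minimize(lambda x: w @ x, x0=np.array([1.0, 1.0]),
                           bounds=[(1e-6, None)] * 2, constraints=[cons],
                           method="SLSQP")
            return res.x

        for Q in (5.0, 10.0, 20.0):        # points on the expansion path
            print(Q, np.round(least_cost_bundle(Q), 3))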

  14. Game Theoretic Approaches to Protect Cyberspace

    DTIC Science & Technology

    2010-04-20

    security problems. 3.1 Definitions Game A description of the strategic interaction between opposing, or co-operating, interests where the con ...that involves probabilistic transitions through several states of the system. The game progresses as a sequence of states. The game begins with a ...eventually leads to a discretized model. The reaction functions uniquely minimize the strictly convex cost functions. After discretization, this

  15. Voltage stability index based optimal placement of static VAR compensator and sizing using Cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee

    2017-07-01

    This paper furnishes a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be the most efficient algorithm for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than a genetic algorithm (GA) both without and with the SVC.
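
    For readers unfamiliar with CSA, a bare-bones version is sketched below on a stand-in cost function (the paper's objective is the IEEE 57-bus generation cost, which requires a full power-flow solver). The Levy-flight step uses Mantegna's rule; all tuning constants are common defaults, not values from the paper.

        # Minimal cuckoo search: Levy flights around the best nest plus abandoning
        # a fraction pa of the worst nests each generation.
        import math
        import numpy as np

        def cuckoo_search(cost, dim, n_nests=15, pa=0.25, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            beta = 1.5
            # Mantegna's rule for Levy-stable step lengths
            sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                     (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            nests = rng.uniform(-5, 5, (n_nests, dim))
            fit = np.array([cost(n) for n in nests])
            for _ in range(iters):
                u = rng.normal(0, sigma, (n_nests, dim))
                v = rng.normal(0, 1, (n_nests, dim))
                step = u / np.abs(v) ** (1 / beta)
                best = nests[np.argmin(fit)]
                new = nests + 0.01 * step * (nests - best)      # Levy flight
                new_fit = np.array([cost(n) for n in new])
                improved = new_fit < fit
                nests[improved], fit[improved] = new[improved], new_fit[improved]
                worst = np.argsort(fit)[-int(pa * n_nests):]    # abandon worst nests
                nests[worst] = rng.uniform(-5, 5, (len(worst), dim))
                fit[worst] = np.array([cost(n) for n in nests[worst]])
            return nests[np.argmin(fit)], float(fit.min())

        best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), dim=5)
        print(best_x, best_f)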

  16. Method for protein structure alignment

    DOEpatents

    Blankenbecler, Richard; Ohlsson, Mattias; Peterson, Carsten; Ringner, Markus

    2005-02-22

    This invention provides a method for protein structure alignment. More particularly, the present invention provides a method for the identification, classification and prediction of protein structures. The present invention involves two key ingredients: first, an energy or cost function formulation of the problem simultaneously in terms of binary (Potts) assignment variables and real-valued atomic coordinates; second, minimization of the energy or cost function by an iterative method, where in each iteration (1) a mean-field method is employed for the assignment variables and (2) exact rotation and/or translation of the atomic coordinates is performed, weighted with the corresponding assignment variables.

  17. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and of the design constraints on mean-square responses with respect to the control law design variables are presented.

  18. Bonded functional esthetic prototype: an alternative pre-treatment mock-up technique and cost-effective medium-term esthetic solution.

    PubMed

    McLaren, Edward A

    2013-09-01

    As the economy has receded in recent years, many patients have been inclined to reject dental treatment beyond what they feel is the minimal amount necessary. Increasingly, there has been reluctance to take on the expense of full-mouth restorations and time-consuming procedures. Consequently, clinicians can benefit from innovative, conservative, interim solutions that enable them to provide segment treatment with long-term stability and esthetics, with lower initial cost. The bonded functional esthetic prototype (BFEP) allows fabrication of up to 14 teeth from composite in 1 hour, providing either a pre-treatment restoration or a long-term provisional solution until further treatment can be completed. As demonstrated herein, the BFEP enables superb function, stability, and esthetics in the interim while dispersing the cost of definitive treatment over time.

  19. A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction

    PubMed Central

    Fu, Fang; Zhang, Tao

    2016-01-01

    Poor quality affects project makespan and total cost negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to quality-cost analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the program of the algorithm, the frog-leaping process combines the crossover operator of a genetic algorithm with a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance; it assists in decision making when searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939

  20. Estimating costs of sea lice control strategy in Norway.

    PubMed

    Liu, Yajie; Bjelland, Hans Vanhauwaer

    2014-12-01

    This paper explores the costs of sea lice control strategies associated with salmon aquaculture at the farm level in Norway. Diseases can cause reduced growth, low feed efficiency, lower market prices, increased mortality rates, and expenditures on prevention and treatment measures. Aquaculture farms suffer the most direct and immediate economic losses from diseases. The goal of a control strategy is to minimize the total disease costs, including biological losses and treatment costs, while maximizing overall profit. Prevention and control strategies are required to eliminate or minimize the disease, while cost-effective disease control strategies at the fish farm level are designed to reduce losses and to enhance productivity and profitability. This goal can be achieved by integrating models of fish growth, sea lice dynamics and economic factors. A production function is first constructed to incorporate the effects of sea lice on production at the farm level, followed by a detailed cost analysis of several prevention and treatment strategies associated with sea lice in Norway. The results reveal that treatments are costly and that treatment costs are very sensitive to the treatment types used and the timing of the treatment. Applying treatment at an early growth stage is more economical than at a later stage. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Cost-effectiveness analysis in minimally invasive spine surgery.

    PubMed

    Al-Khouja, Lutfi T; Baron, Eli M; Johnson, J Patrick; Kim, Terrence T; Drazin, Doniel

    2014-06-01

    Medical care has been evolving with the increased influence of a value-based health care system. As a result, more emphasis is being placed on ensuring cost-effectiveness and utility in the services provided to patients. This study examines this development with respect to minimally invasive spine surgery (MISS) costs. A literature review using PubMed, the Cost-Effectiveness Analysis (CEA) Registry, and the National Health Service Economic Evaluation Database (NHS EED) was performed. Papers were included in the study if they reported costs associated with MISS; if there was no mention of cost, CEA, cost-utility analysis (CUA), quality-adjusted life years (QALY), quality, or outcomes, the article was excluded. Fourteen studies reporting costs associated with MISS in 12,425 patients (3675 undergoing minimally invasive procedures and 8750 undergoing open procedures) were identified through PubMed, the CEA Registry, and NHS EED. The percent cost difference between minimally invasive and open approaches ranged from 2.54% to 33.68%, all indicating cost savings with a minimally invasive surgical approach. Average length of stay (LOS) for minimally invasive surgery ranged from 0.93 days to 5.1 days, compared with 1.53 days to 12 days for an open approach. All studies reporting estimated blood loss (EBL) reported lower volume loss with an MISS approach (range 10-392.5 ml) than with an open approach (range 55-535.5 ml). There are currently too few published studies reporting the costs of MISS, and none of those published have followed a standardized method of reporting and analyzing cost data. Preliminary findings from the 14 studies showed both cost savings and better outcomes with MISS compared with an open approach. However, more Level I CEA/CUA studies, including cost/QALY evaluations with specifics of the techniques utilized, need to be reported in a standardized manner to allow more accurate conclusions on the cost-effectiveness of minimally invasive spine surgery.

  2. Hierarchical Control Using Networks Trained with Higher-Level Forward Models

    PubMed Central

    Wayne, Greg; Abbott, L.F.

    2015-01-01

    We propose and develop a hierarchical approach to network control of complex tasks. In this approach, a low-level controller directs the activity of a “plant,” the system that performs the task. However, the low-level controller may only be able to solve fairly simple problems involving the plant. To accomplish more complex tasks, we introduce a higher-level controller that controls the lower-level controller. We use this system to direct an articulated truck to a specified location through an environment filled with static or moving obstacles. The final system consists of networks that have memorized associations between the sensory data they receive and the commands they issue. These networks are trained on a set of optimal associations that are generated by minimizing cost functions. Cost function minimization requires predicting the consequences of sequences of commands, which is achieved by constructing forward models, including a model of the lower-level controller. The forward models and cost minimization are only used during training, allowing the trained networks to respond rapidly. In general, the hierarchical approach can be extended to larger numbers of levels, dividing complex tasks into more manageable sub-tasks. The optimization procedure and the construction of the forward models and controllers can be performed in similar ways at each level of the hierarchy, which allows the system to be modified to perform other tasks, or to be extended for more complex tasks without retraining lower-levels. PMID:25058706

  3. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    Modern distributed hydrological models allow the representation of different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm) that minimize a cost function representing some distance between the model's output and the available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out an assimilation of the observations. In this work the first approach was followed in order to compare the performance of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and comprises a large number of possible approaches. The main benefit of this class of methods is that they do not require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized here as a reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO (Efficient Global Optimization) algorithm, which is particularly suitable for calibrating black-box cost functions whose evaluation requires expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. The hydrological model to be calibrated was MOBIDIC, a complete distributed balance model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on several case studies of Central Italy basins is provided.
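
    The first, derivative-free end of this spectrum is easy to illustrate: calibrating two parameters of a black-box model with Nelder-Mead, using only evaluations of a cost function measuring the distance between model output and observations. The exponential "model" and synthetic data below are stand-ins for an actual hydrological run.

        # Derivative-free calibration with Nelder-Mead: only cost evaluations are
        # needed, so the model code itself is untouched. Synthetic example.
        import numpy as np
        from scipy.optimize import minimize

        t = np.linspace(0, 10, 50)
        observed = 3.0 * np.exp(-0.4 * t) + np.random.default_rng(1).normal(0, 0.05, t.size)

        def cost(params):
            amplitude, recession = params
            simulated = amplitude * np.exp(-recession * t)   # stand-in "model run"
            return np.sum((simulated - observed) ** 2)       # distance to measures

        res = minimize(cost, x0=[1.0, 1.0], method="Nelder-Mead")
        print(res.x)   # should recover roughly (3.0, 0.4)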

  4. Exploring the Pareto frontier using multisexual evolutionary algorithms: an application to a flexible manufacturing problem

    NASA Astrophysics Data System (ADS)

    Bonissone, Stefano R.; Subbu, Raj

    2002-12-01

    In multi-objective optimization (MOO) problems we need to optimize many, possibly conflicting, objectives. For instance, in manufacturing planning we might want to minimize the cost and production time while maximizing the product's quality. We propose the use of evolutionary algorithms (EAs) to solve these problems. Solutions are represented as individuals in a population and are assigned scores according to a fitness function that determines their relative quality. Strong solutions are selected for reproduction and pass their genetic material to the next generation; weak solutions are removed from the population. In MOO problems, the fitness function is vector-valued, i.e. it returns a value for each objective. Therefore, instead of a global optimum, we try to find the Pareto-optimal or non-dominated frontier. We use multi-sexual EAs with as many genders as optimization criteria. We have created new crossover and gender assignment functions, and experimented with various parameters to determine the best setting (yielding the highest number of non-dominated solutions). These experiments are conducted using a variety of fitness functions, and the algorithms are later evaluated on a flexible manufacturing problem with total cost and time minimization objectives.
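
    At the core of any such multi-objective EA is the test for Pareto dominance. The helper below extracts the non-dominated front from a population scored on two minimization objectives (say, cost and production time); the scores are illustrative.

        # Extract the non-dominated (Pareto) front from a scored population,
        # assuming all objectives are to be minimized.
        import numpy as np

        def pareto_front(scores):
            """Indices of rows of `scores` not dominated by any other row."""
            keep = []
            for i, s in enumerate(scores):
                dominated = np.any(np.all(scores <= s, axis=1) &
                                   np.any(scores < s, axis=1))
                if not dominated:
                    keep.append(i)
            return np.array(keep)

        pop = np.array([[10.0, 5.0], [8.0, 7.0], [9.0, 6.0], [12.0, 4.0], [11.0, 5.0]])
        print(pareto_front(pop))   # [0 1 2 3]: the last point is dominated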

  5. Rectenna system design

    NASA Technical Reports Server (NTRS)

    Brown, W. C.; Dickinson, R. M.; Nalos, E. J.; Ott, J. H.

    1980-01-01

    The function of the rectenna in the solar power satellite system is described, and the basic design choices, based on the desired microwave field concentration and ground clearance requirements, are given. One important area of concern from the EMI point of view, harmonic reradiation and scattering from the rectenna, is also addressed. An optimization of the rectenna system design to minimize costs was performed, and the rectenna cost breakdown for a 56 W installation is given as an example.

  6. Approximate Dynamic Programming Algorithms for United States Air Force Officer Sustainment

    DTIC Science & Technology

    2015-03-26

    level of correction needed. While paying bonuses has an easily calculable cost, RIFs have more subtle costs. Mone (1994) discovered that in a steady...a regression is performed utilizing instrumental variables to minimize Bellman error. This algorithm uses a set of basis functions to approximate the...transitioned to an all-volunteer force. Charnes et al. (1972) utilize a goal programming model for General Schedule civilian manpower management in the

  7. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating direction method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits a linear convergence rate to the optimal objective, but its implementation requires solving a convex optimization problem at each iteration, which can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties: convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  8. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.

  9. Life cycle optimization model for integrated cogeneration and energy systems applications in buildings

    NASA Astrophysics Data System (ADS)

    Osman, Ayat E.

    Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of which energy source(s) can most beneficially and cost-effectively meet the energy demands of a building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands of buildings, building design should incorporate environmental criteria in the decision-making process. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demands, considering both the potential life cycle environmental impacts and the economic implications of meeting those demands. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions used in the formulation of the problems include: (1) minimizing life cycle primary energy consumption, (2) minimizing global warming potential, (3) minimizing tropospheric ozone precursor potential, (4) minimizing acidification potential, (5) minimizing NOx, SO2 and CO2, and (6) minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings, by analyzing the hourly energy use of a building during a day, and (b) design and quick analysis of building operation, based on periodic analysis of the energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emissions or primary energy usage, or inversely the minimum environmental indicator and primary energy usage value that can be achieved and the cost required to achieve it.

  10. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm making use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely a quadratic cost curve, a piecewise quadratic cost curve, and a quadratic cost curve superimposed with a sine component. These three cost curves represent the generator fuel cost function of a simplified model and the more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP, with better solutions in some cases. Moreover, the influence of important IEP parameters on the OPF solution is described in detail.
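
    The three fuel-cost models mentioned can be written out directly, as sketched below. The coefficients are illustrative; the point is the rectified-sine valve-point term, which is what makes the objective non-smooth and multimodal.

        # Three generator fuel-cost models (p in MW, cost in $/h); all
        # coefficients are illustrative placeholders.
        import numpy as np

        def quadratic_cost(p, a=0.01, b=2.0, c=100.0):
            return a * p ** 2 + b * p + c

        def valve_point_cost(p, p_min=50.0, e=30.0, f=0.06, **kw):
            # rectified-sine ripple from sequential valve openings
            return quadratic_cost(p, **kw) + np.abs(e * np.sin(f * (p_min - p)))

        def piecewise_quadratic_cost(p):
            # e.g., a combined-cycle unit changing configuration at 150 MW
            return quadratic_cost(p) if p < 150 else quadratic_cost(p, a=0.012, b=1.8, c=120.0)

        for p in (80.0, 200.0):
            print(quadratic_cost(p), valve_point_cost(p), piecewise_quadratic_cost(p))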

  11. Pricing health benefits: a cost-minimization approach.

    PubMed

    Miller, Nolan H

    2005-09-01

    We study the role of health benefits in an employer's compensation strategy, given the overall goal of minimizing total compensation cost (wages plus health-insurance cost). When employees' health status is private information, the employer's basic benefit package consists of a base wage and a moderate health plan, with a generous plan available for an additional charge. We show that in setting the charge for the generous plan, a cost-minimizing employer should act as a monopolist who sells "health plan upgrades" to its workers, and we discuss ways tax policy can encourage efficiency under cost-minimization and alternative pricing rules.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Aladsair J.; Viswanathan, Vilayanur V.; Stephenson, David E.

    A robust performance-based cost model is developed for all-vanadium, iron-vanadium and iron-chromium redox flow batteries. System aspects such as shunt current losses, pumping losses and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2 kW stack were used to estimate unit energy costs, which were compared with model estimates for the same size electrodes. The tool has been shared with the redox flow battery community both to validate their stack data and to guide future direction.

  13. Active control of panel vibrations induced by boundary-layer flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1991-01-01

    Some problems in the active control of panel vibration excited by a boundary-layer flow over a flat plate are studied. In the first phase of the study, the optimal control problem for an elastic panel vibrating under fluid dynamical loading was considered. For a simply supported rectangular plate, the vibration control problem can be analyzed by modal analysis. The control objective is to minimize the total cost functional, which is the sum of the vibrational energy and the control cost. By means of the modal expansion, the dynamical equation for the plate and the cost functional are reduced to a system of ordinary differential equations and cost functions for the modes. For the linear elastic plate, the modes become uncoupled, and the control of each modal amplitude reduces to the so-called linear regulator problem of control theory. Such problems can then be solved by the method of adjoint states. The optimality system of equations was solved numerically by a shooting method. The results are summarized.
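
    The linear regulator problem each uncoupled mode reduces to has a standard solution via the algebraic Riccati equation, sketched below for a single lightly damped mode (the study itself solves the optimality system by shooting). All numerical values are illustrative.

        # Linear-quadratic regulator for one vibration mode (position, velocity):
        # minimize the integral of x'Qx + u'Ru via the continuous Riccati equation.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        omega, zeta = 2.0, 0.02                       # modal frequency and damping
        A = np.array([[0.0, 1.0], [-omega**2, -2*zeta*omega]])
        B = np.array([[0.0], [1.0]])
        Q = np.diag([omega**2, 1.0])                  # vibrational-energy weighting
        R = np.array([[0.1]])                         # control-cost weighting

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)               # optimal feedback u = -K x
        print(K)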

  14. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
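
    The core claim is easy to reproduce numerically: under an asymmetric error cost, the estimate minimizing expected cost deviates systematically from the maximum-likelihood value. The Gaussian posterior and quadratic-with-asymmetric-price cost below are stand-ins, not the paper's model.

        # Optimal estimate under an asymmetric error cost: the minimizer of the
        # expected cost is biased away from the posterior mean (here, 0).
        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(0)
        samples = rng.normal(loc=0.0, scale=1.0, size=100_000)  # posterior samples

        def expected_cost(estimate, overshoot_price=5.0, undershoot_price=1.0):
            err = estimate - samples
            return np.mean(np.where(err > 0, overshoot_price, undershoot_price) * err ** 2)

        best = minimize_scalar(expected_cost, bounds=(-3, 3), method="bounded")
        print(best.x)   # noticeably below 0: biased away from the costly direction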

  15. How Big is Too Big for Hubs: Marginal Profitability in Hub-and-Spoke Networks

    NASA Technical Reports Server (NTRS)

    Ross, Leola B.; Schmidt, Stephen J.

    1997-01-01

    Increasing the scale of hub operations at major airports has led to concerns about congestion at excessively large hubs. In this paper, we estimate the marginal cost of adding spokes to an existing hub network. We observe entry/non-entry decisions on potential spokes from existing hubs, and estimate both a variable profit function for providing service in markets using that spoke as well as the fixed costs of providing service to the spoke. We let the fixed costs depend upon the scale of operations at the hub, and find the hub size at which spoke service costs are minimized.

  16. "Fly-by-Wireless" and Wireless Sensors Update

    NASA Technical Reports Server (NTRS)

    Studor, George F.

    2009-01-01

    This slide presentation reviews the use of wires in the aerospace industry. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable, lower-cost, modular and higher-performance alternatives to wired data connectivity, to benefit the entire vehicle and program.

  17. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management scheme for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and the transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operational constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise-linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
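
    The piecewise-linear device used for the production cost can be sketched as follows: a convex nonlinear cost is replaced by the maximum of tangent lines at chosen breakpoints, which is what keeps the MILP linear. The quadratic cost shape and breakpoints are assumptions for illustration, not the paper's formulation.

        # Piecewise-linear under-approximation of a convex production cost as the
        # maximum of tangent lines at chosen breakpoints.
        import numpy as np

        def true_cost(q):                 # nonlinear production cost, e.g. quadratic
            return 0.002 * q ** 2 + 1.5 * q + 40.0

        breakpoints = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
        slopes = 2 * 0.002 * breakpoints + 1.5            # tangent slopes
        intercepts = true_cost(breakpoints) - slopes * breakpoints

        def pwl_cost(q):                  # exact at breakpoints, slightly below
            return np.max(slopes * q + intercepts)        # the true cost between them

        for q in (75.0, 160.0):
            print(true_cost(q), pwl_cost(q))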

  18. A cost-function approach to rival penalized competitive learning (RPCL).

    PubMed

    Ma, Jinwen; Wang, Taijun

    2006-08-01

    Rival penalized competitive learning (RPCL) has been shown to be a useful tool for clustering on a set of sample data in which the number of clusters is unknown. However, the RPCL algorithm was proposed heuristically and still lacks a mathematical theory describing its convergence behavior. In order to solve the convergence problem, we investigate it via a cost-function approach. By theoretical analysis, we prove that a general form of RPCL, called distance-sensitive RPCL (DSRPCL), is associated with the minimization of a cost function on the weight vectors of a competitive learning network. As a DSRPCL process decreases the cost to a local minimum, a number of weight vectors eventually fall into a hypersphere surrounding the sample data, while the other weight vectors diverge to infinity. Moreover, it is shown by theoretical analysis and simulation experiments that if the cost is reduced to the global minimum, the correct number of weight vectors is automatically selected and located around the centers of the actual clusters. Finally, we apply the DSRPCL algorithms to unsupervised color image segmentation and classification of the wine data.
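
    For reference, one RPCL update has the following shape: the winning prototype moves toward the sample while the rival (second-closest) is pushed away at a small de-learning rate. The rates and data below are illustrative, not from the paper's DSRPCL analysis.

        # One RPCL step: attract the winner, penalize (repel) the rival.
        import numpy as np

        def rpcl_step(weights, x, lr=0.05, delearn=0.005):
            d = np.linalg.norm(weights - x, axis=1)
            winner, rival = np.argsort(d)[:2]
            weights[winner] += lr * (x - weights[winner])        # attract winner
            weights[rival]  -= delearn * (x - weights[rival])    # penalize rival
            return weights

        rng = np.random.default_rng(0)
        W = rng.uniform(-1, 1, (5, 2))            # 5 candidate cluster centers
        for x in rng.normal([2, 2], 0.3, (500, 2)):
            W = rpcl_step(W, x)
        print(np.round(W, 2))  # one center lands near (2, 2); extras are driven away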

  19. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., the smoothness, cluster, and manifold assumptions, into account together during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and a regularization penalty on unlabeled data based on the three fundamental semi-supervised assumptions. Minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results on benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.

  20. The minimal work cost of information processing

    NASA Astrophysics Data System (ADS)

    Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato

    2015-07-01

    Irreversible information processing cannot be carried out without some inevitable thermodynamical work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices impedes further improvement of their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula takes precise account of the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.

  1. The Protein Cost of Metabolic Fluxes: Prediction from Enzymatic Rate Laws and Cost Minimization

    PubMed Central

    Noor, Elad; Flamholz, Avi; Bar-Even, Arren; Davidi, Dan; Milo, Ron; Liebermeister, Wolfram

    2016-01-01

    Bacterial growth depends crucially on metabolic fluxes, which are limited by the cell’s capacity to maintain metabolic enzymes. The necessary enzyme amount per unit flux is a major determinant of metabolic strategies both in evolution and bioengineering. It depends on enzyme parameters (such as kcat and KM constants), but also on metabolite concentrations. Moreover, similar amounts of different enzymes might incur different costs for the cell, depending on enzyme-specific properties such as protein size and half-life. Here, we developed enzyme cost minimization (ECM), a scalable method for computing enzyme amounts that support a given metabolic flux at a minimal protein cost. The complex interplay of enzyme and metabolite concentrations, e.g. through thermodynamic driving forces and enzyme saturation, would make it hard to solve this optimization problem directly. By treating enzyme cost as a function of metabolite levels, we formulated ECM as a numerically tractable, convex optimization problem. Its tiered approach allows for building models at different levels of detail, depending on the amount of available data. Validating our method with measured metabolite and protein levels in E. coli central metabolism, we found typical prediction fold errors of 4.1 and 2.6, respectively, for the two kinds of data. This result from the cost-optimized metabolic state is significantly better than randomly sampled metabolite profiles, supporting the hypothesis that enzyme cost is important for the fitness of E. coli. ECM can be used to predict enzyme levels and protein cost in natural and engineered pathways, and could be a valuable computational tool to assist metabolic engineering projects. Furthermore, it establishes a direct connection between protein cost and thermodynamics, and provides a physically plausible and computationally tractable way to include enzyme kinetics into constraint-based metabolic models, where kinetics have usually been ignored or oversimplified. PMID:27812109

  2. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem, Omveer, Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. The technique is a hybrid of a Genetic Algorithm (GA) and Simulated Annealing (SA). The GASA-based regulator has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time: the feasible solution decreases the cost function rather than fully minimizing it.

  3. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    Scheduling problems on m unrelated parallel machines with variable job processing times are considered, where the processing time of a job is a function of its position in the sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule that minimize a total cost function depending on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial-time algorithm to solve the problem. PMID:24982933

  4. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  5. Managing configuration software of ground software applications with glueware

    NASA Technical Reports Server (NTRS)

    Larsen, B.; Herrera, R.; Sesplaukis, T.; Cheng, L.; Sarrel, M.

    2003-01-01

    This paper reports on a simple, low-cost effort to streamline the configuration of the uplink software tools. Even though the existing ground system consisted of JPL and custom Cassini software rather than COTS, we chose a glueware approach--reintegrating with wrappers and bridges and adding minimal new functionality.

  6. Cognitive capacity limitations and Need for Cognition differentially predict reward-induced cognitive effort expenditure.

    PubMed

    Sandra, Dasha A; Otto, A Ross

    2018-03-01

    While psychological, economic, and neuroscientific accounts of behavior broadly maintain that people minimize expenditure of cognitive effort, empirical work reveals how reward incentives can mobilize increased cognitive effort expenditure. Recent theories posit that the decision to expend effort is governed, in part, by a cost-benefit tradeoff whereby the potential benefits of mental effort can offset the perceived costs of effort exertion. Taking an individual differences approach, the present study examined whether one's executive function capacity, as measured by Stroop interference, predicts the extent to which reward incentives reduce switch costs in a task-switching paradigm, which indexes additional expenditure of cognitive effort. In accordance with the predictions of a cost-benefit account of effort, we found that a low executive function capacity-and, relatedly, a low intrinsic motivation to expend effort (measured by Need for Cognition)-predicted larger increase in cognitive effort expenditure in response to monetary reward incentives, while individuals with greater executive function capacity-and greater intrinsic motivation to expend effort-were less responsive to reward incentives. These findings suggest that an individual's cost-benefit tradeoff is constrained by the perceived costs of exerting cognitive effort. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Classical statistical mechanics approach to multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.

    2010-06-01

    We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over balanced bipartitions. We search for maximally multipartite entangled states, whose average purity is minimal, and recast this optimization problem into a problem of statistical mechanics, by introducing a cost function, a fictitious temperature and a partition function. By investigating the high-temperature expansion, we obtain the first three moments of the distribution. We find that the problem exhibits frustration.

  8. Screening test recommendations for methicillin-resistant Staphylococcus aureus surveillance practices: A cost-minimization analysis.

    PubMed

    Whittington, Melanie D; Curtis, Donna J; Atherly, Adam J; Bradley, Cathy J; Lindrooth, Richard C; Campbell, Jonathan D

    2017-07-01

    To mitigate methicillin-resistant Staphylococcus aureus (MRSA) infections, intensive care units (ICUs) conduct surveillance through screening patients upon admission followed by adhering to isolation precautions. Two surveillance approaches commonly implemented are universal preemptive isolation and targeted isolation of only MRSA-positive patients. Decision analysis was used to calculate the total cost of universal preemptive isolation and targeted isolation. The screening test used as part of the surveillance practice was varied to identify which screening test minimized inappropriate and total costs. A probabilistic sensitivity analysis was conducted to evaluate the range of total costs resulting from variation in inputs. The total cost of the universal preemptive isolation surveillance practice was minimized when a polymerase chain reaction screening test was used ($82.51 per patient). Costs were $207.60 more per patient when a conventional culture was used due to the longer turnaround time and thus higher isolation costs. The total cost of the targeted isolation surveillance practice was minimized when chromogenic agar 24-hour testing was used ($8.54 per patient). Costs were $22.41 more per patient when polymerase chain reaction was used. For ICUs that preemptively isolate all patients, the use of a polymerase chain reaction screening test is recommended because it can minimize total costs by reducing inappropriate isolation costs. For ICUs that only isolate MRSA-positive patients, the use of chromogenic agar 24-hour testing is recommended to minimize total costs. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  9. New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water

    NASA Astrophysics Data System (ADS)

    Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Bull, Michael A.; Seidel, Felix C.

    2018-01-01

    A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, best estimate AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥ 0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
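
    A skeleton of the retrieval logic reads as follows: a chi-square-style cost between observed and pre-simulated radiances is evaluated on the AOD grid for each candidate mixture, and V23's innovation is to use the whole cost surface rather than only each mixture's minimizing AOD. The radiance model, uncertainties and mixtures below are synthetic stand-ins for MISR's lookup tables.

        # Evaluate a chi-square cost over an AOD grid for several candidate
        # aerosol mixtures; all radiance values here are synthetic.
        import numpy as np

        rng = np.random.default_rng(3)
        aod_grid = np.linspace(0.0, 3.0, 61)
        observed = 0.12 + 0.05 * rng.normal(size=9)          # 9 cameras, synthetic

        def simulated(aod, mixture_bias):                    # stand-in lookup table
            return 0.10 + 0.03 * aod + mixture_bias

        def cost(aod, mixture_bias, sigma=0.05):
            r = observed - simulated(aod, mixture_bias)
            return np.sum((r / sigma) ** 2)

        for bias in (0.0, 0.02, 0.05):                       # candidate mixtures
            costs = np.array([cost(a, bias) for a in aod_grid])
            print(bias, aod_grid[np.argmin(costs)], costs.min())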

  10. New Approach to the Retrieval of AOD and its Uncertainty from MISR Observations Over Dark Water

    NASA Astrophysics Data System (ADS)

    Witek, M. L.; Garay, M. J.; Diner, D. J.; Bull, M. A.; Seidel, F.

    2017-12-01

    A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous Version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture, then used a combination of these values to compute the final, "best estimate" AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of a) the absolute values of the cost functions for each aerosol mixture, b) the widths of the cost function distributions as a function of AOD, and c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on arbitrary thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new Aerosol Retrieval Confidence Index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI≥0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.

  11. Shifting orders among suppliers considering risk, price and transportation cost

    NASA Astrophysics Data System (ADS)

    Revitasari, C.; Pujawan, I. N.

    2018-04-01

    Supplier order allocation is an important supply chain decision for an enterprise. It is related to the supplier's function as a provider of raw materials and other supporting materials used in the production process. Most work on order allocation has been based on costs and other supply chain performance measures, but very little of it takes risks into consideration. In this paper we address the problem of allocating orders for a single commodity sourced from multiple suppliers, considering supply risks in addition to minimizing transportation costs. The supply chain risk was investigated, and a procedure was proposed in the risk mitigation phase as a form of risk profile. The objective of including the risk profile in order allocation is to shift product flow from risky suppliers to relatively less risky ones. The proposed procedure is applied to a sugar company. The results suggest that order allocations should be maximized to suppliers with relatively low risk and minimized to suppliers with relatively larger risks.

  12. The cost of a small membrane bioreactor.

    PubMed

    Lo, C H; McAdam, E; Judd, S

    2015-01-01

    The individual cost contributions to the mechanical components of a small membrane bioreactor (MBR) (100-2,500 m3/d flow capacity) are itemised and collated to generate overall capital and operating costs (CAPEX and OPEX) as a function of size. The outcomes are compared to those from previously published detailed cost studies provided for both very small containerised plants (<40 m3/day capacity) and larger municipal plants (2,200-19,000 m3/d). Cost curves, as a function of flow capacity, determined for OPEX, CAPEX and net present value (NPV) based on the heuristic data used indicate a logarithmic function for OPEX and a power-based one for the CAPEX. OPEX correlations were in good quantitative agreement with those reported in the literature. Disparities in the calculated CAPEX trend compared with reported data were attributed to differences in assumptions concerning cost contributions. More reasonable agreement was obtained with the reported membrane separation component CAPEX data from published studies. The heuristic approach taken appears appropriate for small-scale MBRs with minimal costs associated with installation. An overall relationship of NPV = (a t^b) Q^(-c ln t + d), where a = 1.265, b = 0.44, c = 0.00385 and d = 0.868, was determined for the net present value according to the dataset employed for the analysis.
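
    Under one reading of the fitted relationship above (t in years, Q in m3/d), it can be evaluated as below; treat the grouping of the exponents, and therefore the outputs, as a reconstruction of the garbled original rather than the authors' exact formula.

        # Fitted NPV relationship, reconstructed reading of the exponents.
        import math

        def npv(Q, t, a=1.265, b=0.44, c=0.00385, d=0.868):
            """NPV = (a * t**b) * Q**(d - c*ln(t))."""
            return (a * t ** b) * Q ** (d - c * math.log(t))

        print(npv(Q=1000.0, t=20.0))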

  13. Optimal cost for strengthening or destroying a given network

    NASA Astrophysics Data System (ADS)

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_c, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.

  14. Optimal cost for strengthening or destroying a given network.

    PubMed

    Patron, Amikam; Cohen, Reuven; Li, Daqing; Havlin, Shlomo

    2017-05-01

    Strengthening or destroying a network is a very important issue in designing resilient networks or in planning attacks against networks, including planning strategies to immunize a network against diseases, viruses, etc. Here we develop a method for strengthening or destroying a random network with a minimum cost. We assume a correlation between the cost required to strengthen or destroy a node and the degree of the node. Accordingly, we define a cost function c(k), which is the cost of strengthening or destroying a node with degree k. Using the degrees k in a network and the cost function c(k), we develop a method for defining a list of priorities of degrees and for choosing the right group of degrees to be strengthened or destroyed that minimizes the total price of strengthening or destroying the entire network. We find that the list of priorities of degrees is universal and independent of the network's degree distribution, for all kinds of random networks. The list of priorities is the same for both strengthening a network and for destroying a network with minimum cost. However, in spite of this similarity, there is a difference between their p_{c}, the critical fraction of nodes that has to be functional to guarantee the existence of a giant component in the network.
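
    A greedy reading of the degree-priority idea is sketched below; the ordering by raw cost c(k) is an assumption of this illustration, not the paper's derived universal priority list.

      def cheapest_degree_set(degree_counts, c, target_fraction):
          # degree_counts: {k: number of nodes of degree k}; c: cost function c(k).
          n = sum(degree_counts.values())
          order = sorted(degree_counts, key=c)   # assumed priority: cheapest first
          picked, covered, total = [], 0, 0.0
          for k in order:
              if covered >= target_fraction * n:
                  break
              picked.append(k)
              covered += degree_counts[k]
              total += degree_counts[k] * c(k)
          return picked, total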

  15. Smoothing of cost function leads to faster convergence of neural network learning

    NASA Astrophysics Data System (ADS)

    Xu, Li-Qun; Hall, Trevor J.

    1994-03-01

    One of the major problems in supervised learning of neural networks is the inevitable local minima inherent in the cost function f(W,D). This often renders classic gradient-descent-based learning algorithms, which calculate the weight update at each iteration according to ΔW(t) = -η∇_W f(W,D), powerless. In this paper we describe a new strategy to solve this problem which adaptively changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_{β_t}}, t = 1, ..., T, which, subject to the parameter β_t, bears few details at t = 1 and gradually more later on; the learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {W_t}, t = 1, ..., T, thus progressively approximate W, the desired global minimum. Experimental results on a nonconvex function minimization problem and a typical neural network learning task are given, and analyses and discussions of some important issues are provided.
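
    A minimal sketch of learning on a sequence of smoothed costs is given below, assuming the smoothing is approximated by averaging gradients at Gaussian-perturbed weights; the paper's exact smoothing operator and β_t schedule are not reproduced.

      import numpy as np

      def smoothed_descent(grad_f, w, betas, eta=0.1, samples=20, seed=0):
          rng = np.random.default_rng(seed)
          for beta in betas:          # schedule from heavy to light smoothing
              g = np.mean([grad_f(w + beta * rng.standard_normal(w.shape))
                           for _ in range(samples)], axis=0)
              w = w - eta * g         # Delta W(t) = -eta * smoothed gradient
          return w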

  16. A cost-minimization analysis in minimally invasive spine surgery using a national cost scale method.

    PubMed

    Maillard, Nicolas; Buffenoir-Billet, Kevin; Hamel, Olivier; Lefranc, Benoit; Sellal, Olivier; Surer, Nathalie; Bord, Eric; Grimandi, Gael; Clouet, Johann

    2015-03-01

    The last decade has seen the emergence of minimally invasive spine surgery. However, there is still no consensus on whether percutaneous osteosynthesis (PO) or open surgery (OS) is more cost-effective in the treatment of traumatic fractures and degenerative lesions. The objective of this study is to compare the clinical results and hospitalization costs of OS and PO for degenerative lesions and thoraco-lumbar fractures. This cost-minimization study was performed in patients undergoing OS or PO over a 36-month period. Patient data, surgical and clinical results, as well as cost data were collected and analyzed. The financial costs were calculated based on diagnosis related group reimbursement and the French national cost scale, enabling the evaluation of charges for each hospital stay. 46 patients were included in this cost analysis: 24 patients underwent OS and 22 underwent PO. No significant difference was found between surgical groups in terms of patients' clinical features and outcomes during hospitalization. The use of PO was significantly associated with a decreased length of stay (LOS). The cost-minimization analysis revealed that PO is associated with lower hospital charges and shorter LOS, with clinical outcomes and medical device costs similar to OS. This medico-economic study supports the preferential use of minimally invasive surgical techniques. The study also illustrates the discrepancy between national health system reimbursement and real hospital charges. The medico-economic dimension is becoming critical in the current context of sustainable health resource allocation. Copyright © 2015 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  17. Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness

    NASA Astrophysics Data System (ADS)

    Julich, R. J.

    2004-05-01

    The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; 2) what is the consistency of solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
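
    The two-part fitness can be written compactly. In the sketch below, predictive_std and full_std stand in for the linear-statistical-inference uncertainty calculation and are assumptions of this illustration.

      def fitness(mask, predictive_std, full_std, penalty=0.01):
          # mask: boolean vector selecting wells to keep.
          # Part 1: reward subsets whose net prediction uncertainty is
          # closest to that of the full observation network.
          accuracy = -abs(predictive_std(mask) - full_std)
          # Part 2: penalize each retained observation to minimize cost.
          return accuracy - penalty * mask.sum()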

  18. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M

    2015-01-01

    The key goal in energy efficient buildings is to reduce the energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ constrained Stochastic Linear Quadratic Control (cSLQC), minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.
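
    As a flavor of the SDP reduction, the toy below solves a discrete Lyapunov-type LMI with cvxpy; the solver choice, the matrices, and the trace objective are assumptions of this sketch, not the paper's formulation.

      import cvxpy as cp
      import numpy as np

      A = np.array([[0.9, 0.1], [0.0, 0.8]])    # stable toy dynamics
      Q = np.eye(2)
      P = cp.Variable((2, 2), symmetric=True)
      # Feasibility of A'PA - P + Q <= 0 with P >= 0 certifies a
      # quadratic cost bound; minimizing trace(P) tightens it.
      cons = [P >> 0, A.T @ P @ A - P + Q << 0]
      cp.Problem(cp.Minimize(cp.trace(P)), cons).solve()
      print(P.value)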

  19. 7 CFR 4290.1620 - Functions of agents, including Central Registration Agent, Selling Agent and Fiscal Agent.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Debentures and/or TCs. (iv) Arranging for the production of Offering Circulars, certificates, and such other... to determine those factors that will minimize or reduce the cost of funding Debentures. (iii) Monitor... the Secretary; (vii) Remain custodian of such other documentation as the Secretary shall direct by...

  20. 13 CFR 108.1620 - Functions of agents, including Central Registration Agent, Selling Agent and Fiscal Agent.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... periodic sales of Debentures and/or TCs. (iv) Arranging for the production of Offering Circulars... financial markets to determine those factors that will minimize or reduce the cost of funding Debentures... SBA; (vii) Remain custodian of such other documentation as SBA shall direct by written instructions...

  1. 13 CFR 107.1620 - Functions of agents, including Central Registration Agent, Selling Agent and Fiscal Agent.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Participating Securities and/or TCs. (iv) Arranging for the production of the Offering Circular, certificates... markets to determine those factors that will minimize or reduce the cost of funding Debentures or... instructions from SBA; (vii) Remain custodian of such other documentation as SBA shall direct by written...

  2. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) reject the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training-set situations, we introduce an unbalance-correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that SLP-based pairwise classifiers are comparable to, and as often as not outperform, linear support vector (SV) classifiers in moderate-dimensional situations. The colored-noise injection used to design pseudovalidation sets proves to be a powerful tool for alleviating finite-sample problems in moderate-dimensional PR tasks.

  3. Costs and benefits of different methods of esophagectomy for esophageal cancer.

    PubMed

    Yanasoot, Alongkorn; Yolsuriyanwong, Kamtorn; Ruangsin, Sakchai; Laohawiriyakamol, Supparerk; Sunpaweravong, Somkiat

    2017-01-01

    Background A minimally invasive approach to esophagectomy is being used increasingly, but concerns remain regarding its feasibility, safety, cost, and outcomes. We performed an analysis of the costs and benefits of minimally invasive, hybrid, and open esophagectomy approaches for esophageal cancer surgery. Methods The data of 83 consecutive patients who underwent a McKeown's esophagectomy at Prince of Songkla University Hospital between January 2008 and December 2014 were analyzed. Open esophagectomy was performed in 54 patients, minimally invasive esophagectomy in 13, and hybrid esophagectomy in 16. There were no differences in patient characteristics among the 3 groups. Minimally invasive esophagectomy was undertaken via a thoracoscopic-laparoscopic approach, hybrid esophagectomy via a thoracoscopic-laparotomy approach, and open esophagectomy by a thoracotomy-laparotomy approach. Results Minimally invasive esophagectomy required a longer operative time than hybrid or open esophagectomy (p = 0.02), but these patients reported less postoperative pain (p = 0.01). There were no significant differences in blood loss, intensive care unit stay, hospital stay, or postoperative complications among the 3 groups. Minimally invasive esophagectomy incurred higher operative and surgical material costs than hybrid or open esophagectomy (p = 0.01), but there were no significant differences in inpatient care and total hospital costs. Conclusion Minimally invasive esophagectomy resulted in the least postoperative pain but the greatest operative cost and longest operative time. Open esophagectomy was associated with the lowest operative cost and shortest operative time but the most postoperative pain. Hybrid esophagectomy had a shorter learning curve while sharing the advantages of minimally invasive esophagectomy.

  4. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system; the second presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.

  5. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    PubMed

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk, which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions, including exponential loss, the optimal ranking function can be represented as a ratio of the weighted conditional probability of upper categories to lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods, such as the proportional odds model in statistics, with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth list-wise ranking measures such as the discounted cumulative gain and on preference learning. We illustrate our findings with a simulation study and real data analysis.

  6. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

    We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors that define distance-based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by applying linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.

  7. A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.

    PubMed

    Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong

    2015-12-01

    Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to an increased data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on this cost function, a class of novel graph-regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that during learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to demonstrate the efficiency of the new algorithms. Lastly, the clustering accuracies of different algorithms are also investigated, showing that the proposed algorithms can achieve state-of-the-art performance in image clustering applications.
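
    For concreteness, the standard graph-regularized NMF multiplicative updates are sketched below (V ≈ WH, with sample-graph adjacency S); the paper's improved cost function may differ in its regularization details.

      import numpy as np

      def gnmf(V, S, k, lam=0.1, iters=200, eps=1e-9, seed=0):
          rng = np.random.default_rng(seed)
          m, n = V.shape
          D = np.diag(S.sum(axis=1))               # graph degree matrix
          W, H = rng.random((m, k)), rng.random((k, n))
          for _ in range(iters):
              W *= (V @ H.T) / (W @ H @ H.T + eps)
              H *= (W.T @ V + lam * H @ S) / (W.T @ W @ H + lam * H @ D + eps)
          return W, H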

  8. Penalized Weighted Least-Squares Approach to Sinogram Noise Reduction and Image Reconstruction for Low-Dose X-Ray Computed Tomography

    PubMed Central

    Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong

    2006-01-01

    Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes an MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of the noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have an advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
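
    Schematically, all three implementations minimize a PWLS objective of the following form (notation assumed here: y is the noisy sinogram, Σ the diagonal variance built from the first- and second-order moments, R the Gibbs penalty, and β its weight):

      \Phi(q) = (y - q)^{\mathsf{T}} \Sigma^{-1} (y - q) + \beta\, R(q)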

  9. Directional biases reveal utilization of arm's biomechanical properties for optimization of motor behavior.

    PubMed

    Goble, Jacob A; Zhang, Yanxin; Shimansky, Yury; Sharma, Siddharth; Dounskaia, Natalia V

    2007-09-01

    Strategies used by the CNS to optimize arm movements in terms of speed, accuracy, and resistance to fatigue remain largely unknown. A hypothesis is studied that the CNS exploits biomechanical properties of multijoint limbs to increase efficiency of movement control. To test this notion, a novel free-stroke drawing task was used that instructs subjects to make straight strokes in as many different directions as possible in the horizontal plane through rotations of the elbow and shoulder joints. Despite explicit instructions to distribute strokes uniformly, subjects showed biases to move in specific directions. These biases were associated with a tendency to perform movements that included active motion at one joint and largely passive motion at the other joint, revealing a tendency to minimize intervention of muscle torque for regulation of the effect of interaction torque. Other biomechanical factors, such as inertial resistance and kinematic manipulability, were unable to adequately account for these significant biases. Also, minimizations of jerk, muscle torque change, and sum of squared muscle torque were analyzed; however, these cost functions failed to explain the observed directional biases. Collectively, these results suggest that knowledge of biomechanical cost functions regarding interaction torque (IT) regulation is available to the control system. This knowledge may be used to evaluate potential movements and to select movement of "low cost." The preference to reduce active regulation of interaction torque suggests that, in addition to muscle energy, the criterion for movement cost may include neural activity required for movement control.

  10. Minimally invasive mitral valve surgery is associated with equivalent cost and shorter hospital stay when compared with traditional sternotomy.

    PubMed

    Atluri, Pavan; Stetson, Robert L; Hung, George; Gaffey, Ann C; Szeto, Wilson Y; Acker, Michael A; Hargrove, W Clark

    2016-02-01

    Mitral valve surgery is increasingly performed through minimally invasive approaches. There are limited data regarding the cost of minimally invasive mitral valve surgery. Moreover, there are no data on the specific costs associated with mitral valve surgery. We undertook this study to compare the costs (total and subcomponent) of minimally invasive mitral valve surgery relative to traditional sternotomy. All isolated mitral valve repairs performed in our health system from March 2012 through September 2013 were analyzed. To ensure like sets of patients, only those patients who underwent isolated mitral valve repairs with preoperative Society of Thoracic Surgeons scores of less than 4 were included in this study. A total of 159 patients were identified (sternotomy, 68; mini, 91). Total incurred direct cost was obtained from hospital financial records. Analysis demonstrated no difference in total cost (operative and postoperative) of mitral valve repair between mini and sternotomy ($25,515 ± $7598 vs $26,049 ± $11,737; P = .74). Operative costs were higher for the mini cohort, whereas postoperative costs were significantly lower. Postoperative intensive care unit and total hospital stays were both significantly shorter for the mini cohort. There were no differences in postoperative complications or survival between groups. Minimally invasive mitral valve surgery can be performed with overall equivalent cost and shorter hospital stay relative to traditional sternotomy. There is greater operative cost associated with minimally invasive mitral valve surgery that is offset by shorter intensive care unit and hospital stays. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  11. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid version of a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type iteration. Our algorithm optimizes the same cost function but, unlike the original work, is a Jacobi-class parallel algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel within the same iteration since they are mutually independent, and black pixels can likewise be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different architectures: multicore CPU, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance with our parallel algorithm compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
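
    The chessboard scheme is easy to state on a 2-D grid. The sketch below applies it to a generic Laplace-type smoothing step with periodic boundaries, not the actual residual-map cost function: each color's sites depend only on the other color, so every half-sweep is fully parallel.

      import numpy as np

      def redblack_sweep(u):
          for parity in (0, 1):                    # 0 = red, 1 = black
              mask = np.indices(u.shape).sum(axis=0) % 2 == parity
              avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                     np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
              u[mask] = avg[mask]                  # update one color at a time
          return u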

  12. Sizing a rainwater harvesting cistern by minimizing costs

    NASA Astrophysics Data System (ADS)

    Pelak, Norman; Porporato, Amilcare

    2016-10-01

    Rainwater harvesting (RWH) has the potential to reduce water-related costs by providing an alternate source of water, in addition to relieving pressure on public water sources and reducing stormwater runoff. Existing methods for determining the optimal size of the cistern component of a RWH system have various drawbacks, such as specificity to a particular region, dependence on numerical optimization, and/or failure to consider the costs of the system. In this paper a formulation is developed for the optimal cistern volume which incorporates the fixed and distributed costs of a RWH system while also taking into account the random nature of the depth and timing of rainfall, with a focus on RWH to supply domestic, nonpotable uses. With rainfall inputs modeled as a marked Poisson process, and by comparing the costs associated with building a cistern with the costs of externally supplied water, an expression for the optimal cistern volume is found which minimizes the water-related costs. The volume is a function of the roof area, water use rate, climate parameters, and costs of the cistern and of the external water source. This analytically tractable expression makes clear the dependence of the optimal volume on the input parameters. An analysis of the rainfall partitioning also characterizes the efficiency of a particular RWH system configuration and its potential for runoff reduction. The results are compared to the RWH system at the Duke Smart Home in Durham, NC, USA to show how the method could be used in practice.

  13. Greedy algorithms in disordered systems

    NASA Astrophysics Data System (ADS)

    Duxbury, P. M.; Dobrin, R.

    1999-08-01

    We discuss search, minimal path and minimal spanning tree algorithms and their applications to disordered systems. Greedy algorithms solve these problems exactly and are related to extremal dynamics in physics. Minimal cost path (Dijkstra) and minimal cost spanning tree (Prim) algorithms provide extremal dynamics for a polymer in a random medium (the KPZ universality class) and invasion percolation (without trapping), respectively.
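
    As a reminder of the greedy structure involved, a textbook heap-based Dijkstra is sketched below; the adjacency format is an assumption of this illustration.

      import heapq

      def dijkstra(adj, src):
          # adj: {node: [(neighbor, weight), ...]}; returns shortest distances.
          dist, heap = {src: 0.0}, [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                         # stale heap entry
              for v, w in adj.get(u, ()):
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist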

  14. [Determination of cost-effective strategies in colorectal cancer screening].

    PubMed

    Dervaux, B; Eeckhoudt, L; Lebrun, T; Sailly, J C

    1992-01-01

    The objective of this article is to implement particular methodologies in order to determine which strategies are cost-effective in mass screening for colorectal cancer after a positive Hemoccult test. The first approach consists in proposing a method that enables all admissible diagnostic strategies to be determined. The second approach enables a minimal cost function to be estimated using an adaptation of "Data Envelopment Analysis". This method proves particularly successful in cost-efficiency analysis when the performance indicators are numerous and hard to aggregate. The results show that there are two cost-effective strategies after a positive Hemoccult test, colonoscopy and sigmoidoscopy; these results call into question the relevance of the double-contrast barium enema in the diagnosis of colorectal lesions.

  15. Comparing cost-effectiveness of X-Stop with minimally invasive decompression in lumbar spinal stenosis: a randomized controlled trial.

    PubMed

    Lønne, Greger; Johnsen, Lars Gunnar; Aas, Eline; Lydersen, Stian; Andresen, Hege; Rønning, Roar; Nygaard, Øystein P

    2015-04-15

    Randomized clinical trial with 2-year follow-up. To compare the cost-effectiveness of X-stop to minimally invasive decompression in patients with symptomatic lumbar spinal stenosis. Lumbar spinal stenosis is the most common indication for operative treatment in the elderly. Although surgery is more costly than nonoperative treatment, health outcomes for more than 2 years were shown to be significantly better. Surgical treatment with minimally invasive decompression is widely used. X-stop is introduced as another minimally invasive technique showing good results compared with nonoperative treatment. We enrolled 96 patients aged 50 to 85 years, with symptoms of neurogenic intermittent claudication within 250-m walking distance and 1- or 2-level lumbar spinal stenosis, randomized to either minimally invasive decompression or X-stop. Quality-adjusted life-years were based on EuroQol EQ-5D. The hospital unit costs were estimated by means of the top-down approach. Each cost unit was converted into a monetary value by dividing the overall cost by the amount of cost units produced. The analysis of costs and health outcomes is presented by the incremental cost-effectiveness ratio. The study was terminated after a midway interim analysis because of a significantly higher reoperation rate in the X-stop group (33%). The incremental cost for X-stop compared with minimally invasive decompression was €2832 (95% confidence interval: 1886-3778), whereas the incremental health gain was 0.11 quality-adjusted life-year (95% confidence interval: -0.01 to 0.23). Based on the incremental cost and effect, the incremental cost-effectiveness ratio was €25,700. The majority of the bootstrap samples displayed in the northeast corner of the cost-effectiveness plane, giving a 50% likelihood that X-stop is cost-effective at the extra cost of €25,700 (incremental cost-effectiveness ratio) for a quality-adjusted life-year. The significantly higher cost of X-stop is mainly due to implant cost and the significantly higher reoperation rate.

  16. [Possible changes in energy-minimizer mechanisms of locomotion due to chronic low back pain - a literature review].

    PubMed

    de Carvalho, Alberito Rodrigo; Andrade, Alexandro; Peyré-Tartaruga, Leonardo Alexandre

    2015-01-01

    One goal of locomotion is to move the body through space in the most economical way possible. However, little is known about the mechanical and energetic aspects of locomotion that are affected by low back pain, and, where impairment does occur, about how the mechanical and energetic characteristics of locomotion manifest in functional activities, especially with respect to the energy-minimizer mechanisms of locomotion. This study aimed: a) to describe the main energy-minimizer mechanisms of locomotion; b) to check for signs that chronic low back pain (CLBP) damages the mechanical and energetic characteristics of locomotion in ways that may compromise the energy-minimizer mechanisms. This study is characterized as a narrative literature review. The main theory that explains the minimization of energy expenditure during locomotion is the inverted pendulum mechanism, by which the energy-minimizer mechanism converts kinetic energy into potential energy of the center of mass and vice versa during the step. This mechanism is strongly influenced by spatio-temporal gait parameters such as step length and preferred walking speed, which, in turn, may be severely altered in patients with chronic low back pain. However, much remains to be understood about the effects of chronic low back pain on the individual's ability to walk economically, because functional impairment may compromise the mechanical and energetic characteristics of gait, making it more costly. Thus, there are indications that such changes may compromise the functional energy-minimizer mechanisms. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.

  17. Great hammerhead sharks swim on their side to reduce transport costs

    PubMed Central

    Payne, Nicholas L.; Iosilevskii, Gil; Barnett, Adam; Fischer, Chris; Graham, Rachel T.; Gleiss, Adrian C.; Watanabe, Yuuki Y.

    2016-01-01

    Animals exhibit various physiological and behavioural strategies for minimizing travel costs. Fins of aquatic animals play key roles in efficient travel and, for sharks, the functions of dorsal and pectoral fins are considered well divided: the former assists propulsion and generates lateral hydrodynamic forces during turns and the latter generates vertical forces that offset sharks' negative buoyancy. Here we show that great hammerhead sharks drastically reconfigure the function of these structures, using an exaggerated dorsal fin to generate lift by swimming rolled on their side. Tagged wild sharks spend up to 90% of time swimming at roll angles between 50° and 75°, and hydrodynamic modelling shows that doing so reduces drag—and in turn, the cost of transport—by around 10% compared with traditional upright swimming. Employment of such a strongly selected feature for such a unique purpose raises interesting questions about evolutionary pathways to hydrodynamic adaptations, and our perception of form and function. PMID:27457414

  18. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost of obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define the cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is one to two orders of magnitude faster than the HFS solver.
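
    A toy version of the expected-total-cost figure of merit is sketched below for a restart-until-success strategy; the per-call success probability and the failure penalty are assumptions of this illustration, not the paper's benchmarking protocol.

      def expected_total_cost(p_success, cost_per_call, fail_penalty, max_calls):
          # Pay for a call only if still unsolved; after max_calls,
          # charge a penalty for not having found a good solution.
          cost, p_alive = 0.0, 1.0
          for _ in range(max_calls):
              cost += p_alive * cost_per_call
              p_alive *= 1.0 - p_success
          return cost + p_alive * fail_penalty

    Sweeping max_calls and taking the minimizer recovers an optimal stopping point under these assumptions.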

  19. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in lateral vehicle dynamics by estimating the nonlinear part of the tire forces. These forces indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach that minimizes the sensitivity of the residual to disturbances and uncertainties. It consists in the design of a Proportional Integral Observer (PIO) that minimizes the well-known H∞ norm for worst-case uncertainty and disturbance attenuation while meeting a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. The approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim is presented to illustrate the developed method.

  20. Fixed order dynamic compensation for multivariable linear systems

    NASA Technical Reports Server (NTRS)

    Kramer, F. S.; Calise, A. J.

    1986-01-01

    This paper considers the design of fixed order dynamic compensators for multivariable time invariant linear systems, minimizing a linear quadratic performance cost functional. Attention is given to robustness issues in terms of multivariable frequency domain specifications. An output feedback formulation is adopted by suitably augmenting the system description to include the compensator states. Either a controller or observer canonical form is imposed on the compensator description to reduce the number of free parameters to its minimal number. The internal structure of the compensator is prespecified by assigning a set of ascending feedback invariant indices, thus forming a Brunovsky structure for the nominal compensator.

  1. A planning model for the short-term management of cash.

    PubMed

    Broyles, Robert W; Mattachione, Steven; Khaliq, Amir

    2011-02-01

    This paper develops a model that enables the health administrator to identify the balance that minimizes the projected cost of holding cash. Adopting the principles of mathematical expectation, the model estimates the expected total cost of each of several strategies concerning the cash balance that the organization might maintain. Expected total costs consist of anticipated short costs, resulting from a potential shortage of funds, and long costs, associated with a potential surplus of funds and the opportunity cost of foregone investment income. Of importance to the model is the potential for the health service organization to realize a surplus of funds during periods characterized by a net cash disbursement. The paper also develops an interactive spreadsheet that enables the administrator to perform sensitivity analysis and examine the response of the desired or target cash balance to changes in the parameters that define the expected long and short cost functions.
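
    The expectation logic is simple to spell out. The sketch below assumes discrete net-cash-flow scenarios with probabilities; the scenario set, the rates, and the search grid are illustrative, not the paper's spreadsheet.

      def expected_cost(balance, scenarios, short_rate, long_rate):
          # scenarios: [(prob, net_cash_flow), ...]. A shortfall incurs
          # short costs; a surplus incurs long (opportunity) costs.
          total = 0.0
          for p, flow in scenarios:
              end = balance + flow
              total += p * (short_rate * -end if end < 0 else long_rate * end)
          return total

      # Target balance = the candidate with the smallest expected cost.
      best = min(range(0, 100001, 5000),
                 key=lambda b: expected_cost(b, [(0.3, -40000.0), (0.7, 10000.0)],
                                             0.12, 0.04))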

  2. Flight plan optimization

    NASA Astrophysics Data System (ADS)

    Dharmaseelan, Anoop; Adistambha, Keyne D.

    2015-05-01

    Fuel cost accounts for 40 percent of an airline's operating cost. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on the cost function defined by the airline. The most common algorithm used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new algorithm for optimizing route search that combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from the genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.

  3. Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function

    NASA Astrophysics Data System (ADS)

    Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian

    2010-06-01

    In this paper, the S-curve membership function methodology is applied to a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts simultaneously to minimize the total order costs, the number of rejected items and the number of late-delivered items, subject to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. In an industrial case, we compare the performance of S-curve membership functions, representing uncertain goals and constraints in VS problems, with that of linear membership functions.

  4. The Advantages and Disadvantages of Using a Centralized In-House Marketing Office.

    ERIC Educational Resources Information Center

    Miller, Ronald H.

    A centralized marketing and promotion office may or may not be a panacea for a continuing education program. Five major advantages to centralization of the marketing and promotion function are minimization of costs, a school-wide marketing strategy, maximization of the school image, enhanced quality control, and building of technical expertise of…

  5. A Dynamic Process Model for Optimizing the Hospital Environment Cash-Flow

    NASA Astrophysics Data System (ADS)

    Pater, Flavius; Rosu, Serban

    2011-09-01

    This article presents a new approach to some fundamental techniques for solving dynamic programming problems with the use of functional equations. We analyze the problem of minimizing the cost of treatment in a hospital environment. Mathematical modeling of this process leads to an optimal control problem with a finite horizon.
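
    Such functional-equation techniques revolve around a finite-horizon recursion of the following generic form (schematic; the paper's specific state, control, and cost are not reproduced):

      V_t(x) = \min_{u} \bigl[ c_t(x, u) + V_{t+1}(f_t(x, u)) \bigr],
      \qquad V_T(x) = c_T(x)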

  6. Three essays on pricing and risk management in electricity markets

    NASA Astrophysics Data System (ADS)

    Kotsan, Serhiy

    2005-07-01

    A set of three papers forms this dissertation. In the first paper I analyze an electricity market that does not clear. The system operator satisfies fixed demand at a fixed price, and attempts to minimize "cost" as indicated by independent generators' supply bids. No equilibrium exists in this situation, and the operator lacks information sufficient to minimize actual cost. As a remedy, we propose a simple efficient tax mechanism. With the tax, Nash equilibrium bids still diverge from marginal cost but nonetheless provide sufficient information to minimize actual cost, regardless of the tax rate or number of generators. The second paper examines a price mechanism with one price assigned to each level of bundled real and reactive power. Equilibrium allocation under this pricing approach raises system efficiency via better allocation of the reactive power reserves, neglected in the traditional pricing approach. Reactive power should be priced in a bundle with real power, since its cost is highly dependent on real power output. The efficiency of the pricing approach is shown in the general case, and tested on the 30-bus IEEE network with piecewise-linear generator cost functions. Finally, the third paper addresses the problem of optimal investment in generation based on mean-variance portfolio analysis. It is assumed that the investor can freely create a portfolio of shares in generation located on buses of the electrical network. Investors are risk averse, and seek to minimize the variance of the weighted average Locational Marginal Price (LMP) in their portfolio and to maximize its expected value. I conduct simulations using a standard IEEE 68-bus network that resembles the New York - New England system and calculate LMPs in accordance with the PJM methodology for a fully optimal AC power flow solution. Results indicate that network topology is a crucial determinant of the investment decision, as line congestion makes it difficult to deliver power to certain nodes at system peak load. Identifying those nodes is an important task for an investor in generation as well as for the transmission system operator.

  7. An EOQ model for weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

    In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by finding the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples, and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.

  8. Restoration ecology: two-sex dynamics and cost minimization.

    PubMed

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model's equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost.

  9. Restoration Ecology: Two-Sex Dynamics and Cost Minimization

    PubMed Central

    Molnár, Ferenc; Caragine, Christina; Caraco, Thomas; Korniss, Gyorgy

    2013-01-01

    We model a spatially detailed, two-sex population dynamics, to study the cost of ecological restoration. We assume that cost is proportional to the number of individuals introduced into a large habitat. We treat dispersal as homogeneous diffusion in a one-dimensional reaction-diffusion system. The local population dynamics depends on sex ratio at birth, and allows mortality rates to differ between sexes. Furthermore, local density dependence induces a strong Allee effect, implying that the initial population must be sufficiently large to avert rapid extinction. We address three different initial spatial distributions for the introduced individuals; for each we minimize the associated cost, constrained by the requirement that the species must be restored throughout the habitat. First, we consider spatially inhomogeneous, unstable stationary solutions of the model’s equations as plausible candidates for small restoration cost. Second, we use numerical simulations to find the smallest rectangular cluster, enclosing a spatially homogeneous population density, that minimizes the cost of assured restoration. Finally, by employing simulated annealing, we minimize restoration cost among all possible initial spatial distributions of females and males. For biased sex ratios, or for a significant between-sex difference in mortality, we find that sex-specific spatial distributions minimize the cost. But as long as the sex ratio maximizes the local equilibrium density for given mortality rates, a common homogeneous distribution for both sexes that spans a critical distance yields a similarly low cost. PMID:24204810

  10. Reliability, Risk and Cost Trade-Offs for Composite Designs

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.

    1996-01-01

    Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables with respect to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of the uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (a design parameter) in the longitudinal direction.

  11. A methodology for commonality analysis, with applications to selected space station systems

    NASA Technical Reports Server (NTRS)

    Thomas, Lawrence Dale

    1989-01-01

    The application of commonality in a system represents an attempt to reduce costs by reducing the number of unique components. A formal method for conducting commonality analysis has not been established. In this dissertation, commonality analysis is characterized as a partitioning problem. The cost impacts of commonality are quantified in an objective function, and the solution is that partition which minimizes this objective function. Clustering techniques are used to approximate a solution, and sufficient conditions are developed which can be used to verify the optimality of the solution. This method for commonality analysis is general in scope. It may be applied to the various types of commonality analysis required in the conceptual, preliminary, and detail design phases of the system development cycle.

  12. Text-line extraction in handwritten Chinese documents based on an energy minimization framework.

    PubMed

    Koo, Hyung Il; Cho, Nam Ik

    2012-03-01

    Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.

  13. Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory

    NASA Technical Reports Server (NTRS)

    Koppang, Paul; Leland, Robert

    1996-01-01

    Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
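
    One predict/update/steer cycle can be sketched as below for a two-state (phase, frequency) model; the dynamics, the steady-state Kalman gain K, and the LQR gain G are illustrative assumptions, not values for any operational standard.

      import numpy as np

      A = np.array([[1.0, 1.0], [0.0, 1.0]])   # phase integrates frequency
      B = np.array([[0.0], [1.0]])             # steer acts on frequency
      K = np.array([[0.6], [0.2]])             # assumed steady-state Kalman gain
      G = np.array([[0.1, 0.5]])               # assumed LQR gain from Q, R weights

      def lqg_step(x_est, z, u_prev):
          # Predict, update with a phase measurement z, then apply the
          # quadratic-cost-minimizing steer u = -G x. Heavier state
          # penalties (Q) steer harder; heavier control penalties (R)
          # steer more gently. x_est and z are column vectors, e.g.
          # x0 = np.zeros((2, 1)); z = np.array([[0.3]]).
          x_pred = A @ x_est + B @ u_prev
          x_new = x_pred + K @ (z - x_pred[:1])
          return x_new, -G @ x_new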

  14. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem, where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process, and to provide sufficient conditions to ensure the existence of an optimal control.

  15. Prime focus architectures for large space telescopes: reduce surfaces to save cost

    NASA Astrophysics Data System (ADS)

    Breckinridge, J. B.; Lillie, C. F.

    2016-07-01

    Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems operating in the UV, optical and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program of innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition of co-locating the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved, to the advantage of astronomers. Here we suggest that a prime focus large-aperture telescope system in space may have higher transmittance, better pointing, improved thermal and structural control, less internal polarization and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.

  16. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this architecture is to simplify network complexity by decoupling the control plane and data plane of the network devices, and by centralizing the control plane. Recently, controllers have been distributed to remove the single point of failure and to increase scalability and flexibility during workload distribution. Although distributed controllers are flexible and scalable enough to accommodate a growing number of network switches, the cost of intercommunication between distributed controllers remains a challenging issue in the Software Defined Network environment. This paper aims to fill that gap by proposing a new mechanism that minimizes intercommunication cost using a graph partitioning algorithm (an NP-hard problem). The proposed methodology swaps network elements between controller domains, guided by a calculated communication gain; the swapping of elements minimizes both inter- and intra-domain communication cost. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
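
    The record does not spell out the algorithm; the following is a minimal sketch of the Kernighan-Lin-style gain-and-swap idea it describes, applied to a toy switch-traffic matrix. The graph, traffic values, and initial assignment are all hypothetical.

```python
import numpy as np

# Symmetric traffic matrix between four switches (hypothetical values).
W = np.array([[0, 5, 1, 0],
              [5, 0, 2, 1],
              [1, 2, 0, 6],
              [0, 1, 6, 0]], dtype=float)

def inter_domain_cost(W, domain):
    """Total traffic crossing controller-domain boundaries."""
    n = len(domain)
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if domain[i] != domain[j])

def swap_gain(W, domain, a, b):
    """Communication gain from swapping switches a and b between domains."""
    trial = domain.copy()
    trial[a], trial[b] = trial[b], trial[a]
    return inter_domain_cost(W, domain) - inter_domain_cost(W, trial)

# Start from a poor assignment and greedily apply positive-gain swaps.
domain = np.array([0, 1, 0, 1])
improved = True
while improved:
    improved = False
    for a in range(len(domain)):
        for b in range(len(domain)):
            if domain[a] != domain[b] and swap_gain(W, domain, a, b) > 0:
                domain[a], domain[b] = domain[b], domain[a]
                improved = True
print(domain, inter_domain_cost(W, domain))  # e.g. [0 0 1 1] with cost 4.0
```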

  17. Optimization of prosthetic foot stiffness to reduce metabolic cost and intact knee loading during below-knee amputee walking: a theoretical study.

    PubMed

    Fey, Nicholas P; Klute, Glenn K; Neptune, Richard R

    2012-11-01

    Unilateral below-knee amputees develop abnormal gait characteristics that include bilateral asymmetries and an elevated metabolic cost relative to non-amputees. In addition, long-term prosthesis use has been linked to an increased prevalence of joint pain and osteoarthritis in the intact leg knee. To improve amputee mobility, prosthetic feet that utilize elastic energy storage and return (ESAR) have been designed, which perform important biomechanical functions such as providing body support and forward propulsion. However, the prescription of appropriate design characteristics (e.g., stiffness) is not well-defined since its influence on foot function and important in vivo biomechanical quantities such as metabolic cost and joint loading remain unclear. The design of feet that improve these quantities could provide considerable advancements in amputee care. Therefore, the purpose of this study was to couple design optimization with dynamic simulations of amputee walking to identify the optimal foot stiffness that minimizes metabolic cost and intact knee joint loading. A musculoskeletal model and distributed stiffness ESAR prosthetic foot model were developed to generate muscle-actuated forward dynamics simulations of amputee walking. Dynamic optimization was used to solve for the optimal muscle excitation patterns and foot stiffness profile that produced simulations that tracked experimental amputee walking data while minimizing metabolic cost and intact leg internal knee contact forces. Muscle and foot function were evaluated by calculating their contributions to the important walking subtasks of body support, forward propulsion and leg swing. The analyses showed that altering a nominal prosthetic foot stiffness distribution by stiffening the toe and mid-foot while making the ankle and heel less stiff improved ESAR foot performance by offloading the intact knee during early to mid-stance of the intact leg and reducing metabolic cost. The optimal design also provided moderate braking and body support during the first half of residual leg stance, while increasing the prosthesis contributions to forward propulsion and body support during the second half of residual leg stance. Future work will be directed at experimentally validating these results, which have important implications for future designs of prosthetic feet that could significantly improve amputee care.

  18. Gene Architectures that Minimize Cost of Gene Expression.

    PubMed

    Frumkin, Idan; Schirman, Dvir; Rotman, Aviv; Li, Fangfei; Zahavi, Liron; Mordret, Ernest; Asraf, Omer; Wu, Song; Levy, Sasha F; Pilpel, Yitzhak

    2017-01-05

    Gene expression burdens cells by consuming resources and energy. While numerous studies have investigated regulation of expression level, little is known about gene design elements that govern expression costs. Here, we ask how cells minimize production costs while maintaining a given protein expression level and whether there are gene architectures that optimize this process. We measured fitness of ∼14,000 E. coli strains, each expressing a reporter gene with a unique 5' architecture. By comparing cost-effective and ineffective architectures, we found that cost per protein molecule could be minimized by lowering transcription levels, regulating translation speeds, and utilizing amino acids that are cheap to synthesize and that are less hydrophobic. We then examined natural E. coli genes and found that highly expressed genes have evolved more forcefully to minimize costs associated with their expression. Our study thus elucidates gene design elements that improve the economy of protein expression in natural and heterologous systems. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Novel Wireless-Communicating Textiles Made from Multi-Material and Minimally-Invasive Fibers

    PubMed Central

    Gorgutsa, Stepan; Bélanger-Garnier, Victor; Ung, Bora; Viens, Jeff; Gosselin, Benoit; LaRochelle, Sophie; Messaddeq, Younes

    2014-01-01

    The ability to integrate multiple materials into miniaturized fiber structures enables the realization of novel biomedical textile devices with higher-level functionalities and minimally-invasive attributes. In this work, we present novel textile fabrics integrating unobtrusive multi-material fibers that communicate through 2.4 GHz wireless networks with excellent signal quality. The conductor elements of the textiles are embedded within the fibers themselves, providing electrical and chemical shielding against the environment, while preserving the mechanical and cosmetic properties of the garments. These multi-material fibers combine insulating and conducting materials into a well-defined geometry, and represent a cost-effective and minimally-invasive approach to sensor fabrics and bio-sensing textiles connected in real time to mobile communications infrastructures, suitable for a variety of health and life science applications. PMID:25325335

  1. Novel wireless-communicating textiles made from multi-material and minimally-invasive fibers.

    PubMed

    Gorgutsa, Stepan; Bélanger-Garnier, Victor; Ung, Bora; Viens, Jeff; Gosselin, Benoit; LaRochelle, Sophie; Messaddeq, Younes

    2014-10-16

    The ability to integrate multiple materials into miniaturized fiber structures enables the realization of novel biomedical textile devices with higher-level functionalities and minimally-invasive attributes. In this work, we present novel textile fabrics integrating unobtrusive multi-material fibers that communicate through 2.4 GHz wireless networks with excellent signal quality. The conductor elements of the textiles are embedded within the fibers themselves, providing electrical and chemical shielding against the environment, while preserving the mechanical and cosmetic properties of the garments. These multi-material fibers combine insulating and conducting materials into a well-defined geometry, and represent a cost-effective and minimally-invasive approach to sensor fabrics and bio-sensing textiles connected in real time to mobile communications infrastructures, suitable for a variety of health and life science applications.

  2. A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores

    NASA Astrophysics Data System (ADS)

    Brüggemann, Wolfgang

    1992-08-01

    The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse modeling type approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data and the obtained correction of a first guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.
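
    The two-part structure of such an objective is easy to sketch. The toy model below is not the paper's formulation: the single 41-kyr tuning target, the penalty weight, and the synthetic data are invented, and only the overall shape, a model-data misfit plus a penalty on the roughness of the age correction, minimized by a conjugate-gradient routine, mirrors the approach.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 800.0, 60)        # first-guess ages, kyr

def target(ages):
    """Toy tuning target (a single 41-kyr obliquity-like cycle)."""
    return np.sin(2 * np.pi * ages / 41.0)

# Synthetic "core data": the target shifted by 5 kyr, plus noise.
data = target(t + 5.0) + 0.1 * rng.standard_normal(t.size)

def objective(dt, lam=10.0):
    """Penalize (1) misfit between data and target at corrected ages t + dt
    and (2) roughness of the age correction itself."""
    misfit = np.sum((target(t + dt) - data) ** 2)
    roughness = np.sum(np.diff(dt) ** 2)
    return misfit + lam * roughness

res = minimize(objective, x0=np.zeros(t.size), method="CG")
print(res.x.mean())   # recovered age correction, close to the true 5 kyr
```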

  3. Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.

    PubMed

    Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry

    2011-10-01

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not received a final answer yet, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results might suggest that movements are not the results of the minimization of single but rather of composite cost functions. In order to better clarify this last point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited a consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, nominally a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.
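
    Schematically, with $\tau_i$ and $q_i$ the torque and angle of joint $i$, the reported composite criterion has the form (the convex-combination notation here is generic; the fitted weighting is given in the paper)

    $$J=\alpha\int_0^T\sum_i\big|\tau_i\,\dot q_i\big|\,dt+(1-\alpha)\int_0^T\sum_i\ddot q_i^{\,2}\,dt,\qquad 0<\alpha<1,$$

    the first term measuring mechanical energy expenditure (absolute work of torques) and the second joint-level smoothness.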

  4. Cost-minimization analysis favors outpatient quick diagnosis unit over hospitalization for the diagnosis of potentially serious diseases.

    PubMed

    Sanclemente-Ansó, Carmen; Bosch, Xavier; Salazar, Albert; Moreno, Ramón; Capdevila, Cristina; Rosón, Beatriz; Corbella, Xavier

    2016-05-01

    Quick diagnosis units (QDUs) are a promising alternative to conventional hospitalization for the diagnosis of suspected serious diseases, most commonly cancer and severe anemia. Although QDUs are as effective as hospitalization in reaching a timely diagnosis, a full economic evaluation comparing both approaches has not been reported. To evaluate the costs of QDU vs. conventional hospitalization for the diagnosis of cancer and anemia, a cost-minimization analysis was used, on the proven assumption that the health outcomes of both approaches were equivalent. Patients referred to the QDU of Bellvitge University Hospital of Barcelona over 51 months with a final diagnosis of severe anemia (unrelated to malignancy), lymphoma, or lung cancer were compared with patients hospitalized for workup with the same diagnoses. The total cost per patient until diagnosis was analyzed, and direct and non-direct costs of QDU and hospitalization were compared. Time to diagnosis in QDU patients (n=195) and length-of-stay in hospitalized patients (n=237) were equivalent. There were considerable cost savings relative to hospitalization. The highest savings for the three groups were related to fixed direct costs of hospital stays (66% of total savings); savings related to fixed non-direct costs of structural and general functioning were 33% of total savings, and savings related to variable direct costs of investigations were 1% of total savings. Overall savings relative to hospitalization across all patients were €867,719.31. QDUs appear to be a cost-effective resource for avoiding unnecessary hospitalization in patients with anemia and cancer. Internists, hospital executives, and healthcare authorities should consider establishing this model elsewhere. Copyright © 2015. Published by Elsevier B.V.

  5. Joint brain connectivity estimation from diffusion and functional MRI data

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.

    2015-03-01

    Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength represents the unit cost for information passing through the link. In the second cost function, a min-max (minimizing the maximal link load) approach is used to balance the usage of each link. Optimizing the first cost function selects the pathway with strongest fiber strength for information propagation. In the second case, the optimization procedure finds all the possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated with both the cost functions to capture the possible missing and weak anatomical connections. 
With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the functionally associated anatomical subnetworks. Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. It is shown that the proposed model recovers the maximum number of true connections, with the fewest false connections, when compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).
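
    The flavor of the first formulation, with unit cost taken as the reciprocal of fiber strength, can be captured in a toy linear program. The three-node graph, strengths, and demand below are invented for illustration, and the penalty term for missing connections is omitted.

```python
import numpy as np
from scipy.optimize import linprog

# Toy anatomical graph: directed edges between grey-matter nodes, with
# fiber counts standing in for link capacity (hypothetical values).
edges = [(0, 1), (1, 2), (0, 2)]
strength = np.array([20.0, 20.0, 5.0])  # fiber count per edge
cost = 1.0 / strength                   # unit cost = reciprocal strength
demand = 8.0                            # information demand from node 0 to 2

# Node-edge incidence matrix: flow conservation with +demand at the
# source node and -demand at the sink node.
n_nodes = 3
A_eq = np.zeros((n_nodes, len(edges)))
for k, (i, j) in enumerate(edges):
    A_eq[i, k] += 1.0   # edge leaves node i
    A_eq[j, k] -= 1.0   # edge enters node j
b_eq = np.array([demand, 0.0, -demand])

res = linprog(cost, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0.0, s) for s in strength])
print(res.x)  # all flow routed via the strong 0-1-2 pathway: [8. 8. 0.]
```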

  6. Calculating Optimum sowing factor: A tool to evaluate sowing strategies and minimize seedling production cost

    Treesearch

    Eric van Steenis

    2013-01-01

    This paper illustrates how to use an Excel spreadsheet as a decision-making tool to determine the optimum sowing factor that minimizes seedling production cost. Factors incorporated into the spreadsheet calculations include germination percentage, seeder accuracy, cost per seed, cavities per block, costs of handling, thinning, and transplanting labor, and more. In addition to...
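
    The spreadsheet itself is not reproduced in the record; the sketch below only illustrates the kind of per-cavity cost comparison such a tool automates. Every rate and price is a hypothetical placeholder.

```python
# Expected cost per cavity as a function of seeds sown per cavity.
germination = 0.90       # germination probability per seed (assumed)
seed_cost = 0.02         # cost per seed (assumed)
thin_cost = 0.01         # labor to thin one extra germinant (assumed)
transplant_cost = 0.15   # labor to fill an empty cavity (assumed)

def expected_cost_per_cavity(seeds):
    p_empty = (1.0 - germination) ** seeds       # all seeds fail
    expected_germinants = germination * seeds
    extra = max(expected_germinants - 1.0, 0.0)  # expected thinning load
    return (seeds * seed_cost
            + extra * thin_cost
            + p_empty * transplant_cost)

for s in (1, 2, 3):
    print(s, round(expected_cost_per_cavity(s), 4))
```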

  7. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.
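
    The same Levenberg-Marquardt behavior is available in standard libraries. The one-parameter toy fit below (decay model, noise level, and data are all invented) stands in for the 21-state rotorcraft problem: for Gaussian measurement errors, minimizing the negative log likelihood reduces to nonlinear least squares on the residuals.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)
true_decay = 0.8
y = np.exp(-true_decay * t) + 0.02 * rng.standard_normal(t.size)

def residuals(theta):
    """Model-minus-data residuals for a one-parameter decay model."""
    return np.exp(-theta[0] * t) - y

# method="lm" is scipy's Levenberg-Marquardt (MINPACK) implementation.
fit = least_squares(residuals, x0=[0.1], method="lm")
print(fit.x)  # recovered coefficient, close to 0.8
```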

  8. Error minimizing algorithms for nearest neighbor classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost-sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  9. Objective analysis of pseudostress over the Indian Ocean using a direct-minimization approach

    NASA Technical Reports Server (NTRS)

    Legler, David M.; Navon, I. M.; O'Brien, James J.

    1989-01-01

    A technique not previously used in objective analysis of meteorological data is used here to produce monthly average surface pseudostress data over the Indian Ocean. An initial guess field is derived, and a cost functional is constructed with five terms: approximation to the initial guess, approximation to climatology, a smoothness term, and two kinematic terms. The functional is minimized using a conjugate-gradient technique, and the weight on the climatology term controls the overall balance of influence between the climatology and the initial guess. Results from various weight combinations are presented for January and July 1984. Quantitative and qualitative comparisons to the subjective analysis are made to find which weight combination provides the best results. The weight on the approximation to climatology is found to balance the influence of the original field and climatology.
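
    Schematically, for a pseudostress field $\mathbf{u}$ with first guess $\mathbf{u}_g$ and climatology $\mathbf{u}_c$, such a functional can be written as (the weights and operators here are generic placeholders, not the paper's exact notation)

    $$J(\mathbf{u})=\lambda_1\lVert\mathbf{u}-\mathbf{u}_g\rVert^2+\lambda_2\lVert\mathbf{u}-\mathbf{u}_c\rVert^2+\lambda_3\lVert\nabla^2\mathbf{u}\rVert^2+\lambda_4\lVert\nabla\cdot\mathbf{u}\rVert^2+\lambda_5\lVert\nabla\times\mathbf{u}\rVert^2,$$

    with the two kinematic terms penalizing divergence and curl, and the weights $\lambda_i$ setting the balance of influence the abstract describes.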

  10. Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function

    NASA Astrophysics Data System (ADS)

    Peidro, D.; Vasant, P.

    2009-08-01

    In this paper, the S-Curve membership function methodology is used in a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts simultaneously to minimize the total production and transportation costs and the total delivery time with reference to budget constraints and available supply, machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in TPD problems, with linear membership functions.
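
    One common form of modified S-curve membership function in this line of work is logistic (the paper's exact parameterization may differ):

    $$\mu(x)=\frac{B}{1+C\,e^{\gamma x}},\qquad 0.001\le\mu(x)\le 0.999,$$

    where $B$ and $C$ pin the upper and lower membership bounds and $\gamma$ measures the degree of vagueness. Unlike a linear membership function, its slope varies smoothly between the bounds of a fuzzy goal or constraint.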

  11. Optimizing Wellfield Operation in a Variable Power Price Regime.

    PubMed

    Bauer-Gottwein, Peter; Schneider, Raphael; Davidsen, Claus

    2016-01-01

    Wellfield management is a multiobjective optimization problem. One important objective has been energy efficiency in terms of minimizing the energy footprint (EFP) of delivered water (MWh/m³). However, power systems in most countries are moving in the direction of deregulated markets and price variability is increasing in many markets because of increased penetration of intermittent renewable power sources. In this context the relevant management objective becomes minimizing the cost of electric energy used for pumping and distribution of groundwater from wells rather than minimizing energy use itself. We estimated EFP of pumped water as a function of wellfield pumping rate (EFP-Q relationship) for a wellfield in Denmark using a coupled well and pipe network model. This EFP-Q relationship was subsequently used in a Stochastic Dynamic Programming (SDP) framework to minimize total cost of operating the combined wellfield-storage-demand system over the course of a 2-year planning period based on a time series of observed price on the Danish power market and a deterministic, time-varying hourly water demand. In the SDP setup, hourly pumping rates are the decision variables. Constraints include storage capacity and hourly water demand fulfilment. The SDP was solved for a baseline situation and for five scenario runs representing different EFP-Q relationships and different maximum wellfield pumping rates. Savings were quantified as differences in total cost between the scenario and a constant-rate pumping benchmark. Minor savings up to 10% were found in the baseline scenario, while the scenario with constant EFP and unlimited pumping rate resulted in savings up to 40%. Key factors determining potential cost savings obtained by flexible wellfield operation under a variable power price regime are the shape of the EFP-Q relationship, the maximum feasible pumping rate and the capacity of available storage facilities. © 2015 The Authors. Groundwater published by Wiley Periodicals, Inc. on behalf of National Ground Water Association.
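
    A stripped-down version of the backward recursion in such an SDP can be sketched as follows. The four-hour horizon, prices, demand, storage grid, and EFP(Q) curve are all placeholder values, not the Danish case-study data, and terminal storage is left unvalued.

```python
import numpy as np

prices = np.array([20.0, 60.0, 30.0, 80.0])   # power price per MWh, 4 hours
demand = 50.0                                  # m3 per hour
storage = np.arange(0.0, 201.0, 10.0)          # feasible storage states, m3
pump_rates = np.arange(0.0, 101.0, 10.0)       # decision grid, m3/h

def efp(q):
    """Energy footprint (MWh/m3) assumed to rise with pumping rate."""
    return 0.002 + 0.00001 * q

future = np.zeros(storage.size)                # terminal cost-to-go
for price in prices[::-1]:                     # backward induction
    stage = np.full(storage.size, np.inf)
    for i, s in enumerate(storage):
        for q in pump_rates:
            s_next = s + q - demand
            if 0.0 <= s_next <= storage[-1]:
                j = int(round(s_next / 10.0))
                stage[i] = min(stage[i], price * efp(q) * q + future[j])
    future = stage

print(future[10])   # minimum total cost starting with 100 m3 in storage
```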

  12. A Bootstrap Approach to an Affordable Exploration Program

    NASA Technical Reports Server (NTRS)

    Oeftering, Richard C.

    2011-01-01

    This paper examines the potential to build an affordable, sustainable exploration program by adopting an approach that requires investing in technologies that can be used to build a space infrastructure from very modest initial capabilities. Human exploration has had a history of flight programs with high development and operational costs. Since Apollo, human exploration budgets have been very constrained, and they are expected to be constrained in the future. Due to their high operations costs it becomes necessary to consider retiring established space facilities in order to move on to the next exploration challenge. This practice may save cost in the near term, but it does so by sacrificing part of the program's future architecture. Human exploration also has a history of sacrificing fully functional flight hardware to achieve mission objectives. An affordable exploration program cannot be built when it involves billions of dollars of discarded space flight hardware; instead, the program must emphasize preserving its high-value space assets and building a suitable permanent infrastructure. Further, this infrastructure must reduce operational and logistics cost. The paper examines the importance of achieving a high level of logistics independence by minimizing resource consumption, minimizing the dependency on external logistics, and maximizing the utility of resources available. The approach involves the development and deployment of a core suite of technologies that have minimal initial needs yet are able to expand upon initial capability in an incremental bootstrap fashion. The bootstrap approach incrementally creates an infrastructure that grows and becomes self-sustaining and eventually begins producing the energy, products, and consumable propellants that support human exploration. The bootstrap technologies involve new methods of delivering and manipulating energy and materials. These technologies will exploit the space environment, minimize dependencies, and minimize the need for imported resources. They will provide the widest range of utility in a resource-scarce environment and pave the way to an affordable exploration program.

  13. Load emphasizes muscle effort minimization during selection of arm movement direction

    PubMed Central

    2012-01-01

    Background: Directional preferences during center-out horizontal shoulder-elbow movements were previously established for both the dominant and non-dominant arm with the use of a free-stroke drawing task that required random selection of movement directions. While the preferred directions were mirror-symmetrical in both arms, they were attributed to a tendency specific for the dominant arm to simplify control of interaction torque by actively accelerating one joint and producing largely passive motion at the other joint. No conclusive evidence has been obtained in support of muscle effort minimization as a contributing factor to the directional preferences. Here, we tested whether distal load changes directional preferences, making the influence of muscle effort minimization on the selection of movement direction more apparent. Methods: The free-stroke drawing task was performed by the dominant and non-dominant arm with no load and with a 0.454 kg load at the wrist. Motion of each arm was limited to rotation of the shoulder and elbow in the horizontal plane. Directional histograms of strokes produced by the fingertip were calculated to assess directional preferences in each arm and load condition. Possible causes for directional preferences were further investigated by studying optimization across directions of a number of cost functions. Results: Preferences in both arms to move in the diagonal directions were revealed. The previously suggested tendency to actively accelerate one joint and produce passive motion at the other joint was supported in both arms and load conditions. However, the load increased the tendency to produce strokes in the transverse diagonal directions (perpendicular to the forearm orientation) in both arms. Increases in required muscle effort caused by the load suggested that the higher frequency of movements in the transverse directions represented increased influence of muscle effort minimization on the selection of movement direction. This interpretation was supported by cost function optimization results. Conclusions: Without load, the contribution of muscle effort minimization was minor and therefore not apparent; the load revealed this contribution by enhancing it. Unlike control of interaction torque, the revealed tendency to minimize muscle effort was independent of arm dominance. PMID:23035925

  14. The Costs of Legislated Minimal Competency Requirements. A background paper prepared for the Minimal Competency Workshops sponsored by the Education Commission of the States and the National Institute of Education.

    ERIC Educational Resources Information Center

    Anderson, Barry D.

    Little is known about the costs of setting up and implementing legislated minimal competency testing (MCT). To estimate the financial obstacles which lie between the idea and its implementation, MCT requirements are viewed from two perspectives. The first, government regulation, views legislated minimal competency requirements as an attempt by the…

  15. Economic impact of minimally invasive lumbar surgery.

    PubMed

    Hofstetter, Christoph P; Hofer, Anna S; Wang, Michael Y

    2015-03-18

    Cost effectiveness has been demonstrated for traditional lumbar discectomy and lumbar laminectomy as well as for instrumented and noninstrumented arthrodesis. While emerging evidence suggests that minimally invasive spine surgery reduces morbidity, shortens hospitalization, and accelerates return to activities of daily living, data regarding the cost effectiveness of these novel techniques are limited. The current study analyzes all available data on minimally invasive techniques for lumbar discectomy, decompression, short-segment fusion, and deformity surgery. In general, minimally invasive spine procedures appear to hold promise of quicker patient recovery times and earlier return to work. Thus, minimally invasive lumbar spine surgery appears to have the potential to be a cost-effective intervention. Moreover, novel less invasive procedures are less destabilizing and may therefore be utilized in certain indications that traditionally required arthrodesis procedures. However, there is a lack of studies analyzing the economic impact of minimally invasive spine surgery. Future studies are necessary to confirm the durability and further define indications for minimally invasive lumbar spine procedures.

  16. Candidate Mission from Planet Earth control and data delivery system architecture

    NASA Technical Reports Server (NTRS)

    Shapiro, Phillip; Weinstein, Frank C.; Hei, Donald J., Jr.; Todd, Jacqueline

    1992-01-01

    Using a structured, experience-based approach, Goddard Space Flight Center (GSFC) has assessed the generic functional requirements for a lunar mission control and data delivery (CDD) system. This analysis was based on lunar mission requirements outlined in GSFC-developed user traffic models. The CDD system will facilitate data transportation among user elements, element operations, and user teams by providing functions such as data management, fault isolation, fault correction, and link acquisition. The CDD system for the lunar missions must not only satisfy lunar requirements but also facilitate and provide early development of data system technologies for Mars. Reuse and evolution of existing data systems can help to maximize system reliability and minimize cost. This paper presents a set of existing and currently planned NASA data systems that provide the basic functionality. Reuse of such systems can have an impact on mission design and significantly reduce CDD and other system development costs.

  17. Renormalized anisotropic exchange for representing heat assisted magnetic recording media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Yipeng; Liu, Zengyuan; Victora, R. H., E-mail: victora@umn.edu

    2015-05-07

    Anisotropic exchange has been incorporated in a description of magnetic recording media near the Curie temperature, as would be found during heat assisted magnetic recording. The new parameters were found using a cost function that minimized the difference between atomistic properties and those of renormalized spin blocks. Interestingly, the anisotropic exchange description at 1.5 nm discretization yields very similar switching and magnetization behavior to that found at 1.2 nm (and below) discretization for the previous isotropic exchange. This suggests that the increased accuracy of anisotropic exchange may also reduce the computational cost during simulation.

  18. Use of optimization to predict the effect of selected parameters on commuter aircraft performance

    NASA Technical Reports Server (NTRS)

    Wells, V. L.; Shevell, R. S.

    1982-01-01

    An optimizing computer program determined the turboprop aircraft with lowest direct operating cost for various sets of cruise speed and field length constraints. External variables included wing area, wing aspect ratio and engine sea level static horsepower; tail sizes, climb speed and cruise altitude were varied within the function evaluation program. Direct operating cost was minimized for a 150 n.mi typical mission. Generally, DOC increased with increasing speed and decreasing field length but not by a large amount. Ride roughness, however, increased considerably as speed became higher and field length became shorter.

  19. Proven Innovations and New Initiatives in Ground System Development: Reducing Costs in the Ground System

    NASA Technical Reports Server (NTRS)

    Gunn, Jody M.

    2006-01-01

    The state-of-the-practice for engineering and development of Ground Systems has evolved significantly over the past half decade. Missions that challenge ground system developers with significantly reduced budgets in spite of requirements for greater and previously unimagined functionality are now the norm. Making the right trades early in the mission lifecycle is one of the key factors to minimizing ground system costs. The Mission Operations Strategic Leadership Team at the Jet Propulsion Laboratory has spent the last year collecting and working through successes and failures in ground systems for application to future missions.

  20. A Cost-Effective Energy-Recovering Sustain Driving Circuit for ac Plasma Display Panels

    NASA Astrophysics Data System (ADS)

    Lim, Jae Kwang; Tae, Heung-Sik; Choi, Byungcho; Kim, Seok Gi

    A new sustain driving circuit, featuring an energy-recovering function with simple structure and minimal component count, is proposed as a cost-effective solution for driving plasma display panels during the sustaining period. Compared with existing solutions, the proposed circuit reduces the number of semiconductor switches and reactive circuit components without compromising the circuit performance and gas-discharging characteristics. In addition, the proposed circuit utilizes the harness wire as an inductive circuit component, thereby further simplifying the circuit structure. The performance of the proposed circuit is confirmed with a 42-inch plasma display panel.

  1. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
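
    In generic notation, each one-step-ahead sub-problem solved at a stage (here by the embedded GA) has the standard Bellman form

    $$F_t(s_t)=\min_{u_t}\Big\{c_t(s_t,u_t)+\mathbb{E}_{q_t}\big[F_{t+1}(s_{t+1})\big]\Big\},\qquad s_{t+1}=f(s_t,u_t,q_t),$$

    where $s_t$ collects the reservoir and aquifer storages, $u_t$ the allocation and pumping decisions, $q_t$ the stochastic inflow, $c_t$ the immediate allocation, curtailment, and head-dependent pumping costs, and $F_{t+1}$ the future cost interpolated from the following stage's total cost matrix. The head dependence of the pumping cost is what can break the convexity of $F$ and motivates the GA.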

  2. Permanent magnet design for magnetic heat pumps using total cost minimization

    NASA Astrophysics Data System (ADS)

    Teyber, R.; Trevizoli, P. V.; Christiaanse, T. V.; Govindappa, P.; Niknia, I.; Rowe, A.

    2017-11-01

    The active magnetic regenerator (AMR) is an attractive technology for efficient heat pumps and cooling systems. The costs associated with a permanent magnet for near room temperature applications are a central issue which must be solved for broad market implementation. To address this problem, we present a permanent magnet topology optimization to minimize the total cost of cooling using a thermoeconomic cost-rate balance coupled with an AMR model. A genetic algorithm identifies cost-minimizing magnet topologies. For a fixed temperature span of 15 K and 4.2 kg of gadolinium, the optimal magnet configuration provides 3.3 kW of cooling power with a second law efficiency (ηII) of 0.33 using 16.3 kg of permanent magnet material.

  3. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian

    1992-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  4. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.

    1993-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  5. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints, with respect to the digital control law design variables, are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
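
    The gradient machinery is beyond a short example, but the evaluation step it rests on, a quadratic cost computed from a discrete Liapunov (Lyapunov) equation, is compact. The closed-loop matrices below are placeholders, not the drone aircraft model.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Placeholder stable closed-loop system: x[k+1] = A x[k] + w[k], w ~ N(0, W).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
W = 0.01 * np.eye(2)      # process-noise covariance
Q = np.diag([1.0, 0.5])   # state penalty in the quadratic cost

# Steady-state state covariance X solves  A X A' - X + W = 0.
X = solve_discrete_lyapunov(A, W)

# Expected steady-state quadratic cost E[x'Qx] = trace(Q X).
J = np.trace(Q @ X)
print(J)
```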

  6. A case study in electricity regulation: Theory, evidence, and policy

    NASA Astrophysics Data System (ADS)

    Luk, Stephen Kai Ming

    This research provides a thorough empirical analysis of the problem of excess capacity found in the electricity supply industry in Hong Kong. I utilize a cost-function based temporary equilibrium framework to investigate empirically whether the current regulatory scheme encourages the two utilities to overinvest in capital, and how much consumers would have saved if the underutilized capacity were eliminated. The research is divided into two main parts. The first section attempts to find any evidence of over-investment in capital. As a point of departure from traditional analysis, I treat physical capital as quasi-fixed, which implies a restricted cost function to represent the firm's short-run cost structure. Under such a specification, the firm minimizes the cost of employing variable factor inputs subject to predetermined levels of quasi-fixed factors. Using a transcendental logarithmic restricted cost function, I estimate the cost-side equivalent of the marginal product of capital, commonly referred to as the "shadow value" of capital. The estimation results suggest that the two electric utilities consistently over-invest in generation capacity. The second part of this research focuses on the economies of capital utilization and the estimation of the distortion cost in capital investment. Again, I utilize a translog specification of the cost function to estimate the actual cost of the excess capacity, and to find out how much consumers could have saved if the underutilized generation capacity were brought closer to the international standard. Estimation results indicate that an increase in the utilization rate can significantly reduce the costs of both utilities, and if the current excess capacity were reduced to the international standard, the combined cost savings for both firms would reach 4.4 billion. This amount of savings, if redistributed evenly to all consumers, would translate into a 650 rebate per capita. Finally, two policy recommendations are discussed: a more stringent policy towards capacity expansion and the creation of a reimbursement program.

  7. Toward Improved Methods of Estimating Attenuation, Phase and Group velocity of surface waves observed on Shallow Seismic Records

    NASA Astrophysics Data System (ADS)

    Diallo, M. S.; Holschneider, M.; Kulesh, M.; Scherbaum, F.; Ohrnberger, M.; Lück, E.

    2004-05-01

    This contribution is concerned with estimating the attenuation and dispersion characteristics of surface waves observed on a shallow seismic record. The analysis is based on an initial parameterization of the phase and attenuation functions, which are then estimated by minimizing a properly defined merit function. To minimize the effect of random noise on the estimates of dispersion and attenuation, we use cross-correlations (in the Fourier domain) of preselected traces from some region of interest along the survey line. These cross-correlations are then expressed in terms of the parameterized attenuation and phase functions and the auto-correlation of the so-called source trace or reference trace. Cross-correlations that enter the optimization are selected so as to provide an average estimate of both the attenuation function and the phase (group) velocity of the area under investigation. The advantage of the method over the standard two-station Fourier technique is that uncertainties related to phase unwrapping and to estimating the number of 2π cycle skips in the phase are eliminated. However, when multiple mode arrivals are observed, it becomes nearly impossible to obtain reliable estimates of the dispersion curves for the different modes using the optimization method alone. To circumvent this limitation, we use the presented approach in conjunction with the wavelet propagation operator (Kulesh et al., 2003), which allows band-pass filtering in the (ω-t) domain to select a particular mode for the minimization. Also, by expressing the cost function in the wavelet domain, the optimization can be performed either with respect to the phase, the modulus of the transform, or a combination of both. This flexibility in the design of the cost function provides an additional means of constraining the optimization results. Results from the application of this dispersion and attenuation analysis method are shown for both synthetic and real 2D shallow seismic data sets. M. Kulesh, M. Holschneider, M. S. Diallo, Q. Xie and F. Scherbaum, Modeling of Wave Dispersion Using Wavelet Transform (Submitted to Pure and Applied Geophysics).

  8. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems in which a quadratic cost integral is minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both when the cost integral ranges over a finite time interval and when it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  9. Multi objective optimization model for minimizing production cost and environmental impact in CNC turning process

    NASA Astrophysics Data System (ADS)

    Widhiarso, Wahyu; Rosyidi, Cucuk Nur

    2018-02-01

    Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop a multi-objective optimization model to minimize the production cost and the environmental impact in the CNC turning process. Cutting speed and feed rate serve as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using Eco-indicator 99. A numerical example is given to show the implementation of the model, solved using OptQuest of the Oracle Crystal Ball software. The results of the optimization indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.
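
    A minimal sketch of such a weighted-sum formulation follows. The cost and impact models, coefficients, weight, and bounds are invented, not the paper's calibrated values, and the remaining constraints (cutting force, output power, surface roughness) would enter as additional inequality constraints.

```python
import numpy as np
from scipy.optimize import minimize

def production_cost(x):
    v, f = x                                  # cutting speed, feed rate
    machining_time = 500.0 / (v * f)          # toy processing-time model
    tool_wear = 1e-5 * v ** 1.8 * f ** 0.6    # toy tooling-cost model
    return machining_time + tool_wear

def environmental_impact(x):
    v, f = x
    return 0.002 * v / f                      # toy eco-indicator model

def weighted_objective(x, w=0.7):
    """Scalarize the two objectives with a designer-chosen weight."""
    return w * production_cost(x) + (1.0 - w) * environmental_impact(x)

res = minimize(weighted_objective, x0=[100.0, 0.2],
               bounds=[(50.0, 300.0), (0.05, 0.5)])
print(res.x)  # optimum cutting speed and feed rate under these toy models
```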

  10. Software Cost Measuring and Reporting. One of the Software Acquisition Engineering Guidebook Series.

    DTIC Science & Technology

    1979-01-02

    ...through the peripherals. However, his interaction is usually minimal since, by definition, the automatic test ... performs its intended functions properly. ... Software estimating is still heavily dependent on experienced judgement. However, quantitative methods ... apply to systems of totally different content. The quantitative guideline ... can be distributed to specialists who are most familiar with the work. ...

  11. Consortium for health and military performance and American College of Sports Medicine Summit: utility of functional movement assessment in identifying musculoskeletal injury risk.

    PubMed

    Teyhen, Deydre; Bergeron, Michael F; Deuster, Patricia; Baumgartner, Neal; Beutler, Anthony I; de la Motte, Sarah J; Jones, Bruce H; Lisman, Peter; Padua, Darin A; Pendergrass, Timothy L; Pyne, Scott W; Schoomaker, Eric; Sell, Timothy C; O'Connor, Francis

    2014-01-01

    Prevention of musculoskeletal injuries (MSKI) is critical in both civilian and military populations to enhance physical performance, optimize health, and minimize health care expenses. Developing a more unified approach through addressing identified movement impairments could result in improved dynamic balance, trunk stability, and functional movement quality while potentially minimizing the risk of incurring such injuries. Although the evidence supporting the utility of injury prediction and return-to-activity readiness screening tools is encouraging, considerable additional research is needed regarding improving sensitivity, specificity, and outcomes, and especially the implementation challenges and barriers in a military setting. If selected current functional movement assessments can be administered in an efficient and cost-effective manner, utilization of the existing tools may be a beneficial first step in decreasing the burden of MSKI, with a subsequent focus on secondary and tertiary prevention via further assessments on those with prior injury history.

  12. Peri-operative interventions producing better functional outcomes and enhanced recovery following total hip and knee arthroplasty: an evidence-based review

    PubMed Central

    2013-01-01

    The increasing numbers of patients undergoing total hip arthroplasty (THA) or total knee arthroplasty (TKA), combined with the rapidly growing repertoire of surgical techniques and interventions available have put considerable pressure on surgeons and other healthcare professionals to produce excellent results with early functional recovery and short hospital stays. The current economic climate and the restricted healthcare budgets further necessitate brief hospitalization while minimizing costs. Clinical pathways and protocols introduced to achieve these goals include a variety of peri-operative interventions to fulfill patient expectations and achieve the desired outcomes. In this review, we present an evidence-based summary of common interventions available to achieve enhanced recovery, reduce hospital stay, and improve functional outcomes following THA and TKA. It covers pre-operative patient education and nutrition, pre-emptive analgesia, neuromuscular electrical stimulation, pulsed electromagnetic fields, peri-operative rehabilitation, modern wound dressings, standard surgical techniques, minimally invasive surgery, and fast-track arthroplasty units. PMID:23406499

  13. Exergoeconomic analysis and optimization of an evaporator for a binary mixture of fluids in an organic Rankine cycle

    NASA Astrophysics Data System (ADS)

    Li, You-Rong; Du, Mei-Tang; Wang, Jian-Ning

    2012-12-01

    This paper focuses on the research of an evaporator with a binary mixture of organic working fluids in the organic Rankine cycle. Exergoeconomic analysis and performance optimization were performed based on the first and second laws of thermodynamics and the exergoeconomic theory. The annual total cost per unit heat transfer rate was introduced as the objective function. In this model, the exergy loss cost caused by the heat transfer irreversibility and the capital cost were taken into account; however, the exergy losses due to the frictional pressure drops, heat dissipation to the surroundings, and the flow imbalance were neglected. The variation laws of the annual total cost with respect to the number of transfer units and the temperature ratios were presented. Optimal design parameters that minimize the objective function were obtained, and the effects of some important dimensionless parameters on the optimal performance were also discussed for three types of evaporator flow arrangements. In addition, optimal design parameters of evaporators were compared with those of condensers.

  14. The Deterministic Information Bottleneck

    NASA Astrophysics Data System (ADS)

    Strouse, D. J.; Schwab, David

    2015-03-01

    A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
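
    For reference, the two objectives can be stated side by side in the notation standard in the IB literature (X the past input, T the compressed representation, Y the future signal, beta a tradeoff parameter); this is a compact restatement consistent with the abstract, not a substitute for the paper's derivation:

        \min_{p(t|x)} \; I(X;T) - \beta\, I(T;Y)    (IB)
        \min_{p(t|x)} \; H(T)   - \beta\, I(T;Y)    (DIB)

    Replacing the mutual information I(X;T) by the entropy H(T) penalizes every bit the representation uses, rather than every bit it shares with the input, which is what drives the optimal encoder from stochastic to deterministic.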

  15. Analyzing the homeland security of the U.S.-Mexico border.

    PubMed

    Wein, Lawrence M; Liu, Yifan; Motskin, Arik

    2009-05-01

    We develop a mathematical optimization model at the intersection of homeland security and immigration that chooses various immigration enforcement decision variables to minimize the probability that a terrorist can successfully enter the United States across the U.S.-Mexico border. Included are a discrete choice model for the probability that a potential alien crosser will attempt to cross the U.S.-Mexico border in terms of the likelihood of success and the U.S. wage for illegal workers; a spatial model that calculates the apprehension probability as a function of the number of crossers, the number of border patrol agents, and the amount of surveillance technology on the border; a queueing model that determines the probability that an apprehended alien will be detained and removed as a function of the number of detention beds; and an equilibrium model for the illegal wage that balances the supply and demand for work and incorporates the impact of worksite enforcement. Our main result is that detention beds are the current system bottleneck (even after the large reduction in detention residence times recently achieved by expedited removal), and increases in border patrol staffing or surveillance technology would not provide any improvements without a large increase in detention capacity. Our model also predicts that surveillance technology is more cost effective than border patrol agents, which in turn are more cost effective than worksite inspectors, but these results are not robust due to the difficulty of predicting human behavior from existing data. Overall, the probability that a terrorist can successfully enter the United States is very high, and it would be extremely costly and difficult to significantly reduce it. We also investigate the alternative objective function of minimizing the flow of illegal aliens across the U.S.-Mexico border, and obtain qualitatively similar results.

  16. Re-Engineering JPL's Mission Planning Ground System Architecture for Cost Efficient Operations in the 21st Century

    NASA Technical Reports Server (NTRS)

    Fordyce, Jess

    1996-01-01

    Work carried out to re-engineer the mission analysis segment of JPL's mission planning ground system architecture is reported on. The aim is to transform the existing software tools, originally developed for specific missions on different support environments, into an integrated, general purpose, multi-mission tool set. The issues considered are: the development of a partnership between software developers and users; the definition of key mission analysis functions; the development of a consensus based architecture; the move towards evolutionary change instead of revolutionary replacement; software reusability, and the minimization of future maintenance costs. The current status and aims of new developments are discussed and specific examples of cost savings and improved productivity are presented.

  17. Optimum swimming pathways of fish spawning migrations in rivers

    USGS Publications Warehouse

    McElroy, Brandon; DeLonay, Aaron; Jacobson, Robert

    2012-01-01

    Fishes that swim upstream in rivers to spawn must navigate complex fluvial velocity fields to arrive at their ultimate locations. One hypothesis with substantial implications is that fish traverse pathways that minimize their energy expenditure during migration. Here we present the methodological and theoretical developments necessary to test this and similar hypotheses. First, a cost function is derived for upstream migration that relates work done by a fish to swimming drag. The energetic cost scales with the cube of a fish's relative velocity integrated along its path. By normalizing to the energy requirements of holding a position in the slowest waters at the path's origin, a cost function is derived that depends only on the physical environment and not on specifics of individual fish. Then, as an example, we demonstrate the analysis of a migration pathway of a telemetrically tracked pallid sturgeon (Scaphirhynchus albus) in the Missouri River (USA). The actual pathway cost is lower than that of 10^5 random paths through the surveyed reach and is consistent with the optimization hypothesis. The implication, subject to more extensive validation, is that reproductive success in managed rivers could be increased through manipulation of reservoir releases or channel morphology to increase abundance of lower-cost migration pathways.
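
    A sketch of the cost functional implied by this description (notation added here for illustration): since drag scales with the square of the relative velocity and power with its cube, the work done along a path, normalized by the cost of holding position in the slowest water at the origin (speed u_0), takes a form like

        C \;=\; \frac{1}{|u_0|^3} \int_{\mathrm{path}} \bigl|\mathbf{v}_{\mathrm{fish}} - \mathbf{v}_{\mathrm{water}}\bigr|^3 \, dt

    which, after normalization, depends on the flow field rather than on fish-specific drag constants. The exact normalization used in the paper may differ.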

  18. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
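
    As an illustration of the constrained formulation (all parameters hypothetical, and a plain truncated normal standing in for the paper's more general truncated density), the sketch below grid-searches the smallest scheduled transit time that meets a target punctuality level; for a cost that increases linearly with the scheduled time, the first feasible schedule is also the cheapest:

        import numpy as np

        def trunc_normal_cdf(x, mu, sigma, lo, hi):
            """CDF of a normal(mu, sigma) truncated to [lo, hi]."""
            from math import erf, sqrt
            phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
            num = phi((x - mu) / sigma) - phi((lo - mu) / sigma)
            den = phi((hi - mu) / sigma) - phi((lo - mu) / sigma)
            return min(max(num / den, 0.0), 1.0)

        def optimal_schedule(mu, sigma, lo, hi, target=0.95, cost_per_min=10.0):
            """Smallest scheduled time s meeting P(transit <= s) >= target."""
            for s in np.linspace(lo, hi, 2001):
                punctuality = trunc_normal_cdf(s, mu, sigma, lo, hi)
                if punctuality >= target:
                    # a linearly increasing cost is minimized by the first
                    # feasible schedule
                    return s, punctuality, cost_per_min * s
            return None

        print(optimal_schedule(mu=30.0, sigma=4.0, lo=20.0, hi=60.0))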

  19. A global approach for using kinematic redundancy to minimize base reactions of manipulators

    NASA Technical Reports Server (NTRS)

    Chung, C. L.; Desa, S.

    1989-01-01

    An important consideration in the use of manipulators in microgravity environments is the minimization of the base reactions, i.e. the magnitude of the force and the moment exerted by the manipulator on its base as it performs its tasks. One approach which was proposed and implemented is to use the redundant degree of freedom in a kinematically redundant manipulator to plan manipulator trajectories to minimize base reactions. A global approach was developed for minimizing the magnitude of the base reactions for kinematically redundant manipulators which integrates the Partitioned Jacobian method of redundancy resolution, a 4-3-4 joint-trajectory representation and the minimization of a cost function which is the time-integral of the magnitude of the base reactions. The global approach was also compared with a local approach developed earlier for the case of point-to-point motion of a three degree-of-freedom planar manipulator with one redundant degree-of-freedom. The results show that the global approach is more effective in reducing and smoothing the base force while the local approach is superior in reducing the base moment.

  20. Due-Window Assignment Scheduling with Variable Job Processing Times

    PubMed Central

    Wu, Yu-Bin

    2015-01-01

    We consider a common due-window assignment scheduling problem with jobs having variable processing times on a single machine, where the processing time of a job is a function of its position in the sequence (i.e., a learning effect) or of its starting time (i.e., a deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745
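
    In generic form, the objective described here can be written as below, where E_j and T_j are the earliness and tardiness of job j, U_j indicates whether job j is tardy, d and D are the due-window starting time (location) and size, and the remaining symbols are cost weights; the paper's exact weighting may differ:

        Z \;=\; \sum_{j=1}^{n} \bigl( \alpha E_j + \beta T_j + w\, U_j \bigr) \;+\; \gamma\, d \;+\; \delta\, D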

  1. Optimal trajectory generation for mechanical arms. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Iemenschot, J. A.

    1972-01-01

    A general method of generating optimal trajectories between an initial and a final position of an n degree of freedom manipulator arm with nonlinear equations of motion is proposed. The method is based on the assumption that the time history of each of the coordinates can be expanded in a series of simple time functions. By searching over the coefficients of the terms in the expansion, trajectories which minimize the value of a given cost function can be obtained. The method has been applied to a planar three degree of freedom arm.

  2. Manufacturing of polylactic acid nanocomposite 3D printer filaments for smart textile applications

    NASA Astrophysics Data System (ADS)

    Hashemi Sanatgar, R.; Cayla, A.; Campagne, C.; Nierstrasz, V.

    2017-10-01

    In this paper, the manufacturing of polylactic acid nanocomposite 3D printer filaments was considered for smart textile applications. The 3D printing process was applied as a novel process for depositing nanocomposites on PLA fabrics, to introduce textile functionalization processes that are more flexible, resource-efficient and cost-effective than conventional printing processes such as screen and inkjet printing. The aim is to develop an integrated or tailored production process for smart and functional textiles which avoids unnecessary use of water, energy and chemicals and minimizes waste, so as to improve the ecological footprint and productivity.

  3. Space shuttle low cost/risk avionics study

    NASA Technical Reports Server (NTRS)

    1971-01-01

    All work breakdown structure elements containing any avionics related effort were examined for pricing the life cycle costs. The analytical, testing, and integration efforts are included for the basic onboard avionics and electrical power systems. The design and procurement of special test equipment and maintenance and repair equipment are considered. Program management associated with these efforts is described. Flight test spares and labor and materials associated with the operations and maintenance of the avionics systems throughout the horizontal flight test are examined. It was determined that cost savings can be achieved by using existing hardware, maximizing orbiter-booster commonality, specifying new equipment to MIL quality standards, basing redundancy on cost-effectiveness analysis, minimizing software complexity and reducing cross strapping and computer-managed functions, utilizing compilers and floating point computers, and evolving the design as dictated by the horizontal flight test schedules.

  4. The Impact of Mission Duration on a Mars Orbital Mission

    NASA Technical Reports Server (NTRS)

    Arney, Dale; Earle, Kevin; Cirillo, Bill; Jones, Christopher; Klovstad, Jordan; Grande, Melanie; Stromgren, Chel

    2017-01-01

    Performance alone is insufficient to assess the total impact of changing mission parameters on a space mission concept, architecture, or campaign; the benefit, cost, and risk must also be understood. This paper examines the impact to benefit, cost, and risk of changing the total mission duration of a human Mars orbital mission. The changes in the sizing of the crew habitat, including consumables and spares, were assessed as a function of duration, including trades of different life support strategies; this was used to assess the impact on transportation system requirements. The impact to benefit is minimal, while the impact on cost is dominated by the increases in transportation costs to achieve shorter total durations. The risk is expected to be reduced by decreasing total mission duration; however, large uncertainty exists around the magnitude of that reduction.

  5. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics leads to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
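
    A minimal sketch of the fixed-interval version of this policy, under a hypothetical degradation model: W(n) averages the accumulated degradation plus the one-time remapping cost over n steps, and the policy remaps at the interval n that minimizes it.

        def remap_interval(step_degradation, remap_cost, horizon):
            """step_degradation(k): extra work at the k-th step since the last
            remap. Returns the interval n minimizing
            W(n) = (sum of degradation over n steps + remap_cost) / n."""
            best_n, best_w = 1, float("inf")
            total = 0.0
            for n in range(1, horizon + 1):
                total += step_degradation(n)
                w = (total + remap_cost) / n
                if w < best_w:
                    best_n, best_w = n, w
            return best_n, best_w

        # Hypothetical example: degradation grows linearly since the last remap.
        print(remap_interval(lambda k: 0.5 * k, remap_cost=40.0, horizon=100))

    With linear growth a*k and remap cost C, the minimizing interval is near sqrt(2C/a), about 13 steps here, which the exhaustive search recovers; the abstract's result that W(n) has at most one minimum is what makes such a search, or an online estimate of it, well behaved.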

  6. Optimization of conventional water treatment plant using dynamic programming.

    PubMed

    Mostafa, Khezri Seyed; Bahareh, Ghafari; Elahe, Dadvar; Pegah, Dadras

    2015-12-01

    In this research, mathematical models describing the capability of the various units, such as rapid mixing, coagulation and flocculation, sedimentation, and rapid sand filtration, are used. Moreover, cost functions based on Clark's formula (Clark, 1982) were used to formulate the conventional water and wastewater treatment plant. By applying a dynamic programming algorithm, a conventional treatment system can then be designed at minimal cost. Applying the model to a case study reduced the annual cost by approximately 4.5-9.5%, depending on the variable limitations considered. Sensitivity analyses and predictions of the system's response were performed for proportional deviations of the parameters from their optimized values. The results indicated that the objective function is most sensitive to (1) the design flow rate (Q), followed by (2) variations in the alum dosage (A) and (3) the sand filter head loss (H). Increasing the inflow by 20% would increase the total annual cost by about 12.6%, while a 20% reduction in inflow leads to a 15.2% decrease. Similarly, a 20% increase in alum dosage causes a 7.1% increase in the total annual cost, while a 20% decrease results in a 7.9% decrease. Changes in the filter head loss likewise cause a 2.95% increase and a 3.39% decrease in the total annual cost of the treatment plant. © The Author(s) 2013.
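
    The stage-wise structure lends itself to a compact dynamic program. The sketch below (costs and removal fractions are hypothetical toy values, far simpler than Clark's cost functions) selects one design option per treatment unit so that an overall removal target is met at minimum annual cost:

        def min_cost_design(stages, target_removal):
            """stages: per unit, a list of (cost, removal_fraction) options.
            Returns the minimal total cost achieving the overall target."""
            states = {1.0: 0.0}  # remaining contaminant fraction -> best cost
            for options in stages:
                nxt = {}
                for remaining, cost in states.items():
                    for opt_cost, removal in options:
                        r = round(remaining * (1.0 - removal), 6)
                        c = cost + opt_cost
                        if c < nxt.get(r, float("inf")):
                            nxt[r] = c
                states = nxt
            feasible = [c for r, c in states.items()
                        if 1.0 - r >= target_removal]
            return min(feasible) if feasible else None

        stages = [
            [(120.0, 0.30), (180.0, 0.45)],  # rapid mix + coagulation
            [(200.0, 0.50), (260.0, 0.60)],  # sedimentation
            [(150.0, 0.70), (210.0, 0.80)],  # rapid sand filtration
        ]
        print(min_cost_design(stages, target_removal=0.95))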

  7. Systematic review of reusable versus disposable laparoscopic instruments: costs and safety.

    PubMed

    Siu, Joey; Hill, Andrew G; MacCormick, Andrew D

    2017-01-01

    The quality of instruments and surgical expertise in minimally invasive surgery has developed markedly in the last two decades. Attention is now being turned to ways to allow surgeons to adopt more cost-effective and environmentally friendly approaches. This review explores current evidence on the cost and environmental impact of reusable versus single-use instruments. In addition, we aim to compare their quality, functionality and associated clinical outcomes. The Medline and EMBASE databases were searched for relevant literature from January 2000 to May 2015. Subject headings were Equipment Reuse/, Disposable Equipment/, Cholecystectomy/, Laparoscopic/, Laparoscopy/, Surgical Instruments/, Medical Waste Disposal/, Waste Management/, Medical Waste/, Environmental Sustainability/ and Sterilization/. There are few objective comparative analyses of single-use versus reusable instruments. Current evidence suggests that limiting the use of disposable instruments to cases of necessity may hold both economic and environmental advantages. Theoretical advantages of single-use instruments in quality, safety, sterility, ease of use and, importantly, patient outcomes have rarely been examined. Cost-saving methods, environmentally friendly methods, global operative costs, hidden costs, sterilization methods and quality assurance systems vary greatly between studies, making it difficult to gain an overview of the comparison between single-use and reusable instruments. Further examination of cost comparisons between disposable and reusable instruments is necessary, while externalized environmental costs, instrument function and safety are also important to consider in future studies. © 2016 Royal Australasian College of Surgeons.

  8. Cost minimizing of cutting process for CNC thermal and water-jet machines

    NASA Astrophysics Data System (ADS)

    Tavaeva, Anastasia; Kurennov, Dmitry

    2015-11-01

    This paper deals with the optimization of the cutting process for CNC thermal and water-jet machines. The accuracy with which the objective function parameters of the optimization problem are calculated is investigated. The paper shows that the working tool path speed is not a constant value; it depends on several parameters described here. Relations for the working tool path speed as a function of the number of NC program frames, the length of straight cuts, and the part configuration are presented. Based on these results, correction coefficients for the working tool speed are defined. Additionally, the optimization problem may be solved using a mathematical model that takes into account the additional restrictions of thermal cutting (choice of piercing and exit points, precedence conditions, thermal deformations). The second part of the paper considers non-standard cutting techniques, which may reduce cutting cost and time compared with standard techniques, and examines the effectiveness of their application. The paper closes with directions for future research.

  9. Control at stability's edge minimizes energetic costs: expert stick balancing

    PubMed Central

    Meyer, Ryan; Zhvanetsky, Max; Ridge, Sarah; Insperger, Tamás

    2016-01-01

    Stick balancing on the fingertip is a complex voluntary motor task that requires the stabilization of an unstable system. For seated expert stick balancers, the time delay is 0.23 s, the shortest stick that can be balanced for 240 s is 0.32 m, and there is a dead zone for the estimation of the vertical displacement angle in the sagittal plane. These observations motivate a switching-type, pendulum–cart model for balance control which uses an internal model to compensate for the time delay by predicting the sensory consequences of the stick's movements. Numerical simulations using the semi-discretization method suggest that the feedback gains are tuned near the edge of stability. For these choices of the feedback gains, the cost function which takes into account the position of the fingertip and the corrective forces is minimized. Thus, expert stick balancers optimize control with a combination of quick manoeuvrability and minimum energy expenditures. PMID:27278361

  10. Brandon/Hill selected list of books and journals for the small medical library.

    PubMed

    Hill, D R

    1999-04-01

    The interrelationship of print and electronic media in the hospital library and its relevance to the "Brandon/Hill Selected List" in 1999 are addressed in the updated list (eighteenth version) of 627 books and 145 journals. This list is intended as a selection guide for the small or medium-size library in a hospital or similar facility. More realistically, it can function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals by an alphabetical title listing. Due to continuing requests from librarians, a "minimal core" book collection consisting of 82 titles has been pulled out from the 214 asterisked (*) initial-purchase books and marked with daggers (†). To purchase the entire collection of books and to pay for 1999 journal subscriptions would require $114,900. The cost of only the asterisked items, books and journals, totals $49,100. The "minimal core" book collection costs $13,200.

  11. Porous graphitic carbon nitride synthesized via direct polymerization of urea for efficient sunlight-driven photocatalytic hydrogen production

    NASA Astrophysics Data System (ADS)

    Zhang, Yuewei; Liu, Jinghai; Wu, Guan; Chen, Wei

    2012-08-01

    Energy captured directly from sunlight provides an attractive approach towards fulfilling the need for green energy resources on the terawatt scale with minimal environmental impact. Collecting and storing solar energy into fuel through photocatalyzed water splitting to generate hydrogen in a cost-effective way is desirable. To achieve this goal, low cost and environmentally benign urea was used to synthesize the metal-free photocatalyst graphitic carbon nitride (g-C3N4). A porous structure is achieved via one-step polymerization of the single precursor. The porous structure with increased BET surface area and pore volume shows a much higher hydrogen production rate under simulated sunlight irradiation than thiourea-derived and dicyanamide-derived g-C3N4. The presence of an oxygen atom is presumed to play a key role in adjusting the textural properties. Further improvement of the photocatalytic function can be expected with after-treatment due to its rich chemistry in functionalization.

  12. International Space Station (ISS) Advanced Recycle Filter Tank Assembly (ARFTA)

    NASA Technical Reports Server (NTRS)

    Nasrullah, Mohammed K.

    2013-01-01

    The International Space Station (ISS) Recycle Filter Tank Assembly (RFTA) provides the following three primary functions for the Urine Processor Assembly (UPA): volume for concentrating/filtering pretreated urine, filtration of product distillate, and filtration of the Pressure Control and Pump Assembly (PCPA) effluent. Under nominal operations, the RFTAs are to be replaced every 30 days. This poses a significant logistical resupply problem, as well as costs in upmass and the purchase of new tanks; in addition, it requires a significant amount of crew time. To address these challenges, NASA required Boeing to develop a design which eliminated the logistics and upmass issues and minimized recurring costs. Boeing developed the Advanced Recycle Filter Tank Assembly (ARFTA), which allows the tanks to be emptied on orbit into disposable tanks, eliminating the need to bring the fully loaded tanks back to Earth for refurbishment and relaunch and thereby saving several hundred pounds of upmass and its associated costs. The ARFTA will replace the RFTA by providing the same functionality with reduced resupply requirements.

  13. Correlation and Stacking of Relative Paleointensity and Oxygen Isotope Data

    NASA Astrophysics Data System (ADS)

    Lurcock, P. C.; Channell, J. E.; Lee, D.

    2012-12-01

    The transformation of a depth-series into a time-series is routinely implemented in the geological sciences. This transformation often involves correlation of a depth-series to an astronomically calibrated time-series. Eyeball tie-points with linear interpolation are still regularly used, although these have the disadvantages of being non-repeatable and not based on firm correlation criteria. Two automated correlation methods are compared: the simulated annealing algorithm (Huybers and Wunsch, 2004) and the Match protocol (Lisiecki and Lisiecki, 2002). Simulated annealing seeks to minimize energy (cross-correlation) as "temperature" is slowly decreased. The Match protocol divides records into intervals, applies penalty functions that constrain accumulation rates, and minimizes the sum of the squares of the differences between two series while maintaining the data sequence in each series. Paired relative paleointensity (RPI) and oxygen isotope records, such as those from IODP Site U1308 and/or reference stacks such as LR04 and PISO, are warped using known warping functions, and then the un-warped and warped time-series are correlated to evaluate the efficiency of the correlation methods. Correlations are performed in tandem to simultaneously optimize RPI and oxygen isotope data. Noise spectra are introduced at differing levels to determine correlation efficiency as noise levels change. A third potential method, known as dynamic time warping, involves minimizing the sum of distances between correlated point pairs across the whole series. A "cost matrix" between the two series is analyzed to find a least-cost path through the matrix. This least-cost path is used to nonlinearly map the time/depth of one record onto the depth/time of another. Dynamic time warping can be expanded to more than two dimensions and used to stack multiple time-series. This procedure can improve on arithmetic stacks, which often lose coherent high-frequency content during the stacking process.
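
    The least-cost-path idea is the classic dynamic-time-warping recursion. A compact sketch using squared difference as the local cost (a geological application would substitute penalties on accumulation rates and multi-proxy distances):

        def dtw(a, b):
            """Least cumulative cost of aligning series a to series b while
            preserving order, i.e. the value of the least-cost path through
            the cost matrix."""
            n, m = len(a), len(b)
            INF = float("inf")
            D = [[INF] * (m + 1) for _ in range(n + 1)]
            D[0][0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = (a[i - 1] - b[j - 1]) ** 2
                    D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                         D[i][j - 1],      # stretch b
                                         D[i - 1][j - 1])  # match
            return D[n][m]

        print(dtw([0.0, 1.0, 2.0, 1.0], [0.0, 0.9, 2.1, 2.0, 1.0]))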

  14. Computer Support for Conducting Supportability Trade-Offs in a Team Setting

    DTIC Science & Technology

    1990-01-01

    maintenance visits, and spares costs. To minimize the total system LCC, which includes both acquisition and support costs, a method for obtaining the... from different departments with a range of skills to work for a common goal is not an easy task. Ignoring the logistical concerns, a fundamental problem...

  15. Current Developments in Cost Accounting/Performance Measuring Systems for Implementing Advanced Manufacturing Technology

    DTIC Science & Technology

    1989-11-01

    incomplete accounting of benefits, few strategic projects will be adopted. Nanni, et al. [21] provide similar discussion regarding a benefit analysis in... management tends to ignore the fact that minimizing costs within departments does not guarantee minimization of overall costs (Nanni [21]). Sullivan, et... changes in the manufacturing environment. The author also remarks that these cost systems need to be modified or replaced by entirely new systems

  16. Cost and effectiveness of lung lobectomy by video-assisted thoracic surgery for lung cancer

    PubMed Central

    Mafé, Juan J.; Planelles, Beatriz; Asensio, Santos; Cerezal, Jorge; Inda, María-del-Mar; Lacueva, Javier; Esteban, Maria-Dolores; Hernández, Luis; Martín, Concepción; Baschwitz, Benno

    2017-01-01

    Background Video-assisted thoracic surgery (VATS) emerged as a minimally invasive surgery for diseases in the field of thoracic surgery. We herein reviewed our experience with thoracoscopic lobectomy for early lung cancer and evaluated health system use. Methods A cost-effectiveness study was performed comparing VATS vs. open thoracic surgery (OPEN) for lung cancer patients. Demographic data, tumor localization, dynamic pulmonary function tests [forced vital capacity (FVC), forced expiratory volume in one second (FEV1), diffusion capacity (DLCO) and maximal oxygen uptake (VO2max)], surgical approach, postoperative details, and complications were recorded and analyzed. Results One hundred seventeen patients underwent lung resection by VATS (n=42, 36%; age: 63±9 years old, 57% males) or OPEN (n=75, 64%; age: 61±11 years old, 73% males). Pulmonary function tests decreased just after surgery, with a parallel increasing tendency during the first 12 months. The VATS group tended to recover FEV1 and FVC more quickly, with significantly fewer clinical and post-surgical complications (31% vs. 53%, P=0.015). Costs, including surgery and associated hospital stay, complications and costs in the 12 months after surgery, were significantly lower for VATS (P<0.05). Conclusions The VATS approach allowed earlier recovery at a lower cost than OPEN, with a better cost-effectiveness profile. PMID:28932560

  17. A toxicity cost function approach to optimal CPA equilibration in tissues.

    PubMed

    Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali

    2018-02-01

    There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10⁻⁶ cm²/s, 1.2 × 10⁻⁶ cm²/s and 1.3 × 10⁻⁶ cm²/s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. The cost of misremembering: Inferring the loss function in visual working memory.

    PubMed

    Sims, Chris R

    2015-03-04

    Visual working memory (VWM) is a highly limited storage system. A basic consequence of this fact is that visual memories cannot perfectly encode or represent the veridical structure of the world. However, in natural tasks, some memory errors might be more costly than others. This raises the intriguing possibility that the nature of memory error reflects the costs of committing different kinds of errors. Many existing theories assume that visual memories are noise-corrupted versions of afferent perceptual signals. However, this additive noise assumption oversimplifies the problem. Implicit in the behavioral phenomena of visual working memory is the concept of a loss function: a mathematical entity that describes the relative cost to the organism of making different types of memory errors. An optimally efficient memory system is one that minimizes the expected loss according to a particular loss function, while subject to a constraint on memory capacity. This paper describes a novel theoretical framework for characterizing visual working memory in terms of its implicit loss function. Using inverse decision theory, the empirical loss function is estimated from the results of a standard delayed recall visual memory experiment. These results are compared to the predicted behavior of a visual working memory system that is optimally efficient for a previously identified natural task, gaze correction following saccadic error. Finally, the approach is compared to alternative models of visual working memory, and shown to offer a superior account of the empirical data across a range of experimental datasets. © 2015 ARVO.
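
    The "optimally efficient" system described here can be summarized as a rate-distortion-style program (notation added for illustration: X the stimulus, X-hat the recalled value, L the loss function, C the capacity limit):

        \min_{p(\hat{x} \mid x)} \; \mathbb{E}\bigl[ L(X, \hat{X}) \bigr] \quad \text{subject to} \quad I(X; \hat{X}) \le C

    Inverse decision theory then runs this program in reverse: given the observed error distribution, infer the loss function L under which that distribution would be the constrained optimum.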

  19. Emerging microengineering tools for functional analysis and phenotyping of blood cells

    PubMed Central

    Li, Xiang; Chen, Weiqiang; Li, Zida; Li, Ling; Gu, Hongchen; Fu, Jianping

    2014-01-01

    The available techniques for assessing blood cell functions are limited considering the various types of blood cells and their diverse functions. In the past decade, rapid advancement in microengineering has enabled an array of blood cell functional measurements that are difficult or impossible to achieve using conventional bulk platforms. Such miniaturized blood cell assay platforms also provide attractive capabilities of reducing chemical consumption, cost, and assay time, as well as exciting opportunities for device integration, automation, and assay standardization. This review summarizes these contemporary microengineering tools and discusses their promising potential for constructing accurate in vitro models and rapid clinical diagnosis using minimal amounts of whole blood. PMID:25283971

  20. Min-Cut Based Segmentation of Airborne LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Ural, S.; Shan, J.

    2012-07-01

    Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for unstructured 3-D point clouds. The edges of the graph that connect the points with each other and with nodes representing feature clusters hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP-hard minimization problem in low-order polynomial time. We test our method with an airborne lidar point cloud acquired with a maximum planned post spacing of 1.4 m and a vertical accuracy of 10.5 cm RMSE. We present the effects of neighborhood and feature determination on the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm, as well as its sensitivity to the parameters of the smoothness and data cost functions. We find that a smoothness cost that considers only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information within the energy function by assigning costs based on local properties may help to achieve a better representation for segmentation.
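
    The energy being minimized has the standard pairwise form from the pixel-labeling literature, with P the set of points, N the neighborhood system, D_p the data cost of assigning point p the label f_p (consistency with the feature clusters), and V_pq the smoothness cost on neighboring points:

        E(f) \;=\; \sum_{p \in P} D_p(f_p) \;+\; \sum_{(p,q) \in \mathcal{N}} V_{pq}(f_p, f_q)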

  1. Enhanced visualization of the bile duct via parallel white light and indocyanine green fluorescence laparoscopic imaging

    NASA Astrophysics Data System (ADS)

    Demos, Stavros G.; Urayama, Shiro

    2014-03-01

    Despite best efforts, bile duct injury during laparoscopic cholecystectomy remains a major potential complication. A precise method for detecting the extrahepatic bile duct during laparoscopic procedures would minimize the risk of injury. Towards this goal, we have developed compact imaging instrumentation designed to enable simultaneous acquisition of conventional white-light color and NIR fluorescence endoscopic/laparoscopic imaging using ICG as the contrast agent. The capabilities of this system, which offers optimized sensitivity and functionality, are demonstrated for the detection of the bile duct in an animal model. This design could also provide a low-cost, real-time surgical navigation capability to enhance the efficacy of a variety of other image-guided minimally invasive procedures.

  2. A variational data assimilation system for the range dependent acoustic model using the representer method: Theoretical derivations.

    PubMed

    Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent

    2017-07-01

    This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely, the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the principle of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here and, for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation of simulated acoustic pressure observations.
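
    In generic form, such a weighted least-squares cost function has the familiar variational structure below, with u the acoustic state, u_b a background estimate, y the observed pressures, H the observation operator, and B and R the background and observation error covariances (this is the standard form, not necessarily the paper's exact notation):

        J(u) \;=\; (u - u_b)^{\mathsf{T}} B^{-1} (u - u_b) \;+\; \bigl( H(u) - y \bigr)^{\mathsf{T}} R^{-1} \bigl( H(u) - y \bigr)

    The tangent linear and adjoint models of the RAM are what make the gradient of J computable at a cost comparable to a few forward model runs.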

  3. Cost effectiveness of robotic mitral valve surgery.

    PubMed

    Moss, Emmanuel; Halkos, Michael E

    2017-01-01

    Significant technological advances have led to an impressive evolution in mitral valve surgery over the last two decades, allowing surgeons to safely perform less invasive operations through the right chest. Most new technology comes with an increased upfront cost that must be measured against postoperative savings and other advantages such as decreased perioperative complications, faster recovery, and earlier return to preoperative level of functioning. The Da Vinci robot is an example of such a technology, combining the significant benefits of minimally invasive surgery with a "gold standard" valve repair. Although some have reported that robotic surgery is associated with increased overall costs, there is literature suggesting that efficient perioperative care and shorter lengths of stay can offset the increased capital and intraoperative expenses. While data on current cost is important to consider, one must also take into account future potential value resulting from technological advancement when evaluating cost-effectiveness. Future refinements that will facilitate more effective surgery, coupled with declining cost of technology will further increase the value of robotic surgery compared to traditional approaches.

  4. Quantum generalisation of feedforward neural networks

    NASA Astrophysics Data System (ADS)

    Wan, Kwok Ho; Dahlsten, Oscar; Kristjánsson, Hlér; Gardner, Robert; Kim, M. S.

    2017-09-01

    We propose a quantum generalisation of a classical neural network. The classical neurons are firstly rendered reversible by adding ancillary bits. Then they are generalised to being quantum reversible, i.e., unitary (the classical networks we generalise are called feedforward, and have step-function activation functions). The quantum network can be trained efficiently using gradient descent on a cost function to perform quantum generalisations of classical tasks. We demonstrate numerically that it can: (i) compress quantum states onto a minimal number of qubits, creating a quantum autoencoder, and (ii) discover quantum communication protocols such as teleportation. Our general recipe is theoretical and implementation-independent. The quantum neuron module can naturally be implemented photonically.

  5. A Two-Dimensional Variational Analysis Method for NSCAT Ambiguity Removal: Methodology, Sensitivity, and Tuning

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)

    2001-01-01

    In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. The 2d-VAR method, its sensitivity, and its numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low-probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps, or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.

  6. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms which have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
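
    As a sketch of how a Cimmino-type iteration treats a feasibility problem of this kind (toy matrices, not a dosimetry model): each violated half-space constraint a_i . x >= b_i contributes its orthogonal projection, and the iterate moves by the average of those projections.

        import numpy as np

        def cimmino(A, b, x0, relax=1.0, iters=500):
            """Averaged simultaneous projections onto the half-spaces
            a_i . x >= b_i (rows of A)."""
            x = x0.astype(float).copy()
            m = A.shape[0]
            row_norm2 = (A * A).sum(axis=1)
            for _ in range(iters):
                violation = np.maximum(b - A @ x, 0.0)  # > 0 where violated
                x += (relax / m) * (A.T @ (violation / row_norm2))
            return x

        # Toy feasibility problem: x >= 1, y >= 1, x + y >= 3.
        A = np.array([[1.0, 0.0],
                      [0.0, 1.0],
                      [1.0, 1.0]])
        b = np.array([1.0, 1.0, 3.0])
        x = cimmino(A, b, x0=np.zeros(2))
        print(x, A @ x >= b - 1e-6)

    When the constraints cannot all be met, projection schemes of this family settle at a compromise point minimizing a weighted proximity function, which is the infeasible-case behavior the comparison in the abstract concerns.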

  7. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  8. Efficient data communication protocols for wireless networks

    NASA Astrophysics Data System (ADS)

    Zeydan, Engin

    In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in energy consumption and throughput maximization problems separately, using multi-hop data aggregation for correlated data in wireless sensor networks. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges iteratively in a finite number of steps. For throughput maximization, we consider both the interference distribution across the network and correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge in a finite number of steps using best response dynamics. For multiple antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference and noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
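
    Not the dissertation's algorithms, but a classical illustration of the same idea of distributed power adaptation toward a target received SINR (a Foschini-Miljanic-style iteration with toy channel gains): each link repeatedly scales its own power by the ratio of target to measured SINR, and for feasible targets the iteration converges to the minimum-total-power solution.

        import numpy as np

        def power_control(G, noise, gamma_target, iters=100):
            """G[i, j]: channel gain from transmitter j to receiver i."""
            n = G.shape[0]
            p = np.ones(n)
            for _ in range(iters):
                interference = G @ p - np.diag(G) * p + noise
                sinr = np.diag(G) * p / interference
                p = p * gamma_target / sinr  # each link acts on local data only
            interference = G @ p - np.diag(G) * p + noise
            return p, np.diag(G) * p / interference

        G = np.array([[1.0, 0.1, 0.2],
                      [0.1, 1.0, 0.1],
                      [0.2, 0.1, 1.0]])
        p, sinr = power_control(G, noise=0.1, gamma_target=2.0)
        print(p, sinr)  # sinr converges to the 2.0 target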

  9. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  10. Multiobjective sampling design for parameter estimation and model discrimination in groundwater solute transport

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1989-01-01

    Sampling design for site characterization studies of solute transport in porous media is formulated as a multiobjective problem. Optimal design of a sampling network is a sequential process in which the next phase of sampling is designed on the basis of all available physical knowledge of the system. Three objectives are considered: model discrimination, parameter estimation, and cost minimization. For the first two objectives, physically based measures of the value of information obtained from a set of observations are specified. In model discrimination, value of information of an observation point is measured in terms of the difference in solute concentration predicted by hypothesized models of transport. Points of greatest difference in predictions can contribute the most information to the discriminatory power of a sampling design. Sensitivity of solute concentration to a change in a parameter contributes information on the relative variance of a parameter estimate. Inclusion of points in a sampling design with high sensitivities to parameters tends to reduce variance in parameter estimates. Cost minimization accounts for both the capital cost of well installation and the operating costs of collection and analysis of field samples. Sensitivities, discrimination information, and well installation and sampling costs are used to form coefficients in the multiobjective problem in which the decision variables are binary (zero/one), each corresponding to the selection of an observation point in time and space. The solution to the multiobjective problem is a noninferior set of designs. To gain insight into effective design strategies, a one-dimensional solute transport problem is hypothesized. Then, an approximation of the noninferior set is found by enumerating 120 designs and evaluating objective functions for each of the designs. Trade-offs between pairs of objectives are demonstrated among the models. The value of an objective function for a given design is shown to correspond to the ability of a design to actually meet an objective.

  11. Economics of gynecologic morcellation.

    PubMed

    Bortoletto, Pietro; Friedman, Jaclyn; Milad, Magdy P

    2018-02-01

    As the Food and Drug Administration raised concern over the power morcellator in 2014, the field has seen significant change, with patients and physicians questioning which procedure is safest and most cost-effective. The economic impact of these decisions is poorly understood. Multiple new technologies have been developed to allow surgeons to continue to afford patients the many benefits of minimally invasive surgery while minimizing the risks of power morcellation. At the same time, researchers have focused on the true benefits of the power morcellator from a safety and cost perspective, and consistently found that with careful patient selection, by preventing laparotomies, it can be a cost-effective tool. Changes since 2014 have resulted in new techniques and technologies to allow these minimally invasive procedures to continue to be offered in a safe manner. With this rapid change, physicians are altering their practice and patients are attempting to educate themselves to decide what is best for them. This evolution has allowed us to refocus on the cost implications of new developments, allowing stakeholders the opportunity to maximize patient safety and surgical outcomes while minimizing cost.

  12. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning exploit properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).

  13. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
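
    As a rough illustration of the expected-total-cost figure of merit, the sketch below treats a randomized solver as a black box with a per-call cost and estimates, by resampling, the expected cost of taking the best of n calls. The output distribution, the cost value, and the fixed-n simplification of the sequential stopping rule are all assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical randomized solver: each call costs c and returns an objective
# value; a lognormal is used as a stand-in for the solver's output distribution.
def solver_call():
    return rng.lognormal(mean=0.0, sigma=1.0)

c = 0.05                                  # cost per call, in objective units
samples = np.array([solver_call() for _ in range(20000)])

def expected_total_cost(n, trials=2000):
    """Estimate E[best-of-n objective] + n * c by resampling past calls."""
    best = samples[rng.integers(0, samples.size, size=(trials, n))].min(axis=1)
    return best.mean() + n * c

costs = {n: expected_total_cost(n) for n in range(1, 60)}
n_star = min(costs, key=costs.get)
print("optimal calls:", n_star, "expected total cost:", round(costs[n_star], 3))
```

    The minimizing n plays the role of the optimal number of calls; the paper's sequential stopping rule refines this by deciding after each call whether to continue.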

  14. Make or buy analysis model based on tolerance allocation to minimize manufacturing cost and fuzzy quality loss

    NASA Astrophysics Data System (ADS)

    Rosyidi, C. N.; Puspitoingrum, W.; Jauhari, W. A.; Suhardi, B.; Hamada, K.

    2016-02-01

The specification of tolerances has a significant impact on product quality and final production cost. A company should pay careful attention to component and product tolerances so that it can produce a good-quality product at the lowest cost. Tolerance allocation has been widely used to solve the problem of selecting a particular process or supplier. Before entering the selection process, however, the company must first analyse whether a component should be made in house (make), purchased from a supplier (buy), or sourced through a combination of both. This paper discusses an optimization model of process and supplier selection that minimizes manufacturing cost and fuzzy quality loss. The model can also be used to determine the allocation of components to the selected processes or suppliers. Tolerance, process capability, and production capacity are three important constraints that affect the decision. A fuzzy quality loss function is used to describe the semantics of quality, in which the product quality level is divided into several grades. The implementation of the proposed model is demonstrated by solving a numerical example involving a simple assembly product consisting of three components. A metaheuristic approach, implemented in the OptQuest software from Oracle Crystal Ball, was used to obtain the optimal solution of the numerical example.
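
    A stripped-down version of the make/buy selection can be written as plain enumeration. The sketch below scores every make/buy combination by manufacturing cost plus a crisp Taguchi-style quadratic quality loss, standing in for the paper's fuzzy quality loss, under an assembly tolerance stack-up constraint; all figures are invented for illustration.

```python
import itertools

# Hypothetical options per component: (label, manufacturing cost, tolerance).
options = {
    "c1": [("make", 5.0, 0.02), ("buy", 4.0, 0.03)],
    "c2": [("make", 7.0, 0.01), ("buy", 6.5, 0.02)],
    "c3": [("make", 3.0, 0.04), ("buy", 3.5, 0.03)],
}
k = 4000.0          # quality-loss coefficient (illustrative)
stack_limit = 0.08  # assembly tolerance stack-up constraint

best = None
for combo in itertools.product(*options.values()):
    if sum(tol for _, _, tol in combo) > stack_limit:
        continue    # violates the assembly tolerance constraint
    # Total cost = manufacturing cost + quadratic (Taguchi-style) quality loss.
    total = sum(cost + k * tol ** 2 for _, cost, tol in combo)
    if best is None or total < best[0]:
        best = (total, [c[0] for c in combo])

print("min total cost:", round(best[0], 2), "decisions:", best[1])
```

    With real data, the enumeration would be replaced by the paper's metaheuristic search, but the structure of the objective and constraints is the same.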

  15. An Alternative Humans to Mars Approach: Reducing Mission Mass with Multiple Mars Flyby Trajectories and Minimal Capability Investments

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Jedrey, Richard; Landau, Damon; Ocampo, Cesar

    2015-01-01

Mars flyby trajectories and Earth return trajectories have the potential to enable lower-cost and sustainable human exploration of Mars. Flyby and return trajectories are true minimum-energy paths with low to zero post-Earth-departure maneuvers. By placing the large crew vehicles required for human transit on these paths, the total fuel cost can be reduced. The traditional full-up repeating Earth-Mars-Earth cycler concept requires significant infrastructure, but a Mars-only flyby approach minimizes mission mass and maximizes opportunities to build up missions in a stepwise manner. In this paper multiple strategies for sending a crew of 4 to Mars orbit and back are examined. With pre-emplaced assets in Mars orbit, a transit habitat, and a minimally functional Mars taxi, a complete Mars mission can be accomplished in 3 SLS launches and 2 Mars flybys, including Orion. While some years are better than others, ample opportunities exist within a given 15-year Earth-Mars alignment cycle. By building up a mission cadence over time, this approach can translate to Mars surface access. Risk, which is always a concern for human missions, is mitigated by the use of flybys with Earth-return capability (some of which are true free returns).

  16. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    PubMed

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
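
    The ICER arithmetic underlying figures like the $13,313/QALY above is straightforward: the incremental cost is divided by the incremental effectiveness. The inputs below are illustrative stand-ins chosen to land near the reported figure, not the trial's actual cost and utility model.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical arms: the new treatment costs $9,852 more and yields 0.74
# more QALYs, giving roughly the per-QALY figure quoted in the abstract.
print(round(icer(cost_new=25000.0, qaly_new=1.20,
                 cost_old=15148.0, qaly_old=0.46)))   # -> 13314
```

    An intervention is judged cost-effective when this ratio falls below a willingness-to-pay threshold, which is why the probabilistic analysis above reports its confidence interval against commonly selected thresholds.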

  18. Does Treatment Adherence Therapy reduce expense of healthcare use in patients with psychotic disorders? Cost-minimization analysis in a randomized controlled trial.

    PubMed

Gilden, J; Staring, A B P; van der Gaag, M; Mulder, C L

    2011-12-01

Adherence interventions in psychotic disorders have produced mixed results. Even when an intervention improved adherence, benefits to patients were unclear. Treatment Adherence Therapy (TAT) also improved adherence relative to Treatment As Usual (TAU), but it had no effects on symptoms or quality of life. TAT may or may not reduce healthcare costs. To determine whether TAT reduces the use of healthcare resources, and thus healthcare costs. Randomized controlled trial of TAT versus TAU with 98 patients. Interviews were conducted at baseline (T0), six months later, when TAT had been completed (T1), and at six-month follow-up (T2). We used admission data and part of the Trimbos/iMTA questionnaire for Costs associated with Psychiatric Illness (TiC-P). We compared total costs in the TAT group with those in the control group using multivariate analysis of covariance. TAT did not significantly reduce total costs. In the TAT group, the mean one-year health-treatment cost per patient (including TAT sessions) was € 23 003.64 (SD=19 317.95), whereas in the TAU group it was € 22 489.88 (SD=25 224.57) (F(1)=.652, p=.42). However, there were two significant differences at item level, both with higher costs for the TAU group: psychiatric nurse contacts and legal proceedings for court-ordered admissions. Because TAT did not reduce total healthcare costs, it did not contribute to cost minimization. Its benefits are therefore questionable. No other adherence intervention has included analysis of cost-effectiveness or cost-minimization. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Proactive replica checking to assure reliability of data in cloud storage with minimum replication

    NASA Astrophysics Data System (ADS)

    Murarka, Damini; Maheswari, G. Uma

    2017-11-01

The two major issues for cloud storage systems are data reliability and storage cost. For data reliability protection, the multi-replica replication strategy mostly used in current clouds incurs huge storage consumption, leading to a large storage cost, particularly for data-intensive applications in the cloud. This paper presents a cost-efficient data reliability mechanism named PRCR to cut back cloud storage consumption. PRCR ensures the reliability of large volumes of cloud data with minimum replication, and can also serve as a cost-effective benchmark for replication. Our evaluation shows that, compared to the conventional three-replica approach, PRCR can consume as little as one-third of the cloud storage, or less, hence considerably reducing the cloud storage cost.

  20. Minimization of bovine tuberculosis control costs in US dairy herds

    PubMed Central

    Smith, Rebecca L.; Tauer, Loren W.; Schukken, Ynte H.; Lu, Zhao; Grohn, Yrjo T.

    2013-01-01

The objective of this study was to minimize the cost of controlling an isolated bovine tuberculosis (bTB) outbreak in a US dairy herd, using a stochastic simulation model of bTB with economic and biological layers. A model optimizer produced a control program that required 2-month testing intervals (TI) with 2 negative whole-herd tests to leave quarantine. This control program minimized both farm and government costs. In all cases, test-and-removal costs were lower than depopulation costs, although the variability in costs increased for farms with high holding costs or small herd sizes. Increasing herd size significantly increased costs for both the farm and the government, while increasing indemnity payments significantly decreased farm costs and increasing testing costs significantly increased government costs. Based on the results of this model, we recommend 2-month testing intervals for herds after an outbreak of bovine tuberculosis, with 2 negative whole-herd tests being sufficient to lift quarantine. A prolonged test-and-cull program may cause a state to lose its bTB-free status during the testing period. When the cost of losing bTB-free status is greater than $1.4 million, depopulation of farms could be preferred over a test-and-cull program. PMID:23953679

  1. Reducing robotic prostatectomy costs by minimizing instrumentation.

    PubMed

    Delto, Joan C; Wayne, George; Yanes, Rafael; Nieder, Alan M; Bhandari, Akshay

    2015-05-01

Since the introduction of robotic surgery for radical prostatectomy, the cost-benefit of this technology has been under scrutiny. While robotic surgery professes to offer multiple advantages, including reduced blood loss, reduced length of stay, and expedient recovery, the associated costs tend to be significantly higher, secondary to the fixed cost of the robot as well as the variable costs associated with instrumentation. This study provides a simple framework for the careful consideration of costs during the selection of equipment and materials. Two experienced robotic surgeons at our institution as well as several at other institutions were queried about their preferred instrument usage for robot-assisted prostatectomy. Costs of instruments and materials were obtained and clustered by type and price. A minimal set of instruments was identified and compared against alternative instrumentation. A retrospective review of 125 patients who underwent robotically assisted laparoscopic prostatectomy for prostate cancer at our institution was performed to compare estimated blood loss (EBL), operative times, and intraoperative complications for both surgeons. Our surgeons now conceptualize instrument costs as proportional changes to the cost of the baseline minimal combination. Robotic costs at our institution were reduced by eliminating an energy source like the Ligasure or vessel sealer, exploiting instrument versatility, and utilizing inexpensive tools such as Hem-o-lok clips. Such modifications reduced surgeon 1's instrumentation costs to approximately 40% less than surgeon 2's and up to 32% less than the instrumentation used by surgeons at other institutions. Surgeon 1's combination may not be optimal for all robotic surgeons; however, it establishes a minimally viable toolbox for our institution through a rudimentary cost analysis. A similar analysis may aid others in better conceptualizing long-term costs not as nominal, often unwieldy prices, but as percent changes in spending. With regard to intraoperative outcomes, the use of a minimally viable toolbox did not result in increased EBL, operative time, or intraoperative complications. Simple changes to surgeon preference and creative utilization of instruments can eliminate 40% of costs incurred on robotic instruments alone. Moreover, EBL, operative times, and intraoperative complications are not compromised as a result of cost reduction. Our process of identifying such improvements is straightforward and may be replicated by other robotic surgeons. Further prospective multicenter trials should be initiated to assess other methods of cost reduction.

  2. Attractor neural networks with resource-efficient synaptic connectivity

    NASA Astrophysics Data System (ADS)

    Pehlevan, Cengiz; Sengupta, Anirvan

Memories are thought to be stored in the attractor states of recurrent neural networks. Here we explore how resource constraints interplay with memory storage function to shape the synaptic connectivity of attractor networks. We propose that, given a set of memories in the form of population activity patterns, the neural circuit chooses a synaptic connectivity configuration that minimizes a resource usage cost. We argue that the total synaptic weight (l1-norm) in the network measures the resource cost because synaptic weight is correlated with synaptic volume, which is a limited resource, and is proportional to neurotransmitter release and post-synaptic current, both of which cost energy. Using numerical simulations and replica theory, we characterize optimal connectivity profiles in resource-efficient attractor networks. Our theory explains several experimental observations on cortical connectivity profiles: (1) connectivity is sparse, because synapses are costly; (2) bidirectional connections are overrepresented; and (3) bidirectional connections are stronger, because attractor states need strong recurrence.
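
    A toy version of the l1-cost idea can be posed as a linear program per neuron: store the patterns as stable states (with margin kappa under sign dynamics) at minimum total synaptic weight. This sketch is not the authors' replica calculation; the pattern count, the margin, and the stability condition are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, P, kappa = 40, 5, 1.0
xi = rng.choice([-1.0, 1.0], size=(P, N))        # patterns to store

W = np.zeros((N, N))
for i in range(N):
    idx = [j for j in range(N) if j != i]        # no self-connection
    A = xi[:, idx] * xi[:, [i]]                  # sign-corrected inputs, (P, N-1)
    # minimize sum(u + v) s.t. A @ (u - v) >= kappa, u, v >= 0, W_ij = u_j - v_j
    c = np.ones(2 * (N - 1))
    res = linprog(c, A_ub=np.hstack([-A, A]), b_ub=-kappa * np.ones(P))
    assert res.success
    W[i, idx] = res.x[:N - 1] - res.x[N - 1:]

print("fraction of nonzero synapses:", np.mean(np.abs(W) > 1e-9))
```

    Because a basic LP solution has at most P active variables per row, the minimal-l1 connectivity comes out sparse, echoing the first observation above.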

  3. Spacelab cost reduction alternatives study. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1976-01-01

Alternative approaches to payload operations planning and control and flight crew training are defined for Spacelab payloads with the goals of lowering FY77 and FY78 costs for new starts, lowering the costs to achieve Spacelab operational capability, and minimizing the cost per Spacelab flight. These alternatives attempt to minimize duplication of hardware, software, and personnel, and the investment in supporting facilities and equipment. Of particular importance is the possible reduction of equipment, software, and manpower resources such as computational systems, trainers, and simulators.

  4. Kullback-Leibler Divergence-Based Differential Evolution Markov Chain Filter for Global Localization of Mobile Robots.

    PubMed

    Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores

    2015-09-16

    One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
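
    The asymmetric cost can be illustrated with a toy range scan. The sketch below normalizes the measured and predicted scans into distributions and scores a candidate pose by their KL divergence; the scan values and this particular normalization are assumptions of the sketch, not the paper's exact formulation.

```python
import numpy as np

def kl_cost(measured, predicted, eps=1e-9):
    """Kullback-Leibler divergence between a measured range scan and the scan
    predicted from a candidate pose, after normalizing both to distributions.
    Asymmetric by construction: unexpectedly short readings (occlusions by
    unmodeled obstacles) are penalized differently from unexpectedly long ones."""
    p = np.asarray(measured, float) + eps
    q = np.asarray(predicted, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# A pose whose predicted scan matches the measurement scores near zero; a
# dynamic obstacle shortening one beam raises the cost only moderately.
measured  = np.array([2.0, 2.1, 0.6, 2.0, 1.9])   # beam 2 occluded by a person
predicted = np.array([2.0, 2.1, 2.2, 2.0, 1.9])   # prediction from the map
print(kl_cost(measured, predicted), kl_cost(predicted, predicted))
```

    In the filter, each particle's pose would be weighted by such a cost, so poses consistent with the map survive even when a few beams are occluded.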

  6. Selection of Reserves for Woodland Caribou Using an Optimization Approach

    PubMed Central

    Schneider, Richard R.; Hauer, Grant; Dawe, Kimberly; Adamowicz, Wiktor; Boutin, Stan

    2012-01-01

    Habitat protection has been identified as an important strategy for the conservation of woodland caribou (Rangifer tarandus). However, because of the economic opportunity costs associated with protection it is unlikely that all caribou ranges can be protected in their entirety. We used an optimization approach to identify reserve designs for caribou in Alberta, Canada, across a range of potential protection targets. Our designs minimized costs as well as three demographic risk factors: current industrial footprint, presence of white-tailed deer (Odocoileus virginianus), and climate change. We found that, using optimization, 60% of current caribou range can be protected (including 17% in existing parks) while maintaining access to over 98% of the value of resources on public lands. The trade-off between minimizing cost and minimizing demographic risk factors was minimal because the spatial distributions of cost and risk were similar. The prospects for protection are much reduced if protection is directed towards the herds that are most at risk of near-term extirpation. PMID:22363702

  7. Optimal control of nonlinear continuous-time systems in strict-feedback form.

    PubMed

    Zargarzadeh, Hassan; Dierks, Travis; Jagannathan, Sarangapani

    2015-10-01

    This paper proposes a novel optimal tracking control scheme for nonlinear continuous-time systems in strict-feedback form with uncertain dynamics. The optimal tracking problem is transformed into an equivalent optimal regulation problem through a feedforward adaptive control input that is generated by modifying the standard backstepping technique. Subsequently, a neural network-based optimal control scheme is introduced to estimate the cost, or value function, over an infinite horizon for the resulting nonlinear continuous-time systems in affine form when the internal dynamics are unknown. The estimated cost function is then used to obtain the optimal feedback control input; therefore, the overall optimal control input for the nonlinear continuous-time system in strict-feedback form includes the feedforward plus the optimal feedback terms. It is shown that the estimated cost function minimizes the Hamilton-Jacobi-Bellman estimation error in a forward-in-time manner without using any value or policy iterations. Finally, optimal output feedback control is introduced through the design of a suitable observer. Lyapunov theory is utilized to show the overall stability of the proposed schemes without requiring an initial admissible controller. Simulation examples are provided to validate the theoretical results.

  8. Modeling the role of information and limited optimal treatment on disease prevalence.

    PubMed

    Kumar, Anuj; Srivastava, Prashant K; Takeuchi, Yasuhiro

    2017-02-07

Disease outbreaks induce behavioural changes in healthy individuals to avoid contracting infection. We first propose a compartmental model which accounts for the effect of individuals' behavioural response to information about disease prevalence. It is assumed that the information grows as a function of infective population density, saturates at higher densities of the infective population, and depends on active educational and social programmes. Model analysis is performed and the global stability of equilibrium points is established. Further, choosing the treatment (a pharmaceutical intervention) and the effect of information (a non-pharmaceutical intervention) as controls, an optimal control problem is formulated to minimize the cost and disease fatality. In the cost functional, the nonlinear effect of controls is accounted for. Analytical characterization of optimal control paths is carried out with the help of Pontryagin's Maximum Principle. Numerical findings suggest that if only control via information is used, it is effective and economical during the early phase of disease spread, whereas treatment works well for long-term control except during the initial phase. Furthermore, we observe that information-induced behavioural response plays a crucial role in the absence of pharmaceutical control. Moreover, comprehensive use of both control interventions is more effective than any single control policy, reducing the number of infective individuals and minimizing the economic cost generated from the disease burden and the applied controls. Thus, the combined effect of both control policies is found to be more economical over the entire epidemic period, whereas the implementation of a single policy is not found to be economically viable. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. An Innovative Infrastructure with a Universal Geo-Spatiotemporal Data Representation Supporting Cost-Effective Integration of Diverse Earth Science Data

    NASA Technical Reports Server (NTRS)

    Rilee, Michael Lee; Kuo, Kwo-Sen

    2017-01-01

    The SpatioTemporal Adaptive Resolution Encoding (STARE) is a unifying scheme encoding geospatial and temporal information for organizing data on scalable computing/storage resources, minimizing expensive data transfers. STARE provides a compact representation that turns set-logic functions into integer operations, e.g. conditional sub-setting, taking into account representative spatiotemporal resolutions of the data in the datasets. STARE geo-spatiotemporally aligns data placements of diverse data on massive parallel resources to maximize performance. Automating important scientific functions (e.g. regridding) and computational functions (e.g. data placement) allows scientists to focus on domain-specific questions instead of expending their efforts and expertise on data processing. With STARE-enabled automation, SciDB (Scientific Database) plus STARE provides a database interface, reducing costly data preparation, increasing the volume and variety of interoperable data, and easing result sharing. Using SciDB plus STARE as part of an integrated analysis infrastructure dramatically eases combining diametrically different datasets.

  10. Advanced Technology Composite Fuselage: Program Overview

    NASA Technical Reports Server (NTRS)

    Ilcewicz, L. B.; Smith, P. J.; Hanson, C. T.; Walker, T. H.; Metschan, S. L.; Mabson, G. E.; Wilden, K. S.; Flynn, B. W.; Scholz, D. B.; Polland, D. R.; hide

    1997-01-01

The Advanced Technology Composite Aircraft Structures (ATCAS) program has studied transport fuselage structure with a large potential reduction in the total direct operating costs for wide-body commercial transports. The baseline fuselage section was divided into four 'quadrants' (crown, keel, and left and right sides), gaining the manufacturing cost advantage possible with larger panels. Key processes found to have savings potential include (1) skins laminated by automatic fiber placement, (2) braided frames using resin transfer molding, and (3) panel bond technology that minimized mechanical fastening. The cost and weight of the baseline fuselage barrel were updated to complete Phase B of the program. An assessment of cost, which included labor, material, and tooling, was performed with the help of design cost models. Crown, keel, and side quadrant cost distributions illustrate the importance of panel design configuration, area, and other structural details. Composite sandwich panel designs were found to have the greatest cost-savings potential for most quadrants. Key technical findings are summarized as an introduction to the other contractor reports documenting Phase A and B work completed in functional areas. The current program status in resolving critical technical issues is also highlighted.

  11. Distributed Nash Equilibrium Seeking for Generalized Convex Games with Shared Constraints

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Hu, Guoqiang

    2018-05-01

    In this paper, we deal with the problem of finding a Nash equilibrium for a generalized convex game. Each player is associated with a convex cost function and multiple shared constraints. Supposing that each player can exchange information with its neighbors via a connected undirected graph, the objective of this paper is to design a Nash equilibrium seeking law such that each agent minimizes its objective function in a distributed way. Consensus and singular perturbation theories are used to prove the stability of the system. A numerical example is given to show the effectiveness of the proposed algorithms.
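
    Stripped of the graph communication and the shared constraints, the core of Nash seeking is gradient play: each player descends its own cost in its own decision variable. A minimal two-player quadratic example, with invented coefficients, converges to the closed-form equilibrium:

```python
# Gradient play for a two-player game with quadratic costs
# J1(x1, x2) = (x1 - a)**2 + x1 * x2,  J2(x1, x2) = (x2 - b)**2 + x1 * x2.
# For this strongly monotone game the iterates converge to the Nash point
# x1* = (4a - 2b) / 3, x2* = (4b - 2a) / 3.
a, b, step = 3.0, 1.0, 0.1
x1, x2 = 0.0, 0.0
for _ in range(500):
    g1 = 2.0 * (x1 - a) + x2         # dJ1/dx1, player 1's own gradient
    g2 = 2.0 * (x2 - b) + x1         # dJ2/dx2, player 2's own gradient
    x1, x2 = x1 - step * g1, x2 - step * g2
print(x1, x2, (4 * a - 2 * b) / 3, (4 * b - 2 * a) / 3)
```

    The paper's contribution lies in performing such updates distributively, with each player estimating the others' actions through neighbor-to-neighbor consensus while respecting the shared constraints.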

  12. Multidisciplinary Design Optimization for Glass-Fiber Epoxy-Matrix Composite 5 MW Horizontal-Axis Wind-Turbine Blades

    NASA Astrophysics Data System (ADS)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.

    2010-11-01

A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., the ratio of the three-blade HAWT rotor development/fabrication cost to the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis had to be employed. As far as the turbine-blade cost is concerned, it is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite-element analysis and a post-processing life-cycle assessment analysis were employed in order to determine a minimal blade mass which ensures that the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability, and blade stiffness are satisfied. To determine the turbine-blade production cost (for the currently prevailing fabrication process, the wet lay-up), available data regarding industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability, and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a lower production cost.

  13. Traveling salesman problem with a center.

    PubMed

    Lipowski, Adam; Lipowska, Dorota

    2005-06-01

    We study a traveling salesman problem where the path is optimized with a cost function that includes its length L as well as a certain measure C of its distance from the geometrical center of the graph. Using simulated annealing (SA) we show that such a problem has a transition point that separates two phases differing in the scaling behavior of L and C, in efficiency of SA, and in the shape of minimal paths.
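
    A small simulated-annealing sketch shows how such a combined cost can be optimized. Here the cost is L + lambda*C, with C taken as the mean distance of tour-edge midpoints from the center; that particular choice of C, and all parameters, are illustrative assumptions rather than the paper's definitions.

```python
import math, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
cx = sum(p[0] for p in pts) / len(pts)
cy = sum(p[1] for p in pts) / len(pts)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cost(tour, lam):
    # L: closed-tour length.  C: mean distance of edge midpoints from the
    # center (one plausible center-distance measure; the paper's C may differ).
    L = C = 0.0
    for i in range(len(tour)):
        p, q = pts[tour[i]], pts[tour[(i + 1) % len(tour)]]
        L += dist(p, q)
        C += dist(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2), (cx, cy))
    return L + lam * C / len(tour)

def anneal(lam, T=1.0, cooling=0.999, steps=20000):
    tour = list(range(len(pts)))
    best = cur = cost(tour, lam)
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(pts)), 2))
        tour[i:j + 1] = reversed(tour[i:j + 1])       # 2-opt style reversal
        new = cost(tour, lam)
        if new < cur or random.random() < math.exp((cur - new) / T):
            cur = new
            best = min(best, cur)
        else:
            tour[i:j + 1] = reversed(tour[i:j + 1])   # undo the move
        T *= cooling
    return best

for lam in (0.0, 5.0):          # weak vs strong pull toward the center
    print("lambda =", lam, "best combined cost ~", round(anneal(lam), 3))
```

    Sweeping lambda through the transition region would trade longer tours for paths hugging the center, which is the competition behind the reported phase transition.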

  14. The cost of conversion in robotic and laparoscopic colorectal surgery.

    PubMed

    Cleary, Robert K; Mullard, Andrew J; Ferraro, Jane; Regenbogen, Scott E

    2018-03-01

    Conversion from minimally invasive to open colorectal surgery remains common and costly. Robotic colorectal surgery is associated with lower rates of conversion than laparoscopy, but institutions and payers remain concerned about equipment and implementation costs. Recognizing that reimbursement reform and bundled payments expand perspectives on cost to include the entire surgical episode, we evaluated the role of minimally invasive conversion in total payments. This is an observational study from a linked data registry including clinical data from the Michigan Surgical Quality Collaborative and payment data from the Michigan Value Collaborative between July 2012 and April 2015. We evaluated colorectal resections initiated with open and minimally invasive approaches, and compared reported risk-adjusted and price-standardized 30-day episode payments and their components. We identified 1061 open, 1604 laparoscopic, and 275 robotic colorectal resections. Adjusted episode payments were significantly higher for open operations than for minimally invasive procedures completed without conversion ($19,489 vs. $15,518, p < 0.001). The conversion rate was significantly higher with laparoscopic than robotic operations (15.1 vs. 7.6%, p < 0.001). Adjusted episode payments for minimally invasive operations converted to open were significantly higher than for those completed by minimally invasive approaches ($18,098 vs. $15,518, p < 0.001). Payments for operations completed robotically were greater than those completed laparoscopically ($16,949 vs. $15,250, p < 0.001), but the difference was substantially decreased when conversion to open cases was included ($16,939 vs. $15,699, p = 0.041). Episode payments for open colorectal surgery exceed both laparoscopic and robotic minimally invasive options. Conversion to open surgery significantly increases the payments associated with minimally invasive colorectal surgery. Because conversion rates in robotic colorectal operations are half of those in laparoscopy, the excess expenditures attributable to robotics are attenuated by consideration of the cost of conversions.

  15. Quality and Cost in Thoracic Surgery.

    PubMed

    Medbery, Rachel L; Force, Seth D

    2017-08-01

    The value of health care is defined as health outcomes (quality) achieved per dollars spent (cost). The current national health care landscape is focused on minimizing spending while optimizing patient outcomes. With the introduction of minimally invasive thoracic surgery, there has been concern about added cost relative to improved outcomes. Moreover, differences in postoperative hospital care further drive patient outcomes and health care costs. This article presents a comprehensive literature review on quality and cost in thoracic surgery and aims to investigate current challenges with regard to achieving the greatest value for our patients. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Emerging microengineered tools for functional analysis and phenotyping of blood cells.

    PubMed

    Li, Xiang; Chen, Weiqiang; Li, Zida; Li, Ling; Gu, Hongchen; Fu, Jianping

    2014-11-01

    The available techniques for assessing blood cell functions are limited considering the various types of blood cell and their diverse functions. In the past decade, rapid advances in microengineering have enabled an array of blood cell functional measurements that are difficult or impossible to achieve using conventional bulk platforms. Such miniaturized blood cell assay platforms also provide the attractive capabilities of reducing chemical consumption, cost, and assay time, as well as exciting opportunities for device integration, automation, and assay standardization. This review summarizes these contemporary microengineered tools and discusses their promising potential for constructing accurate in vitro models and rapid clinical diagnosis using minimal amounts of whole-blood samples. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Minimizing center of mass vertical movement increases metabolic cost in walking.

    PubMed

    Ortega, Justus D; Farley, Claire T

    2005-12-01

    A human walker vaults up and over each stance limb like an inverted pendulum. This similarity suggests that the vertical motion of a walker's center of mass reduces metabolic cost by providing a mechanism for pendulum-like mechanical energy exchange. Alternatively, some researchers have hypothesized that minimizing vertical movements of the center of mass during walking minimizes the metabolic cost, and this view remains prevalent in clinical gait analysis. We examined the relationship between vertical movement and metabolic cost by having human subjects walk normally and with minimal center of mass vertical movement ("flat-trajectory walking"). In flat-trajectory walking, subjects reduced center of mass vertical displacement by an average of 69% (P = 0.0001) but consumed approximately twice as much metabolic energy over a range of speeds (0.7-1.8 m/s) (P = 0.0001). In flat-trajectory walking, passive pendulum-like mechanical energy exchange provided only a small portion of the energy required to accelerate the center of mass because gravitational potential energy fluctuated minimally. Thus, despite the smaller vertical movements in flat-trajectory walking, the net external mechanical work needed to move the center of mass was similar in both types of walking (P = 0.73). Subjects walked with more flexed stance limbs in flat-trajectory walking (P < 0.001), and the resultant increase in stance limb force generation likely helped cause the doubling in metabolic cost compared with normal walking. Regardless of the cause, these findings clearly demonstrate that human walkers consume substantially more metabolic energy when they minimize vertical motion.

  18. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  19. Comparative analysis for various redox flow batteries chemistries using a cost performance model

    NASA Astrophysics Data System (ADS)

    Crawford, Alasdair; Viswanathan, Vilayanur; Stephenson, David; Wang, Wei; Thomsen, Edwin; Reed, David; Li, Bin; Balducci, Patrick; Kintner-Meyer, Michael; Sprenkle, Vincent

    2015-10-01

The total energy storage system cost is determined by means of a robust performance-based cost model for multiple flow battery chemistries. System aspects such as shunt current losses, pumping losses, and various flow patterns through electrodes are accounted for. The system-cost-minimizing objective function determines stack design by optimizing the state-of-charge operating range, along with current density and current-normalized flow. The model cost estimates are validated using 2-kW stack performance data for the same size electrodes and operating conditions. Using our validated tool, it has been demonstrated that an optimized all-vanadium system has an estimated system cost of less than $350 kWh⁻¹ for a 4-h application. With an anticipated decrease in component costs facilitated by economies of scale from larger production volumes, coupled with performance improvements enabled by technology development, the system cost is expected to decrease to $160 kWh⁻¹ for a 4-h application, and to $100 kWh⁻¹ for a 10-h application. This tool has been shared with the redox flow battery community to enable cost estimation using their stack data and guide future direction.

  20. Technological Minimalism: A Cost-Effective Alternative for Course Design and Development.

    ERIC Educational Resources Information Center

    Lorenzo, George

    2001-01-01

    Discusses the use of minimum levels of technology, or technological minimalism, for Web-based multimedia course content. Highlights include cost effectiveness; problems with video streaming, the use of XML for Web pages, and Flash and Java applets; listservs instead of proprietary software; and proper faculty training. (LRW)

  1. Smart Radiation Therapy Biomaterials.

    PubMed

    Ngwa, Wilfred; Boateng, Francis; Kumar, Rajiv; Irvine, Darrell J; Formenti, Silvia; Ngoma, Twalib; Herskind, Carsten; Veldwijk, Marlon R; Hildenbrand, Georg Lars; Hausmann, Michael; Wenz, Frederik; Hesser, Juergen

    2017-03-01

Radiation therapy (RT) is a crucial component of cancer care, used in the treatment of over 50% of cancer patients. Patients undergoing image-guided RT or brachytherapy routinely have inert RT biomaterials implanted into their tumors. The single function of these RT biomaterials is to ensure geometric accuracy during treatment. Recent studies have proposed that these inert biomaterials could be upgraded to "smart" RT biomaterials, designed to perform more than one function. Such smart biomaterials include next-generation fiducial markers, brachytherapy spacers, and balloon applicators, designed to respond to stimuli and perform additional desirable functions like controlled delivery of therapy-enhancing payloads directly into the tumor subvolume while minimizing normal tissue toxicities. More broadly, smart RT biomaterials may include functionalized nanoparticles that can be activated to boost RT efficacy. This work reviews the rationale for smart RT biomaterials, the state of the art in this emerging cross-disciplinary research area, challenges and opportunities for further research and development, and a purview of potential clinical applications. Applications covered include using smart RT biomaterials for boosting cancer therapy with minimal side effects, combining RT with immunotherapy or chemotherapy, reducing treatment time or health care costs, and other incipient applications. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Sinogram noise reduction for low-dose CT by statistics-based nonlinear filters

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Lu, Hongbing; Li, Tianfang; Liang, Zhengrong

    2005-04-01

Low-dose CT (computed tomography) sinogram data have been shown to be signal-dependent, with an analytical relationship between the sample mean and sample variance. Spatially invariant low-pass linear filters, such as the Butterworth and Hanning filters, cannot adequately handle the data noise, and statistics-based nonlinear filters may be an alternative choice, in addition to approaches that minimize cost functions on the noisy data. The anisotropic diffusion filter and the nonlinear Gaussian filters chain (NLGC) are two well-known classes of nonlinear filters based on local statistics for the purpose of edge-preserving noise reduction. These two filters can utilize the noise properties of the low-dose CT sinogram for adaptive noise reduction, but cannot incorporate signal correlative information for an optimal regularized solution. Our previously developed Karhunen-Loeve (KL) domain PWLS (penalized weighted least squares) minimization considers the signal correlation via the KL strategy and seeks the PWLS cost function minimization for an optimal regularized solution for each KL component, i.e., adaptive to the KL components. This work compared the nonlinear filters with the KL-PWLS framework for the low-dose CT application. Furthermore, we investigated the nonlinear filters for post-KL-PWLS noise treatment in the sinogram space, where the filters were applied after the ramp operation on the KL-PWLS-treated sinogram data prior to the backprojection operation (for image reconstruction). In both computer simulations and experiments with low-dose CT data, the nonlinear filters could not outperform the KL-PWLS framework. The gain from post-KL-PWLS edge-preserving noise filtering in the sinogram space is not significant, even though the noise has been modulated by the ramp operation.
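
    The PWLS idea itself fits in a few lines for a 1D signal with signal-dependent noise: weight the data fidelity by the inverse noise variance and add a quadratic roughness penalty, giving a closed-form solution. This is a 1D stand-in for the KL-domain sinogram implementation, with a made-up signal and parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
truth = np.sin(np.linspace(0, 3 * np.pi, n)) + 2.0

# Signal-dependent noise: variance grows with the mean, loosely mimicking the
# mean-variance relationship reported for low-dose CT sinogram data.
var = 0.05 * truth
y = truth + rng.normal(0.0, np.sqrt(var))

# PWLS: minimize (y - x)' W (y - x) + beta * ||D x||^2, with W = diag(1/var)
# and D the first-difference operator; the cost is quadratic, so the
# minimizer solves a single linear system.
beta = 5.0
W = np.diag(1.0 / var)
D = np.diff(np.eye(n), axis=0)                 # (n-1) x n first differences
x = np.linalg.solve(W + beta * D.T @ D, W @ y)

print("rmse noisy:", np.sqrt(np.mean((y - truth) ** 2)),
      "rmse pwls:", np.sqrt(np.mean((x - truth) ** 2)))
```

    The weighting matters: noisy high-mean samples are trusted less, which is exactly what an unweighted least-squares fit would get wrong on this kind of data.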

  3. Diet models with linear goal programming: impact of achievement functions.

    PubMed

    Gerdessen, J C; de Vries, J H M

    2015-11-01

Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability, and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that multiple solutions can be obtained from one set of data and weights. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients, and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. The choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
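
    The contrast between MinSum and MinMax is easy to reproduce on a toy diet problem. In the sketch below, two nutrient goals cannot both be met under a palatability limit; MinSum piles the whole deviation on one goal, while MinMax splits it evenly. All numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy diet data: two foods, two nutrient goals, and a palatability limit
# x1 + x2 <= 4 that makes the goals unattainable simultaneously.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
g = np.array([10.0, 12.0])

# Variables: x1, x2, dplus1, dplus2, dminus1, dminus2 (all >= 0), with
# goal constraints A @ x + dminus - dplus = g, so dminus/dplus measure
# under- and over-achievement of each goal.
A_eq = np.hstack([A, -np.eye(2), np.eye(2)])
limit = np.array([[1.0, 1.0, 0, 0, 0, 0]])

# MinSum: minimize the sum of unwanted deviations.
r1 = linprog(c=[0, 0, 1, 1, 1, 1], A_eq=A_eq, b_eq=g, A_ub=limit, b_ub=[4.0])

# MinMax: append z, require every deviation <= z, minimize z.
A_eq2 = np.hstack([A_eq, np.zeros((2, 1))])
zrows = np.hstack([np.zeros((4, 2)), np.eye(4), -np.ones((4, 1))])
A_ub2 = np.vstack([np.hstack([limit, [[0.0]]]), zrows])
r2 = linprog(c=[0, 0, 0, 0, 0, 0, 1], A_eq=A_eq2, b_eq=g,
             A_ub=A_ub2, b_ub=[4.0, 0, 0, 0, 0])

print("MinSum deviations:", r1.x[2:].round(3))   # piles deviation on one goal
print("MinMax deviations:", r2.x[2:6].round(3))  # spreads it evenly
```

    Running it, MinSum leaves under-achievements of (6, 0) on the two goals, while MinMax spreads them as (4, 4): a smaller total but a larger worst case for MinSum, exactly the behavior described above.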

  4. Multi-tasking computer control of video related equipment

    NASA Technical Reports Server (NTRS)

    Molina, Rod; Gilbert, Bob

    1989-01-01

The flexibility, cost-effectiveness, and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer, such as the Commodore-Amiga, can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements, and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text, and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer existing at the heart of the system.

  5. Neural network-based optimal adaptive output feedback control of a helicopter UAV.

    PubMed

    Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani

    2013-07-01

    Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.

  6. Constellation labeling optimization for bit-interleaved coded APSK

    NASA Astrophysics Data System (ADS)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.

  7. Predictive Compensator Optimization for Head Tracking Lag in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Adelstein, Barnard D.; Jung, Jae Y.; Ellis, Stephen R.

    2001-01-01

    We examined the perceptual impact of plant noise parameterization for Kalman Filter predictive compensation of time delays intrinsic to head tracked virtual environments (VEs). Subjects were tested in their ability to discriminate between the VE system's minimum latency and conditions in which artificially added latency was then predictively compensated back to the system minimum. Two head tracking predictors were parameterized off-line according to cost functions that minimized prediction errors in (1) rotation, and (2) rotation projected into translational displacement with emphasis on higher frequency human operator noise. These predictors were compared with a parameterization obtained from the VE literature for cost function (1). Results from 12 subjects showed that both parameterization type and amount of compensated latency affected discrimination. Analysis of the head motion used in the parameterizations and the subsequent discriminability results suggest that higher frequency predictor artifacts are contributory cues for discriminating the presence of predictive compensation.
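
    A constant-velocity Kalman filter that extrapolates the filtered state ahead by the display latency captures the flavor of the predictive compensation studied here. The motion model, noise covariances, and latency below are illustrative assumptions, not the study's tuned parameterizations (which are precisely what the experiment varied).

```python
import numpy as np

# Constant-velocity Kalman filter for one head-orientation angle, predicting
# ahead by the compensated latency.
dt, latency = 0.01, 0.05                       # 100 Hz tracker, 50 ms lag
F = np.array([[1.0, dt], [0.0, 1.0]])          # state: [angle, angular rate]
H = np.array([[1.0, 0.0]])
Q = 1e-3 * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
R = np.array([[1e-4]])

x = np.zeros((2, 1))
P = np.eye(2)
t = np.arange(0, 2, dt)
meas = np.sin(2 * np.pi * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)

preds = []
for z in meas:
    # Predict to the next sample, then correct with the measurement.
    x = F @ x
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    # Extrapolate the filtered state 'latency' seconds ahead for display.
    preds.append(float(x[0, 0] + latency * x[1, 0]))

print("last prediction:", round(preds[-1], 3))
```

    The plant-noise level (Q here) is the knob the study's cost functions were tuning: too little smooths away real head motion, too much passes tracker noise through as the high-frequency artifacts subjects learned to detect.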

  8. Orbit transfer rocket engine integrated control and health monitoring system technology readiness assessment

    NASA Technical Reports Server (NTRS)

    Bickford, R. L.; Collamore, F. N.; Gage, M. L.; Morgan, D. B.; Thomas, E. R.

    1992-01-01

The objectives of this task were to: (1) estimate the technology readiness of an integrated control and health monitoring (ICHM) system for the Aerojet 7500-lbf Orbit Transfer Vehicle engine preliminary design, assuming space-based operations; and (2) estimate the remaining cost to advance this technology to a NASA-defined 'readiness level 6' by 1996, wherein the technology has been demonstrated with a system validation model in a simulated environment. The work was accomplished through the conduct of four subtasks. In Subtask 1, the minimally required functions for the control and monitoring system were specified. The elements required to perform these functions were specified in Subtask 2. In Subtask 3, the technology readiness level of each element was assessed. Finally, in Subtask 4, the development cost and schedule requirements were estimated for bringing each element to 'readiness level 6'.

  9. An inverse model for a free-boundary problem with a contact line: Steady case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Volkov, Oleg; Protas, Bartosz

    2009-07-20

This paper reformulates the two-phase solidification problem (i.e., the Stefan problem) as an inverse problem in which a cost functional is minimized with respect to the position of the interface and subject to PDE constraints. An advantage of this formulation is that it allows for a thermodynamically consistent treatment of the interface conditions in the presence of a contact point involving a third phase. It is argued that such an approach in fact represents a closure model for the original system, and some of its key properties are investigated. We describe an efficient iterative solution method for the Stefan problem formulated in this way, which uses shape differentiation and adjoint equations to determine the gradient of the cost functional. Performance of the proposed approach is illustrated with sample computations concerning 2D steady solidification phenomena.

  10. Are weekend inpatient rehabilitation services value for money? An economic evaluation alongside a randomized controlled trial with a 30 day follow up.

    PubMed

    Brusco, Natasha Kareem; Watts, Jennifer J; Shields, Nora; Taylor, Nicholas F

    2014-05-29

    Providing additional Saturday rehabilitation can improve functional independence and health related quality of life at discharge and it may reduce patient length of stay, yet the economic implications are not known. The aim of this study was to determine from a health service perspective if the provision of rehabilitation to inpatients on a Saturday in addition to Monday to Friday was cost effective compared to Monday to Friday rehabilitation alone. Cost utility and cost effectiveness analyses were undertaken alongside a multi-center, single-blind randomized controlled trial with a 30-day follow up after discharge. Participants were adults admitted for inpatient rehabilitation in two publicly funded metropolitan rehabilitation facilities. The control group received usual care rehabilitation services from Monday to Friday and the intervention group received usual care plus an additional rehabilitation service on Saturday. Incremental cost utility ratio was reported as cost per quality adjusted life year (QALY) gained and an incremental cost effectiveness ratio (ICER) was reported as cost for a minimal clinically important difference (MCID) in functional independence. 996 patients (mean age 74 (standard deviation 13) years) were randomly assigned to the intervention (n = 496) or the control group (n = 500). Mean difference in cost of AUD$1,673 (95% confidence interval (CI) -271 to 3,618) was a saving in favor of the intervention group. The incremental cost utility ratio found a saving of AUD$41,825 (95% CI -2,817 to 74,620) per QALY gained for the intervention group. The ICER found a saving of AUD$16,003 (95% CI -3,074 to 87,361) in achieving a MCID in functional independence for the intervention group. If the willingness to pay per QALY gained or for a MCID in functional independence was zero dollars the probability of the intervention being cost effective was 96% and 95%, respectively. A sensitivity analysis removing Saturday penalty rates did not significantly alter the outcome. From a health service perspective, the provision of rehabilitation to inpatients on a Saturday in addition to Monday to Friday, compared to Monday to Friday rehabilitation alone, is likely to be cost saving per QALY gained and for a MCID in functional independence. Australian and New Zealand Clinical Trials Registry November 2009 ACTRN12609000973213.

  11. Upon Accounting for the Impact of Isoenzyme Loss, Gene Deletion Costs Anticorrelate with Their Evolutionary Rates.

    PubMed

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel

    2017-01-01

    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  12. Central coast designs: The Eightball Express. Taking off with convention, cruising with improvements and landing with absolute success

    NASA Technical Reports Server (NTRS)

    Davis, Ryan Edwin; Dawson, Anne Marie; Fecht, Paul Hans; Fry, Roman Zyabash; Vantriet, Robert; Macabantad, Dominique Dujale; Miller, Robert Glenn; Perez, Gustavo, Jr.; Weise, Timothy Michael

    1994-01-01

    The airline industry is very competitive, resulting in most U.S. and many international airlines being unprofitable. Because of this competition, the airlines have been engaging in fare wars (which reduce revenue generated by transporting passengers) while inflation has increased. This situation, of course, generates no revenue for the airlines. To revive the airlines to profitability, the difference between revenue received and airline operational cost must be improved. To address these conditions, the Eightball Express was designed with the main philosophy of developing an aircraft with a low direct operating cost and acquisition cost. Central Coast Designs' (CCD) aircraft utilizes primarily aluminum in the structure to minimize manufacturing cost, supercritical airfoil sections to minimize drag, and fuel efficient engines to minimize fuel burn. Furthermore, the aircraft was designed using Total Quality Management and Integrated Product Development to minimize development and manufacturing costs. Using these primary cost reduction techniques, the Eightball Express was designed to meet the Lockheed/AIAA Request for Proposal (RFP) requirements of a low cost, 153 passenger, 3,000 nm. range transport. The Eightball Express is able to take off on less than a 7,000 ft. runway, cruise at Mach 0.82 at an altitude of 36,000 ft. for a range of 3,000 nm., and land on a 5,000 ft. runway. It is able to perform this mission at a direct operating cost of 3.51 cents/available seat mile in 1992 dollars while the acquisition cost is only $28 million in 1992 dollars. By utilizing and improving on proven technologies, CCD has produced an efficient low cost commercial transport for the future.

  13. Cost-minimization analysis of panitumumab compared with cetuximab for first-line treatment of patients with wild-type RAS metastatic colorectal cancer.

    PubMed

    Graham, Christopher N; Hechmati, Guy; Fakih, Marwan G; Knox, Hediyyih N; Maglinte, Gregory A; Hjelmgren, Jonas; Barber, Beth; Schwartzberg, Lee S

    2015-01-01

    To compare the costs of first-line treatment with panitumumab + FOLFOX in comparison to cetuximab + FOLFIRI among patients with wild-type (WT) RAS metastatic colorectal cancer (mCRC) in the US. A cost-minimization model was developed assuming similar treatment efficacy between both regimens. The model estimated the costs associated with drug acquisition, treatment administration frequency (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions. Average anti-EGFR doses were calculated from the ASPECCT clinical trial, and average doses of chemotherapy regimens were based on product labels. Using the medical component of the consumer price index, adverse event costs were inflated to 2014 US dollars, and all other costs were reported in 2014 US dollars. The time horizon for the model was based on average first-line progression-free survival of a WT RAS patient, estimated from parametric survival analyses of PRIME clinical trial data. Relative to cetuximab + FOLFIRI in the first-line treatment of WT RAS mCRC, the cost-minimization model demonstrated lower projected drug acquisition, administration, and adverse event costs for patients who received panitumumab + FOLFOX. The overall cost per patient for first-line treatment was $179,219 for panitumumab + FOLFOX vs $202,344 for cetuximab + FOLFIRI, resulting in a per-patient saving of $23,125 (11.4%) in favor of panitumumab + FOLFOX. From a value perspective, the cost-minimization model supports panitumumab + FOLFOX instead of cetuximab + FOLFIRI as the preferred first-line treatment of WT RAS mCRC patients requiring systemic therapy.
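
    Structurally, a cost-minimization model reduces to totaling per-arm costs under the assumption of equal efficacy. A sketch with invented unit costs, doses, and horizon (none of the figures below come from the ASPECCT or PRIME data; only the model structure follows the abstract):

    ```python
    # Sketch of a cost-minimization comparison: with efficacy assumed equal,
    # only drug acquisition, administration, and adverse-event costs differ.
    # All unit costs, dosing intervals, and the horizon are placeholders.

    def arm_cost(weeks, dosing_interval_wk, drug_cost_per_admin,
                 admin_cost, infusion_reaction_rate, reaction_cost):
        n_admin = weeks // dosing_interval_wk
        return (n_admin * (drug_cost_per_admin + admin_cost)
                + infusion_reaction_rate * reaction_cost)

    horizon_weeks = 40  # placeholder progression-free survival horizon
    pani = arm_cost(horizon_weeks, 2, 7_000, 300, 0.01, 2_500)  # q2w dosing
    cetu = arm_cost(horizon_weeks, 1, 3_800, 300, 0.03, 2_500)  # weekly dosing
    print(f"panitumumab arm: ${pani:,.0f}  cetuximab arm: ${cetu:,.0f}")
    print(f"difference: ${cetu - pani:,.0f} in favor of the cheaper arm")
    ```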

  14. [Minimally invasive surgery and the economics of it. Can minimally invasive surgery be cost efficient from a business point of view?].

    PubMed

    Ritz, J P; Stufler, M; Buhr, H J

    2007-06-01

    Minimally invasive surgery (MIS) is now accepted as equally valid as the use of a standard access in some areas of surgery. It is not possible to decide whether this access is economically worthwhile and if so for whom without a full economic cost-benefit analysis, which must take account of the hospital's own characteristics in addition to the cost involved for surgery, staff, infrastructure and administration. In summary, the main economic advantage of MIS lies in the patient-related early postoperative results, while the main disadvantage is that the operative material costs are higher. At present, the payment made for each procedure performed under the DRG system includes 14-17% of the total cost for materials, regardless of the access route and of the technical sophistication of the operation. The actual material costs are greater by a factor of 2-50 for MIS than for a conventional procedure. The task of the hospital is thus to lower the costs for material and infrastructure; that of industry is to offer less expensive alternatives; and that of our politicians, to implement better remuneration of the material costs.

  15. Mathematical model for dynamic cell formation in fast fashion apparel manufacturing stage

    NASA Astrophysics Data System (ADS)

    Perera, Gayathri; Ratnayake, Vijitha

    2018-05-01

    This paper presents a mathematical programming model for dynamic cell formation that minimizes changeover-related costs (i.e., machine relocation costs and machine setup costs) and inter-cell material handling cost, to cope with the volatile production environments of the apparel manufacturing industry. The model is formulated from the findings of a comprehensive literature review. The developed model is validated with data collected from three factories in the apparel industry manufacturing fast fashion products. Program code is developed using the Lingo 16.0 software package to generate optimal cells for the developed model and to determine the possible cost-saving percentage when the existing layouts used in the three factories are replaced by the generated optimal cells. The optimal cells generated by the developed mathematical model result in significant cost savings compared with the existing product layouts used in the production/assembly departments of the selected factories. The developed model can be considered effective in minimizing the considered cost terms in the dynamic production environment of fast fashion apparel manufacturing. The findings of this paper can be used for further research on minimizing changeover-related costs in the fast fashion apparel production stage.
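
    As a simplified illustration of the cost terms in such a model (machine relocation between periods plus inter-cell material handling), the sketch below brute-forces a tiny instance. The paper's actual formulation is a mathematical program solved in Lingo 16.0; the machine counts, routings, and unit costs here are invented.

    ```python
    # Simplified sketch of the dynamic cell formation trade-off: assign
    # machines to cells in each period to minimize machine relocation cost
    # plus inter-cell material handling cost. Brute force over a tiny
    # instance; all data are illustrative.
    from itertools import product

    machines, cells, periods = range(3), range(2), range(2)
    # Part routings per period: each part visits a sequence of machines.
    routings = {0: [(0, 1), (1, 2)], 1: [(0, 2), (0, 1)]}
    RELOCATE, INTERCELL = 50.0, 10.0  # illustrative unit costs

    def layout_cost(assign):  # assign[t][m] = cell of machine m in period t
        cost = 0.0
        for t in periods:
            if t > 0:  # relocation whenever a machine changes cells
                cost += RELOCATE * sum(assign[t][m] != assign[t - 1][m]
                                       for m in machines)
            for route in routings[t]:  # inter-cell move per machine pair
                cost += INTERCELL * sum(assign[t][a] != assign[t][b]
                                        for a, b in zip(route, route[1:]))
        return cost

    candidates = list(product(product(cells, repeat=len(machines)),
                              repeat=len(periods)))
    best = min(candidates, key=layout_cost)
    print("assignment per period:", best, "cost:", layout_cost(best))
    ```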

  16. Costs of medical care after open or minimally invasive prostate cancer surgery: a population-based analysis.

    PubMed

    Lowrance, William T; Eastham, James A; Yee, David S; Laudone, Vincent P; Denton, Brian; Scardino, Peter T; Elkin, Elena B

    2012-06-15

    Evidence suggests that minimally invasive radical prostatectomy (MRP) and open radical prostatectomy (ORP) have similar short-term clinical and functional outcomes. MRP with robotic assistance is generally more expensive than ORP, but it is not clear whether subsequent costs of care vary by approach. In the Surveillance, Epidemiology, and End Results (SEER) cancer registry linked with Medicare claims, men aged 66 years or older who received MRP or ORP in 2003 through 2006 for prostate cancer were identified. Total cost of care was estimated as the sum of Medicare payments from all claims for hospital care, outpatient care, physician services, home health and hospice care, and durable medical equipment in the first year from the date of surgical admission. The impact of surgical approach on costs was estimated, controlling for patient and disease characteristics. Of 5445 surgically treated prostate cancer patients, 4454 (82%) had ORP and 991 (18%) had MRP. Mean total first-year costs were more than $1,200 greater for MRP compared with ORP ($16,919 vs $15,692; P = .08). Controlling for patient and disease characteristics, MRP was associated with 2% greater mean total payments, but this difference was not statistically significant. First-year costs were greater for men who were older, black, lived in the Northeast, had lymph node involvement, more advanced tumor stage, or greater comorbidity. In this population-based cohort of older men, MRP and ORP had similar economic outcomes. From a payer's perspective, any benefits associated with MRP may not translate to net savings compared with ORP in the first year after surgery. Copyright © 2011 American Cancer Society.

  17. An Outcome and Cost Analysis Comparing Single-Level Minimally Invasive Transforaminal Lumbar Interbody Fusion Using Intraoperative Fluoroscopy versus Computed Tomography-Guided Navigation.

    PubMed

    Khanna, Ryan; McDevitt, Joseph L; Abecassis, Zachary A; Smith, Zachary A; Koski, Tyler R; Fessler, Richard G; Dahdaleh, Nader S

    2016-10-01

    Minimally invasive transforaminal lumbar interbody fusion (TLIF) has undergone significant evolution since its conception as a fusion technique to treat lumbar spondylosis. Minimally invasive TLIF is commonly performed using intraoperative two-dimensional fluoroscopic x-rays. However, intraoperative computed tomography (CT)-based navigation during minimally invasive TLIF is gaining popularity for improvements in visualizing anatomy and reducing intraoperative radiation to surgeons and operating room staff. This is the first study to compare clinical outcomes and cost between these 2 imaging techniques during minimally invasive TLIF. For comparison, 28 patients who underwent single-level minimally invasive TLIF using fluoroscopy were matched to 28 patients undergoing single-level minimally invasive TLIF using CT navigation based on race, sex, age, smoking status, payer type, and medical comorbidities (Charlson Comorbidity Index). The minimum follow-up time was 6 months. The 2 groups were compared in regard to clinical outcomes and hospital reimbursement from the payer perspective. Average surgery time, anesthesia time, and hospital length of stay were similar for both groups, but average estimated blood loss was lower in the fluoroscopy group compared with the CT navigation group (154 mL vs. 262 mL; P = 0.016). Oswestry Disability Index, back visual analog scale, and leg visual analog scale scores similarly improved in both groups (P > 0.05) at 6-month follow-up. Cost analysis showed that average hospital payments were similar in the fluoroscopy versus the CT navigation groups ($32,347 vs. $32,656; P = 0.925) as well as payments for the operating room (P = 0.868). Single-level minimally invasive TLIF performed with fluoroscopy versus CT navigation showed similar clinical outcomes and cost at 6 months. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A perverse quality incentive in surgery: implications of reimbursing surgeons less for doing laparoscopic surgery.

    PubMed

    Fader, Amanda N; Xu, Tim; Dunkin, Brian J; Makary, Martin A

    2016-11-01

    Surgery is one of the highest priced services in health care, and complications from surgery can be serious and costly. Recently, advances in surgical techniques have allowed surgeons to perform many common operations using minimally invasive methods that result in fewer complications. Despite this, the rates of open surgery remain high across multiple surgical disciplines. This is an expert commentary and review of the contemporary literature regarding minimally invasive surgery practices nationwide, the benefits of less invasive approaches, and how minimally invasive compared with open procedures are differentially reimbursed in the United States. We explore the incentive of the current surgeon reimbursement fee schedule and its potential implications. A surgeon's preference to perform minimally invasive compared with open surgery remains highly variable in the U.S., even after adjustment for patient comorbidities and surgical complexity. Nationwide administrative claims data across several surgical disciplines demonstrates that minimally invasive surgery utilization in place of open surgery is associated with reduced adverse events and cost savings. Reducing surgical complications by increasing adoption of minimally invasive operations has significant cost implications for health care. However, current U.S. payment structures may perversely incentivize open surgery and financially reward physicians who do not necessarily embrace newer or best minimally invasive surgery practices. Utilization of minimally invasive surgery varies considerably in the U.S., representing one of the greatest disparities in health care. Existing physician payment models must translate the growing body of research in surgical care into physician-level rewards for quality, including choice of operation. Promoting safe surgery should be an important component of a strong, value-based healthcare system. Resolving the potentially perverse incentives in paying for surgical approaches may help address disparities in surgical care, reduce the prevalent problem of variation, and help contain health care costs.

  19. Design for Warehouse with Product Flow Type Allocation using Linear Programming: A Case Study in a Textile Industry

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Nafisah, L.; Palupi, D. L.

    2018-03-01

    Sari Warna Co. Ltd, a company engaged in the textile industry, is experiencing problems in the allocation and placement of goods in its warehouse. To date, the company has not implemented product flow type allocation or assigned placements to the respective products, resulting in a high total material handling cost. Therefore, this study aimed to determine the allocation and placement of goods in the warehouse corresponding to product flow type with minimal total material handling cost. This is a quantitative study based on storage and warehousing theory that applies the mathematical optimization model of Heragu (2005), aided by the LINGO 11.0 software in solving the model. The resulting proportions of space for the functional areas are 0.0734 for the cross-docking area, 0.1894 for the reserve area, and 0.7372 for the forward area. Five products are allocated to product flow type 1, nine products to type 2, two products to type 3, and six products to type 4. The optimal total material handling cost using this mathematical model is Rp 43.079.510, compared with Rp 49.869.728 under the company’s existing method, a saving of Rp 6.790.218 in total material handling cost. Thus, all of the products can be allocated in accordance with their product flow type at minimal total material handling cost.
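
    The flavor of such an allocation model can be sketched as a small transportation-style linear program; the handling costs, demands, and capacities below are invented, not the study's data (the study itself uses Heragu's (2005) formulation solved in LINGO 11.0).

    ```python
    # Sketch: allocate product volumes to warehouse functional areas so
    # that total material handling cost is minimized, as a small LP.
    # Costs and capacities are illustrative placeholders.
    import numpy as np
    from scipy.optimize import linprog

    # handling cost per unit for (product i, area j);
    # areas: cross-docking, reserve, forward
    cost = np.array([[1.0, 2.5, 1.8],
                     [2.0, 1.2, 1.5],
                     [1.5, 2.0, 1.0]])
    demand = np.array([30.0, 50.0, 20.0])     # units of each product
    capacity = np.array([20.0, 40.0, 60.0])   # area capacities

    n_p, n_a = cost.shape
    c = cost.ravel()                          # x[i*n_a + j] = units of i in j
    A_eq = np.zeros((n_p, n_p * n_a))         # each product fully allocated
    for i in range(n_p):
        A_eq[i, i * n_a:(i + 1) * n_a] = 1.0
    A_ub = np.zeros((n_a, n_p * n_a))         # respect area capacities
    for j in range(n_a):
        A_ub[j, j::n_a] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand,
                  bounds=[(0, None)] * (n_p * n_a))
    print(res.x.reshape(n_p, n_a), "total handling cost:", res.fun)
    ```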

  20. Energy aware path planning in complex four dimensional environments

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Anjan

    This dissertation addresses the problem of energy-aware path planning for small autonomous vehicles. While small autonomous vehicles can perform missions that are too risky (or infeasible) for larger vehicles, the missions are limited by the amount of energy that can be carried on board the vehicle. Path planning techniques that either minimize energy consumption or exploit energy available in the environment can thus increase range and endurance. Path planning is complicated by significant spatial (and potentially temporal) variations in the environment. While the main focus is on autonomous aircraft, this research also addresses autonomous ground vehicles. Range and endurance of small unmanned aerial vehicles (UAVs) can be greatly improved by utilizing energy from the atmosphere. Wind can be exploited to minimize energy consumption of a small UAV. But wind, like any other atmospheric component, is a space- and time-varying phenomenon. To effectively use wind for long range missions, both exploration and exploitation of wind are critical. This research presents a kinematics-based tree algorithm which efficiently handles the four-dimensional (three spatial and time) path planning problem. The Kinematic Tree algorithm provides a sequence of waypoints, airspeeds, heading and bank angle commands for each segment of the path. The planner is shown to be resolution complete and computationally efficient. Global optimality of the cost function cannot be claimed, as energy is gained from the atmosphere, making the cost function inadmissible. However, the Kinematic Tree is shown to be optimal up to resolution if the cost function is admissible. Simulation results show the efficacy of this planning method for a glider in complex real wind data. Simulation results verify that the planner is able to extract energy from the atmosphere, enabling long range missions. The Kinematic Tree planning framework, developed to minimize energy consumption of UAVs, is applied to path planning for ground robots. In the traditional path planning problem the focus is on obstacle avoidance and navigation. The optimal variant of the algorithm, named Kinematic Tree*, is shown to find optimal paths to reach the destination while avoiding obstacles. A more challenging path planning scenario arises for planning in complex terrain. This research shows how the Kinematic Tree* algorithm can be extended to find minimum energy paths for a ground vehicle in difficult mountainous terrain.
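
    The energy-aware expansion idea can be sketched as a best-first search whose edge cost is the energy spent net of a tailwind contribution. The grid world, wind field, motion primitives, and energy model below are invented stand-ins for the dissertation's 4D Kinematic Tree, not its actual formulation.

    ```python
    # Sketch of energy-aware planning in the spirit of the Kinematic Tree:
    # best-first expansion of motion primitives where the edge cost is
    # energy, reduced by any tailwind component. All models illustrative.
    import heapq, math

    def wind(x, y):                  # illustrative spatially varying wind
        return (0.3 * math.sin(y / 3.0), 0.0)

    def edge_energy(x, y, dx, dy):
        base = math.hypot(dx, dy)    # energy to fly the segment in still air
        wx, wy = wind(x, y)
        tail = (wx * dx + wy * dy) / max(math.hypot(dx, dy), 1e-9)
        return max(base - tail, 0.0) # tailwind reduces the energy spent

    def plan(start, goal, size=20):
        frontier = [(0.0, start, [start])]
        best = {start: 0.0}
        while frontier:
            e, (x, y), path = heapq.heappop(frontier)
            if (x, y) == goal:
                return e, path
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (-1, -1)]:
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    ne = e + edge_energy(x, y, dx, dy)
                    if ne < best.get((nx, ny), float("inf")):
                        best[(nx, ny)] = ne
                        heapq.heappush(frontier,
                                       (ne, (nx, ny), path + [(nx, ny)]))
        return float("inf"), []

    energy, path = plan((0, 0), (15, 10))
    print(f"minimum-energy path: {energy:.2f} units over {len(path)} nodes")
    ```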

  1. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-01-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  2. Enhanced solar energy options using earth-orbiting mirrors

    NASA Technical Reports Server (NTRS)

    Gilbreath, W. P.; Billman, K. W.; Bowen, S. W.

    1978-01-01

    A system of orbiting space reflectors is described, analyzed, and shown to economically provide nearly continuous insolation to preselected ground sites, producing benefits hitherto lacking in conventional solar farms and leading to large reductions in energy costs for such installations. Free-flying planar mirrors of about 1 sq km are shown to be optimum and can be made at under 10 g/sq m of surface, thus minimizing material needs and space transportation costs. Models are developed for both the design of such mirrors and for the analysis of expected ground insolation as a function of orbital parameters, time, and site location. Various applications (agricultural, solar-electric production, weather enhancement, etc.) are described.

  3. Ride comfort control in large flexible aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, M. E.

    1971-01-01

    The problem of ameliorating the discomfort of passengers on a large air transport subject to flight disturbances is examined. The longitudinal dynamics of the aircraft, including effects of body flexing, are developed in terms of linear, constant coefficient differential equations in state variables. A cost functional, penalizing the rigid body displacements and flexure accelerations over the surface of the aircraft is formulated as a quadratic form. The resulting control problem, to minimize the cost subject to the state equation constraints, is of a class whose solutions are well known. The feedback gains for the optimal controller are calculated digitally, and the resulting autopilot is simulated on an analog computer and its performance evaluated.
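
    The problem described, a quadratic cost minimized subject to linear state equations, is the classic LQR setting, and the optimal gains follow from the continuous-time algebraic Riccati equation. The matrices below are illustrative placeholders, not the thesis's aircraft model.

    ```python
    # Sketch of LQR synthesis: minimize J = integral(x'Qx + u'Ru) subject
    # to xdot = Ax + Bu. The optimal feedback is u = -Kx with
    # K = R^{-1} B' P, where P solves the algebraic Riccati equation.
    # A, B, Q, R are toy placeholders, not the aircraft dynamics.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])   # toy two-state dynamics
    B = np.array([[0.0],
                  [1.0]])
    Q = np.diag([10.0, 1.0])       # penalize displacement and rate
    R = np.array([[0.1]])          # penalize control effort

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)          # optimal feedback gains
    print("feedback gains K =", K)
    # closed-loop poles should have negative real parts (stable autopilot)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```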

  4. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-11-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  5. Sustainable Life Cycles of Natural-Precursor-Derived Nanocarbons.

    PubMed

    Bazaka, Kateryna; Jacob, Mohan V; Ostrikov, Kostya Ken

    2016-01-13

    Sustainable societal and economic development relies on novel nanotechnologies that offer maximum efficiency at minimal environmental cost. Yet, it is very challenging to apply green chemistry approaches across the entire life cycle of nanotech products, from design and nanomaterial synthesis to utilization and disposal. Recently, novel, efficient methods based on nonequilibrium reactive plasma chemistries that minimize the process steps and dramatically reduce the use of expensive and hazardous reagents have been applied to low-cost natural and waste sources to produce value-added nanomaterials with a wide range of applications. This review discusses the distinctive effects of nonequilibrium reactive chemistries and how these effects can aid and advance the integration of sustainable chemistry into each stage of nanotech product life. Examples of the use of enabling plasma-based technologies in sustainable production and degradation of nanotech products are discussed-from selection of precursors derived from natural resources and their conversion into functional building units, to methods for green synthesis of useful naturally degradable carbon-based nanomaterials, to device operation and eventual disintegration into naturally degradable yet potentially reusable byproducts.

  6. Brandon/Hill selected list of print books and journals for the small medical library.

    PubMed

    Hill, D R; Stickell, H N

    2001-04-01

    After thirty-six years of biennial updates, the authors take great pride in being able to publish the nineteenth version (2001) of the "Brandon/Hill Selected List of Print Books and Journals for the Small Medical Library." This list of 630 books and 143 journals is intended as a selection guide for health sciences libraries or similar facilities. It can also function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals, by an alphabetical title listing. Due to continuing requests from librarians, a "minimal core list" consisting of 81 titles has been pulled out from the 217 asterisked (*) initial-purchase books and marked with daggers (†*) before the asterisks. To purchase the entire collection of 630 books and to pay for 143 2001 journal subscriptions would require $124,000. The cost of only the asterisked items, books and journals, totals $55,000. The "minimal core list" book collection costs approximately $14,300.

  7. Brandon/Hill selected list of print books and journals for the small medical library*

    PubMed Central

    Hill, Dorothy R.; Stickell, Henry N.

    2001-01-01

    After thirty-six years of biennial updates, the authors take great pride in being able to publish the nineteenth version (2001) of the “Brandon/Hill Selected List of Print Books and Journals for the Small Medical Library.” This list of 630 books and 143 journals is intended as a selection guide for health sciences libraries or similar facilities. It can also function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals, by an alphabetical title listing. Due to continuing requests from librarians, a “minimal core list” consisting of 81 titles has been pulled out from the 217 asterisked (*) initial-purchase books and marked with daggers (†*) before the asterisks. To purchase the entire collection of 630 books and to pay for 143 2001 journal subscriptions would require $124,000. The cost of only the asterisked items, books and journals, totals $55,000. The “minimal core list” book collection costs approximately $14,300. PMID:11337945

  8. Brandon/Hill selected list of books and journals for the small medical library.

    PubMed Central

    Hill, D R

    1999-01-01

    The interrelationship of print and electronic media in the hospital library and its relevance to the "Brandon/Hill Selected List" in 1999 are addressed in the updated list (eighteenth version) of 627 books and 145 journals. This list is intended as a selection guide for the small or medium-size library in a hospital or similar facility. More realistically, it can function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals by an alphabetical title listing. Due to continuing requests from librarians, a "minimal core" book collection consisting of 82 titles has been pulled out from the 214 asterisked (*) initial-purchase books and marked with daggers (†). To purchase the entire collection of books and to pay for 1999 journal subscriptions would require $114,900. The cost of only the asterisked items, books and journals, totals $49,100. The "minimal core" book collection costs $13,200.

  9. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals

    PubMed Central

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y.; Miller, Patrick J. O.

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals. PMID:23857645

  10. Neutral buoyancy is optimal to minimize the cost of transport in horizontally swimming seals.

    PubMed

    Sato, Katsufumi; Aoki, Kagari; Watanabe, Yuuki Y; Miller, Patrick J O

    2013-01-01

    Flying and terrestrial animals should spend energy to move while supporting their weight against gravity. On the other hand, supported by buoyancy, aquatic animals can minimize the energy cost for supporting their body weight and neutral buoyancy has been considered advantageous for aquatic animals. However, some studies suggested that aquatic animals might use non-neutral buoyancy for gliding and thereby save energy cost for locomotion. We manipulated the body density of seals using detachable weights and floats, and compared stroke efforts of horizontally swimming seals under natural conditions using animal-borne recorders. The results indicated that seals had smaller stroke efforts to swim a given speed when they were closer to neutral buoyancy. We conclude that neutral buoyancy is likely the best body density to minimize the cost of transport in horizontal swimming by seals.

  11. Innovative concepts for marginal fields (advanced monotower developments)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, M.T.; Marks, V.E.

    1995-12-01

    The braced monotower provides a safe, functional and cost effective solution for topsides up to 500 tonnes, with up to 8 wells and standing in water depths of up to 70 meters. It is both simple in concept and structurally efficient. The superstructure is supported by a single column which is stayed by three symmetrically orientated legs. A broad mudline base is also provided to limit pile loads. The final concept offers complete protection to the risers and conductors from ship impact, as all appurtenances are housed within the central column. The basic design philosophy of the low intervention platform is to minimize the onboard equipment to that vitally needed to produce hydrocarbon. The concept eliminates the life support functions that on a normal North Sea platform can contribute up to 50% of the topside dry weight. A system of Zero Based Engineering is used that ensures each item of equipment contributes more to the NPV of the platform than the fully built-up through life cost. This effectively eliminates the operator preference factor and the "culture" cost.

  12. Is cost effectiveness sustained after weekend inpatient rehabilitation? 12 month follow up from a randomized controlled trial.

    PubMed

    Brusco, Natasha Kareem; Watts, Jennifer J; Shields, Nora; Taylor, Nicholas F

    2015-04-18

    Our previous work showed that providing additional rehabilitation on a Saturday was cost effective in the short term from the perspective of the health service provider. This study aimed to evaluate whether providing additional rehabilitation on a Saturday was cost effective at 12 months, from a health system perspective inclusive of private costs. Cost effectiveness analyses were undertaken alongside a single-blinded randomized controlled trial with 12 months of follow-up inclusive of informal care. Participants were adults admitted to two publicly funded inpatient rehabilitation facilities. The control group received usual care rehabilitation services from Monday to Friday and the intervention group received usual care plus additional Saturday rehabilitation. Incremental cost effectiveness ratios were reported as cost per quality adjusted life year (QALY) gained and for a minimal clinically important difference (MCID) in functional independence. A total of 996 patients [mean age 74 years (SD 13)] were randomly assigned to the intervention (n = 496) or control group (n = 500). The intervention was associated with improvements in QALYs and an MCID in function, as well as a non-significant reduction in cost from admission to 12 months (mean difference (MD) AUD$6,325; 95% CI -4,081 to 16,730; t test p = 0.23 and MWU p = 0.06), and a significant reduction in cost from admission to 6 months (MD AUD$6,445; 95% CI 3,368 to 9,522; t test p = 0.04 and MWU p = 0.01). There is a high degree of certainty that providing additional rehabilitation services on Saturday is cost effective. Sensitivity analyses varying the cost of informal carers and self-reported health service utilization favored the intervention. From a health system perspective inclusive of private costs, the provision of additional Saturday rehabilitation for inpatients is likely to have sustained cost savings per QALY gained and for an MCID in functional independence, for the inpatient stay and 12 months following discharge, without a cost shift into the community. Australian and New Zealand Clinical Trials Registry November 2009 ACTRN12609000973213.

  13. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times four or more times longer than those of similar previously studied schemes. A similar approach was used to obtain improved integration schemes.

  14. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain create a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photonic noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
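
    The two data-fidelity terms compared can be sketched directly: y stands for measured photon counts and m for the forward-model means. The forward model and the regularized Gauss-Newton loop are out of scope here, and the data are simulated.

    ```python
    # Sketch of the two data-fidelity terms: weighted least squares
    # (Gaussian noise model) vs. the Kullback-Leibler distance (Poisson
    # noise model), evaluated on simulated low-count data.
    import numpy as np

    rng = np.random.default_rng(0)
    m = np.full(1000, 5.0)                 # low expected photon counts
    y = rng.poisson(m).astype(float)       # measured counts, Poisson noise

    def wls(y, m):
        w = 1.0 / np.maximum(y, 1.0)       # common ~1/variance weighting
        return 0.5 * np.sum(w * (y - m) ** 2)

    def kl(y, m):
        # Poisson-adapted KL distance; y*log(y/m) vanishes where y == 0
        ratio = np.where(y > 0, y / m, 1.0)
        return np.sum(m - y + y * np.log(ratio))

    print(f"WLS fidelity: {wls(y, m):.1f}   KL fidelity: {kl(y, m):.1f}")
    ```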

  15. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
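
    The MEE criterion can be sketched through Renyi's quadratic entropy estimated from the "information potential" of the errors; minimizing the entropy is equivalent to maximizing the potential. The error samples and kernel width below are illustrative, and the O(N²) pairwise sum is precisely the computational burden that motivates the paper's parallel FPGA mapping.

    ```python
    # Sketch of the minimum error entropy (MEE) criterion: Renyi's
    # quadratic entropy of the errors estimated via the information
    # potential V(e) = mean_ij kappa_sigma(e_i - e_j).
    import numpy as np

    def information_potential(e, sigma=0.5):
        d = e[:, None] - e[None, :]                # all pairwise differences
        k = np.exp(-d**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        return k.mean()                            # O(N^2) kernel sum

    def renyi_quadratic_entropy(e, sigma=0.5):
        return -np.log(information_potential(e, sigma))

    rng = np.random.default_rng(1)
    gaussian_err = rng.normal(0.0, 1.0, 500)
    heavy_tailed = rng.standard_t(df=2, size=500)  # non-Gaussian errors
    for name, e in [("gaussian", gaussian_err), ("heavy-tailed", heavy_tailed)]:
        print(f"{name}: H2(e) = {renyi_quadratic_entropy(e):.3f}")
    ```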

  16. The Development of Patient Scheduling Groups for an Effective Appointment System

    PubMed Central

    2016-01-01

    Background Patient access to care and long wait times have been identified as major problems in outpatient delivery systems. These aspects impact medical staff productivity, service quality, clinic efficiency, and health-care cost. Objectives This study proposed to redesign existing patient types into scheduling groups so that the total cost of clinic flow and scheduling flexibility was minimized. The optimal scheduling group aimed to improve clinic efficiency and accessibility. Methods The proposed approach used the simulation optimization technique and was demonstrated in a Primary Care physician clinic. Patient types included emergency/urgent care (ER/UC), follow-up (FU), new patient (NP), office visit (OV), physical exam (PE), and well child care (WCC). One scheduling group was designed for this physician. The approach steps were to collect physician treatment time data for each patient type, form the possible scheduling groups, simulate daily clinic flow and patient appointment requests, calculate costs of clinic flow as well as appointment flexibility, and find the scheduling group that minimized the total cost. Results The cost of clinic flow was minimized at the scheduling group of four, an 8.3% reduction from the group of one. The four groups were: 1. WCC, 2. OV, 3. FU and ER/UC, and 4. PE and NP. The cost of flexibility was always minimized at the group of one. The total cost was minimized at the group of two: WCC was kept separate and the others were grouped together. The total cost reduction was 1.3% from the group of one. Conclusions This study provided an alternative method of redesigning patient scheduling groups to address the impact on both clinic flow and appointment accessibility. Balancing the two ensured feasibility with respect to the recognized issues of patient service and access to care. The robustness of the proposed method to changes in clinic conditions is also discussed. PMID:27081406

  17. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  18. Detection of content adaptive LSB matching: a game theory approach

    NASA Astrophysics Data System (ADS)

    Denemark, Tomáš; Fridrich, Jessica

    2014-02-01

    This paper analyzes the interaction between Alice and the Warden in steganography using game theory. We focus on the modern steganographic embedding paradigm based on minimizing an additive distortion function. The strategies of both players comprise the probabilistic selection channel. The Warden is granted knowledge of the payload and the embedding costs, and detects embedding using the likelihood ratio. In particular, the Warden is ignorant of the embedding probabilities chosen by Alice. Adopting a simple multivariate Gaussian model for the cover, the payoff function, in the form of the Warden's detection error, can be numerically evaluated for a mutually independent embedding operation. We demonstrate on the example of a two-pixel cover that the Nash equilibrium is different from the traditional strategy in which Alice minimizes the KL divergence between cover and stego objects under an omnipotent Warden. Practical implications of this case study include computing the per-pixel loss of the Warden's ability to detect embedding due to her ignorance of the selection channel.

  19. Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor

    NASA Technical Reports Server (NTRS)

    Szu, Harold H.

    1990-01-01

    In this decade and progressing into the 21st century, NASA will have missions including Space Station and the Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automation in remote and hazardous space environments. There are two approaches to designing these intelligent machines. One is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly, and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves for the unknown features can furthermore yield a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator and the separation of the class centers in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
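
    A sketch of such a mini-max cost on toy data, with within-class sample variance in the numerator and separation of class centers in the denominator; the exact form used in the paper may differ, and this is one common instantiation.

    ```python
    # Sketch of a mini-max cost: minimizing it simultaneously clusters each
    # class (small numerator) and segregates the classes (large
    # denominator). The 2D toy data are illustrative.
    import numpy as np

    def minimax_cost(classes):
        centers = [c.mean(axis=0) for c in classes]
        within = sum(((c - mu) ** 2).sum(axis=1).mean()
                     for c, mu in zip(classes, centers))
        between = sum(np.sum((a - b) ** 2)
                      for i, a in enumerate(centers)
                      for b in centers[i + 1:])
        return within / between

    rng = np.random.default_rng(2)
    tight = [rng.normal(m, 0.2, (50, 2)) for m in ([0, 0], [3, 3])]
    loose = [rng.normal(m, 1.5, (50, 2)) for m in ([0, 0], [3, 3])]
    print(f"tight clusters: cost = {minimax_cost(tight):.3f}")   # small
    print(f"loose clusters: cost = {minimax_cost(loose):.3f}")   # larger
    ```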

  20. Selection of optimal spectral sensitivity functions for color filter arrays.

    PubMed

    Parmar, Manu; Reeves, Stanley J

    2010-12-01

    A color image meant for human consumption can be appropriately displayed only if at least three distinct color channels are present. Typical digital cameras acquire three-color images with only one sensor. A color filter array (CFA) is placed on the sensor such that only one color is sampled at a particular spatial location. This sparsely sampled signal is then reconstructed to form a color image with information about all three colors at each location. In this paper, we show that the wavelength sensitivity functions of the CFA color filters affect both the color reproduction ability and the spatial reconstruction quality of recovered images. We present a method to select perceptually optimal color filter sensitivity functions based upon a unified spatial-chromatic sampling framework. A cost function independent of particular scenes is defined that expresses the error between a scene viewed by the human visual system and the reconstructed image that represents the scene. A constrained minimization of the cost function is used to obtain optimal values of color-filter sensitivity functions for several periodic CFAs. The sensitivity functions are shown to perform better than typical RGB and CMY color filters in terms of both the s-CIELAB ∆E error metric and a qualitative assessment.

  1. The Design-To-Cost Manifold

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1990-01-01

    Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design-to-cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, the question of whether to minimize mass usually arises, and the lack of quantification leads to arguments on both sides. This paper presents a mathematical technique which quantifies both the design-to-cost process and the mass/complexity trade. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, the application of the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to be a constant generates an (n-1)-dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold upon which all mathematical properties of a differentiable manifold may be applied. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production cost is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one-dimensional design-to-cost trade and design-for-cost spaces. Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both in the manufacturing process and of the spacecraft, is dominant in reducing cost.
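
    The manifold idea can be sketched numerically: follow the negative mass gradient projected onto the tangent space of the constant-cost level set, so mass decreases while cost stays (to first order) fixed. The cost estimating relationship below is an invented toy, not the PRICE H model.

    ```python
    # Sketch of moving along a constant-cost ("design-to-cost") manifold:
    # projected-gradient flow that reduces mass while holding c(z) ~ const.
    # The cost function is an illustrative stand-in for a real CER.
    import numpy as np

    def cost(z):      # z = [mass, complexity]; toy CER, not PRICE H
        return z[0] ** 0.7 * z[1] ** 1.8

    def mass(z):
        return z[0]

    def grad(f, z, h=1e-6):   # simple central-difference gradient
        g = np.zeros_like(z)
        for i in range(len(z)):
            e = np.zeros_like(z); e[i] = h
            g[i] = (f(z + e) - f(z - e)) / (2 * h)
        return g

    z = np.array([100.0, 5.0])          # initial (mass, complexity) design
    c0 = cost(z)
    for _ in range(200):                # flow along the level set c = c0
        gm, gc = grad(mass, z), grad(cost, z)
        step = -(gm - (gm @ gc) / (gc @ gc) * gc)  # tangent component only
        z = z + 0.05 * step
    print(f"mass {z[0]:.1f}, complexity {z[1]:.2f}, "
          f"relative cost drift {abs(cost(z) - c0) / c0:.2e}")
    ```

    On this toy surface, mass falls while complexity rises slightly to hold cost fixed, which is the one-dimensional design-to-cost trade the paper derives in closed form.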

  2. Integrated Building Energy Systems Design Considering Storage Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler, Michael; Marnay, Chris; Siddiqui, Afzal

    The addition of storage technologies such as flow batteries, conventional batteries, and heat storage can improve the economic, as well as environmental attraction of micro-generation systems (e.g., PV or fuel cells with or without CHP) and contribute to enhanced demand response. The interactions among PV, solar thermal, and storage systems can be complex, depending on the tariff structure, load profile, etc. In order to examine the impact of storage technologies on demand response and CO2 emissions, a microgrid's distributed energy resources (DER) adoption problem is formulated as a mixed-integer linear program that can pursue two strategies as its objective function. These two strategies are minimization of its annual energy costs or of its CO2 emissions. The problem is solved for a given test year at representative customer sites, e.g., nursing homes, to obtain not only the optimal investment portfolio, but also the optimal hourly operating schedules for the selected technologies. This paper focuses on analysis of storage technologies in micro-generation optimization on a building level, with example applications in New York State and California. It shows results from a two-year research project performed for the U.S. Department of Energy and ongoing work. Contrary to established expectations, our results indicate that PV and electric storage adoption compete rather than supplement each other considering the tariff structure and costs of electricity supply. The work shows that high electricity tariffs during on-peak hours are a significant driver for the adoption of electric storage technologies. To satisfy the site's objective of minimizing energy costs, the batteries have to be charged by grid power during off-peak hours instead of PV during on-peak hours. In contrast, we also show a CO2 minimization strategy where the common assumption that batteries can be charged by PV can be fulfilled at extraordinarily high energy costs for the site.

  3. Economic analysis of threatened species conservation: The case of woodland caribou and oilsands development in Alberta, Canada.

    PubMed

    Hauer, Grant; Vic Adamowicz, W L; Boutin, Stan

    2018-07-15

    Tradeoffs between cost and recovery targets for boreal caribou herds, threatened species in Alberta, Canada, are examined using a dynamic cost minimization model. Unlike most approaches used for minimizing costs of achieving threatened species targets, we incorporate opportunity costs of surface (forests) and subsurface resources (energy) as well as direct costs of conservation (habitat restoration and direct predator control), into a forward looking model of species protection. Opportunity costs of conservation over time are minimized with an explicit target date for meeting species recovery targets; defined as the number of self-sustaining caribou herds, which requires that both habitat and population targets are met by a set date. The model was run under various scenarios including three species recovery criteria, two oil and gas price regimes, and targets for the number of herds to recover from 1 to 12. The derived cost curve follows a typical pattern as costs of recovery per herd increase as the number of herds targeted for recovery increases. The results also show that the opportunity costs for direct predator control are small compared to habitat restoration and protection costs. However, direct predator control is essential for meeting caribou population targets and reducing the risk of extirpation while habitat is recovered over time. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. A cost comparison of traditional drainage and SUDS in Scotland.

    PubMed

    Duffy, A; Jefferies, C; Waddell, G; Shanks, G; Blackwood, D; Watkins, A

    2008-01-01

    The Dunfermline Eastern Expansion (DEX) is a 350 ha mixed development which commenced in 1996. Downstream water quality and flooding issues necessitated a holistic approach to drainage planning and the site has become a European showcase for the application of Sustainable Urban Drainage Systems (SUDS). However, there is minimal data available regarding the real costs of operating and maintaining SUDS to ensure they continue to perform as per their design function. This remains one of the primary barriers to the uptake and adoption of SUDS. This paper reports on what is understood to be the only study in the UK where actual costs of constructing and maintaining SUDS have been compared to an equivalent traditional drainage solution. To compare SUDS costs with traditional drainage, capital and maintenance costs of underground storage chambers of analogous storage volumes were estimated. A whole life costing methodology was then applied to data gathered. The main objective was to produce a reliable and robust cost comparison between SUDS and traditional drainage. The cost analysis is supportive of SUDS and indicates that well designed and maintained SUDS are more cost effective to construct, and cost less to maintain than traditional drainage solutions which are unable to meet the environmental requirements of current legislation. (c) IWA Publishing 2008.

  5. Cost Optimization Model for Business Applications in Virtualized Grid Environments

    NASA Astrophysics Data System (ADS)

    Strebel, Jörg

    The advent of Grid computing gives enterprises an ever increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and help in decision-making when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.

  6. Drug waste minimization as an effective strategy of cost-containment in Oncology

    PubMed Central

    2014-01-01

    Background Sustainability of cancer care is a crucial issue for health care systems worldwide, even more so during a time of economic recession. Low-cost measures are highly desirable to contain and reduce expenditures without impairing the quality of care. In this paper we aim to demonstrate the efficacy of drug waste minimization in reducing drug-related costs and its importance as a structural measure in health care management. Methods We first recorded intravenous cancer drug prescriptions and the amount of drug waste at the Oncology Department of Udine, Italy. Then we developed and applied a protocol for drug waste minimization based on per-pathology/per-drug scheduling of chemotherapies and pre-planned rounding of dosages. Results Before the protocol, drug wastage accounted for 8.3% of the Department's annual drug expenditure. Over 70% of these costs were attributable to six drugs (cetuximab, docetaxel, gemcitabine, oxaliplatin, pemetrexed and trastuzumab) that we named ‘hot drugs’. Since the protocol's introduction, we observed a 45% reduction in drug waste expenditure. This benefit was confirmed in the following years, and drug waste minimization was able to limit the impact of new high-priced drugs on the Department's expenditures. Conclusions Facing current budgetary constraints, the application of a drug waste minimization model is effective in drug cost containment and may produce durable benefits. PMID:24507545

  7. Design and cost analysis of rapid aquifer restoration systems using flow simulation and quadratic programming.

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1986-01-01

    Detailed two-dimensional flow simulation of a complex ground-water system is combined with quadratic and linear programming to evaluate design alternatives for rapid aquifer restoration. Results show how treatment and pumping costs depend dynamically on the type of treatment process, the capacity of pumping and injection wells, and the number of wells. The design for an inexpensive treatment process minimizes pumping costs, while an expensive process results in the minimization of treatment costs. Substantial reductions in pumping costs occur with increases in injection capacity or in the number of wells. Treatment costs are reduced by expansions in pumping capacity or injection capacity. The analysis identifies maximum pumping and injection capacities.-from Authors

  8. Two Fixed, Evacuated, Glass, Solar Collectors Using Nonimaging Concentration

    NASA Astrophysics Data System (ADS)

    Garrison, John D.; Winston, Roland; O'Gallagher, Joseph; Ford, Gary

    1984-01-01

    Two fixed, evacuated, glass solar thermal collectors have been designed. The incorporation of nonimaging concentration, selective absorption and vacuum insulation into their design is essential for obtaining high efficiency through low heat loss, while operating at high temperatures. Nonimaging, approximately ideal concentration with a wide acceptance angle permits solar radiation collection without tracking the sun, and ensures collection of much of the diffuse radiation. It also minimizes the area of the absorbing surface, thereby reducing the radiation heat loss. Functional integration, where different parts of these two collectors serve more than one function, is also important in achieving high efficiency, and it reduces cost.

  9. Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.

    PubMed

    Tieng, Quang M; Vegh, Viktor; Brereton, Ian M

    2009-01-01

    An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils, in which the stored energy is minimized within a constrained domain, is outlined. The current density maps obtained utilising the proposed method suggest that peak current densities occur around the perimeter of the magnet domain, where adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs with positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.

  10. A value-based medicine analysis of ranibizumab for the treatment of subfoveal neovascular macular degeneration.

    PubMed

    Brown, Melissa M; Brown, Gary C; Brown, Heidi C; Peet, Jonathan

    2008-06-01

    To assess the conferred value and average cost-utility (cost-effectiveness) of intravitreal ranibizumab used to treat occult/minimally classic subfoveal choroidal neovascularization associated with age-related macular degeneration (AMD). Value-based medicine cost-utility analysis. MARINA (Minimally Classic/Occult Trial of the Anti-Vascular Endothelial Growth Factor Antibody Ranibizumab in the Treatment of Neovascular AMD) Study patients, utilizing published primary data. Reference case, third-party insurer perspective, cost-utility analysis using 2006 United States dollars. Conferred value in the forms of (1) quality-adjusted life-years (QALYs) and (2) percent improvement in health-related quality of life. Cost-utility is expressed in terms of dollars expended per QALY gained. All outcomes are discounted at a 3% annual rate, as recommended by the Panel on Cost-effectiveness in Health and Medicine. Data are presented for the second-eye model, first-eye model, and combined model. Twenty-two intravitreal injections of 0.5 mg of ranibizumab administered over a 2-year period confer 1.039 QALYs, or a 15.8% improvement in quality of life, for the 12-year period of the second-eye model reference case of occult/minimally classic age-related subfoveal choroidal neovascularization. The reference case treatment cost is $52652, and the cost-utility for the second-eye model is $50691/QALY. The quality-of-life gain from the first-eye model is 6.4% and the cost-utility is $123887/QALY, whereas the combined model, which most closely simulates clinical practice, yields a quality-of-life gain of 10.4% and a cost-utility of $74169/QALY. By conventional standards and the most commonly used second-eye and combined models, intravitreal ranibizumab administered for occult/minimally classic subfoveal choroidal neovascularization is a cost-effective therapy. Ranibizumab treatment confers considerably greater value than other neovascular macular degeneration pharmaceutical therapies that have been studied in randomized clinical trials.
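
    As a quick arithmetic check, cost-utility is the treatment cost divided by the QALYs conferred; the small gap to the published $50691/QALY presumably reflects rounding of the figures quoted in the abstract.

        cost, qalys = 52652, 1.039           # second-eye model figures from the abstract
        print(f"${cost / qalys:,.0f}/QALY")  # ~ $50,676/QALY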

  11. On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games

    NASA Astrophysics Data System (ADS)

    de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.

    We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal of minimizing the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load-balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our study to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines, with or without restricted assignments.

  12. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite-time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first optimization algorithm designed for function evaluations of this type. Our algorithm substantially reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, a rigorous proof of convergence to the global minimum of the problem considered is established.

  13. Extension of suboptimal control theory for flow around a square cylinder

    NASA Astrophysics Data System (ADS)

    Fujita, Yosuke; Fukagata, Koji

    2017-11-01

    We extend suboptimal control theory to the control of flow around a square cylinder, which, in contrast to the circular cylinders and spheres previously studied, has no point symmetry in the impulse response from the wall. The cost functions examined are the pressure drag (J1), the friction drag (J2), the squared difference between target pressure and wall pressure (J3) and the time-averaged dissipation (J4). The control input is assumed to be continuous blowing and suction on the cylinder wall, and feedback sensors are assumed over the entire wall surface. The control law is derived so as to minimize the cost function under the constraint of the linearized Navier-Stokes equations, and the impulse response fields to be convolved with the instantaneous flow quantities are numerically obtained. The amplitude of the control input is fixed so that the maximum blowing/suction velocity is 40% of the freestream velocity. When J2 is used as the cost function, the friction drag is reduced as expected but the mean drag is found to increase. In contrast, when J1, J3, and J4 are used, the mean drag decreases by 21%, 12%, and 22%, respectively; in addition, vortex shedding is suppressed, which leads to a reduction of lift fluctuations.

  14. Sail Plan Configuration Optimization for a Modern Clipper Ship

    NASA Astrophysics Data System (ADS)

    Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz

    2002-11-01

    We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular arc cross sections. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid over a large range of wind conditions, and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results with those obtained previously from user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, maximization of the ratio of driving force to heeling force).

  15. A strategy for improved computational efficiency of the method of anchored distributions

    NASA Astrophysics Data System (ADS)

    Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram

    2013-06-01

    This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function called bundling that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability of a set of similar model parametrizations "bundle" replicating field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation we provide a tutorial for bundling in the form of a sample data set and script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.

  16. Does Minimally Invasive Spine Surgery Minimize Surgical Site Infections?

    PubMed

    Kulkarni, Arvind Gopalrao; Patel, Ravish Shammi; Dutta, Shumayou

    2016-12-01

    Retrospective review of prospectively collected data. To evaluate the incidence of surgical site infections (SSIs) in minimally invasive spine surgery (MISS) in a cohort of patients and compare with available historical data on SSI in open spinal surgery cohorts, and to evaluate additional direct costs incurred due to SSI. SSI can lead to prolonged antibiotic therapy, extended hospitalization, repeated operations, and implant removal. Small incisions and minimal dissection intrinsic to MISS may minimize the risk of postoperative infections. However, there is a dearth of literature on infections after MISS and their additional direct financial implications. All patients from January 2007 to January 2015 undergoing posterior spinal surgery with tubular retractor system and microscope in our institution were included. The procedures performed included tubular discectomies, tubular decompressions for spinal stenosis and minimal invasive transforaminal lumbar interbody fusion (TLIF). The incidence of postoperative SSI was calculated and compared to the range of cited SSI rates from published studies. Direct costs were calculated from medical billing for index cases and for patients with SSI. A total of 1,043 patients underwent 763 noninstrumented surgeries (discectomies, decompressions) and 280 instrumented (TLIF) procedures. The mean age was 52.2 years with male:female ratio of 1.08:1. Three infections were encountered with fusion surgeries (mean detection time, 7 days). All three required wound wash and debridement with one patient requiring unilateral implant removal. Additional direct cost due to infection was $2,678 per 100 MISS-TLIF. SSI increased hospital expenditure per patient 1.5-fold after instrumented MISS. Overall infection rate after MISS was 0.29%, with SSI rate of 0% in non-instrumented MISS and 1.07% with instrumented MISS. MISS can markedly reduce the SSI rate and can be an effective tool to minimize hospital costs.
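
    The quoted rates follow directly from the counts in the abstract, as this small check shows.

        total, instrumented, infections = 1043, 280, 3
        print(f"overall SSI rate: {infections / total:.2%}")              # 0.29%
        print(f"instrumented SSI rate: {infections / instrumented:.2%}")  # 1.07%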

  17. GPU-accelerated adjoint algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
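
    The taping idea can be illustrated with a toy reverse-mode differentiator: each operation records its inputs and local partial derivatives on a tape, and a reverse sweep accumulates adjoints. This is a minimal sketch of the concept only, not the authors' AAD software or its API.

        class Var:
            def __init__(self, value, tape):
                self.value, self.grad, self.tape = value, 0.0, tape

            def __add__(self, other):
                out = Var(self.value + other.value, self.tape)
                self.tape.append((out, [(self, 1.0), (other, 1.0)]))
                return out

            def __mul__(self, other):
                out = Var(self.value * other.value, self.tape)
                # local partials: d(out)/d(self) = other.value, d(out)/d(other) = self.value
                self.tape.append((out, [(self, other.value), (other, self.value)]))
                return out

        def backward(output):
            output.grad = 1.0
            for node, parents in reversed(output.tape):   # reverse sweep over the tape
                for parent, local in parents:
                    parent.grad += node.grad * local

        tape = []
        x, y = Var(3.0, tape), Var(2.0, tape)
        f = x * y + x            # f = x*y + x
        backward(f)
        print(x.grad, y.grad)    # df/dx = y + 1 = 3.0, df/dy = x = 3.0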

  18. GPU-Accelerated Adjoint Algorithmic Differentiation.

    PubMed

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.

  19. GPU-Accelerated Adjoint Algorithmic Differentiation

    PubMed Central

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2015-01-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography. PMID:26941443

  20. Influence of cost functions and optimization methods on solving the inverse problem in spatially resolved diffuse reflectance spectroscopy

    NASA Astrophysics Data System (ADS)

    Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.

    2017-03-01

    Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for characterizing pathological modifications in epithelial tissues, such as cancer, in vivo. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires considering three components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of performance in estimating OP depending on the choice made for each of these components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As forward models, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performance is evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combination of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized and the optimization method is discussed.
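
    A minimal sketch of the multistart strategy described above: repeated local least-squares fits (scipy's 'trf' method, i.e. Trust-Region-Reflective) from randomly sampled starting points, keeping the best fit. The forward model below is an invented stand-in, not the diffusion-approximation solution used in the paper.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)

        def forward(op, r):                  # stand-in reflectance model R(r; mu_a, mu_s')
            mu_a, mu_s = op
            return mu_s * np.exp(-mu_a * r) / r**2

        r = np.linspace(0.5, 3.0, 20)        # source-detector separations
        data = forward([0.1, 10.0], r)       # synthetic "measurements"

        best = None
        for _ in range(20):                  # multistart with random initial parameters
            x0 = rng.uniform([0.01, 1.0], [1.0, 30.0])
            fit = least_squares(lambda op: forward(op, r) - data, x0,
                                bounds=([0.0, 0.0], [2.0, 50.0]), method='trf')
            if best is None or fit.cost < best.cost:
                best = fit
        print(best.x)                        # recovered (mu_a, mu_s')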

  1. Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model

    NASA Astrophysics Data System (ADS)

    Tjiputra, J.; Winguth, A.; Polzin, D.

    2004-12-01

    The misfit between a numerical ocean model and observations can be reduced using data assimilation. This can be achieved by optimizing the model parameter values using an adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity, or gradient, of the cost function with respect to initial conditions, boundary conditions, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite into the marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the degree of non-linearity of the forward model. The ITE successfully recovered most of the perturbed parameters to their initial values and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilation of SeaWiFS chlorophyll-a data into the model reduced the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during the summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable, with no strong reduction in the cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. phytoplankton, zooplankton, nutrients, particulate organic carbon, and dissolved organic carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary production in the euphotic zone.

  2. Solar satellites

    NASA Astrophysics Data System (ADS)

    Poher, C.

    A reference system design, projected costs, and the functional concepts of a satellite solar power system (SSPS) for converting sunlight falling on solar panels of a satellite in GEO to a multi-GW beam which could be received by a rectenna on earth are outlined. Electricity transmission by microwaves has been demonstrated, and a reference design system for supplying 5 GW dc to earth was devised. The system will use either monocrystalline Si or concentrator GaAs solar cells for energy collection in GEO. Development is still needed to improve the lifespan of the cells. Currently, the cell performance degrades 50 percent in efficiency after 7-8 yr in space. Each SSPS satellite would weigh either 34,000 tons (Si) or 51,000 tons (GaAs), thereby requiring the fabrication of a heavy lift launch vehicle or a single-stage-to-orbit transport in order to minimize launch costs. Costs for the solar panels have been estimated at $500/kW using the GaAs technology, with transport costs for materials to GEO being $40/kg.

  3. Rapid prototyping prosthetic hand acting by a low-cost shape-memory-alloy actuator.

    PubMed

    Soriano-Heras, Enrique; Blaya-Haro, Fernando; Molino, Carlos; de Agustín Del Burgo, José María

    2018-06-01

    The purpose of this article is to develop a new concept of modular and operative prosthetic hand based on rapid prototyping and a novel shape-memory-alloy (SMA) actuator, thus minimizing the manufacturing costs. An underactuated mechanism was needed for the design of the prosthesis to use only one input source. Taking into account the state of the art, an underactuated mechanism prosthetic hand was chosen so as to implement the modifications required for including the external SMA actuator. A modular design of a new prosthesis was developed which incorporated a novel SMA actuator for the index finger movement. The primary objective of the prosthesis is achieved, obtaining a modular and functional low-cost prosthesis based on additive manufacturing executed by a novel SMA actuator. The external SMA actuator provides a modular system which allows implementing it in different systems. This paper combines rapid prototyping and a novel SMA actuator to develop a new concept of modular and operative low-cost prosthetic hand.

  4. Cost optimization of reinforced concrete cantilever retaining walls under seismic loading using a biogeography-based optimization algorithm with Levy flights

    NASA Astrophysics Data System (ADS)

    Aydogdu, Ibrahim

    2017-03-01

    In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as an objective function, which is minimized under the constraints implemented by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, of which two are used in literature studies, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms, to determine the influence of the seismic load and PGA on the optimal cost of the wall.
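
    For illustration, a Levy-flight step of the kind grafted onto BBO's mutation operator can be generated with Mantegna's algorithm; the exponent beta, the step scale and the design variables below are illustrative choices, not the article's settings.

        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, beta, rng):
            # Mantegna's algorithm for symmetric Levy-stable steps
            sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = rng.normal(0.0, sigma, dim)
            v = rng.normal(0.0, 1.0, dim)
            return u / np.abs(v) ** (1 / beta)    # heavy-tailed step sizes

        rng = np.random.default_rng(0)
        design = np.array([3.0, 0.4, 1.2])        # wall design variables (invented)
        mutated = design + 0.01 * levy_step(design.size, 1.5, rng)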

  5. From Data to Images: A Shape Based Approach for Fluorescence Tomography

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Prieto, K. E.

    2012-12-01

    Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.

  6. Implementation Plan for Worldwide Airborne Command Post (WWABNCP) Operator Computer-Based Training (PLATO): Decision Paper.

    DTIC Science & Technology

    1985-10-03

    Electrospace Systems, Inc. (ESI). ESI conducted a market search for training systems that would enhance unit level training, minimize cost-prohibitive...can be reprogrammed to simulate the UGC-129 keyboard. This keyboard is the standard keyboard used for data transmission on board the EC-135 and E-4B...with the appropriate technical order, and the functions and operation of the AN/UGC-129 (ASR) terminals used with the AN/ASC-21 AFSATCOM system. In

  7. Computing Optimal Stochastic Portfolio Execution Strategies: A Parametric Approach Using Simulations

    NASA Astrophysics Data System (ADS)

    Moazeni, Somayeh; Coleman, Thomas F.; Li, Yuying

    2010-09-01

    Computing optimal stochastic portfolio execution strategies under appropriate risk considerations presents a great computational challenge. We investigate a parametric approach for computing optimal stochastic strategies using Monte Carlo simulations. This approach reduces computational complexity by computing the coefficients of a parametric representation of a stochastic dynamic strategy based on static optimization. Using this technique, constraints can likewise be handled using appropriate penalty functions. We illustrate the proposed approach by minimizing the expected execution cost and Conditional Value-at-Risk (CVaR).
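
    A hedged sketch of the risk measure involved: given Monte Carlo samples of execution cost, CVaR at level alpha is the mean cost in the worst (1 - alpha) tail. The cost distribution below is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        costs = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # simulated execution costs

        alpha = 0.95
        var = np.quantile(costs, alpha)      # Value-at-Risk at level alpha
        cvar = costs[costs >= var].mean()    # CVaR: expected cost in the worst 5% tail
        print(costs.mean(), cvar)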

  8. The high energy astronomy observatories

    NASA Technical Reports Server (NTRS)

    Neighbors, A. K.; Doolittle, R. F.; Halpers, R. E.

    1977-01-01

    The forthcoming NASA project of orbiting High Energy Astronomy Observatories (HEAO's), designed to probe the universe by tracing celestial radiation and particles, is outlined. Solutions to engineering problems concerning the HEAO's, which are integrated yet built to function independently, are discussed, including the onboard digital processor, the mirror assembly and the thermal shield. The principle of maximal efficiency at minimal cost and the potential of the project to provide explanations of black holes, pulsars and gamma-ray bursts are also stressed. The first satellite is scheduled for launch in April 1977.

  9. Optimizing Economic Indicators in the Case of Using Two Types of State-Subsidized Chemical Fertilizers for Agricultural Production

    NASA Astrophysics Data System (ADS)

    Boldea, M.; Sala, F.

    2010-09-01

    We assume that the mathematical relation between agricultural production f(x, y) and the two types of fertilizers x and y is given by function (1). The coefficients that appear are determined using the least squares method by comparison with the experimental data. We took into consideration the following economic indicators: absolute benefit, relative benefit, profitability and cost price. These are maximized or minimized, obtaining the optimal solutions by setting the partial derivatives to zero.

  10. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fattebert, J.-L.; Richards, D.F.; Glosli, J.N.

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest-descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  11. Experimental and Theoretical Results in Output Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Leang, K.; Devasia, S.

    1998-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures, because the input-state trajectories that achieve tracking of the required output may cause excessive vibrations in the structure. We pose and solve this problem, in the context of linear systems, as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  12. Disaster warning system: Satellite feasibility and comparison with terrestrial systems. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Spoor, J. H.; Hodge, W. H.; Fluk, M. J.; Bamford, T. F.

    1974-01-01

    The Disaster Warning System (DWS) is a conceptual system which will provide the National Weather Service (NWS) with communication services in the 1980s to help minimize losses caused by natural disasters. The object of this study is a comparative analysis between a terrestrial DWS and a satellite DWS. Baseline systems satisfying the NOAA requirements were synthesized in sufficient detail so that a comparison could be made in terms of performance and cost. The cost of both baseline systems is dominated by the disaster warning and spotter reporting functions. An effort was undertaken to reduce system cost through lower-capacity alternative systems generated by modifying the baseline systems. By reducing the number of required channels and modifying the spotter reporting techniques, alternative satellite systems were synthesized. A terrestrial alternative with the coverage reduced to an estimated 95 percent of the population was considered.

  13. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172
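
    A toy 1-D analogue of the quadratic surface-interpolation energy discussed above: fit a smooth profile u to sparse data d by gradient descent on E(u) = sum_obs (u_i - d_i)^2 + lam * sum_i (u_{i+1} - u_i)^2. Grid size, observations and weights are invented, and the discontinuity-preserving (nonquadratic) case is omitted.

        import numpy as np

        n, lam, lr = 50, 5.0, 0.05
        obs = {5: 1.0, 25: -0.5, 40: 0.8}   # sparse observations (invented)
        u = np.zeros(n)

        for _ in range(2000):
            grad = np.zeros(n)
            for i, d in obs.items():
                grad[i] += 2 * (u[i] - d)   # data-fidelity term
            diff = np.diff(u)
            grad[:-1] -= 2 * lam * diff     # smoothness term, derivative w.r.t. u_i
            grad[1:] += 2 * lam * diff      # smoothness term, derivative w.r.t. u_{i+1}
            u -= lr * grad                  # u now interpolates the data smoothly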

  14. Current trends in treatment of obesity in Karachi and possibilities of cost minimization.

    PubMed

    Hussain, Mirza Izhar; Naqvi, Baqir Shyum

    2015-03-01

    Our study identifies drug usage trends in overweight and obese patients without any compelling indications in Karachi, looks for deviations of current practices from evidence-based therapeutic guidelines, and identifies not only cost minimization opportunities but also communication strategies to improve patients' awareness and compliance to achieve therapeutic goals. In the present study two survey sets were used. Randomized stratified independent surveys were conducted among hospital doctors and family physicians (general practitioners), using pretested questionnaires. The sample size was 100. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS). Opportunities for cost minimization were also analyzed. On the basis of doctors' feedback, preference is given to non-pharmacologic management of obesity. A mass media campaign was recommended to increase patients' awareness, and patient education along with strengthening family support systems was recommended for better compliance with doctors' advice. Local therapeutic guidelines for weight reduction were not found. Feedback showed that global therapeutic guidelines were followed by the doctors practicing in the community and hospitals in Karachi. However, high-price branded drugs were used instead of low-priced generic therapeutic equivalents. Patient education is required for better awareness and improved compliance. The doctors were found to prefer brand leaders instead of low-cost options. This trend increases the cost of therapy by 0.59 to 4.17 times. Therefore, there are great opportunities for cost minimization by using evidence-based, clinically effective and safe medicines.

  15. Distributed query plan generation using multiobjective genetic algorithm.

    PubMed

    Panicker, Shina; Kumar, T V Vijay

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability.
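
    The two objectives can be evaluated per candidate plan as in the sketch below; the per-site and site-to-site cost tables are invented, and NSGA-II itself (nondominated sorting, crossover, mutation) is omitted.

        def objectives(plan, lpc, cc):
            # plan: tuple of site indices, one per relation in join order
            local = sum(lpc[site] for site in plan)                # total LPC
            comm = sum(cc[a][b] for a, b in zip(plan, plan[1:]))   # total CC
            return local, comm

        def dominates(f, g):
            # Pareto dominance used by NSGA-II to rank plans
            return all(a <= b for a, b in zip(f, g)) and any(a < b for a, b in zip(f, g))

        lpc = [4, 2, 7]                          # per-site local processing cost (invented)
        cc = [[0, 3, 8], [3, 0, 2], [8, 2, 0]]   # site-to-site communication cost (invented)
        print(objectives((0, 1, 1), lpc, cc))    # -> (8, 3)
        print(dominates((6, 3), (9, 3)))         # -> True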

  16. Distributed Query Plan Generation Using Multiobjective Genetic Algorithm

    PubMed Central

    Panicker, Shina; Vijay Kumar, T. V.

    2014-01-01

    A distributed query processing strategy, which is a key performance determinant in accessing distributed databases, aims to minimize the total query processing cost. One way to achieve this is by generating efficient distributed query plans that involve fewer sites for processing a query. In the case of distributed relational databases, the number of possible query plans increases exponentially with respect to the number of relations accessed by the query and the number of sites where these relations reside. Consequently, computing optimal distributed query plans becomes a complex problem. This distributed query plan generation (DQPG) problem has already been addressed using single objective genetic algorithm, where the objective is to minimize the total query processing cost comprising the local processing cost (LPC) and the site-to-site communication cost (CC). In this paper, this DQPG problem is formulated and solved as a biobjective optimization problem with the two objectives being minimize total LPC and minimize total CC. These objectives are simultaneously optimized using a multiobjective genetic algorithm NSGA-II. Experimental comparison of the proposed NSGA-II based DQPG algorithm with the single objective genetic algorithm shows that the former performs comparatively better and converges quickly towards optimal solutions for an observed crossover and mutation probability. PMID:24963513

  17. Extended polarization in 3rd order SCC-DFTB from chemical potential equalization

    PubMed Central

    Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus

    2012-01-01

    In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve its performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve minimal-basis methods with respect to response properties, and has recently been applied to SCC-DFTB. CPE overcomes this inherent limitation of minimal-basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e. without increasing the overall computational cost significantly. In particular, the dependence of polarizability on molecular charge state was significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match results from density functional theory (DFT) methods using large basis sets. However, the first-order derivatives of molecular polarizabilities, as required e.g. to compute Raman activities, are not improved by the current CPE implementation, i.e. Raman spectra are not improved. PMID:22894819

  18. What makes a reach movement effortful? Physical effort discounting supports common minimization principles in decision making and motor control

    PubMed Central

    Ulbrich, Philipp; Gail, Alexander

    2017-01-01

    When deciding between alternative options, a rational agent chooses on the basis of the desirability of each outcome, including associated costs. As different options typically result in different actions, the effort associated with each action is an essential cost parameter. How do humans discount physical effort when deciding between movements? We used an action-selection task to characterize how subjective effort depends on the parameters of arm transport movements and controlled for potential confounding factors such as delay discounting and performance. First, by repeatedly asking subjects to choose between 2 arm movements of different amplitudes or durations, performed against different levels of force, we identified parameter combinations that subjects experienced as identical in effort (isoeffort curves). Movements with a long duration were judged more effortful than short-duration movements against the same force, while movement amplitudes did not influence effort. Biomechanics of the movements also affected effort, as movements towards the body midline were preferred to movements away from it. Second, by introducing movement repetitions, we further determined that the cost function for choosing between effortful movements had a quadratic relationship with force, while choices were made on the basis of the logarithm of these costs. Our results show that effort-based action selection during reaching cannot easily be explained by metabolic costs. Instead, force-loaded reaches, a widely occurring natural behavior, imposed an effort cost for decision making similar to cost functions in motor control. Our results thereby support the idea that motor control and economic choice are governed by partly overlapping optimization principles. PMID:28586347

  19. a Multi Objective Model for Optimization of a Green Supply Chain Network

    NASA Astrophysics Data System (ADS)

    Paksoy, Turan; Özceylan, Eren; Weber, Gerhard-Wilhelm

    2010-06-01

    This study develops a model of a closed-loop supply chain (CLSC) network which starts with the suppliers and closes the loop through decomposition centers. As in traditional network design, we consider minimizing all transportation costs and the raw material purchasing costs. To account for green impacts, different transportation choices are presented between echelons according to their CO2 emissions. The plants can purchase different raw materials according to their recyclable ratios. The focus of this paper is minimizing total CO2 emissions. We also try to encourage customers to use recyclable materials, from an environmental performance viewpoint, besides minimizing total costs. A multi-objective linear programming model is developed and illustrated with a numerical example. We close the paper with recommendations for future research.

  20. An optimal control strategies using vaccination and fogging in dengue fever transmission model

    NASA Astrophysics Data System (ADS)

    Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan

    2017-08-01

    This paper discusses a model and an optimal control problem for dengue fever transmission. The model divides the population into human and vector (mosquito) classes. The human population comprises three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler (larval), susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we designed two control variables in the model: fogging and vaccination. The objective function of this optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. We considered vaccination as a control variable because it is one of the measures being developed to reduce the spread of dengue fever. We used the Pontryagin Minimum Principle to solve the optimal control problem. Numerical simulation results are given to show the effect of the optimal control strategies in minimizing the dengue fever epidemic.

  1. The affordability of minimally invasive procedures in major lung resection: a prospective study.

    PubMed

    Gondé, Henri; Laurent, Marc; Gillibert, André; Sarsam, Omar-Matthieu; Varin, Rémi; Grimandi, Gaël; Peillon, Christophe; Baste, Jean-Marc

    2017-09-01

    Minimally invasive procedures are used for the surgical treatment of lung cancer. Two techniques are proposed: video-assisted thoracic surgery or robotic-assisted thoracic surgery. Our goal was to study the economic impact of our long-standing program for minimally invasive procedures for major lung resection. We conducted a single-centre, 1-year prospective cost study. Patients who underwent lobectomy or segmentectomy were included. Patient characteristics and perioperative outcomes were collected. Medical supply expenses based on the microcosting method and capital depreciation were estimated. Total cost was evaluated using a national French database. One hundred twelve patients were included, 57 with and 55 without robotic assistance. More segmentectomies were performed with robotic assistance. The median length of stay was 5 days for robotic-assisted and 6 days for video-assisted procedures (P = 0.13). The duration of median chest drains (4 days, P = 0.36) and of operating room time (255 min, P = 0.55) was not significantly different between the groups. The overall conversion rate to thoracotomy was 9%, significantly higher in the video-assisted group than in the robotic group (16% vs 2%, P = 0.008). No difference was observed in postoperative complications. The cost of most robotic-assisted procedures ranged from €10 000 to €12 000 (median €10 972) and that of most video-assisted procedures ranged from €8 000 to €10 000 (median €9 637) (P = 0.007); median medical supply expenses were €3 236 and €2 818, respectively (P = 0.004). The overall mean cost of minimally invasive techniques (€11 759) was significantly lower than the mean French cost of lung resection surgical procedures (€13 424) (P = 0.001). The cost at our centre of performing minimally invasive surgical procedures appeared lower than the cost nationwide. Robotic-assisted thoracic surgery demonstrated acceptable additional costs for a long-standing program. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  2. Adopting a plant-based diet minimally increased food costs in WHEL Study.

    PubMed

    Hyder, Joseph A; Thomson, Cynthia A; Natarajan, Loki; Madlensky, Lisa; Pu, Minya; Emond, Jennifer; Kealey, Sheila; Rock, Cheryl L; Flatt, Shirley W; Pierce, John P

    2009-01-01

    To assess the cost of adopting a plant-based diet. Breast cancer survivors were randomized to a dietary intervention (n=1109) or comparison (n=1145) group; baseline and 12-month data on diet and grocery costs were collected. At baseline, both groups reported similar food costs and dietary intake. At 12 months, only the intervention group had changed their diet (vegetables and fruit: 6.3 to 8.9 servings/day; fiber: 21.6 to 29.8 g/day; fat: 28.2 to 22.3% of energy). The intervention change was associated with a significant increase of $1.22/person/week (multivariate model, P=0.027). A major change to a plant-based diet was associated with a minimal increase in grocery costs.

  3. Optimal speeds for walking and running, and walking on a moving walkway.

    PubMed

    Srinivasan, Manoj

    2009-06-01

    Many aspects of steady human locomotion are thought to be constrained by a tendency to minimize the expenditure of metabolic cost. This paper has three parts related to the theme of energetic optimality: (1) a brief review of energetic optimality in legged locomotion, (2) an examination of the notion of optimal locomotion speed, and (3) an analysis of walking on moving walkways, such as those found in some airports. First, I describe two possible connotations of the term "optimal locomotion speed:" that which minimizes the total metabolic cost per unit distance and that which minimizes the net cost per unit distance (total minus resting cost). Minimizing the total cost per distance gives the maximum range speed and is a much better predictor of the speeds at which people and horses prefer to walk naturally. Minimizing the net cost per distance is equivalent to minimizing the total daily energy intake given an idealized modern lifestyle that requires one to walk a given distance every day--but it is not a good predictor of animals' walking speeds. Next, I critique the notion that there is no energy-optimal speed for running, making use of some recent experiments and a review of past literature. Finally, I consider the problem of predicting the speeds at which people walk on moving walkways--such as those found in some airports. I present two substantially different theories to make predictions. The first theory, minimizing total energy per distance, predicts that for a range of low walkway speeds, the optimal absolute speed of travel will be greater--but the speed relative to the walkway smaller--than the optimal walking speed on stationary ground. At higher walkway speeds, this theory predicts that the person will stand still. The second theory is based on the assumption that the human optimally reconciles the sensory conflict between the forward speed that the eye sees and the walking speed that the legs feel and tries to equate the best estimate of the forward speed to the naturally preferred speed. This sensory conflict theory also predicts that people would walk slower than usual relative to the walkway yet move faster than usual relative to the ground. These predictions agree qualitatively with available experimental observations, but there are quantitative differences.
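
    In symbols, writing \dot{E}(v) for the metabolic rate at speed v, the two criteria reviewed above are (a standard reading of the definitions, not notation from the paper):

        v^{*}_{\mathrm{total}} = \arg\min_{v} \frac{\dot{E}(v)}{v}, \qquad
        v^{*}_{\mathrm{net}} = \arg\min_{v} \frac{\dot{E}(v) - \dot{E}_{\mathrm{rest}}}{v}

    The first-order condition for the maximum-range speed, \dot{E}'(v^{*}) = \dot{E}(v^{*})/v^{*}, is the familiar tangent construction from the origin to the metabolic-rate curve.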

  4. 46 CFR 252.34 - Protection and indemnity insurance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... The adjustment of the wage percentage differential shall not be used for Japan, where operators incur minimal costs for deductible absorptions, rather than no costs. For Japan, the insurance related costs...

  5. Perceived affordability of health insurance and medical financial burdens five years in to Massachusetts health reform.

    PubMed

    Zallman, Leah; Nardin, Rachel; Sayah, Assaad; McCormick, Danny

    2015-10-29

    Under the Massachusetts health reform, low income residents (those with incomes below 150 % of the Federal Poverty Level [FPL]) were eligible for Medicaid and health insurance exchange-based plans with minimal cost-sharing and no premiums. Those with slightly higher incomes (150 %-300 % FPL) were eligible for exchange-based plans that required cost-sharing and premium payments. We conducted face to face surveys in four languages with a convenience sample of 976 patients seeking care at three hospital emergency departments five years after Massachusetts reform. We compared perceived affordability of insurance, financial burden, and satisfaction among low cost sharing plan recipients (recipients of Medicaid and insurance exchange-based plans with minimal cost-sharing and no premiums), high cost sharing plan recipients (recipients of exchange-based plans that required cost-sharing and premium payments) and the commercially insured. We found that despite having higher incomes, higher cost-sharing plan recipients were less satisfied with their insurance plans and perceived more difficulty affording their insurance than those with low cost-sharing plans. Higher cost-sharing plan recipients also reported more difficulty affording medical and non-medical health care as well as insurance premiums than those with commercial insurance. In contrast, patients with low cost-sharing public plans reported higher plan satisfaction and less financial concern than the commercially insured. Policy makers with responsibility for the benefit design of public insurance available under health care reforms in the U.S. should calibrate cost-sharing to income level so as to minimize difficulty affording care and financial burdens.

  6. What difference a decade? The costs of psychosis in Australia in 2000 and 2010: comparative results from the first and second Australian national surveys of psychosis.

    PubMed

    Neil, Amanda L; Carr, Vaughan J; Mihalopoulos, Cathrine; Mackinnon, Andrew; Lewin, Terry J; Morgan, Vera A

    2014-03-01

    To assess differences in costs of psychosis between the first and second Australian national surveys of psychosis and examine them in light of policy developments. Cost differences due to changes in resource use and/or real price rises were assessed by minimizing differences in recruitment and costing methodologies between the two surveys. For each survey, average annual societal costs of persons recruited through public specialized mental health services in the census month were assessed through prevalence-based, bottom-up cost-of-illness analyses. The first survey costing methodology was employed as the reference approach. Unit costs were specific to each time period (2000, 2010) and expressed in 2010 Australian dollars. There was minimal change in the average annual costs of psychosis between the surveys, although newly included resources in the second survey's analysis cost AUD$3183 per person. Among resources common to each analysis were significant increases in the average annual cost per person for ambulatory care of AUD$7380, non-government services AUD$2488 and pharmaceuticals AUD$1892, and an upward trend in supported accommodation costs. These increases were offset by over a halving of mental health inpatient costs of AUD$11,790 per person and an 84.6% (AUD$604) decrease in crisis accommodation costs. Productivity losses, the greatest component cost, changed minimally, reflecting the magnitude and constancy of reduced employment levels of individuals with psychosis across the surveys. Between 2000 and 2010 there was little change in total average annual costs of psychosis for individuals receiving treatment at public specialized mental health services. However, there was a significant redistribution of costs within and away from the health sector in line with government initiatives arising from the Second and Third National Mental Health Plans. Non-health sector costs are now a critical component of cost-of-illness analyses of mental illnesses reflecting, at least in part, a whole-of-government approach to care.

  7. Integrated least-cost lumber grade-mix solver

    Treesearch

    U. Buehlmann; R. Buck; R.E. Thomas

    2011-01-01

    Hardwood lumber costs account for up to 70 percent of the total product costs of U.S. secondary wood products producers. Reducing these costs is difficult and often requires substantial capital investments. However, lumber-purchasing costs can be minimized by buying the least-cost lumber grade-mix that satisfies a company's component needs. Price differentials...
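
    A least-cost grade-mix problem of this kind is naturally a linear program; the sketch below, with invented prices, component yields and demands, shows the shape of such a formulation (not the solver described in the article).

        from scipy.optimize import linprog

        prices = [0.9, 1.4, 2.1]          # $/board-foot for three lumber grades (invented)
        yields = [[0.2, 0.35, 0.5],       # component-i yield per board-foot of grade g
                  [0.1, 0.25, 0.4]]       # (invented)
        demand = [500, 300]               # required quantities of each component

        # linprog minimizes prices @ x subject to A_ub @ x <= b_ub;
        # signs are flipped to express the >= yield requirements.
        res = linprog(c=prices,
                      A_ub=[[-y for y in row] for row in yields],
                      b_ub=[-d for d in demand],
                      bounds=[(0, None)] * 3)
        print(res.x, res.fun)             # board-feet of each grade, minimum cost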

  8. Financial costs for families of children with Type 1 diabetes in lower-income countries.

    PubMed

    Ogle, G D; Kim, H; Middlehurst, A C; Silink, M; Jenkins, A J

    2016-06-01

    To assess the direct costs of necessary consumables for minimal care of a child with Type 1 diabetes in countries where the public health system does not regularly provide such care. Supply costs were collected between January 2013 and February 2015 from questionnaires submitted by centres requesting International Diabetes Federation Life for a Child Program support. All 20 centres in 15 countries agreed to the use of their responses. Annual costs for minimal care were estimated for: 18 × 10 ml 100 IU/ml insulin, 1/3 cost of a blood glucose meter, two blood glucose test strips/day, two syringes/week, and four HbA1c tests/year. Costs were expressed in US dollars, and as % of gross national income (purchasing power parity) per capita. The ranges (median) for the minimum supply costs through the private system were: insulin 10 ml 100 IU/ml equivalent vial: $5.10-$25 ($8.00); blood glucose meter: $15-$121 ($33.33); test strip: $0.15-$1.20 ($0.50); syringe: $0.10-$0.56 ($0.20); and HbA1c : $4.90-$20 ($9.75). Annual costs ranged from $255 (Pakistan) to $1,185 (Burkina Faso), with a median of $553. Annual % gross national income costs were 12-370% (median 56%). For the lowest 20% income earners the annual cost ranged 20-1535% (median 153%). St Lucia and Mongolia were the only countries whose governments consistently provided insulin. No government provided meters and strips, which were the most expensive supplies (62% of total cost). In less-resourced countries, even minimal care is beyond many families' means. In addition, families face additional costs such as consultations, travel and indirect costs. Action to prevent diabetes-related death and morbidity is needed. © 2015 Diabetes UK.
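
    Rebuilding the annual cost from the quoted median unit prices illustrates the arithmetic; the total does not exactly match the reported median annual cost of $553, since medians are not additive across centres.

        insulin = 18 * 8.00       # 18 vials/year at the median vial price
        meter = 33.33 / 3         # one-third of a meter per year
        strips = 2 * 365 * 0.50   # two test strips per day
        syringes = 2 * 52 * 0.20  # two syringes per week
        hba1c = 4 * 9.75          # four HbA1c tests per year
        print(insulin + meter + strips + syringes + hba1c)  # ~ $580/year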

  9. Structural Tailoring of Advanced Turboprops (STAT). Theoretical manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.

    1992-01-01

    This manual describes the theories in the Structural Tailoring of Advanced Turboprops (STAT) computer program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analyses include an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution (1-p) forced response life prediction capability. The STAT constraints include blade stresses, blade resonances, flutter, tip displacements, and a 1-P forced response life fraction. The STAT variables include all blade internal and external geometry parameters needed to define a composite material blade. The STAT objective function is dependent upon a blade baseline definition which the user supplies to describe a current blade design for cost optimization or for the tailoring of an aeroelastic scale model.

  10. A lexicographic weighted Tchebycheff approach for multi-constrained multi-objective optimization of the surface grinding process

    NASA Astrophysics Data System (ADS)

    Khalilpourazari, Soheyl; Khalilpourazary, Saman

    2017-05-01

    In this article, a multi-objective mathematical model is developed to minimize total time and cost while maximizing the production rate and surface finish quality in the grinding process. The model aims to determine optimal values of the decision variables considering process constraints. A lexicographic weighted Tchebycheff approach is developed to obtain efficient Pareto-optimal solutions of the problem in both rough and finished conditions. Utilizing a polyhedral branch-and-cut algorithm, the lexicographic weighted Tchebycheff model of the proposed multi-objective model is solved using GAMS software. The Pareto-optimal solutions provide a proper trade-off between the conflicting objective functions, which helps the decision maker to select the best values for the decision variables. Sensitivity analyses are performed to determine the effect of changes in the grinding parameters (grain size, grinding ratio, feed rate, labour cost per hour, length of workpiece, wheel diameter and downfeed) on the value of each objective function.
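
    For readers unfamiliar with the scalarization: a weighted Tchebycheff step minimizes the largest weighted deviation from the ideal point, and sweeping the weights traces the Pareto front. The sketch below uses the common augmented variant on two toy objectives; the paper's lexicographic variant and its GAMS/branch-and-cut machinery are not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        # Two conflicting toy objectives standing in for grinding time and cost.
        f1 = lambda x: (x[0] - 1) ** 2 + x[1] ** 2      # minimized at (1, 0)
        f2 = lambda x: x[0] ** 2 + (x[1] - 1) ** 2      # minimized at (0, 1)
        z_star = np.array([0.0, 0.0])                   # ideal point
        rho = 1e-4                                      # small augmentation weight

        def tchebycheff(x, w):
            d = np.array([f1(x), f2(x)]) - z_star       # deviations from the ideal point
            return np.max(w * d) + rho * np.sum(w * d)  # augmented weighted Tchebycheff

        for w1 in (0.2, 0.5, 0.8):                      # weight sweep -> Pareto points
            w = np.array([w1, 1.0 - w1])
            res = minimize(tchebycheff, x0=[0.5, 0.5], args=(w,), method="Nelder-Mead")
            print(w, np.round(res.x, 3), round(f1(res.x), 3), round(f2(res.x), 3))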

  11. Structural Tailoring of Advanced Turboprops (STAT). Theoretical manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.

    1992-10-01

    This manual describes the theories in the Structural Tailoring of Advanced Turboprops (STAT) computer program, which was developed to perform numerical optimizations on highly swept propfan blades. The optimization procedure seeks to minimize an objective function, defined as either direct operating cost or aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analyses include an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution (1-p) forced response life prediction capability. The STAT constraints include blade stresses, blade resonances, flutter, tip displacements, and a 1-P forced response life fraction. The STAT variables include all blade internal and external geometry parameters needed to define a composite material blade. The STAT objective function is dependent upon a blade baseline definition which the user supplies to describe a current blade design for cost optimization or for the tailoring of an aeroelastic scale model.

  12. A Framework to Describe, Analyze and Generate Interactive Motor Behaviors

    PubMed Central

    Jarrassé, Nathanaël; Charalambous, Themistoklis; Burdet, Etienne

    2012-01-01

    While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks. PMID:23226231

  13. Quadratic Programming for Allocating Control Effort

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2005-01-01

    A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
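
    The essence of the allocation problem (trading control residual against control effort under actuator limits) fits in a few lines. A sketch, not the C program described above: it poses the allocation as a bound-constrained least-squares problem, one standard quadratic-programming form; the effectiveness matrix and limits are invented.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(0)
        B = rng.normal(size=(3, 5))     # effectiveness of 5 redundant actuators on 3 axes (assumed)
        c = np.array([1.0, -0.5, 0.2])  # commanded torques
        lam = 0.1                       # weight on control effort vs. residual

        # min ||B u - c||^2 + lam * ||u||^2, subject to |u_i| <= 1,
        # stacked as a single linear least-squares problem with bounds.
        A = np.vstack([B, np.sqrt(lam) * np.eye(5)])
        b = np.concatenate([c, np.zeros(5)])
        res = lsq_linear(A, b, bounds=(-1.0, 1.0))
        print("allocation:", np.round(res.x, 3), "residual:", np.round(B @ res.x - c, 4))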

  14. A framework to describe, analyze and generate interactive motor behaviors.

    PubMed

    Jarrassé, Nathanaël; Charalambous, Themistoklis; Burdet, Etienne

    2012-01-01

    While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks.

  15. Selected list of books and journals for the small medical library.

    PubMed Central

    Brandon, A N; Hill, D R

    1997-01-01

    The introduction to this revised list (seventeenth version) of 610 books and 141 journals addresses the origin, three decades ago, of the "Selected List of Books and Journals for the Small Medical Library," and the accomplishments of the late Alfred N. Brandon in helping health sciences librarians, and especially hospital librarians, to envision what collection development and a library collection are all about. This list is intended as a selection guide for the small or medium-size library in a hospital or similar facility. More realistically, it can function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals by an alphabetical title listing. Due to continuing requests from librarians, a "minimal core" book collection consisting of 78 titles has been pulled out from the 200 asterisked (*) initial-purchase books and marked with daggers (†). To purchase the entire collection of books and to pay for 1997 journal subscriptions would require $101,700. The cost of only the asterisked items, books and journals, totals $43,100. The "minimal core" book collection costs $12,600. PMID:9160148

  16. Selected list of books and journals for the small medical library.

    PubMed

    Brandon, A N; Hill, D R

    1997-04-01

    The introduction to this revised list (seventeenth version) of 610 books and 141 journals addresses the origin, three decades ago, of the "Selected List of Books and Journals for the Small Medical Library," and the accomplishments of the late Alfred N. Brandon in helping health sciences librarians, and especially hospital librarians, to envision what collection development and a library collection are all about. This list is intended as a selection guide for the small or medium-size library in a hospital or similar facility. More realistically, it can function as a core collection for a library consortium. Books and journals are categorized by subject; the book list is followed by an author/editor index, and the subject list of journals by an alphabetical title listing. Due to continuing requests from librarians, a "minimal core" book collection consisting of 78 titles has been pulled out from the 200 asterisked (*) initial-purchase books and marked with daggers (†). To purchase the entire collection of books and to pay for 1997 journal subscriptions would require $101,700. The cost of only the asterisked items, books and journals, totals $43,100. The "minimal core" book collection costs $12,600.

  17. Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm

    NASA Astrophysics Data System (ADS)

    Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya

    2013-03-01

    The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with added cost benefit. The present work performs a multi-objective optimization on two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function has been formulated based on the artificial neural network input-output relation between cotton fibre properties and yarn strength. The second objective function is formulated with the well known regression equation of the spinning consistency index. These two objectives conflict, i.e. no single combination of cotton fibre parameters exists which produces maximum yarn strength and minimum cotton fibre quality simultaneously. Therefore, the problem has several optimal solutions, from which a trade-off is needed depending upon the requirements of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm based on Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low costs.
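
    The core of NSGA-II that the abstract leans on is non-dominated sorting: keep the solutions that no other solution beats on every objective. A minimal dominance filter is sketched below; full NSGA-II adds successive fronts, crowding distance, and genetic operators, and the objective values here are invented.

        import numpy as np

        def pareto_front(F):
            """Indices of non-dominated rows of F (all objectives to be minimized)."""
            keep = np.ones(len(F), dtype=bool)
            for i in range(len(F)):
                # some j dominates i if it is <= on every objective and < on at least one
                dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                if dominates_i.any():
                    keep[i] = False
            return np.where(keep)[0]

        # obj 1: negated yarn strength (maximize -> negate); obj 2: fibre-quality index
        F = np.array([[-20.0, 60.0], [-18.0, 50.0], [-22.0, 75.0], [-19.0, 70.0]])
        print(pareto_front(F))   # -> [0 1 2]; the last point is dominated by the first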

  18. Comparison of minimally invasive parathyroidectomy under local anaesthesia and minimally invasive video-assisted parathyroidectomy for primary hyperparathyroidism: a cost analysis.

    PubMed

    Melfa, G I; Raspanti, C; Attard, M; Cocorullo, G; Attard, A; Mazzola, S; Salamone, G; Gulotta, G; Scerrino, G

    2016-01-01

    Primary hyperparathyroidism (PHPT) originates from a solitary adenoma in 70-95% of cases. Moreover, advances in methods for localizing an abnormal parathyroid gland have made minimally invasive techniques more prominent. This study presents a micro-cost analysis of two parathyroidectomy techniques. 72 consecutive patients who underwent minimally invasive parathyroidectomy for PHPT, either video-assisted (MIVAP, group A, 52 patients) or "open" under local anaesthesia (OMIP, group B, 20 patients), were reviewed. Operating room, consumable, anaesthesia and maintenance costs, equipment depreciation, and surgeons'/anaesthesiologists' fees were evaluated. Patient satisfaction and the rate of conversion to conventional parathyroidectomy were also investigated. Student's t test, the Kolmogorov-Smirnov test, and odds ratios were used for statistical analysis. 1 patient of group A and 2 of group B were excluded from the cost analysis because of conversion to the conventional technique. For the remaining patients, the overall average costs were: operating room, 1186.69 € for the MIVAP group (51 patients) and 836.11 € for the OMIP group (p<0.001); surgical team, 122.93 € (group A) and 90.02 € (group B) (p<0.001); other operative costs, 1388.32 € (group A) and 928.23 € (group B) (p<0.001). Patient satisfaction was very strongly in favour of group B (odds ratio 20.5, with a 95% confidence interval). MIVAP is more expensive than "open" parathyroidectomy under local anaesthesia, due to the costs of general anaesthesia and the longer operative time. Moreover, patients generally prefer local anaesthesia. Nevertheless, the rate of conversion to conventional parathyroidectomy was notable in the local anaesthesia group compared with MIVAP, since the latter allows a four-gland exploration.

  19. The Conundrum of Functional Brain Networks: Small-World Efficiency or Fractal Modularity

    PubMed Central

    Gallos, Lazaros K.; Sigman, Mariano; Makse, Hernán A.

    2012-01-01

    The human brain has been studied at multiple scales, from neurons, circuits, areas with well-defined anatomical and functional boundaries, to large-scale functional networks which mediate coherent cognition. In a recent work, we addressed the problem of the hierarchical organization in the brain through network analysis. Our analysis identified functional brain modules of fractal structure that were inter-connected in a small-world topology. Here, we provide more details on the use of network science tools to elaborate on this behavior. We indicate the importance of using percolation theory to highlight the modular character of the functional brain network. These modules present a fractal, self-similar topology, identified through fractal network methods. When we lower the threshold of correlations to include weaker ties, the network as a whole assumes a small-world character. These weak ties are organized precisely as predicted by theory maximizing information transfer with minimal wiring costs. PMID:22586406

  20. Solvability of some partial functional integrodifferential equations with finite delay and optimal controls in Banach spaces.

    PubMed

    Ezzinbi, Khalil; Ndambomve, Patrice

    2016-01-01

    In this work, we consider the control system governed by some partial functional integrodifferential equations with finite delay in Banach spaces. We assume that the undelayed part admits a resolvent operator in the sense of Grimmer. Firstly, some suitable conditions are established to guarantee the existence and uniqueness of mild solutions for a broad class of partial functional integrodifferential infinite dimensional control systems. Secondly, it is proved that, under generally mild conditions on the cost functional, the associated Lagrange problem has an optimal solution, and that for each optimal solution there is a minimizing sequence of the problem that converges to the optimal solution with respect to the trajectory, the control, and the functional in appropriate topologies. Our results extend and complement many other important results in the literature. Finally, a concrete example of application is given to illustrate the effectiveness of our main results.

  1. Autonomous Guidance Strategy for Spacecraft Formations and Reconfiguration Maneuvers

    NASA Astrophysics Data System (ADS)

    Wahl, Theodore P.

    A guidance strategy for autonomous spacecraft formation reconfiguration maneuvers is presented. The guidance strategy is presented as an algorithm that solves the linked assignment and delivery problems. The assignment problem is the task of assigning the member spacecraft of the formation to their new positions in the desired formation geometry. The guidance algorithm uses an auction process (also called an "auction algorithm"), presented in the dissertation, to solve the assignment problem. The auction uses the estimated maneuver and time of flight costs between the spacecraft and targets to create assignments which minimize a specific "expense" function for the formation. The delivery problem is the task of delivering the spacecraft to their assigned positions, and it is addressed through one of two guidance schemes described in this work. The first is a delivery scheme based on artificial potential function (APF) guidance. APF guidance uses the relative distances between the spacecraft, targets, and any obstacles to design maneuvers based on gradients of potential fields. The second delivery scheme is based on model predictive control (MPC); this method uses a model of the system dynamics to plan a series of maneuvers designed to minimize a unique cost function. The guidance algorithm uses an analytic linearized approximation of the relative orbital dynamics, the Yamanaka-Ankersen state transition matrix, in the auction process and in both delivery methods. The proposed guidance strategy is successful, in simulations, in autonomously assigning the members of the formation to new positions and in delivering the spacecraft to these new positions safely using both delivery methods. This guidance algorithm can serve as the basis for future autonomous guidance strategies for spacecraft formation missions.
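
    The assignment half of the problem is the classical linear assignment problem. The dissertation solves it with a distributed auction; a centralized stand-in with the same inputs and outputs (cost matrix in, spacecraft-to-slot pairing out) can be sketched with SciPy's Hungarian-method solver. The costs below are invented.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        # expense[i, j]: estimated maneuver-plus-time-of-flight cost of sending
        # spacecraft i to formation slot j (hypothetical values)
        expense = np.array([[4.0, 2.0, 8.0],
                            [4.0, 3.0, 7.0],
                            [3.0, 1.0, 6.0]])
        ships, slots = linear_sum_assignment(expense)   # minimizes total formation expense
        print(list(zip(ships, slots)), "total:", expense[ships, slots].sum())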

  2. Reducing fatigue damage for ships in transit through structured decision making

    USGS Publications Warehouse

    Nichols, J.M.; Fackler, P.L.; Pacifici, K.; Murphy, K.D.; Nichols, J.D.

    2014-01-01

    Research in structural monitoring has focused primarily on drawing inference about the health of a structure from the structure’s response to ambient or applied excitation. Knowledge of the current state can then be used to predict structural integrity at a future time and, in principle, allows one to take action to improve safety, minimize ownership costs, and/or increase the operating envelope. While much time and effort has been devoted toward data collection and system identification, research to-date has largely avoided the question of how to choose an optimal maintenance plan. This work describes a structured decision making (SDM) process for taking available information (loading data, model output, etc.) and producing a plan of action for maintaining the structure. SDM allows the practitioner to specify his/her objectives and then solves for the decision that is optimal in the sense that it maximizes those objectives. To demonstrate, we consider the problem of a Naval vessel transiting a fixed distance in varying sea-state conditions. The physics of this problem are such that minimizing transit time increases the probability of fatigue failure in the structural supports. It is shown how SDM produces the optimal trip plan in the sense that it minimizes both transit time and probability of failure in the manner of our choosing (i.e., through a user-defined cost function). The example illustrates the benefit of SDM over heuristic approaches to maintaining the vessel.
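
    The shape of the trade-off is easy to see in a toy version: faster transit saves time but raises fatigue-failure risk, and a user-defined cost weighs the two. Everything below (the failure model, numbers, and weights) is illustrative and is not the paper's model.

        import numpy as np

        distance = 500.0                           # transit distance (nautical miles)
        v = np.linspace(8.0, 20.0, 200)            # candidate speeds (knots)
        t = distance / v                           # transit time (hours)
        p_fail = 1.0 - np.exp(-1e-6 * v**3 * t)    # assumed fatigue-failure probability
        w_time, w_fail = 1.0, 500.0                # decision maker's weights
        cost = w_time * t + w_fail * p_fail        # user-defined objective
        print(f"optimal speed: {v[np.argmin(cost)]:.1f} kn")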

  3. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  4. Upstream solutions to coral reef conservation: The payoffs of smart and cooperative decision-making.

    PubMed

    Oleson, Kirsten L L; Falinski, Kim A; Lecky, Joey; Rowe, Clara; Kappel, Carrie V; Selkoe, Kimberly A; White, Crow

    2017-04-15

    Land-based source pollutants (LBSP) actively threaten coral reef ecosystems globally. To achieve the greatest conservation outcome at the lowest cost, managers could benefit from appropriate tools that evaluate the benefits (in terms of LBSP reduction) and costs of implementing alternative land management strategies. Here we use a spatially explicit predictive model (InVEST-SDR) that quantifies change in sediment reaching the coast for evaluating the costs and benefits of alternative threat-abatement scenarios. We specifically use the model to examine trade-offs among possible agricultural road repair management actions (water bars to divert runoff and gravel to protect the road surface) across the landscape in West Maui, Hawaii, USA. We investigated changes in sediment delivery to coasts and costs incurred from management decision-making that is (1) cooperative or independent among landowners, and focused on (2) minimizing costs, reducing sediment, or both. The results illuminate which management scenarios most effectively minimize sediment while also minimizing the cost of mitigation efforts. We find targeting specific "hotspots" within all individual parcels is more cost-effective than targeting all road segments. The best outcomes are achieved when landowners cooperate and target cost-effective road repairs; however, a cooperative strategy can be counter-productive in some instances when cost-effectiveness is ignored. Simple models, such as the one developed here, have the potential to help managers make better choices about how to use limited resources. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  5. A Boltzmann machine for the organization of intelligent machines

    NASA Technical Reports Server (NTRS)

    Moed, Michael C.; Saridis, George N.

    1990-01-01

    A three-tier structure consisting of organization, coordination, and execution levels forms the architecture of an intelligent machine, following the principle of increasing precision with decreasing intelligence from the theory of hierarchically intelligent control. This system has been formulated as a probabilistic model, where uncertainty and imprecision can be expressed in terms of entropies. The optimal strategy for decision planning and task execution can be found by minimizing the total entropy in the system. The focus is on the design of the organization level as a Boltzmann machine. Since this level is responsible for planning the actions of the machine, the Boltzmann machine is reformulated to use entropy as the cost function to be minimized. Simulated annealing, expanding subinterval random search, and the genetic algorithm are presented as search techniques to efficiently find the desired action sequence and are illustrated with numerical examples.
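
    Of the three search techniques named, simulated annealing is the most compact to sketch: accept uphill moves with Boltzmann probability exp(-delta/T) while the temperature T cools. The annealer below is generic; the bit-string "plan" and its cost are toy stand-ins for the entropy objective, not the paper's formulation.

        import math, random

        def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
            x, fx, t = x0, cost(x0), t0
            best, fbest = x, fx
            for _ in range(steps):
                y = neighbor(x)
                fy = cost(y)
                # always accept improvements; accept worse moves with prob exp(-delta/t)
                if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                t *= cooling
            return best, fbest

        random.seed(1)
        cost = lambda s: sum(s) + 3 * sum(a != b for a, b in zip(s, s[1:]))
        def flip_one(s):
            i = random.randrange(len(s))
            return s[:i] + (1 - s[i],) + s[i + 1:]
        x0 = tuple(random.randint(0, 1) for _ in range(12))
        print(anneal(cost, flip_one, x0))   # the all-zeros plan has cost 0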

  6. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
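
    For reference, the iteration under discussion, proximal gradient steps plus momentum extrapolation, is short. Below is a sketch of standard FISTA for the l1-regularized least-squares problem 0.5*||Ax - b||^2 + lam*||x||_1; the paper's proposed variant with re-optimized step coefficients is not reproduced here.

        import numpy as np

        def fista(A, b, lam, iters=200):
            L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
            soft = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s, 0.0)
            x = np.zeros(A.shape[1])
            y, t = x.copy(), 1.0
            for _ in range(iters):
                x_new = soft(y - A.T @ (A @ y - b) / L, lam / L)   # proximal gradient step
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                y = x_new + ((t - 1) / t_new) * (x_new - x)        # momentum extrapolation
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(40, 100))
        x_true = np.zeros(100); x_true[:5] = 1.0
        print(np.round(fista(A, A @ x_true, lam=0.1)[:8], 2))   # approximately recovers the sparse head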

  7. On the Impact of Local Taxes in a Set Cover Game

    NASA Astrophysics Data System (ADS)

    Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme

    Given a collection C of weighted subsets of a ground set E, the set cover problem is to find a minimum weight subset of C which covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C where it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function having the following features: it can be implemented in a distributed manner, existence of an equilibrium is guaranteed and the social cost for these equilibria is minimized.
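
    For context, the underlying centralized problem admits the classical greedy approximation: repeatedly pick the set with the lowest weight per newly covered element. The paper's contribution is the distributed tax/game design, not this baseline; the instance below is invented.

        def greedy_set_cover(universe, sets, weight):
            uncovered, chosen = set(universe), []
            while uncovered:
                # lowest weight per newly covered element
                best = min((s for s in sets if uncovered & sets[s]),
                           key=lambda s: weight[s] / len(uncovered & sets[s]))
                chosen.append(best)
                uncovered -= sets[best]
            return chosen

        universe = {1, 2, 3, 4, 5}
        sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}, "D": {1, 5}}
        weight = {"A": 3.0, "B": 1.0, "C": 2.0, "D": 2.5}
        print(greedy_set_cover(universe, sets, weight))   # greedy, not necessarily optimal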

  8. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE PAGES

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; ...

    2017-01-20

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  9. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  10. Unbalanced and Minimal Point Equivalent Estimation Second-Order Split-Plot Designs

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey

    2007-01-01

    Restricting the randomization of hard-to-change factors in industrial experiments is often performed by employing a split-plot design structure. From an economic perspective, these designs minimize the experimental cost by reducing the number of resets of the hard-to-change factors. In this paper, unbalanced designs are considered for cases where the subplots are relatively expensive and the experimental apparatus accommodates an unequal number of runs per whole-plot. We provide construction methods for unbalanced second-order split-plot designs that possess the equivalence estimation optimality property, providing best linear unbiased estimates of the parameters, independent of the variance components. Unbalanced versions of the central composite and Box-Behnken designs are developed. For cases where the subplot cost approaches the whole-plot cost, minimal point designs are proposed and illustrated with a split-plot Notz design.

  11. Cost reduction in space operations - Structuring a planetary program to minimize the annual funding requirement as opposed to minimizing the program runout cost

    NASA Technical Reports Server (NTRS)

    Herman, D. H.; Niehoff, J. C.; Spadoni, D. J.

    1980-01-01

    An approach is proposed for the structuring of a planetary mission set wherein the peak annual funding is minimized to meet the annual budget restraint. One aspect of the approach is to have a transportation capability that can launch a mission in any planetary opportunity; such capability can be provided by solar electric propulsion. Another cost reduction technique is to structure a mission test in a time sequenced fashion that could utilize essentially the same spacecraft for the implementation of several missions. A third technique would be to fulfill a scientific objective in several sequential missions rather than attempt to accomplish all of the objectives with one mission. The application of the approach is illustrated by an example involving the Solar Orbiter Dual Probe mission.

  12. Ambient intelligence for monitoring and research in clinical neurophysiology and medicine: the MIMERICA* project and prototype.

    PubMed

    Pignolo, L; Riganello, F; Dolce, G; Sannita, W G

    2013-04-01

    Ambient Intelligence (AmI) provides extended but unobtrusive sensing and computing devices and ubiquitous networking for human/environment interaction. It is a new paradigm in information technology compliant with the international Integrating the Healthcare Enterprise (IHE) and eHealth HL7 technological standards in the functional integration of biomedical domotics and informatics in hospital and home care. AmI allows real-time automatic recording of biological/medical information and environmental data. It is extensively applicable to patient monitoring, medicine and neuroscience research, which require large biomedical data sets; for example, in the study of spontaneous or condition-dependent variability or chronobiology. In this respect, AmI is equivalent to a traditional laboratory for data collection and processing, with minimal dedicated equipment, staff, and costs; it benefits from the integration of artificial intelligence technology with traditional/innovative sensors to monitor clinical or functional parameters. A prototype AmI platform (MIMERICA*) has been implemented and is operated in a semi-intensive unit for the vegetative and minimally conscious states, to investigate the spontaneous or environment-related fluctuations of physiological parameters in these conditions.

  13. A LiDAR data-based camera self-calibration method

    NASA Astrophysics Data System (ADS)

    Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun

    2018-07-01

    To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters are estimated using particle swarm optimization (PSO) to find the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts: extraction and fine matching of interest points in the images, establishment of a cost function based on Kruppa equations, and optimization by PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method combining the maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To prevent the optimization from being trapped in a local optimum, LiDAR data were used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
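
    A bare-bones global-best PSO makes the optimization step concrete. The cost here is a toy quadratic over the four intrinsics (fx, fy, u0, v0) with an assumed optimum; in the paper the cost comes from the Kruppa equations via the fundamental matrix, and LiDAR bounds the initialization box.

        import numpy as np

        def pso(cost, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, size=(n, lo.size))     # swarm inside the bounds
            v = np.zeros_like(x)
            pbest, pbest_f = x.copy(), np.apply_along_axis(cost, 1, x)
            for _ in range(iters):
                g = pbest[np.argmin(pbest_f)]              # global best so far
                r1, r2 = rng.random((2, n, lo.size))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(cost, 1, x)
                improved = f < pbest_f
                pbest[improved], pbest_f[improved] = x[improved], f[improved]
            return pbest[np.argmin(pbest_f)], pbest_f.min()

        target = np.array([800.0, 800.0, 320.0, 240.0])    # assumed true intrinsics
        cost = lambda p: float(np.sum((p - target) ** 2))
        lo = np.array([500.0, 500.0, 200.0, 150.0])        # LiDAR-informed bounds (assumed)
        hi = np.array([1200.0, 1200.0, 450.0, 350.0])
        print(pso(cost, lo, hi))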

  14. Similar Gender Dimorphism in the Costs of Reproduction across the Geographic Range of Fraxinus ornus

    PubMed Central

    Verdú, Miguel; Spanos, Kostas; čaňová, Ingrid; Slobodník, Branko; Paule, Ladislav

    2007-01-01

    Background and Aims The reproductive costs for individuals with the female function have been hypothesized to be greater than for those with the male function because the allocation unit per female flower is very high due to the necessity to nurture the embryos until seed dispersal occurs, while the male reproductive allocation per flower is lower because it finishes once pollen is shed. Consequently, males may invest more resources in growth than females. This prediction was tested across a wide geographical range in a tree with a dimorphic breeding system (Fraxinus ornus) consisting of males and hermaphrodites functioning as females. The contrasting ecological conditions found across the geographical range allowed the evaluation of the hypothesis that the reproductive costs of sexual dimorphism varies with environmental stressors. Methods By using random-effects meta-analysis, the differences in the reproductive and vegetative investment of male and hermaphrodite trees of F. ornus were analysed in 10 populations from the northern (Slovakia), south-eastern (Greece) and south-western (Spain) limits of its European distribution. The variation in gender-dimorphism with environmental stress was analysed by running a meta-regression between these effect sizes and the two environmental stress indicators: one related to temperature (the frost-free period) and another related to water availability (moisture deficit). Key Results Most of the effect sizes showed that males produced more flowers and grew more quickly than hermaphrodites. Gender differences in reproduction and growth were not minimized or maximized under adverse climatic conditions such as short frost-free periods or severe aridity. Conclusions The lower costs of reproduction for F. ornus males allow them to grow more quickly than hermaphrodites, although such differences in sex-specific reproductive costs are not magnified under stressful conditions. PMID:17098751

  15. Use of Linear Programming to Develop Cost-Minimized Nutritionally Adequate Health Promoting Food Baskets.

    PubMed

    Parlesak, Alexandr; Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen

    2016-01-01

    Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized nutritionally adequate health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at lowest cost. Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased through limiting the relative share of individual foods. The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while cultural acceptability (CAN) tripled it. Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable.
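
    The optimization core is Stigler-style: minimize basket price subject to nutrient floors. A minimal sketch with SciPy's linprog; the three foods, prices, and nutrient contents are invented placeholders, not the 312-food Copenhagen data.

        import numpy as np
        from scipy.optimize import linprog

        foods = ["oats", "milk", "carrots"]
        price = np.array([0.30, 0.11, 0.20])        # DKK per 100 g (assumed)
        # rows: energy (kcal), protein (g), vitamin A (ug) per 100 g (assumed)
        N = np.array([[389.0, 64.0, 41.0],
                      [16.9, 3.4, 0.9],
                      [0.0, 46.0, 835.0]])
        need = np.array([2000.0, 60.0, 800.0])      # daily floors (assumed)

        # linprog minimizes price @ x subject to A_ub @ x <= b_ub; negate the >= rows
        res = linprog(c=price, A_ub=-N, b_ub=-need, bounds=(0, None))
        print(dict(zip(foods, np.round(res.x, 1))), "cost:", round(res.fun, 2))

    Acceptability and variety constraints (the CA/DG baskets, caps on any single food's share) enter as extra linear rows in the same program.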

  16. Assessing efficiency of spatial sampling using combined coverage analysis in geographical and feature spaces

    NASA Astrophysics Data System (ADS)

    Hengl, Tomislav

    2015-04-01

    Efficiency of spatial sampling largely determines success of model building. This is especially important for geostatistical mapping, where an initial sampling plan should provide a good representation or coverage of both geographical space (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which were produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling, e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with accessibility analysis (costs of a field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas, e.g. areas for which no previous spatial prediction model exists. The presentation includes data processing demos with the standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.

  17. Optimal strategies for the surveillance and control of forest pathogens: A case study with oak wilt

    Treesearch

    Tetsuya Horie; Robert G. Haight; Frances R. Homans; Robert C. Venette

    2013-01-01

    Cost-effective strategies are needed to find and remove diseased trees in forests damaged by pathogens. We develop a model of cost-minimizing surveillance and control of forest pathogens across multiple sites where there is uncertainty about the extent of the infestation in each site and when the goal is to minimize the expected number of new infections. We allow for a...

  18. L∞ Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L∞ calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L∞ problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  19. Navy Program Manager’s Guide, 1985 Edition

    DTIC Science & Technology

    1985-01-01

    [Fragmentary table of contents] Relationship of Development Cost in System Life-Cycle Cost (LCC); Realistic Costing and Budgeting; ...Review (PROR); First-Article Configuration Inspection (FACI); Cost Management: Life-Cycle Costing (LCC). Text fragments: "...innovation and minimize costs"; "Consideration of life-cycle cost (LCC) such that affordability is put on an equal basis with system performance, schedule".

  20. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE PAGES

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...

    2017-08-18

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters, and high-volume, continuous reservoirs such as deep saline formations to function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is to ever achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  1. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters, and high-volume, continuous reservoirs such as deep saline formations to function as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is to ever achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool development is the incorporation of stacked geological reservoir systems with explicit consideration of processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  2. The environmental cost of subsistence: Optimizing diets to minimize footprints.

    PubMed

    Gephart, Jessica A; Davis, Kyle F; Emery, Kyle A; Leach, Allison M; Galloway, James N; Pace, Michael L

    2016-05-15

    The question of how to minimize monetary cost while meeting basic nutrient requirements (a subsistence diet) was posed by George Stigler in 1945. The problem, known as Stigler's diet problem, was famously solved using the simplex algorithm. Today, we are not only concerned with the monetary cost of food, but also the environmental cost. Efforts to quantify environmental impacts led to the development of footprint (FP) indicators. The environmental footprints of food production span multiple dimensions, including greenhouse gas emissions (carbon footprint), nitrogen release (nitrogen footprint), water use (blue and green water footprint) and land use (land footprint), and a diet minimizing one of these impacts could result in higher impacts in another dimension. In this study, based on nutritional and population data for the United States, we identify diets that minimize each of these four footprints subject to nutrient constraints. We then calculate tradeoffs by taking the composition of each footprint's minimum diet and calculating the other three footprints. We find that diets for the minimized footprints tend to be similar for the four footprints, suggesting there are generally synergies, rather than tradeoffs, among low footprint diets. Plant-based food and seafood (fish and other aquatic foods) commonly appear in minimized diets and tend to most efficiently supply macronutrients and micronutrients, respectively. Livestock products rarely appear in minimized diets, suggesting these foods tend to be less efficient from an environmental perspective, even when nutrient content is considered. The results' emphasis on seafood is complicated by the differing environmental impacts of aquaculture versus capture fisheries, the increasing share of aquaculture, and the shifting composition of aquaculture feeds. While this analysis does not make specific diet recommendations, our approach demonstrates potential environmental synergies of plant- and seafood-based diets. As a result, this study provides a useful tool for decision-makers in linking human nutrition and environmental impacts. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. An aqueous electrolyte, sodium ion functional, large format energy storage device for stationary applications

    NASA Astrophysics Data System (ADS)

    Whitacre, J. F.; Wiley, T.; Shanbhag, S.; Wenzhuo, Y.; Mohamed, A.; Chun, S. E.; Weber, E.; Blackwood, D.; Lynch-Bell, E.; Gulakowski, J.; Smith, C.; Humphreys, D.

    2012-09-01

    An approach to making large format economical energy storage devices based on a sodium-interactive set of electrodes in a neutral pH aqueous electrolyte is described. The economics of materials and manufacturing are examined, followed by a description of an asymmetric/hybrid device that has λ-MnO2 positive electrode material and low cost activated carbon as the negative electrode material. Data presented include materials characterization of the active materials, cyclic voltammetry, galvanostatic charge/discharge cycling, and application-specific performance of an 80 V, 2.4 kW h pack. The results indicate that this set of electrochemical couples is stable, low cost, requires minimal battery management control electronics, and therefore has potential for use in stationary applications where device energy density is not a concern.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    This note studies the impact of maximizing the stability index rather than minimizing the first strike cost in choosing offensive missile allocations. It does so in the context of a model in which exchanges between vulnerable missile forces are modeled probabilistically, converted into first and second strike costs through approximations to the value target sets at risk, and the stability index is taken to be their ratio. The values of the allocation that minimizes the first strike cost are derived analytically for both attack preferences. The former recovers results derived earlier. The latter leads to an optimum at unity allocation for which the stability index is determined analytically. For values of the attack preference greater than about unity, maximizing the stability index increases the cost of striking first 10--15%. For smaller values of the attack preference, maximizing the index increases the second strike cost a similar amount. Both are stabilizing, so if both sides could be trusted to target on missiles in order to minimize damage to value and maximize stability, the stability index for vulnerable missiles could be increased by about 15%. However, that would increase the cost to the first striker by about 15%. It is unclear why--having decided to strike--he would do so in a way that would increase damage to himself.

  5. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    PubMed

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
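
    The flavor of the cost-minimizing allocation can be shown with the classical normal-theory rule (not the authors' trimmed-mean formulas): for a fixed precision of the mean difference, total cost c1*n1 + c2*n2 is minimized when n2/n1 = (sd2/sd1)*sqrt(c1/c2).

        import math

        def cost_optimal_sizes(sd1, sd2, c1, c2, target_var):
            """Group sizes minimizing c1*n1 + c2*n2 subject to
            sd1^2/n1 + sd2^2/n2 == target_var (classical square-root rule)."""
            r = (sd2 / sd1) * math.sqrt(c1 / c2)       # optimal ratio n2/n1
            n1 = (sd1**2 + sd2**2 / r) / target_var
            return math.ceil(n1), math.ceil(r * n1)

        # group 2 twice as variable, half as costly to sample (illustrative numbers)
        print(cost_optimal_sizes(sd1=10, sd2=20, c1=4.0, c2=2.0, target_var=4.0))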

  6. Prosthetic model, but not stiffness or height, affects the metabolic cost of running for athletes with unilateral transtibial amputations.

    PubMed

    Beck, Owen N; Taboga, Paolo; Grabowski, Alena M

    2017-07-01

    Running-specific prostheses enable athletes with lower limb amputations to run by emulating the spring-like function of biological legs. Current prosthetic stiffness and height recommendations aim to mitigate kinematic asymmetries for athletes with unilateral transtibial amputations. However, it is unclear how different prosthetic configurations influence the biomechanics and metabolic cost of running. Consequently, we investigated how prosthetic model, stiffness, and height affect the biomechanics and metabolic cost of running. Ten athletes with unilateral transtibial amputations each performed 15 running trials at 2.5 or 3.0 m/s while we measured ground reaction forces and metabolic rates. Athletes ran using three different prosthetic models with five different stiffness category and height combinations per model. Use of an Ottobock 1E90 Sprinter prosthesis reduced metabolic cost by 4.3 and 3.4% compared with use of Freedom Innovations Catapult [fixed effect (β) = -0.177; P < 0.001] and Össur Flex-Run (β = -0.139; P = 0.002) prostheses, respectively. Neither prosthetic stiffness (P ≥ 0.180) nor height (P = 0.062) affected the metabolic cost of running. The metabolic cost of running was related to lower peak (β = 0.649; P = 0.001) and stance average (β = 0.772; P = 0.018) vertical ground reaction forces, prolonged ground contact times (β = -4.349; P = 0.012), and decreased leg stiffness (β = 0.071; P < 0.001) averaged from both legs. Metabolic cost was reduced with more symmetric peak vertical ground reaction forces (β = 0.007; P = 0.003) but was unrelated to stride kinematic symmetry (P ≥ 0.636). Therefore, prosthetic recommendations based on symmetric stride kinematics do not necessarily minimize the metabolic cost of running. Instead, an optimal prosthetic model, which improves overall biomechanics, minimizes the metabolic cost of running for athletes with unilateral transtibial amputations. NEW & NOTEWORTHY The metabolic cost of running for athletes with unilateral transtibial amputations depends on prosthetic model and is associated with lower peak and stance average vertical ground reaction forces, longer contact times, and reduced leg stiffness. Metabolic cost is unrelated to prosthetic stiffness, height, and stride kinematic symmetry. Unlike nonamputees who decrease leg stiffness with increased in-series surface stiffness, biological limb stiffness for athletes with unilateral transtibial amputations is positively correlated with increased in-series (prosthetic) stiffness.

  7. Estimation of the cost of large-scale school deworming programmes with benzimidazoles

    PubMed Central

    Montresor, A.; Gabrielli, A.F.; Engels, D.

    2017-01-01

    Summary This study estimates the cost of distributing benzimidazole tablets in the context of school deworming programmes: we analysed studies reporting the cost of school deworming from seven countries in four WHO regions. The estimated cost for drug procurement to cover one million children (including customs clearance and international transport) is approximately US$20 000. The estimated financial costs (including the cost of training of personnel, drug transport, social mobilization and monitoring) are, on average, equivalent to US$33 000 per million school-age children, with minimal variation across countries and continents. The estimated economic costs of distribution (including the time spent by teachers and by health personnel at central, provincial and district level) to cover one million children correspond to approximately US$19 000. This study confirms the low cost of school deworming activities, but also shows the significant contribution (roughly a quarter of the entire cost of the programme) provided by the health and education systems of endemic countries, even in the case of drug donations and donor support of distribution costs. PMID:19926104
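
    The "quarter of the entire cost" figure can be checked directly from the numbers quoted above:

        \text{total} = \$20\,000 + \$33\,000 + \$19\,000 = \$72\,000 \text{ per million children}, \qquad \frac{19\,000}{72\,000} \approx 26\% \approx \tfrac{1}{4}.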

  8. Numerical scheme approximating solution and parameters in a beam equation

    NASA Astrophysics Data System (ADS)

    Ferdinand, Robert R.

    2003-12-01

    We present a mathematical model which describes vibration in a metallic beam about its equilibrium position. This model takes the form of a nonlinear second-order (in time) and fourth-order (in space) partial differential equation with boundary and initial conditions. A finite-element Galerkin approximation scheme is used to estimate model solution. Infinite-dimensional model parameters are then estimated numerically using an inverse method procedure which involves the minimization of a least-squares cost functional. Numerical results are presented and future work to be done is discussed.
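
    A toy version of such an inverse procedure, with an ordinary least-squares fit of a damped-vibration model standing in for the actual Galerkin-discretized beam equation (the model form, parameter names, and data below are hypothetical illustrations only):

        import numpy as np
        from scipy.optimize import least_squares

        def model(theta, t):
            amp, damping, freq = theta
            return amp * np.exp(-damping * t) * np.sin(freq * t)

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 5.0, 200)
        data = model((1.0, 0.4, 6.0), t) + 0.02 * rng.standard_normal(t.size)

        # Minimize the least-squares cost J(theta) = sum_i [model(theta, t_i) - y_i]^2
        fit = least_squares(lambda th: model(th, t) - data, x0=(0.5, 0.1, 5.0))
        print(fit.x)  # recovered (amplitude, damping, frequency)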

  9. XMM-Newton Mobile Web Application

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Kennedy, M.; Rodríguez, P.; Hernández, C.; Saxton, R.; Gabriel, C.

    2013-10-01

    We present the first XMM-Newton mobile web application, coded using new web technologies such as HTML5, the jQuery Mobile framework, and the D3 data-driven JavaScript library. This new mobile web application focuses on re-formatted contents extracted directly from the XMM-Newton web pages, optimizing the contents for mobile devices. The main goals of this development were to reach all kinds of handheld devices and operating systems while minimizing software maintenance. The application has therefore been developed as a mobile web implementation rather than a more costly native application. New functionality will be added regularly.

  10. Fly-by-Wireless Update

    NASA Technical Reports Server (NTRS)

    Studor, George

    2010-01-01

    The presentation reviews what is meant by the term 'fly-by-wireless', common problems and motivation, provides recent examples, and examines NASA's future and basis for collaboration. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable, lower cost, modular, and higher performance alternatives to wired data connectivity to benefit the entire vehicle/program life-cycle. Focus areas are system engineering and integration methods to reduce cables and connectors, vehicle provisions for modularity and accessibility, and a 'tool box' of alternatives to wired connectivity.

  11. Experimental and Theoretical Results in Output-Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Devasia, Santosh

    1996-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve the required output may cause excessive vibrations in the structure. A trade-off is then required between tracking accuracy and vibration reduction. We pose and solve this problem as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.
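
    A quadratic cost of the following general shape captures such a trade-off (a sketch of the typical structure only; the weighting matrices Q and R and the symbols are illustrative, not the paper's exact functional):

        J(y_d) = \int_0^T \left[ \big(y_d(t) - y_{\mathrm{ref}}(t)\big)^{\top} Q \,\big(y_d(t) - y_{\mathrm{ref}}(t)\big) + \eta(t)^{\top} R \,\eta(t) \right] dt,

    where y_d is the redesigned output trajectory, y_ref the originally specified output, and η the flexible-mode (vibration) states; increasing R relative to Q trades tracking fidelity for smaller vibrations.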

  12. Essays on wholesale auctions in deregulated electricity markets

    NASA Astrophysics Data System (ADS)

    Baltaduonis, Rimvydas

    2007-12-01

    The early experience in the restructured electric power markets raised several issues, including price spikes, inefficiency, security, and the overall relationship of market clearing prices to generation costs. Unsatisfactory outcomes in these markets are thought to have resulted in part from strategic generator behaviors encouraged by inappropriate market design features. In this dissertation, I examine the performance of three auction mechanisms for wholesale power markets - Offer Cost Minimization auction, Payment Cost Minimization auction and Simple-Offer auction - when electricity suppliers act strategically. A Payment Cost Minimization auction has been proposed as an alternative to the traditional Offer Cost Minimization auction with the intention to solve the problem of inflated wholesale electricity prices. Efficiency concerns for this proposal were voiced due to insights predicated on the assumption of true production cost revelation. Using a game theoretic approach and an experimental method, I compare the two auctions, strictly controlling for the level of unilateral market power. A specific feature of these complex-offer auctions is that the sellers submit not only the quantities and the minimum prices that they are willing to sell at, but also the start-up fees, which are designed to reimburse the fixed start-up costs of the generation plants. I find that the complex structure of the offers leaves considerable room for strategic behavior, which consequently leads to anti-competitive and inefficient market outcomes. In the last chapter of my dissertation, I use laboratory experiments to contrast the performance of two complex-offer auctions against the performance of a simple-offer auction, in which the sellers have to recover all their generation costs - fixed and variable - through a uniform market-clearing price. I find that a simple-offer auction significantly reduces consumer prices and lowers price volatility. It mitigates anti-competitive effects that are present in the complex-offer auctions and achieves allocative efficiency more quickly.

  13. A comparative cost analysis of robotic-assisted surgery versus laparoscopic surgery and open surgery: the necessity of investing knowledgeably.

    PubMed

    Tedesco, Giorgia; Faggiano, Francesco C; Leo, Erica; Derrico, Pietro; Ritrovato, Matteo

    2016-11-01

    Robotic surgery has been proposed as a minimally invasive surgical technique with advantages for both surgeons and patients, but it is associated with high costs (installation, use and maintenance). The Health Technology Assessment Unit of the Bambino Gesù Children's Hospital sought to investigate the economic sustainability of robotic surgery, having foreseen its impact on the hospital budget. Break-even and cost-minimization analyses were performed. A deterministic approach to sensitivity analysis was applied by varying the values of parameters between pre-defined ranges in different scenarios to see how the outcomes might differ. The break-even analysis indicated that at least 349 annual interventions would need to be carried out to reach the break-even point. The cost-minimization analysis showed that robotic surgery was the most expensive procedure among the considered alternatives (in terms of the contribution margin). Robotic surgery is a good clinical alternative to laparoscopic and open surgery (for many pediatric operations). However, the costs of robotic procedures are higher than those of the equivalent laparoscopic and open surgical interventions. Therefore, in the short run, these findings do not seem to support the decision to introduce a robotic system in our hospital.
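
    The break-even reasoning is the standard fixed-cost versus contribution-margin calculation; a minimal sketch (the function and the example figures are hypothetical placeholders, not the hospital's data):

        def break_even_volume(annual_fixed_costs, revenue_per_case, variable_cost_per_case):
            """Annual case volume at which revenues cover fixed plus variable costs."""
            contribution_margin = revenue_per_case - variable_cost_per_case
            return annual_fixed_costs / contribution_margin

        # Purely illustrative numbers: 500,000 in fixed robot costs per year and a
        # contribution margin of 1,800 per robotic case imply ~278 cases per year.
        print(break_even_volume(500_000, 4_000, 2_200))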

  14. Current trends in treatment of hypertension in Karachi and cost minimization possibilities.

    PubMed

    Hussain, Izhar M; Naqvi, Baqir S; Qasim, Rao M; Ali, Nasir

    2015-01-01

    This study identifies drug usage trends for Stage I hypertensive patients without compelling indications in Karachi, deviations of current practices from evidence-based antihypertensive therapeutic guidelines, and opportunities for cost minimization. In the present study, conducted from June 2012 to August 2012, two data sets were used: randomized stratified independent surveys of doctors and of the general population (including patients), using pretested questionnaires. Sample sizes for doctors and the general population were 100 and 400, respectively. Statistical analysis was conducted with the Statistical Package for the Social Sciences (SPSS), and the financial impact was also analyzed. On the basis of patients' and doctors' feedback, beta blockers and angiotensin-converting enzyme inhibitors were used more frequently than other drugs, while thiazides and low-priced generics were hardly prescribed. Beta blockers were prescribed widely and considered cost effective; this trend increases cost by two to ten times. The feedback showed that therapeutic guidelines were not followed by doctors practicing in the community and hospitals in Karachi: thiazide diuretics were hardly used, beta blockers were widely prescribed, and high-priced market leaders or expensive branded generics were commonly prescribed. There are therefore great opportunities for cost minimization by using evidence-based, clinically effective and safe medicines.

  15. Application of improved Vogel’s approximation method in minimization of rice distribution costs of Perum BULOG

    NASA Astrophysics Data System (ADS)

    Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.

    2018-03-01

    This research was conducted at Perum BULOG Sub-Divre Medan, the institution implementing the Raskin programme (distribution of rice to the poor) for several regencies and cities in North Sumatera. To minimize rice distribution costs, rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to obtain an initial feasible solution and the Modified Distribution method (MODI) to test for optimality. This study aims to determine whether IVAM can provide savings or cost efficiency in rice distribution. The optimum cost calculated with IVAM, Rp945.241.715,5, is lower than the company's own figure of Rp958.073.750,40. Thus, the use of IVAM can save rice distribution costs of Rp12.832.034,9.
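
    For readers unfamiliar with the underlying model, the rice allocation is a classical transportation problem; a minimal sketch solving one as a linear program (the warehouse supplies, district demands, and unit costs are hypothetical, not BULOG's data):

        import numpy as np
        from scipy.optimize import linprog

        cost = np.array([[4.0, 6.0, 8.0],      # unit cost, warehouse i -> district j
                         [5.0, 3.0, 7.0]])
        supply = np.array([120.0, 80.0])       # tonnes available per warehouse
        demand = np.array([70.0, 60.0, 70.0])  # tonnes required per district (balanced)

        m, n = cost.shape
        A_eq, b_eq = [], []
        for i in range(m):                     # each warehouse ships exactly its supply
            row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
            A_eq.append(row); b_eq.append(supply[i])
        for j in range(n):                     # each district receives exactly its demand
            row = np.zeros(m * n); row[j::n] = 1.0
            A_eq.append(row); b_eq.append(demand[j])

        res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                      bounds=(0, None), method="highs")
        print(res.x.reshape(m, n), res.fun)    # optimal allocation and minimum cost

    IVAM plays the role of a good starting point for exactly this kind of program; MODI then verifies that no re-routing can lower the cost further.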

  16. Cost-minimization analysis of phenytoin and fosphenytoin in the emergency department.

    PubMed

    Touchette, D R; Rhoney, D H

    2000-08-01

    To determine the value of fosphenytoin compared with phenytoin for treating patients admitted to an emergency department following a seizure. Design: cost-minimization analysis performed from a hospital perspective. Setting: hospital emergency department. Patients: two hundred fifty-six patients participating in a comparative clinical trial. Measurements: estimation of adverse event rates and resource use. In our base case, phenytoin was the preferred option, with an expected total treatment cost of $5.39 compared with $110.14 for fosphenytoin. One-way sensitivity analyses showed that the frequency and cost of treating purple glove syndrome (PGS) could possibly affect the decision. Monte Carlo simulation showed phenytoin to be the preferred option 97.3% of the time. When variable costs of care are used to calculate the value of phenytoin compared with fosphenytoin in the emergency department, phenytoin is preferred. The decision to administer phenytoin was very robust and changed only when both the frequency and cost of PGS were high.

  17. A Cost-Minimization Analysis of Tissue-Engineered Constructs for Corneal Endothelial Transplantation

    PubMed Central

    Tan, Tien-En; Peh, Gary S. L.; George, Benjamin L.; Cajucom-Uy, Howard Y.; Dong, Di; Finkelstein, Eric A.; Mehta, Jodhbir S.

    2014-01-01

    Corneal endothelial transplantation or endothelial keratoplasty has become the preferred choice of transplantation for patients with corneal blindness due to endothelial dysfunction. Currently, there is a worldwide shortage of transplantable tissue, and demand is expected to increase further with aging populations. Tissue-engineered alternatives are being developed, and are likely to be available soon. However, the cost of these constructs may impair their widespread use. A cost-minimization analysis comparing tissue-engineered constructs to donor tissue procured from eye banks for endothelial keratoplasty was performed. Both initial investment costs and recurring costs were considered in the analysis to arrive at a final tissue cost per transplant. The clinical outcomes of endothelial keratoplasty with tissue-engineered constructs and with donor tissue procured from eye banks were assumed to be equivalent. One-way and probabilistic sensitivity analyses were performed to simulate various possible scenarios, and to determine the robustness of the results. A tissue engineering strategy was cheaper in both investment cost and recurring cost. Tissue-engineered constructs for endothelial keratoplasty could be produced at a cost of US$880 per transplant. In contrast, utilizing donor tissue procured from eye banks for endothelial keratoplasty required US$3,710 per transplant. Sensitivity analyses performed further support the results of this cost-minimization analysis across a wide range of possible scenarios. The use of tissue-engineered constructs for endothelial keratoplasty could potentially increase the supply of transplantable tissue and bring the costs of corneal endothelial transplantation down, making this intervention accessible to a larger group of patients. Tissue-engineering strategies for corneal epithelial constructs or other tissue types, such as pancreatic islet cells, should also be subject to similar pharmacoeconomic analyses. PMID:24949869

  18. National trends of perioperative outcomes and costs for open, laparoscopic and robotic pediatric pyeloplasty.

    PubMed

    Varda, Briony K; Johnson, Emilie K; Clark, Curtis; Chung, Benjamin I; Nelson, Caleb P; Chang, Steven L

    2014-04-01

    We performed a population based study comparing trends in perioperative outcomes and costs for open, laparoscopic and robotic pediatric pyeloplasty. Specific billing items contributing to cost were also investigated. Using the Perspective database (Premier, Inc., Charlotte, North Carolina), we identified 12,662 pediatric patients who underwent open, laparoscopic or robotic pyeloplasty (ICD-9 55.87) in the United States from 2003 to 2010. Univariate and multivariate statistics were used to evaluate perioperative outcomes, complications and costs for the competing surgical approaches. Propensity weighting was used to minimize selection bias, and sampling weights were used to yield a nationally representative sample. A decrease in open pyeloplasty and an increase in minimally invasive pyeloplasty were observed. All procedures had low complication rates. Compared to open pyeloplasty, laparoscopic and robotic pyeloplasty had longer median operative times (240 minutes, p <0.0001 and 270 minutes, p <0.0001, respectively). There was no difference in median length of stay. Median total cost was lower among patients undergoing open vs robotic pyeloplasty ($7,221 vs $10,780, p <0.001). This cost difference was largely attributable to robotic supply costs. During the study period open pyeloplasty made up a declining majority of cases, use of laparoscopic pyeloplasty plateaued, and robotic pyeloplasty increased. Operative time was longer for minimally invasive pyeloplasty, while length of stay was equivalent across all procedures. The higher cost associated with robotic pyeloplasty was driven by operating room use and robotic equipment costs, which nullified its lower room and board costs. This study reflects an adoption period for robotic pyeloplasty; with time, perioperative outcomes and cost may improve.

  19. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration and therefore suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function and obtain a convex combination problem of much smaller-scale matrix trace norm minimization. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
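
    The workhorse inside most trace (nuclear) norm minimization schemes, including ADMM-based ones such as TNCP, is the singular value soft-thresholding step; a generic sketch of that single operator, not of the full TNCP algorithm:

        import numpy as np

        def svt(X, tau):
            """Proximal operator of tau * (nuclear norm): soft-threshold singular values."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        X = np.random.randn(50, 40)     # stand-in for an unfolded factor matrix
        X_low_rank = svt(X, tau=5.0)
        print(np.linalg.matrix_rank(X_low_rank))  # rank drops as tau grows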

  1. Choosing colors for map display icons using models of visual search.

    PubMed

    Shive, Joshua; Francis, Gregory

    2013-04-01

    We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.

  2. Single-level anterior cervical discectomy and fusion versus minimally invasive posterior cervical foraminotomy for patients with cervical radiculopathy: a cost analysis.

    PubMed

    Mansfield, Haley E; Canar, W Jeffrey; Gerard, Carter S; O'Toole, John E

    2014-11-01

    Patients suffering from cervical radiculopathy in whom a course of nonoperative treatment has failed are often candidates for a single-level anterior cervical discectomy and fusion (ACDF) or posterior cervical foraminotomy (PCF). The objective of this analysis was to identify any significant cost differences between these surgical methods by comparing direct costs to the hospital. Furthermore, patient-specific characteristics were also considered for their effect on component costs. After obtaining approval from the medical center institutional review board, the authors conducted a retrospective cross-sectional comparative cohort study, with a sample of 101 patients diagnosed with cervical radiculopathy who underwent an initial single-level ACDF or minimally invasive PCF during a 3-year period. Using these data, bivariate analyses were conducted to determine significant differences in direct total procedure and component costs between surgical techniques. Factorial ANOVAs were also conducted to determine any relationship of patient sex and smoking status to the component costs per surgery. The mean total direct cost for an ACDF was $8,192, and the mean total direct cost for a PCF was $4,320. There were significant differences in the cost components for direct costs and operating room supply costs. There was no statistically significant difference in component costs with regard to patient sex or smoking status. In the management of single-level cervical radiculopathy, the present analysis revealed that the average cost of an ACDF is 89% more than that of a PCF. This increased cost is largely due to the cost of surgical implants. These results do not appear to depend on patient sex or smoking status. When combined with results from previous studies highlighting the comparable patient outcomes of either procedure, the authors' findings suggest that, from a health care economics standpoint, physicians should consider a minimally invasive PCF in the treatment of cervical radiculopathy.

  3. Evaluation of thyroid stimulating hormone (TSH) alone as a first-line thyroid function test (TFT) in Papua New Guinea.

    PubMed

    Kende, M; Kandapu, S

    2002-01-01

    In the Port Moresby General Hospital, the Chemical Pathology Department assays both thyroid stimulating hormone (TSH) and free thyroxine (FT4) on all requests for a thyroid function test (TFT). The cost of assaying both tests is obviously higher than either test alone. In order to minimize the cost of a TFT we aimed to determine if TSH or FT4 alone as a first-line test would be adequate in assessing the thyroid hormone status of patients. We analyzed TFT records from January 1996 to May 2000 in the Port Moresby General Hospital. A total of 3089 TSH and 2867 FT4 were assayed at an annual reagent cost of Papua New Guinea kina 14,500. When TSH alone is used as a first-line test at the Port Moresby General Hospital, the biochemical status of 95% of patients will be appropriately categorized as euthyroidism, hypothyroidism or hyperthyroidism with only 5% discrepant (ie, normal TSH with abnormal FT4) results. In contrast, using FT4 alone as a first-line test correctly classifies only 84% of TFTs. Euthyroid status is observed in 50% of patients and FT4 assays on these samples will be excluded appropriately if a TSH-only protocol is adopted. Furthermore, we will save a quarter of the yearly cost of TFTs on reagents alone by performing TSH only. We conclude that TSH alone is an adequate first-line thyroid function test in Papua New Guinea and when it is normal no further FT4 test is necessary unless clinically indicated.

  4. Optimization of Highway Work Zone Decisions Considering Short-Term and Long-Term Impacts

    DTIC Science & Technology

    2010-01-01

    A heuristic algorithm is developed to find the combination of lane closure and traffic control strategies which can minimize the one-time work zone cost, given the complex and combinatorial nature of this optimization problem. [Notation recoverable from the report: NV = number of vehicle classes; NPV = net present value; p'(t) = adjusted traffic diversion rate at time t; p(t) = natural diversion rate.]

  5. Optimal Technology Investment and Operation in Zero-Net-Energy Buildings with Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stadler , Michael; Siddiqui, Afzal; Marnay, Chris

    The US Department of Energy has launched the Zero-Net-Energy (ZNE) Commercial Building Initiative (CBI) in order to develop commercial buildings that produce as much energy as they use. Its objective is to make these buildings marketable by 2025 such that they minimize their energy use through cutting-edge energy-efficient technologies and meet their remaining energy needs through on-site renewable energy generation. We examine how such buildings may be implemented within the context of a cost- or carbon-minimizing microgrid that is able to adopt and operate various technologies, such as photovoltaic (PV) on-site generation, heat exchangers, solar thermal collectors, absorption chillers, and passive/demand-response technologies. We use a mixed-integer linear program (MILP) that has a multi-criteria objective function: the minimization of a weighted average of the building's annual energy costs and carbon (CO2) emissions. The MILP's constraints ensure energy balance and capacity limits. In addition, constraining the building's energy consumed to equal its energy exports enables us to explore how energy sales and demand-response measures may enable compliance with the CBI. Using a nursing home in northern California and New York with existing tariff rates and technology data, we find that a ZNE building requires ample PV capacity installed to ensure electricity sales during the day. This is complemented by investment in energy-efficient combined heat and power equipment, while occasional demand response shaves energy consumption. A large amount of storage is also adopted, which may be impractical. Nevertheless, it shows the nature of the solutions and costs necessary to achieve ZNE. For comparison, we analyze a nursing home facility in New York to examine the effects of a flatter tariff structure and different load profiles. It has trouble reaching ZNE status and its load reductions as well as efficiency measures need to be more effective than those in the CA case. Finally, we illustrate that the multi-criteria frontier that considers costs and carbon emissions in the presence of demand response dominates the one without it.
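
    The multi-criteria objective has the usual weighted-sum structure; a sketch of the form described above (the symbols and normalizations are illustrative, not the paper's exact notation):

        \min_{x \in \mathcal{X}} \; w\,\frac{C(x)}{C_{\max}} + (1 - w)\,\frac{E(x)}{E_{\max}}, \qquad 0 \le w \le 1,

    where C(x) is the building's annual energy cost, E(x) its annual CO2 emissions, C_max and E_max put the two criteria on a common scale, and the ZNE requirement enters as an additional constraint equating annual on-site energy sales with annual consumption.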

  6. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
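
    In the discrete case this estimator reduces to a small linear program over joint (transport) matrices; a minimal sketch with hypothetical site coordinates and distributions, using the squared Euclidean distance as the cost:

        import numpy as np
        from scipy.optimize import linprog

        sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # hypothetical locations
        p = np.array([0.5, 0.3, 0.2])      # spatial distribution at time 1
        q = np.array([0.2, 0.5, 0.3])      # spatial distribution at time 2

        C = ((sites[:, None, :] - sites[None, :, :]) ** 2).sum(-1)  # squared distances

        n = len(p)
        A_eq, b_eq = [], []
        for i in range(n):                 # row sums of the joint matrix match p
            row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
            A_eq.append(row); b_eq.append(p[i])
        for j in range(n):                 # column sums match q
            row = np.zeros(n * n); row[j::n] = 1.0
            A_eq.append(row); b_eq.append(q[j])

        res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
                      bounds=(0, None), method="highs")
        T = res.x.reshape(n, n)                     # optimal transport plan
        print(T / T.sum(axis=1, keepdims=True))     # row-normalized transition matrix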

  7. Hospital downsizing and workforce reduction strategies: some inner workings.

    PubMed

    Weil, Thomas P

    2003-02-01

    Downsizing, manpower reductions, re-engineering, and resizing are used extensively in the United States to reduce cost and to evaluate the effectiveness and efficiency of various functions and processes. Published studies report that these managerial strategies have minimal impact on access to services, quality of care, and the ability to reduce costs, but that they certainly alienate employees. These findings are usually explained by the significant difficulties experienced in eliminating nursing and other direct patient-care-oriented positions and in terminating white-collar employees. An equally plausible reason why hospitals and physician practices respond so poorly to these management strategies is their cost structure (roughly 85% fixed and 15% variable), which means that simply generating greater volume does not necessarily achieve economies of scale. More workable alternatives for health executives seeking cost reductions are simplifying prepayment, decreasing the overall availability of tertiary services by centralizing them at academic health centres, and closing superfluous hospitals and other health facilities. America's pluralistic values, and the serious political repercussions these proposals carry for health executives and elected officials, often present serious barriers to their implementation.

  8. Do Medicare Advantage Plans Minimize Costs? Investigating the Relationship Between Benchmarks, Costs, and Rebates.

    PubMed

    Zuckerman, Stephen; Skopec, Laura; Guterman, Stuart

    2017-12-01

    Medicare Advantage (MA), the program that allows people to receive their Medicare benefits through private health plans, uses a benchmark-and-bidding system to induce plans to provide benefits at lower costs. However, prior research suggests medical costs, profits, and other plan costs are not as low under this system as they might otherwise be. To assess how well the current system encourages MA plans to bid their lowest cost, we examined the relationship between plan costs and bonuses (rebates) and the benchmarks Medicare uses in determining plan payments, using regression analysis of 2015 data for HMO and local PPO plans. Costs and rebates are higher for MA plans in areas with higher benchmarks, and plan costs vary less than benchmarks do. A one-dollar increase in benchmarks is associated with 32-cent-higher plan costs and a 52-cent-higher rebate, even when controlling for market and plan factors that can affect costs. This suggests the current benchmark-and-bidding system allows plans to bid higher than local input prices and other market conditions would seem to warrant. To incentivize MA plans to maximize efficiency and minimize costs, Medicare could change the way benchmarks are set or used.

  9. The environmental impact of the Glostavent® anesthetic machine.

    PubMed

    Eltringham, Roger J; Neighbour, Robert C

    2015-06-01

    Because anesthetic machines have become more complex and more expensive, they have become less suitable for use in the many isolated hospitals in the poorest countries in the world. In these situations, they are frequently unable to function at all because of interruptions in the supply of oxygen or electricity and the absence of skilled technicians for maintenance and servicing. Despite these disadvantages, these machines are still delivered in large numbers, thereby expending precious resources without any benefit to patients. The Glostavent was introduced primarily to enable an anesthetic service to be delivered in these difficult circumstances. It is smaller and less complex than standard anesthetic machines and much less expensive to produce. It combines a drawover anesthetic system with an oxygen concentrator and a gas-driven ventilator. It greatly reduces the need for the purchase and transport of cylinders of compressed gases, reduces the impact on the environment, and enables considerable savings. Cylinder oxygen is expensive to produce and difficult to transport over long distances on poor roads; consequently, the supply may run out. When using the Glostavent, however, oxygen is normally produced at a fraction of the cost of cylinders by the oxygen concentrator, which is an integral part of the Glostavent. This enables great savings in the purchase and transport cost of oxygen cylinders. If the electricity fails and the oxygen concentrator ceases to function, oxygen from a reserve cylinder automatically provides the pressure to drive the ventilator and oxygen for the breathing circuit. Economy is achieved because the ventilator has been designed to limit the amount of driving gas required to one-seventh of the patient's tidal volume. Additional economies are achieved by completely eliminating spillage of oxygen from the breathing system and by recycling the driving gas into the breathing system to increase the fraction of inspired oxygen (FIO2) at no extra cost. Savings are also accrued when using the drawover breathing system, as the need for nitrous oxide, compressed air, and soda lime is eliminated. The Glostavent enables the administration of safe anesthesia to be continued when standard machines are unable to function, and can do so with minimal harm to the environment.

  10. Real-time terminal area trajectory planning for runway independent aircraft

    NASA Astrophysics Data System (ADS)

    Xue, Min

    The increasing demand for commercial air transportation results in delays due to traffic queues that form bottlenecks along final approach and departure corridors. In urban areas, it is often infeasible to build new runways, and regardless of automation upgrades traffic must remain separated to avoid the wakes of previous aircraft. Vertical or short takeoff and landing aircraft, as Runway Independent Aircraft (RIA), can increase passenger throughput at major urban airports via the use of vertiports or stub runways. The concept of simultaneous non-interfering (SNI) operations has been proposed to reduce traffic delays by creating approach and departure corridors that do not intersect existing fixed-wing routes. However, SNI trajectories open new routes that may overfly noise-sensitive areas, and RIA may generate more noise than traditional jet aircraft, particularly on approach. In this dissertation, we develop efficient SNI noise abatement procedures applicable to RIA. First, we introduce a methodology based on modified approximate cell decomposition and Dijkstra's search algorithm to optimize longitudinal-plane (2-D) RIA trajectories over a cost function that penalizes noise, time, and fuel use. Then, we extend the trajectory optimization model to 3-D with a k-ary tree as the discrete search space. We incorporate geographic information system (GIS) data, specifically population, into our objective function, and focus on a practical case study: the design of SNI RIA approach procedures to Baltimore-Washington International airport. Because solutions were represented as trim state sequences, we incorporated smooth transitions between segments to enable more realistic cost estimates. Due to the significant computational complexity, we investigated alternative, more efficient optimization techniques applicable to our nonlinear, non-convex, heavily constrained, and discontinuous objective function. Comparing a genetic algorithm (GA) and adaptive simulated annealing (ASA) with our original Dijkstra's algorithm, ASA is identified as the most efficient algorithm for terminal area trajectory optimization. The effects of design parameter discretization are analyzed, with results indicating that an SNI procedure with 3-4 segments effectively balances simplicity with cost minimization. Finally, pilot control commands were implemented and generated via optimization-based inverse simulation to validate execution of the optimal approach trajectories.
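
    The composite-cost search is standard shortest-path machinery; a minimal Dijkstra sketch with a weighted noise/time/fuel edge cost (the weights and the tiny corridor graph are hypothetical, not the dissertation's discretization):

        import heapq

        def dijkstra(adj, start, goal, w_noise=0.5, w_time=0.3, w_fuel=0.2):
            """adj[u] = list of (v, noise, time, fuel) edges; returns (cost, path)."""
            pq, best = [(0.0, start, [start])], {start: 0.0}
            while pq:
                cost, u, path = heapq.heappop(pq)
                if u == goal:
                    return cost, path
                for v, noise, time, fuel in adj.get(u, []):
                    c = cost + w_noise * noise + w_time * time + w_fuel * fuel
                    if c < best.get(v, float("inf")):
                        best[v] = c
                        heapq.heappush(pq, (c, v, path + [v]))
            return float("inf"), []

        # Hypothetical three-cell corridor: the quieter route via "b" wins.
        adj = {"start": [("a", 2.0, 1.0, 1.0), ("b", 0.5, 2.0, 1.5)],
               "a": [("goal", 1.0, 1.0, 1.0)],
               "b": [("goal", 0.5, 1.5, 1.0)]}
        print(dijkstra(adj, "start", "goal"))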

  11. Is minimal access spine surgery more cost-effective than conventional spine surgery?

    PubMed

    Lubelski, Daniel; Mihalovich, Kathryn E; Skelly, Andrea C; Fehlings, Michael G; Harrop, James S; Mummaneni, Praveen V; Wang, Michael Y; Steinmetz, Michael P

    2014-10-15

    Systematic review. To summarize and critically review the economic literature evaluating the cost-effectiveness of minimal access surgery (MAS) compared with conventional open procedures for the cervical and lumbar spine. MAS techniques may improve perioperative parameters (length of hospital stay and extent of blood loss) compared with conventional open approaches. However, some have questioned the clinical efficacy of these differences and the associated cost-effectiveness implications. When considering long-term outcomes, there seem to be no significant differences between MAS and open surgery. PubMed, EMBASE, the Cochrane Collaboration database, University of York, Centre for Reviews and Dissemination (NHS-EED and HTA), and the Tufts CEA Registry were reviewed to identify full economic studies comparing MAS with open techniques prior to December 24, 2013, based on the key questions established a priori. Only economic studies that evaluated and synthesized the costs and consequences of MAS compared with conventional open procedures (i.e., cost-minimization, cost-benefit, cost-effectiveness, or cost-utility) were considered for inclusion. The full text of articles meeting the inclusion criteria was reviewed by 2 independent investigators to obtain the final collection of included studies. The Quality of Health Economic Studies instrument was scored by 2 independent reviewers to provide an initial basis for critical appraisal of the included economic studies. The search strategy yielded 198 potentially relevant citations, and 6 studies met the inclusion criteria, all evaluating the costs and consequences of MAS versus conventional open procedures performed for the lumbar spine; no studies for the cervical spine met the inclusion criteria. Studies compared MAS tubular discectomy with conventional microdiscectomy, minimal access transforaminal lumbar interbody fusion versus open transforaminal lumbar interbody fusion, and multilevel hemilaminectomy via MAS versus an open approach. Overall, the included cost-effectiveness studies generally supported no significant differences between open surgery and MAS lumbar approaches. However, these conclusions are preliminary because there was a paucity of high-quality evidence: much of the evidence lacked details on the methodology for modeling, related assumptions, justification of the economic model chosen, and the sources and types of included costs and consequences. The follow-up periods were highly variable, indirect costs were not frequently analyzed or reported, and many of the studies were conducted by a single group, thereby limiting generalizability. Prospective studies are needed to define differences and optimal treatment algorithms. Level of evidence: 3.

  12. 48 CFR 5215.605 - Evaluation factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... REGULATIONS CONTRACTING BY NEGOTIATION Source Selection 5215.605 Evaluation factors. (S-90)(1) When a cost... the proposed costs may be adjusted, for purposes of evaluation, based upon the results of the cost... minimally acceptable approach or a cost/benefit approach. When the quality desired is that necessary to meet...

  13. 48 CFR 5215.402 - General.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... pricing in the Navy is through the use of competition, without the need for cost or pricing data and cost... procurement leadtime as a result of minimizing the requirement for cost or pricing data and associated audit reports. As competition is increasingly relied upon and the need for cost or pricing data is reduced...

  14. A New Distributed Optimization for Community Microgrids Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Starke, Michael R; Tomsovic, Kevin

    This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
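
    The MCC-BEMS exchange of prices and consumptions echoes a dual-decomposition (price coordination) pattern; the toy loop below illustrates only that generic pattern, with hypothetical linear-response buildings, and is not the paper's MILP:

        def building_demand(price, a, b):
            """Each building trades bill against discomfort; a quadratic comfort
            model yields this linear demand response (a, b are hypothetical)."""
            return max(0.0, (a - price) / b)

        buildings = [(10.0, 2.0), (8.0, 1.0), (12.0, 4.0)]
        capacity, price, step = 9.0, 0.0, 0.1      # cheap on-site generation limit

        for _ in range(200):                       # price (dual) ascent iterations
            total = sum(building_demand(price, a, b) for a, b in buildings)
            price = max(0.0, price + step * (total - capacity))

        print(round(price, 2), round(total, 2))    # clearing price, matched demand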

  15. Cost-efficient scheduling of FAST observations

    NASA Astrophysics Data System (ADS)

    Luo, Qi; Zhao, Laiping; Yu, Ce; Xiao, Jian; Sun, Jizhou; Zhu, Ming; Zhong, Yi

    2018-03-01

    A cost-efficient schedule for the Five-hundred-meter Aperture Spherical radio Telescope (FAST) must maximize the number of observable proposals and the overall scientific priority, and minimize the overall slew cost generated by telescope shifting, while taking into account constraints including astronomical object visibility, user-defined observable times, and avoidance of radio frequency interference (RFI). In this contribution, we first solve the problem of maximizing the number of observable proposals and the scientific priority by modeling it as a Minimum Cost Maximum Flow (MCMF) problem; the optimal schedule can be found by any MCMF solution algorithm. Then, to minimize the slew cost of the generated schedule, we devise a method based on detecting maximally-matchable edges to reduce the problem size, and propose a backtracking algorithm to find the perfect matching with minimum slew cost. Experiments on a real dataset from the NASA/IPAC Extragalactic Database (NED) show that the proposed scheduler can increase the usage of available times with high scientific priority and reduce the slew cost significantly in a very short time.
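
    To give a flavor of the MCMF formulation, proposals and observing slots can be modeled as a bipartite flow network in which scientific priority enters as a negative edge cost; a toy sketch with networkx (the proposals, slots, and weights are hypothetical):

        import networkx as nx

        G = nx.DiGraph()
        proposals = {"P1": 3, "P2": 1}            # hypothetical scientific priorities
        slots = ["S1", "S2"]                      # hypothetical observable time slots

        for p, priority in proposals.items():
            G.add_edge("src", p, capacity=1, weight=0)
            for s in slots:                       # reward an assignment by -priority
                G.add_edge(p, s, capacity=1, weight=-priority)
        for s in slots:
            G.add_edge(s, "sink", capacity=1, weight=0)

        flow = nx.max_flow_min_cost(G, "src", "sink")
        schedule = [(p, s) for p in proposals for s in slots if flow[p].get(s, 0) > 0]
        print(schedule)   # maximum number of proposals, highest total priority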

  16. Do Medicare Beneficiaries Living With HIV/AIDS Choose Prescription Drug Plans That Minimize Their Total Spending?

    PubMed

    Desmond, Katherine A; Rice, Thomas H; Leibowitz, Arleen A

    2017-01-01

    This article examines whether California Medicare beneficiaries with HIV/AIDS choose Part D prescription drug plans that minimize their expenses. Among beneficiaries without low-income supplementation, we estimate the excess cost, and the insurance policy and beneficiary characteristics responsible, when the lowest cost plan is not chosen. We use a cost calculator developed for this study, and 2010 drug use data on 1453 California Medicare beneficiaries with HIV who were taking antiretroviral medications. Excess spending is defined as the difference between projected total spending (premium and cost sharing) for the beneficiary's current drug regimen in own plan vs spending for the lowest cost alternative plan. Regression analyses related this excess spending to individual and plan characteristics. We find that beneficiaries pay more for Medicare Part D plans with gap coverage and no deductible. Higher premiums for more extensive coverage exceeded savings in deductible and copayment/coinsurance costs. We conclude that many beneficiaries pay for plan features whose costs exceed their benefits.

  17. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs): the data transmission path may be compromised by disabled nodes. To construct a secure and reliable network, it is important to design an adaptive route strategy that optimizes the energy consumption and network lifetime costs of aggregation. In this paper, we address the reliable data aggregation route problem for WSNs. First, to ensure nodes work properly, we propose a data aggregation route algorithm that improves the energy efficiency in the WSN; the route construction process, achieved through discrete particle swarm optimization (DPSO), saves node energy. Then, to balance the network load and establish a reliable network, an adaptive route algorithm that minimizes energy and maximizes lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO with a multi-objective fitness function, combined with a phenotype sharing function and a penalty function, to find available routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944

  18. Retrospective Cost Adaptive Control with Concurrent Closed-Loop Identification

    NASA Astrophysics Data System (ADS)

    Sobolic, Frantisek M.

    Retrospective cost adaptive control (RCAC) is a discrete-time direct adaptive control algorithm for stabilization, command following, and disturbance rejection. RCAC is known to work on systems given minimal modeling information, namely the leading numerator coefficient and any nonminimum-phase (NMP) zeros of the plant transfer function. This information is normally needed a priori and is key to the development of the filter, also known as the target model, within the retrospective performance variable. A novel approach is developed to alleviate the need for prior modeling of both the leading coefficient of the plant transfer function and any NMP zeros. The extension to the RCAC algorithm is the concurrent optimization of both the target model and the controller coefficients. This optimization problem is quadratic in the target model coefficients and in the controller coefficients separately; however, it is not convex as a joint function of both sets of variables, and therefore nonconvex optimization methods are needed. Finally, insights within RCAC, including intercalated injection between the controller numerator and denominator, reveal that RCAC works by fitting a specific closed-loop transfer function to the target model. We exploit this interpretation by investigating several closed-loop identification architectures in order to extract this information for use in the target model.

  19. Optimal design of the satellite constellation arrangement reconfiguration process

    NASA Astrophysics Data System (ADS)

    Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid

    2016-08-01

    In this article, a novel approach is introduced for satellite constellation reconfiguration based on Lambert's theorem. Several critical problems arise in the reconfiguration phase, such as minimizing the overall fuel cost, avoiding collisions between the satellites on the final orbital pattern, and determining the maneuvers needed to deploy each satellite to its desired position in the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. The dynamic model of the problem is also formulated such that the optimal assignment of satellites to the initial and target orbits and the optimal orbital transfer are combined in one step. Finally, we argue that our presented idea, i.e., coupled non-simultaneous flight of satellites from the initial orbital pattern, leads to minimal cost. The obtained results show that the presented method clearly reduces the cost of the reconfiguration process.

  1. Use of Linear Programming to Develop Cost-Minimized Nutritionally Adequate Health Promoting Food Baskets

    PubMed Central

    Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen

    2016-01-01

    Background Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized, nutritionally adequate, health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at lowest cost. Methods Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. Results The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN baskets cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while cultural acceptability (CAN) tripled it. Conclusion Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable. PMID:27760131
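
    The linear program underlying these baskets is the classical diet problem; a minimal sketch with three hypothetical foods and two nutrient constraints (nothing like the 312-food Copenhagen data set):

        import numpy as np
        from scipy.optimize import linprog

        price = np.array([2.0, 5.0, 1.5])           # DKK per 100 g (hypothetical foods)
        nutrients = np.array([[12.0, 20.0, 1.5],    # protein, g per 100 g
                              [ 4.0,  1.0, 3.0]])   # fibre, g per 100 g
        required = np.array([60.0, 25.0])           # daily requirements

        # linprog minimizes cost; nutrient lower bounds become -N x <= -required.
        res = linprog(price, A_ub=-nutrients, b_ub=-required,
                      bounds=(0, 10),               # cap each food to force variety
                      method="highs")
        print(res.x, res.fun)                       # 100 g units of each food, min cost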

  2. Community disruptions and business costs for distant tsunami evacuations using maximum versus scenario-based zones

    USGS Publications Warehouse

    Wood, Nathan J.; Wilson, Rick I.; Ratliff, Jamie L.; Peters, Jeff; MacMullan, Ed; Krebs, Tessa; Shoaf, Kimberley; Miller, Kevin

    2017-01-01

    Well-executed evacuations are key to minimizing loss of life from tsunamis, yet they also disrupt communities and business productivity in the process. Most coastal communities implement evacuations based on a previously delineated maximum-inundation zone that integrates zones from multiple tsunami sources. To support consistent evacuation planning that protects lives but attempts to minimize community disruptions, we explore the implications of scenario-based evacuation procedures and use the California (USA) coastline as our case study. We focus on the land in coastal communities that is in maximum-evacuation zones, but is not expected to be flooded by a tsunami generated by a Chilean earthquake scenario. Results suggest that a scenario-based evacuation could greatly reduce the number of residents and employees that would be advised to evacuate for 24–36 h (178,646 and 159,271 fewer individuals, respectively) and these reductions are concentrated primarily in three counties for this scenario. Private evacuation spending is estimated to be greater than public expenditures for operating shelters in the area of potential over-evacuations ($13 million compared to $1 million for a 1.5-day evacuation). Short-term disruption costs for businesses in the area of potential over-evacuation are approximately $122 million for a 1.5-day evacuation, with one-third of this cost associated with manufacturing, suggesting that some disruption costs may be recouped over time with increased short-term production. There are many businesses and organizations in this area that contain individuals with limited mobility or access and functional needs that may have substantial evacuation challenges. This study demonstrates and discusses the difficulties of tsunami-evacuation decision-making for relatively small to moderate events faced by emergency managers, not only in California but in coastal communities throughout the world.

  3. Groupwise Registration and Atlas Construction of 4th-Order Tensor Fields Using the ℝ+ Riemannian Metric

    PubMed Central

    Barmpoutis, Angelos

    2010-01-01

    Registration of Diffusion-Weighted MR Images (DW-MRI) can be achieved by registering the corresponding 2nd-order Diffusion Tensor Images (DTI). However, it has been shown that higher-order diffusion tensors (e.g. order-4) outperform the traditional DTI in approximating complex fiber structures such as fiber crossings. In this paper we present a novel method for unbiased group-wise non-rigid registration and atlas construction of 4th-order diffusion tensor fields. To the best of our knowledge there is no other existing method to achieve this task. First we define a metric on the space of positive-valued functions based on the Riemannian metric of real positive numbers (denoted by ℝ+). Then, we use this metric in a novel functional minimization method for non-rigid 4th-order tensor field registration. We define a cost function that accounts for the 4th-order tensor re-orientation during the registration process and has analytic derivatives with respect to the transformation parameters. Finally, the tensor field atlas is computed as the minimizer of the variance defined using the Riemannian metric. We quantitatively compare the proposed method with other techniques that register scalar-valued or diffusion tensor (rank-2) representations of the DW-MRI. PMID:20436782
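
    The ℝ+ building block is simple to state: the geodesic distance between positive numbers x and y under the Riemannian metric of ℝ+ is |log x − log y|, so a discrete distance between positive-valued functions can be taken as the L2 mean of the log-ratio. The sketch below shows only this ingredient, with hypothetical sampled profiles, not the paper's full 4th-order tensor registration functional.

      # Distance between sampled positive-valued functions under the R+ metric.
      import numpy as np

      def rplus_distance(f, g):
          # Geodesic distance on R+ is |log x - log y|; aggregate with an L2 mean.
          f, g = np.asarray(f, float), np.asarray(g, float)
          return np.sqrt(np.mean(np.log(f / g) ** 2))

      x = np.linspace(0.0, 1.0, 200)
      f = 1.6 + 0.5 * np.sin(2 * np.pi * x)            # hypothetical profile
      g = 1.6 + 0.5 * np.sin(2 * np.pi * (x - 0.05))   # slightly shifted copy
      print("R+ distance:", rplus_distance(f, g))      # 0 only when f == g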

  4. A Concept for a Mobile Remote Manipulator System

    NASA Technical Reports Server (NTRS)

    Mikulus, M. M., Jr.; Bush, H. G.; Wallsom, R. E.; Jensen, J. K.

    1985-01-01

    A conceptual design for a Mobile Remote Manipulator System (MRMS) is presented. This concept does not require continuous rails for mobility (only guide pins at truss hardpoints) and is very compact, being only one bay square. The MRMS proposed is highly maneuverable and is able to move in any direction along the orthogonal guide pin array under complete control at all times. The proposed concept would greatly enhance the safety and operational capabilities of astronauts performing EVA functions such as structural assembly, payload transport and attachment, space station maintenance, repair or modification, and future spacecraft construction or servicing. The MRMS drive system conceptual design presented is a reasonably simple mechanical device which can be designed to exhibit high reliability. Developmentally, all components of the proposed MRMS either exist or are considered to be completely state of the art designs requiring minimal development, features which should enhance reliability and minimize costs.

  5. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm squared data fitting term and a modified Huber penalty term that are minimized alternately in an adaptive fashion. The technique can provide full pixel resolution complex-valued images of the selected ROI, which is not possible to achieve with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The complementary phase information, in addition to the usual absorption information already available from bright field microscopy, can make the methodology attractive to the biomedical user community.
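
    The shape of such a cost function is easy to illustrate: an L2-norm-squared data-fitting term plus a Huber penalty that is quadratic for small gradients and linear for large ones. The sketch below substitutes a random linear operator and a generic quasi-Newton solver for the paper's hologram model and adaptive alternating minimization.

      # L2 data fit plus Huber-penalized gradients, minimized generically.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 64
      A = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in imaging operator
      x_true = np.zeros(n)
      x_true[20:30] = 1.0                            # piecewise-constant "object"
      b = A @ x_true + 0.01 * rng.standard_normal(n)

      def huber(t, delta=0.05):
          # Quadratic near zero, linear in the tails (edge-preserving).
          return np.where(np.abs(t) <= delta, 0.5 * t**2,
                          delta * (np.abs(t) - 0.5 * delta))

      def cost(x, lam=0.1):
          return np.sum((A @ x - b) ** 2) + lam * np.sum(huber(np.diff(x)))

      res = minimize(cost, np.zeros(n), method="L-BFGS-B")
      err = np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true)
      print(f"relative reconstruction error: {err:.3f}")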

  6. Optimal groundwater remediation design of pump and treat systems via a simulation-optimization approach and firefly algorithm

    NASA Astrophysics Data System (ADS)

    Javad Kazemzadeh-Parsi, Mohammad; Daneshmand, Farhang; Ahmadfard, Mohammad Amin; Adamowski, Jan; Martel, Richard

    2015-01-01

    In the present study, an optimization approach based on the firefly algorithm (FA) is combined with a finite element simulation method (FEM) to determine the optimum design of pump and treat remediation systems. Three multi-objective functions in which pumping rate and clean-up time are design variables are considered and the proposed FA-FEM model is used to minimize operating costs, total pumping volumes and total pumping rates in three scenarios while meeting water quality requirements. The groundwater lift and contaminant concentration are also minimized through the optimization process. The obtained results show the applicability of the FA in conjunction with the FEM for the optimal design of groundwater remediation systems. The performance of the FA is also compared with the genetic algorithm (GA) and the FA is found to have a better convergence rate than the GA.
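
    For orientation, a minimal firefly algorithm is sketched below: each candidate moves toward brighter (lower-cost) candidates with an attractiveness that decays with squared distance, plus a decaying random walk. The two-variable "remediation cost" merely stands in for the paper's finite element simulation.

      # Minimal firefly algorithm (FA) on a stand-in cost function.
      import numpy as np

      def firefly_minimize(f, dim, n=20, iters=200, beta0=1.0, gamma=1.0,
                           alpha=0.1, lo=-5.0, hi=5.0, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(lo, hi, (n, dim))
          cost = np.apply_along_axis(f, 1, x)
          for _ in range(iters):
              for i in range(n):
                  for j in range(n):
                      if cost[j] < cost[i]:          # firefly j is "brighter"
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          step = alpha * rng.uniform(-0.5, 0.5, dim)
                          x[i] = np.clip(x[i] + beta * (x[j] - x[i]) + step, lo, hi)
                          cost[i] = f(x[i])
              alpha *= 0.97                          # cool the random walk
          best = int(np.argmin(cost))
          return x[best], cost[best]

      # Hypothetical 2-variable design (pumping rate, clean-up time), with a
      # penalty when a surrogate "concentration" constraint is violated.
      def remediation_cost(v):
          q, t = v
          return q**2 + 0.5 * t**2 + 100.0 * max(0.0, 1.0 - q * t)

      print(firefly_minimize(remediation_cost, dim=2))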

  7. Simple Approaches to Minimally-Instrumented, Microfluidic-Based Point-of-Care Nucleic Acid Amplification Tests

    PubMed Central

    Mauk, Michael G.; Song, Jinzhao; Liu, Changchun; Bau, Haim H.

    2018-01-01

    Designs and applications of microfluidics-based devices for molecular diagnostics (Nucleic Acid Amplification Tests, NAATs) in infectious disease testing are reviewed, with emphasis on minimally instrumented, point-of-care (POC) tests for resource-limited settings. Microfluidic cartridges (‘chips’) that combine solid-phase nucleic acid extraction; isothermal enzymatic nucleic acid amplification; pre-stored, paraffin-encapsulated lyophilized reagents; and real-time or endpoint optical detection are described. These chips can be used with a companion module for separating plasma from blood through a combined sedimentation-filtration effect. Three reporter types (fluorescence, colorimetric dyes, and bioluminescence) and a new paradigm for end-point detection based on a diffusion-reaction column are compared. Multiplexing (parallel amplification and detection of multiple targets) is demonstrated. Low-cost detection and added functionality (data analysis, control, communication) can be realized using a cellphone platform with the chip. Some related and similar-purposed approaches by others are surveyed. PMID:29495424

  8. An autonomous payload controller for the Space Shuttle

    NASA Technical Reports Server (NTRS)

    Hudgins, J. I.

    1979-01-01

    The Autonomous Payload Control (APC) system discussed in the present paper was designed on the basis of such criteria as minimal cost of implementation, minimal space required in the flight-deck area, simple operation with verification of the results, minimal additional weight, minimal impact on Orbiter design, and minimal impact on Orbiter payload integration. In its present configuration, the APC provides a means for the Orbiter crew to control as many as 31 autonomous payloads. The avionics and human engineering aspects of the system are discussed.

  9. Constrained Total Generalized p-Variation Minimization for Few-View X-Ray Computed Tomography Image Reconstruction.

    PubMed

    Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen

    2016-01-01

    Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct means of exploiting the sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of the augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions obtained by applying the alternating minimization method and a generalized p-shrinkage mapping. In addition, approximate solutions that can be easily implemented and quickly calculated through the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. The method is evaluated qualitatively and quantitatively on simulated and real data to validate its accuracy, efficiency, and feasibility. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
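
    The workhorse inner step is a generalized p-shrinkage mapping applied to the split variables. A commonly used form (in the style of Chartrand's p-shrinkage, assumed here rather than taken from the paper) reduces to ordinary soft thresholding at p = 1:

      # Generalized p-shrinkage mapping; elementwise on arrays.
      import numpy as np

      def p_shrink(t, lam, p):
          # For p = 1 this is soft thresholding; p < 1 shrinks small values
          # harder while leaving large values nearly untouched.
          mag = np.abs(t)
          # Guard against 0**(p-1) warnings; shrinkage at zero is zero anyway.
          shrunk = np.maximum(
              mag - lam ** (2 - p) * np.maximum(mag, 1e-12) ** (p - 1), 0.0)
          return np.sign(t) * shrunk

      t = np.linspace(-2, 2, 9)
      print(p_shrink(t, lam=0.5, p=1.0))   # ordinary soft thresholding
      print(p_shrink(t, lam=0.5, p=0.5))   # sparser: small entries vanish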

  10. Cost-utility analysis of minimally invasive versus open multilevel hemilaminectomy for lumbar stenosis.

    PubMed

    Parker, Scott L; Adogwa, Owoicho; Davis, Brandon J; Fulchiero, Erin; Aaronson, Oran; Cheng, Joseph; Devin, Clinton J; McGirt, Matthew J

    2013-02-01

    Two-year cost-utility study comparing minimally invasive (MIS) versus open multilevel hemilaminectomy in patients with degenerative lumbar spinal stenosis. The objective of the study was to determine whether MIS versus open multilevel hemilaminectomy for degenerative lumbar spinal stenosis is a cost-effective advancement in lumbar decompression surgery. MIS-multilevel hemilaminectomy for degenerative lumbar spinal stenosis allows for effective treatment of back and leg pain while theoretically minimizing blood loss, tissue injury, and postoperative recovery. No studies have evaluated comprehensive healthcare costs associated with multilevel hemilaminectomy procedures, nor assessed cost-effectiveness of MIS versus open multilevel hemilaminectomy. Fifty-four consecutive patients with lumbar stenosis undergoing multilevel hemilaminectomy through an MIS paramedian tubular approach (n=27) versus midline open approach (n=27) were included. Total back-related medical resource utilization, missed work, and health state values [quality-adjusted life years (QALYs), calculated from EuroQol-5D with US valuation] were assessed after 2-year follow-up. Two-year resource use was multiplied by unit costs based on Medicare national allowable payment amounts (direct cost) and work-day losses were multiplied by the self-reported gross-of-tax wage rate (indirect cost). The difference in mean total cost per QALY gained for MIS versus open hemilaminectomy was assessed as the incremental cost-effectiveness ratio (ICER: [COST(MIS)-COST(OPEN)]/[QALY(MIS)-QALY(OPEN)]). MIS versus open cohorts were similar at baseline. MIS and open hemilaminectomy were associated with an equivalent cumulative gain of 0.72 QALYs 2 years after surgery. Mean direct medical costs, indirect societal costs, and total 2-year cost ($23,109 vs. $25,420; P=0.21) were similar between MIS and open hemilaminectomy. MIS versus open approach was associated with similar total costs and utility, making it a cost equivalent technology compared with the traditional open approach. MIS versus open multilevel hemilaminectomy was associated with similar cost over 2 years while providing equivalent improvement in QALYs. In our experience, MIS versus open multilevel hemilaminectomy is a cost equivalent technology for patients with lumbar stenosis-associated radicular pain.

  11. Comparison of the costs of nonoperative care to minimally invasive surgery for sacroiliac joint disruption and degenerative sacroiliitis in a United States commercial payer population: potential economic implications of a new minimally invasive technology

    PubMed Central

    Ackerman, Stacey J; Polly, David W; Knight, Tyler; Schneider, Karen; Holt, Tim; Cummings, John

    2014-01-01

    Introduction Low back pain is common and treatment costly, with substantial lost productivity and lost wages in the working-age population. Chronic low back pain originating in the sacroiliac (SI) joint (15%–30% of cases) is commonly treated with nonoperative care, but new minimally invasive surgery (MIS) options are also effective in treating SI joint disruption. We assessed whether the higher initial MIS SI joint fusion procedure costs were offset by decreased nonoperative care costs from a US commercial payer perspective. Methods An economic model compared the costs of treating SI joint disruption with either MIS SI joint fusion or continued nonoperative care. Nonoperative care costs (diagnostic testing, treatment, follow-up, and retail pharmacy pain medication) were from a retrospective study of Truven Health MarketScan® data. MIS fusion costs were based on the Premier’s Perspective™ Comparative Database, and professional fees on the 2012 Medicare payment for Current Procedural Terminology code 27280. Results The cumulative 3-year (base-case analysis) and 5-year (sensitivity analysis) differentials in commercial insurance payments (cost of nonoperative care minus cost of MIS) were $14,545 and $6,137 per patient, respectively (2012 US dollars). Cost neutrality was achieved at 6 years; MIS costs accrued largely in year 1 whereas nonoperative care costs accrued over time, with 92% of up-front MIS procedure costs offset by year 5. For patients with lumbar spinal fusion, cost neutrality was achieved in year 1. Conclusion Cost offsets from new interventions for chronic conditions such as MIS SI joint fusion accrue over time. Higher initial procedure costs for MIS were largely offset by decreased nonoperative care costs over a 5-year time horizon. Optimizing effective resource use in both nonoperative and operative patients will facilitate cost-effective health care delivery. The impact of SI joint disruption on direct and indirect costs to commercial insurers, health plan beneficiaries, and employers warrants further consideration. PMID:24904218

  12. Rough sets and Laplacian score based cost-sensitive feature selection

    PubMed Central

    Yu, Shenglong; Zhao, Hong

    2018-01-01

    Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884

  13. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is a sequence of unconstrained minimizations, with Newton's method used for each unconstrained minimization. The use of NEWSUMT and the definition of all parameters are described.
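
    The SUMT idea behind NEWSUMT can be sketched briefly: fold the inequality constraints into a penalty term, minimize the unconstrained composite, tighten the penalty weight, and repeat. The sketch below uses an inverse-barrier penalty and a BFGS inner solver as a stand-in for Newton's method; the example problem and all parameters are invented.

      # Sequential unconstrained minimization (SUMT) with an inverse barrier.
      import numpy as np
      from scipy.optimize import minimize

      def sumt(f, gs, x0, r=1.0, shrink=0.1, outer=6):
          x = np.asarray(x0, dtype=float)
          for _ in range(outer):
              def phi(x, r=r):
                  # Barrier blows up as any g_i(x) >= 0 approaches zero;
                  # the max() guard softens excursions into infeasibility.
                  return f(x) + r * sum(1.0 / max(g(x), 1e-12) for g in gs)
              x = minimize(phi, x, method="BFGS").x   # inner unconstrained solve
              r *= shrink                             # next problem in the sequence
          return x

      # Example: minimize (x-2)^2 + (y-1)^2 subject to x + y <= 2,
      # written as g(x, y) = 2 - x - y >= 0; optimum is (1.5, 0.5).
      f = lambda v: (v[0] - 2) ** 2 + (v[1] - 1) ** 2
      g = lambda v: 2.0 - v[0] - v[1]
      print(sumt(f, [g], x0=[0.0, 0.0]))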

  14. Accounting for the economic risk caused by variation in disease severity in fungicide dose decisions, exemplified for Mycosphaerella graminicola on winter wheat.

    PubMed

    Te Beest, D E; Paveley, N D; Shaw, M W; van den Bosch, F

    2013-07-01

    A method is presented to calculate economic optimum fungicide doses accounting for the risk aversion of growers responding to variability in disease severity between crops. Simple dose-response and disease-yield loss functions are used to estimate net disease-related costs (fungicide cost plus disease-induced yield loss) as a function of dose and untreated severity. With fairly general assumptions about the shapes of the probability distribution of disease severity and the other functions involved, we show that a choice of fungicide dose which minimizes net costs, on average, across seasons results in occasional large net costs caused by inadequate control in high disease seasons. This may be unacceptable to a grower with limited capital. A risk-averse grower can choose to reduce the size and frequency of such losses by applying a higher dose as insurance. For example, a grower may decide to accept "high-loss" years 1 year in 10 or 1 year in 20 (i.e., specifying a proportion of years in which disease severity and net costs will be above a specified level). Our analysis shows that taking into account disease severity variation and risk aversion will usually increase the dose applied by an economically rational grower. The analysis is illustrated with data on Septoria tritici leaf blotch of wheat caused by Mycosphaerella graminicola. Observations from untreated field plots at sites across England over 3 years were used to estimate the probability distribution of disease severities at mid-grain filling. In the absence of a fully reliable disease forecasting scheme, reducing the frequency of high-loss years requires substantially higher doses to be applied to all crops. Disease-resistant cultivars reduce both the optimal dose at all levels of risk and the disease-related costs at all doses.
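
    A toy version of the argument: let net cost be fungicide price times dose plus severity-driven yield loss, draw untreated severity across seasons, and compare the dose minimizing mean cost with the dose minimizing an upper percentile of cost. All functional forms and parameters below are invented, not the paper's fitted curves.

      # Risk-neutral versus risk-averse fungicide dose (illustrative model).
      import numpy as np

      rng = np.random.default_rng(1)
      severity = rng.lognormal(mean=-1.0, sigma=0.8, size=20_000)  # per season

      def net_cost(dose, s, fung_price=30.0, loss_per_sev=400.0, kappa=2.0):
          controlled = s * np.exp(-kappa * dose)    # exponential dose-response
          return fung_price * dose + loss_per_sev * controlled

      doses = np.linspace(0.0, 2.0, 201)
      mean_cost = [net_cost(d, severity).mean() for d in doses]
      p90_cost = [np.percentile(net_cost(d, severity), 90) for d in doses]

      print("risk-neutral dose:", doses[int(np.argmin(mean_cost))])
      print("risk-averse dose :", doses[int(np.argmin(p90_cost))])  # higher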

  15. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core is linked to which router, is one of the key issues in the design flow of a network-on-chip (NoC). To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the most promising solutions in a large design space; these solutions have low communication costs, reaching the optimum communication cost in some cases. Another evaluated parameter, the vulnerability index, is then used to estimate the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function of them is defined and introduced. It is also observed that more flexibility in prioritizing solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.

  16. Systems and technologies for high-speed inter-office/datacenter interface

    NASA Astrophysics Data System (ADS)

    Sone, Y.; Nishizawa, H.; Yamamoto, S.; Fukutoku, M.; Yoshimatsu, T.

    2017-01-01

    Emerging requirements for inter-office/inter-datacenter short reach links for data center interconnects (DCI) and metro transport networks have led to various inter-office and inter-datacenter optical interface technologies. These technologies are bringing significant changes to systems and network architectures. In this paper, we present a system and ZR optical interface technologies for DCI and metro transport networks, then introduce the latest challenges facing the system framework. There are two trends in reach extension: one is to use Ethernet and the other is to use digital coherent technologies. The first approach achieves reach extension while using as many existing Ethernet components as possible. It offers low costs as it reuses the cost-effective components created for the large Ethernet market. The second approach adopts low-cost, low-power coherent DSPs that implement a minimal set of long-haul transmission functions. This paper introduces an architecture that integrates both trends. The architecture satisfies both datacom and telecom needs with a common control and management interface and automated configuration.

  17. Eighth-order explicit two-step hybrid methods with symmetric nodes and weights for solving orbital and oscillatory IVPs

    NASA Astrophysics Data System (ADS)

    Franco, J. M.; Rández, L.

    The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (one of the lowest costs we know of in the literature), and they are optimal among TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) so as to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.

  18. An integrated production-inventory model for the single-vendor two-buyer problem with partial backorder, stochastic demand, and service level constraints

    NASA Astrophysics Data System (ADS)

    Arfawi Kurdhi, Nughthoh; Adi Diwiryo, Toray; Sutanto

    2016-02-01

    This paper presents an integrated single-vendor two-buyer production-inventory model with stochastic demand and service level constraints. Shortages are permitted in the model and are partially backordered, with the remainder lost sales. The lead-time demand is assumed to follow a normal distribution, and the lead time can be reduced by adding crashing cost. The lead time and ordering cost reductions are interdependent, with a logarithmic function relationship. A service level constraint policy corresponding to each buyer is considered in the model in order to limit the level of inventory shortages. The purpose of this research is to minimize the joint total cost of the inventory model by finding the optimal order quantity, safety stock, lead time, and the number of lots delivered in one production run. The optimal production-inventory policy is obtained by the Lagrange method and shaped to account for the service level restrictions. Finally, a numerical example is given and the effects of the key parameters are examined to illustrate the results of the proposed model.

  19. Launch Vehicle Propulsion Parameter Design Multiple Selection Criteria

    NASA Technical Reports Server (NTRS)

    Shelton, Joey Dewayne

    2004-01-01

    The optimization tool described herein addresses and emphasizes the use of computer tools to model a system and focuses on a concept development approach for a liquid hydrogen/liquid oxygen single-stage-to-orbit system, but more particularly the development of the optimized system using new techniques. This methodology uses new and innovative tools to run Monte Carlo simulations, genetic algorithm solvers, and statistical models in order to optimize a design concept. The concept launch vehicle and propulsion system were modeled and optimized to determine the best design for weight and cost by varying design and technology parameters. Uncertainty levels were applied using Monte Carlo simulations and the model output was compared to the National Aeronautics and Space Administration Space Shuttle Main Engine. Several key conclusions are summarized here for the model results. First, the Gross Liftoff Weight and Dry Weight were 67% higher for the case minimizing Design, Development, Test and Evaluation cost when compared to the weights determined by the case minimizing Gross Liftoff Weight. In turn, the Design, Development, Test and Evaluation cost was 53% higher for the optimized Gross Liftoff Weight case when compared to the cost determined by the case minimizing Design, Development, Test and Evaluation cost. Therefore, a 53% increase in Design, Development, Test and Evaluation cost results in a 67% reduction in Gross Liftoff Weight. Secondly, the tool outputs define the sensitivity of propulsion parameters, technology and cost factors, and how these parameters differ when cost and weight are optimized separately. A key finding was that for a Space Shuttle Main Engine thrust level, an oxidizer/fuel ratio of 6.6 resulted in the lowest Gross Liftoff Weight, rather than the ratio of 5.2 that maximizes specific impulse, demonstrating the relationships between specific impulse, engine weight, tank volume, and tank weight. Lastly, the optimum chamber pressure for Gross Liftoff Weight minimization was 2713 pounds per square inch, as compared to 3162 for the Design, Development, Test and Evaluation cost optimization case. This chamber pressure range is close to the 3000 pounds per square inch of the Space Shuttle Main Engine.

  1. JWST Wavefront Control Toolbox

    NASA Technical Reports Server (NTRS)

    Shin, Shahram Ron; Aronstein, David L.

    2011-01-01

    A Matlab-based toolbox has been developed for the wavefront control and optimization of segmented optical surfaces to correct for possible misalignments of the James Webb Space Telescope (JWST) using influence functions. The toolbox employs both iterative and non-iterative methods to converge to an optimal solution by minimizing the cost function. The toolbox can be used in either constrained or unconstrained optimizations. The control process involves 1 to 7 degrees-of-freedom perturbations per segment of the primary mirror in addition to the 5 degrees of freedom of the secondary mirror. The toolbox consists of a series of Matlab/Simulink functions and modules, developed based on a "wrapper" approach, that handles the interface and data flow between existing commercial optical modeling software packages such as Zemax and Code V. The limitations of the algorithm are dictated by the constraints of the moving parts in the mirrors.

  2. On a stochastic control method for weakly coupled linear systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.

    1972-01-01

    The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller only makes measurements about his own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers are to generate independently suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model, which involves replacing the influence of one system on the other by a white noise process, is proposed. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.

  3. Evaluation of broiler litter transportation in northern Alabama, USA.

    PubMed

    Paudel, Krishna P; Adhikari, Murali; Martin, Neil R

    2004-10-01

    The profitability of using broiler litter as a source of crop nutrients was calculated using a phosphorus-consistent litter application rule. A ton of litter can be cost-effectively transported up to 164 miles from the production facility. A cost-minimizing phosphorus-consistent transportation model developed to meet the nutrient needs of 29 counties in northern Alabama revealed that not all of the litter can be utilized in the region. The total cost increased when transportation of the litter out of the heavily surplus counties was prioritized. Total litter use was minimally affected by changes in chemical fertilizer prices. Shadow prices indicated the robustness of the model.

  4. A multi-objective simulation-optimization model for in situ bioremediation of groundwater contamination: Application of bargaining theory

    NASA Astrophysics Data System (ADS)

    Raei, Ehsan; Nikoo, Mohammad Reza; Pourshahabi, Shokoufeh

    2017-08-01

    In the present study, a BIOPLUME III simulation model is coupled with a non-dominated sorting genetic algorithm (NSGA-II)-based model for optimal design of an in situ groundwater bioremediation system, considering preferences of stakeholders. Ministry of Energy (MOE), Department of Environment (DOE), and National Disaster Management Organization (NDMO) are three stakeholders in the groundwater bioremediation problem in Iran. Based on the preferences of these stakeholders, the multi-objective optimization model tries to minimize: (1) cost; (2) sum of contaminant concentrations that violate standard; (3) contaminant plume fragmentation. The NSGA-II multi-objective optimization method gives Pareto-optimal solutions. A compromised solution is determined using fallback bargaining with impasse to achieve a consensus among the stakeholders. In this study, two different approaches are investigated and compared based on two different domains for locations of injection and extraction wells. In the first approach, a limited number of predefined locations is considered according to previous similar studies. In the second approach, all possible points in the study area are investigated to find optimal locations, arrangement, and flow rate of injection and extraction wells. Involvement of the stakeholders, investigating all possible points instead of a limited number of locations for wells, and minimizing the contaminant plume fragmentation during bioremediation are the main innovations of this research. In addition, the simulation period is divided into smaller time intervals for more efficient optimization. The Image Processing Toolbox in MATLAB® is used for calculation of the third objective function. In comparison with previous studies, cost is reduced using the proposed methodology. Dispersion of the contaminant plume is reduced in both presented approaches using the third objective function. Considering all possible points in the study area for determining the optimal locations of the wells in the second approach leads to more desirable results, i.e., decreasing contaminant concentrations to the standard level and reducing cost by 20% to 40%.
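
    The compromise-selection step can be sketched independently of the simulation: give each stakeholder a ranking of the Pareto-optimal designs by the objective it cares about, then pick the design whose worst ranking is best, a maximin reading of fallback bargaining. The objective values below are invented.

      # Fallback-bargaining compromise over a finite Pareto set (toy data).
      import numpy as np

      # Rows: Pareto-optimal designs; columns: (cost, standard violation,
      # plume fragmentation). Lower is better for every objective.
      pareto = np.array([
          [1.0, 0.9, 0.5],
          [0.7, 1.2, 0.6],
          [0.8, 1.0, 0.4],
          [1.2, 0.7, 0.7],
      ])

      # Each stakeholder ranks the designs by "its" objective (0 = best).
      ranks = np.argsort(np.argsort(pareto, axis=0), axis=0)

      worst_rank = ranks.max(axis=1)           # each design's worst rank
      compromise = int(np.argmin(worst_rank))  # best worst-case acceptance
      print("compromise design:", compromise, "objectives:", pareto[compromise])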

  5. Cost-effectiveness and budget impact analyses of a long-term hypertension detection and control program for stroke prevention.

    PubMed

    Yamagishi, Kazumasa; Sato, Shinichi; Kitamura, Akihiko; Kiyama, Masahiko; Okada, Takeo; Tanigawa, Takeshi; Ohira, Tetsuya; Imano, Hironori; Kondo, Masahide; Okubo, Ichiro; Ishikawa, Yoshinori; Shimamoto, Takashi; Iso, Hiroyasu

    2012-09-01

    The nation-wide, community-based intensive hypertension detection and control program, as well as universal health insurance coverage, may well be contributing factors helping Japan rank near the top among countries with the longest life expectancy. We sought to examine the cost-effectiveness of such a community-based intervention program, as no evidence has been available for this issue. The hypertension detection and control program was initiated in 1963 in full intervention and minimal intervention communities in Akita, Japan. We performed comparative cost-effectiveness and budget-impact analyses for the period 1964-1987 of the costs of public health services and treatment of patients with hypertension and stroke on the one hand, and incidence of stroke on the other in the full intervention and minimal intervention communities. The program provided in the full intervention community was found to be cost saving 13 years after the beginning of the program, in addition to being effective: the prevalence and incidence of stroke were consistently lower in the full intervention community than in the minimal intervention community throughout the same period. The incremental cost was minus 28,358 yen per capita over 24 years. The community-based intensive hypertension detection and control program was found to be both effective and cost saving. The national government's policy to support this program may have contributed in part to the substantial decline in stroke incidence and mortality, which was largely responsible for the increase in Japanese life expectancy.

  6. Comparison of minimally invasive parathyroidectomy under local anaesthesia and minimally invasive video-assisted parathyroidectomy for primary hyperparathyroidism: a cost analysis

    PubMed Central

    MELFA, G.I.; RASPANTI, C.; ATTARD, M.; COCORULLO, G.; ATTARD, A.; MAZZOLA, S.; SALAMONE, G.; GULOTTA, G.; SCERRINO, G.

    2016-01-01

    Background Primary hyperparathyroidism (PHPT) originates from a solitary adenoma in 70–95% of cases. Moreover, advances in methods for localizing an abnormal parathyroid gland have made minimally invasive techniques more prominent. This study presents a micro-cost analysis of two parathyroidectomy techniques. Patients and methods 72 consecutive patients who underwent minimally invasive parathyroidectomy, video-assisted (MIVAP, group A, 52 patients) or “open” under local anaesthesia (OMIP, group B, 20 patients), for PHPT were reviewed. Operating room, consumable, anaesthesia, and maintenance costs, equipment depreciation, and surgeon/anaesthesiologist fees were evaluated. Patient satisfaction and the rate of conversion to conventional parathyroidectomy were investigated. Student's t-test, the Kolmogorov-Smirnov test, and odds ratios were used for statistical analysis. Results 1 patient of group A and 2 of group B were excluded from the cost analysis because of conversion to the conventional technique. For the remaining patients, the overall average costs were: for the operating room, 1186.69 € for the MIVAP group (51 patients) and 836.11 € for the OMIP group (p<0.001); for the team, 122.93 € (group A) and 90.02 € (group B) (p<0.001); the other operative costs were 1388.32 € (group A) and 928.23 € (group B) (p<0.001). Patient satisfaction was very strongly in favour of group B (odds ratio 20.5, with a 95% confidence interval). Conclusions MIVAP is more expensive than “open” parathyroidectomy under local anaesthesia owing to the costs of general anaesthesia and the longer operative time. Moreover, patients generally prefer local anaesthesia. Nevertheless, the rate of conversion to conventional parathyroidectomy was considerable in the local anaesthesia group compared with MIVAP, since the latter allows a four-gland exploration. PMID:27381690

  7. Defining Continuous Improvement and Cost Minimization Possibilities through School Choice Experiments

    ERIC Educational Resources Information Center

    Merrifield, John

    2009-01-01

    Studies of existing best practices cannot determine whether the current "best" schooling practices could be even better, less costly, or more effective and/or improve at a faster rate, but we can discover a cost-effective menu of schooling options and each item's minimum cost through market accountability experiments. This paper describes…

  8. Development of type transfer functions for regional-scale nonpoint source groundwater vulnerability assessments

    NASA Astrophysics Data System (ADS)

    Stewart, Iris T.; Loague, Keith

    2003-12-01

    Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil texture based travel time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparison to process-based simulations shows that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.

  9. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  10. Comparing conventional physical therapy rehabilitation with neuromuscular electrical stimulation after TKA.

    PubMed

    Levine, Michael; McElroy, Karen; Stakich, Valerie; Cicco, Jodie

    2013-03-01

    Rehabilitation following total knee arthroplasty (TKA) is a costly, cumbersome, and often painful process. Physical therapy contributes to the successful outcome of TKA but can be expensive. Alternative methods of obtaining good functional results that help minimize costs are desirable. Neuromuscular electrical stimulation (NMES) is a potential option. Neuromuscular electrical stimulation has been shown to increase quadriceps muscle strength and activation following TKA. Functional scores also improve following TKA when NMES is added to conventional therapy protocols vs therapy alone. The authors hypothesized that rehabilitation managed by a physical therapist would not result in a functional advantage for patients undergoing TKA when compared with NMES and an unsupervised at-home range of motion exercise program and that patient satisfaction would not differ between the 2 groups. Seventy patients were randomized into a postoperative protocol of conventional physical therapy with a licensed therapist, including range of motion exercises and strengthening exercises, or into a program of NMES and range of motion exercises performed at home without therapist supervision. Noninferiority of the NMES program was obtained 6 weeks postoperatively (Knee Society pain/function scores, Western Ontario and McMaster Universities Osteoarthritis Index, flexion). Noninferiority was shown 6 months postoperatively for all parameters. The results suggest that rehabilitation managed by a physical therapist results in no functional advantage or difference in patient satisfaction when compared with NMES and an unsupervised at-home range of motion program. Neuromuscular electrical stimulation and unsupervised at-home range of motion exercises may provide an option for reducing the cost of the postoperative TKA recovery process without compromising quadriceps strength or patient satisfaction.

  11. Stochastic Optimization for Nuclear Facility Deployment Scenarios

    NASA Astrophysics Data System (ADS)

    Hays, Ross Daniel

    Single-use, low-enriched uranium oxide fuel, consumed through several cycles in a light-water reactor (LWR) before being disposed of, has become the dominant source of commercial-scale nuclear electric generation in the United States and throughout the world. However, it is not without its drawbacks and is not the only potential nuclear fuel cycle available. Numerous alternative fuel cycles have been proposed at various times which, through the use of different reactor and recycling technologies, offer to counteract many of the perceived shortcomings with regard to waste management, resource utilization, and proliferation resistance. However, due to the varying maturity levels of these technologies, the complicated material flow feedback interactions their use would require, and the large capital investments in the current technology, one should not deploy these advanced designs without first investigating the potential costs and benefits of so doing. As the interactions among these systems can be complicated, and the ways in which they may be deployed are many, the application of automated numerical optimization to the simulation of the fuel cycle could potentially be of great benefit to researchers and interested policy planners. To investigate the potential of these methods, a computational program has been developed that applies a parallel, multi-objective simulated annealing algorithm to a computational optimization problem defined by a library of relevant objective functions applied to the Verifiable Fuel Cycle Simulation Model (VISION, developed at the Idaho National Laboratory). The VISION model, when given a specified fuel cycle deployment scenario, computes the numbers and types of, and construction, operation, and utilization schedules for, the nuclear facilities required to meet a predetermined electric power demand function. Additionally, it calculates the location and composition of the nuclear fuels within the fuel cycle, from initial mining through to eventual disposal. By varying the specifications of the deployment scenario, the simulated annealing algorithm will seek to either minimize the value of a single objective function, or enumerate the trade-off surface between multiple competing objective functions. The available objective functions represent key stakeholder values, minimizing such important factors as high-level waste disposal burden, required uranium ore supply, relative proliferation potential, and economic cost and uncertainty. The optimization program itself is designed to be modular, allowing for continued expansion and exploration as research needs and curiosity indicate. The utility and functionality of this optimization program are demonstrated through its application to one potential fuel cycle scenario of interest. In this scenario, an existing legacy LWR fleet is assumed at the year 2000. The electric power demand grows exponentially at a rate of 1.8% per year through the year 2100. Initially, new demand is met by the construction of 1-GW(e) LWRs. However, beginning in the year 2040, 600-MW(e) sodium-cooled, fast-spectrum reactors operating in a transuranic burning regime with full recycling of spent fuel become available to meet demand. By varying the fraction of new capacity allocated to each reactor type, the optimization program is able to explicitly show the relationships that exist between uranium utilization, long-term heat for geologic disposal, and cost-of-electricity objective functions. The trends associated with these trade-off surfaces tend to confirm many common expectations about the use of nuclear power, namely that while overall it is quite insensitive to variations in the cost of uranium ore, it is quite sensitive to changes in the capital costs of facilities. The optimization algorithm has shown itself to be robust and extensible, with possible extensions to many further fuel cycle optimization problems of interest.
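
    A single-objective, serial version of the annealing loop conveys the core of the approach. The decision variable (the fraction of new capacity built as fast reactors) and the smooth stand-in objective below replace the VISION-based scenario evaluation and are purely illustrative.

      # Minimal simulated annealing over one deployment-scenario parameter.
      import math, random

      random.seed(0)

      def objective(x):
          # Stand-in for the simulator-based cost: penalize uranium use
          # (falls with x) and capital-heavy deployment (rises with x).
          return (1.0 - x) ** 2 + 2.0 * x ** 3

      x = random.random()
      best, best_f = x, objective(x)
      T = 1.0
      for step in range(5000):
          cand = min(1.0, max(0.0, x + random.gauss(0.0, 0.1)))
          df = objective(cand) - objective(x)
          # Always accept improvements; accept uphill moves with probability
          # exp(-df / T) so the search can escape local minima early on.
          if df < 0 or random.random() < math.exp(-df / T):
              x = cand
              if objective(x) < best_f:
                  best, best_f = x, objective(x)
          T *= 0.999                      # geometric cooling schedule
      print(f"best fraction: {best:.3f}, objective: {best_f:.4f}")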

  12. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target is printed and then automatically measured by a color measuring instrument. A low cost sensor has been embedded in the printer to perform the color measurement, and a series of sensor calibration and color conversion methods have been developed. The purpose is to obtain accurate colorimetric measurements from the data measured by the low cost sensor. To achieve high accuracy, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression based color conversion method was selected. Regression is a powerful method for estimating the color conversion functions, but the main difficulty in using it is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measuring data and the output colorimetric data. Using this method, we can increase the accuracy of the regression method, and thereby improve the accuracy of the color conversion.
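
    A sketch of the proposed pipeline: per-channel 1D pre-linearization (modeled here as inverse-gamma curves), followed by a second-order polynomial regression from linearized sensor RGB to colorimetric XYZ fitted by least squares. The training data, gammas, and "instrument" mapping are all synthetic.

      # Pre-linearization plus polynomial regression for sensor-to-XYZ mapping.
      import numpy as np

      rng = np.random.default_rng(0)
      raw = rng.uniform(0, 1, (200, 3))                 # raw sensor readings

      def prelinearize(rgb, gamma=(2.0, 2.2, 1.8)):
          # Hypothetical 1D tables, modeled as per-channel gamma inversion.
          return np.stack([rgb[:, i] ** (1.0 / g)
                           for i, g in enumerate(gamma)], axis=1)

      def design_matrix(rgb):
          # Second-order polynomial terms in the linearized channels.
          r, g, b = rgb.T
          return np.column_stack([np.ones_like(r), r, g, b,
                                  r * g, r * b, g * b, r**2, g**2, b**2])

      lin = prelinearize(raw)
      true_map = rng.uniform(0, 1, (10, 3))             # synthetic "instrument"
      xyz = design_matrix(lin) @ true_map + 0.005 * rng.standard_normal((200, 3))

      coef, *_ = np.linalg.lstsq(design_matrix(lin), xyz, rcond=None)
      pred = design_matrix(prelinearize(raw)) @ coef
      print("RMS fit error:", float(np.sqrt(np.mean((pred - xyz) ** 2))))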

  13. Design and development of a low-cost biphasic charge-balanced functional electric stimulator and its clinical validation.

    PubMed

    Shendkar, Chandrashekhar; Lenka, Prasanna K; Biswas, Abhishek; Kumar, Ratnesh; Mahadevappa, Manjunatha

    2015-10-01

    Functional electric stimulators that produce near-ideal, charge-balanced biphasic stimulation waveforms with interphase delay are considered safer and more efficacious than conventional stimulators. An indigenously designed, low-cost, portable FES device named InStim is developed, featuring a single charge-balanced biphasic channel. The authors present the complete design, mathematical analysis of the circuit, and the clinical evaluation of the device. The developed circuit was tested on stroke patients affected by foot drop, both under laboratory conditions and in clinical settings. The key building blocks of this circuit are low dropout regulators, a DC-DC voltage booster, and a single high-power current source OP-Amp with current-limiting capabilities. This allows the device to deliver high-voltage, constant current, biphasic pulses without the use of a bulky step-up transformer. The advantages of the proposed design over currently existing devices include improved safety features (zero DC current, a current-limiting mechanism, and safe pulses), waveform morphology that causes less muscle fatigue, cost-effectiveness, and a compact, power-efficient circuit design with minimal components. The device is also capable of producing appropriate ankle dorsiflexion in patients having foot drop of various Medical Research Council scale grades.

  14. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high traffic terminal areas. However, low predictability of an aircraft's vertical profile and path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile, and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques over an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
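
    The decomposition can be sketched with a one-dimensional stand-in: each aircraft privately minimizes a quadratic delay cost plus a price on the pairwise separation constraints, and the prices (dual variables) are updated by projected subgradient ascent and exchanged, much as ADS-B broadcasting would allow. Times, weights, and step size below are invented.

      # Dual decomposition for arrival-time spacing (toy three-aircraft case).
      import numpy as np

      pref = np.array([0.0, 0.5, 0.8])   # preferred arrival times (hypothetical)
      w = np.array([1.0, 2.0, 1.5])      # per-aircraft delay-cost weights
      SEP = 1.0                          # required separation between neighbors
      lam = np.zeros(2)                  # one multiplier per adjacent pair

      for _ in range(500):
          # Local step: minimize w_i (t_i - pref_i)^2 + (price_i) t_i, which
          # each aircraft can solve in closed form on its own.
          price = np.zeros(3)
          price[:-1] += lam              # lam_i multiplies +t_i ...
          price[1:] -= lam               # ... and -t_{i+1} in the Lagrangian
          t = pref - price / (2.0 * w)
          # Price update: projected subgradient ascent on the dual, as if the
          # violations were broadcast among the aircraft.
          viol = SEP - np.diff(t)        # positive where separation is violated
          lam = np.maximum(0.0, lam + 0.05 * viol)

      print("arrival times:", np.round(t, 3),
            "separations:", np.round(np.diff(t), 3))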

  15. Distributed Optimal Dispatch of Distributed Energy Resources Over Lossy Communication Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Junfeng; Yang, Tao; Wu, Di

    In this paper, we consider the economic dispatch problem (EDP), where a cost function that is assumed to be strictly convex is assigned to each of the distributed energy resources (DERs), over packet dropping networks. The goal of a standard EDP is to minimize the total generation cost while meeting the total demand and satisfying individual generator output limits. We propose a distributed algorithm for solving the EDP over networks. The proposed algorithm is resilient against packet drops over communication links. Under the assumption that the underlying communication network is strongly connected with a positive probability and the packet drops are independent and identically distributed (i.i.d.), we show that the proposed algorithm is able to solve the EDP. Numerical simulation results are used to validate and illustrate the main results of the paper.
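
    The optimality condition underlying any EDP solver, distributed or not, is that all unsaturated units operate at a common incremental cost λ. The sketch below assumes quadratic costs C_i(p) = a_i p² + b_i p and finds λ by bisection; the consensus iterations and packet-drop resilience that constitute the paper's contribution are abstracted away.

      # Centralized economic dispatch via the equal-incremental-cost condition.
      import numpy as np

      a = np.array([0.10, 0.08, 0.12])   # quadratic cost coefficients (assumed)
      b = np.array([2.0, 3.0, 2.5])      # linear cost coefficients (assumed)
      pmin, pmax = 0.0, 60.0             # generator output limits
      demand = 120.0

      def dispatch(lmbda):
          # Marginal cost 2*a*p + b equals lmbda, clipped to the limits.
          return np.clip((lmbda - b) / (2 * a), pmin, pmax)

      # Bisection on the common incremental cost until supply meets demand.
      lo, hi = 0.0, 50.0
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if dispatch(mid).sum() < demand else (lo, mid)
      print("lambda:", round(mid, 4), "dispatch:", np.round(dispatch(mid), 2))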

  16. Beyond bricks and mortar: recent research on substance use disorder recovery management.

    PubMed

    Dennis, Michael L; Scott, Christy K; Laudet, Alexandre

    2014-04-01

    Scientific advances in the past 15 years have clearly highlighted the need for recovery management approaches to help individuals sustain recovery from chronic substance use disorders. This article reviews some of the recent findings related to recovery management: (1) continuing care, (2) recovery management checkups, (3) 12-step or mutual aid, and (4) technology-based interventions. The core assumption underlying these approaches is that earlier detection and re-intervention will improve long-term outcomes by minimizing the harmful consequences of the condition and maximizing or promoting opportunities for maintaining healthy levels of functioning in related life domains. Economic analysis is important because it can take a year or longer for such interventions to offset their costs. The article also examines the potential of smartphones and other recent technological developments to facilitate more cost-effective recovery management options.

  17. Stabilization for sampled-data neural-network-based control systems.

    PubMed

    Zhu, Xun-Lin; Wang, Youyi

    2011-02-01

    This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.

  18. Guidance, navigation, and control trades for an Electric Orbit Transfer Vehicle

    NASA Astrophysics Data System (ADS)

    Zondervan, K. P.; Bauer, T. A.; Jenkin, A. B.; Metzler, R. A.; Shieh, R. A.

    The USAF Space Division initiated the Electric Insertion Transfer Experiment (ELITE) in the fall of 1988. The ELITE space mission is planned for the mid 1990s and will demonstrate technological readiness for the development of operational solar-powered electric orbit transfer vehicles (EOTVs). To minimize the cost of ground operations, autonomous flight is desirable. Thus, the guidance, navigation, and control (GNC) functions of an EOTV should reside on board. In order to define GNC requirements for ELITE, parametric trades must be performed for an operational solar-powered EOTV so that a clearer understanding of the performance aspects is obtained. Parametric trades for the GNC subsystems have provided insight into the relationship between pointing accuracy, transfer time, and propellant utilization. Additional trades need to be performed, taking into account weight, cost, and degree of autonomy.

  19. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
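
    The deterministic-sampling baseline is the standard discrete-time LQR: for a known, uniform update interval, the gain minimizing the expected quadratic cost follows from the Riccati recursion below (a toy double integrator; the paper's handling of stochastically known update intervals is omitted).

      # Steady-state discrete LQR gain via backward Riccati iteration.
      import numpy as np

      A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discretized double integrator
      B = np.array([[0.005], [0.1]])
      Q = np.eye(2)                             # state weighting
      R = np.array([[0.1]])                     # control weighting

      P = Q.copy()
      for _ in range(500):                      # iterate to the fixed point
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
          P = Q + A.T @ P @ (A - B @ K)
      print("steady-state gain K:", np.round(K, 3))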

  20. Current treatment of gram-positive infections: focus on efficacy, safety, and cost minimalization analysis of teicoplanin.

    PubMed

    Crane, V S; Garabedian-Ruffalo, S M

    1992-12-01

    The current health care environment has had a significant impact on hospital Pharmacy and Therapeutics Committee formulary decisions. In evaluating a new therapy for formulary inclusion, cost savings combined with equivalent or improved patient care and safety are optimal. Teicoplanin is an investigational glycopeptide antimicrobial agent with a spectrum of activity similar to that of vancomycin. Unlike vancomycin, however, teicoplanin has a long elimination half-life permitting once-daily administration, and is well tolerated when given intramuscularly. In addition, teicoplanin is associated with a favorable safety profile; red man syndrome does not appear to be a significant clinical problem. Results of our cost-minimization analysis using the average acquisition costs of vancomycin revealed that teicoplanin (400 mg), at an average acquisition cost of less than $28.46 when administered intravenously and $30.93 when administered intramuscularly, offers a clinically efficacious, safe, and less expensive alternative to vancomycin therapy.

  1. Application of multi-objective optimization to pooled experiments of next generation sequencing for detection of rare mutations.

    PubMed

    Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario

    2014-01-01

    In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling, with respect to minimization of the experiment cost. Then, two different strategies to replicate patients in pools are proposed, which have the advantage of decreasing overall costs. Finally, a multi-objective optimization formulation is proposed, in which the trade-off between the probability of detecting a mutation and the overall cost is taken into account. The proposed solutions are designed to deliver the following advantages: (i) the solution guarantees mutations are detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease overall experimental cost, making pooling an interesting option.
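
    In generic form (the symbols here are ours, not the paper's notation), the trade-off described above can be written as a bi-objective program over the pooling design x, or scalarized with a detection constraint:

        \min_{x} \; \bigl( C(x),\; -P_{\mathrm{detect}}(x) \bigr)
        \qquad \text{or} \qquad
        \min_{x} \; C(x) \quad \text{s.t.} \quad P_{\mathrm{detect}}(x) \ge 1 - \varepsilon,

    where C(x) is the combined cost of the NGS experiment and its Sanger validation, and P_detect(x) is the probability that a rare mutation is detectable under design x.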

  2. Karolinska prostatectomy: a robot-assisted laparoscopic radical prostatectomy technique.

    PubMed

    Nilsson, Andreas E; Carlsson, Stefan; Laven, Brett A; Wiklund, N Peter

    2006-01-01

    The last decade has witnessed an increasing trend towards minimally invasive management of prostate cancer, including laparoscopic and, more recently, robot-assisted laparoscopic prostatectomy. Several different laparoscopic approaches have been continuously developed during the last 5 years and it is still unclear which technique yields the best outcome. We present our current technique of robot-assisted laparoscopic radical prostatectomy. The technique described has evolved during the course of >400 robotic prostatectomies performed by the robotic team since the robot-assisted laparoscopic radical prostatectomy program was introduced at Karolinska University Hospital in January 2002. Our procedure comprises several modifications of previously reported ones, and we utilize fewer robotic instruments to reduce costs. An extended posterior dissection is performed to aid in the bladder neck-sparing dissection. In nerve-sparing procedures the seminal vesicles are divided to avoid damage to the erectile nerves. In order to preserve the apical anatomy, the dorsal venous complex is incised sharply and over-sewn only after the apical dissection is completed. Our technique enables a more fluent dissection than previously described robotic techniques. Minimizing changes of instruments and the camera not only cuts costs but also reduces inefficient operating maneuvers, such as switching between 30-degree and 0-degree lenses during the procedure. We present a technique which in our hands has achieved excellent functional and oncological results.

  3. A time scheduling model of logistics service supply chain based on the customer order decoupling point: a perspective from the constant service operation time.

    PubMed

    Liu, Weihua; Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics services, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or ahead of schedule but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC.

  4. From the ground up: building a minimally invasive aortic valve surgery program

    PubMed Central

    Lamelas, Joseph

    2015-01-01

    Minimally invasive aortic valve replacement (MIAVR) is associated with numerous advantages including improved patient satisfaction, cosmesis, decreased transfusion requirements, and cost-effectiveness. Despite these advantages, little information exists on how to build a MIAVR program from the ground up. The steps to build a MIAVR program include compiling a multi-disciplinary team composed of surgeons, cardiologists, anesthesiologists, perfusionists, operating room (OR) technicians, and nurses. Once assembled, this team can then approach hospital administrators to present a cost-benefit analysis of MIAVR, emphasizing the importance of reduced resource utilization in the long-term to offset the initial financial investment that will be required. With hospital approval, training can commence to provide surgeons and other staff with the necessary knowledge and skills in MIAVR procedures and outcomes. Marketing and advertising of the program through the use of social media, educational conferences, grand rounds, and printed media will attract the initial patients. A dedicated website for the program can function as a “virtual lobby” for patients wanting to learn more. Initially, conservative selection criteria of cases that qualify for MIAVR will set the program up for success by avoiding complex co-morbidities and surgical techniques. During the learning curve phase of the program, patient safety should be a priority. PMID:25870815

  5. A Time Scheduling Model of Logistics Service Supply Chain Based on the Customer Order Decoupling Point: A Perspective from the Constant Service Operation Time

    PubMed Central

    Yang, Yi; Xu, Haitao; Liu, Xiaoyan; Wang, Yijia; Liang, Zhicheng

    2014-01-01

    In mass customization logistics services, reasonable scheduling of the logistics service supply chain (LSSC), especially time scheduling, benefits its competitiveness. Therefore, the effect of a customer order decoupling point (CODP) on time scheduling performance should be considered. To minimize the total order operation cost of the LSSC, minimize the difference between the expected and actual times of completing the service orders, and maximize the satisfaction of functional logistics service providers, this study establishes an LSSC time scheduling model based on the CODP. Matlab 7.8 software is used in the numerical analysis of a specific example. Results show that the order completion time of the LSSC can be delayed or ahead of schedule but cannot be infinitely advanced or infinitely delayed. The optimal comprehensive performance can be obtained if the expected order completion time is appropriately delayed. The increase in supply chain comprehensive performance caused by an increase in the relationship coefficient of the logistics service integrator (LSI) is limited. The LSI's relative degree of concern for cost and service delivery punctuality leads not only to changes in the CODP but also to changes in the scheduling performance of the LSSC. PMID:24715818

  6. From the ground up: building a minimally invasive aortic valve surgery program.

    PubMed

    Nguyen, Tom C; Lamelas, Joseph

    2015-03-01

    Minimally invasive aortic valve replacement (MIAVR) is associated with numerous advantages including improved patient satisfaction, cosmesis, decreased transfusion requirements, and cost-effectiveness. Despite these advantages, little information exists on how to build a MIAVR program from the ground up. The steps to build a MIAVR program include compiling a multi-disciplinary team composed of surgeons, cardiologists, anesthesiologists, perfusionists, operating room (OR) technicians, and nurses. Once assembled, this team can then approach hospital administrators to present a cost-benefit analysis of MIAVR, emphasizing the importance of reduced resource utilization in the long-term to offset the initial financial investment that will be required. With hospital approval, training can commence to provide surgeons and other staff with the necessary knowledge and skills in MIAVR procedures and outcomes. Marketing and advertising of the program through the use of social media, educational conferences, grand rounds, and printed media will attract the initial patients. A dedicated website for the program can function as a "virtual lobby" for patients wanting to learn more. Initially, conservative selection criteria of cases that qualify for MIAVR will set the program up for success by avoiding complex co-morbidities and surgical techniques. During the learning curve phase of the program, patient safety should be a priority.

  7. Utilization of Optimization for Design of Morphing Wing Structures for Enhanced Flight

    NASA Astrophysics Data System (ADS)

    Detrick, Matthew Scott

    Conventional aircraft control surfaces constrain maneuverability. This comprehensive study examines both smart-material and conventional actuation methods for achieving wing twist, with the goal of improving flight capability using minimal actuation energy while limiting wing deformation under aerodynamic loading. A continuous wing is used in order to reduce drag while allowing the aircraft to more closely approximate the wing deformation used by birds while loitering. The morphing wing for this work consists of a skin supported by an underlying truss structure whose goal is to achieve a given roll moment using less actuation energy than conventional control surfaces. A structural optimization code has been written to achieve minimal wing deformation under aerodynamic loading while allowing wing twist under actuation. The multi-objective cost function for the optimization consists of terms that ensure small deformation under aerodynamic loading, small change in airfoil shape during wing twist, a linear variation of wing twist along the length of the wing, small deviation from the desired wing twist, a minimal number of truss members, minimal wing weight, and minimal actuation energy. Hydraulic cylinders and a two-member linkage driven by a DC motor are tested separately to provide actuation. Since the goal of the current work is simply to provide a roll moment, only one actuator is implemented along the wing span. Optimization is also used to find the best location within the truss structure for the actuator. The active structure produced by optimization is then compared to simulated and experimental results from other researchers as well as to characteristics of conventional aircraft.
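
    A multi-objective cost of the kind described is commonly scalarized as a weighted sum; a generic form consistent with the terms listed above (weights and symbols ours, not the author's implementation) is

        J(x) = w_1 J_{\mathrm{aero\,def}}(x) + w_2 J_{\mathrm{shape}}(x) + w_3 J_{\mathrm{twist\,lin}}(x) + w_4 J_{\mathrm{twist\,err}}(x)
             + w_5 N_{\mathrm{truss}}(x) + w_6 W_{\mathrm{wing}}(x) + w_7 E_{\mathrm{act}}(x),

    with one term per design objective and the weights w_i encoding their relative importance.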

  8. Effects of well spacing on geological storage site distribution costs and surface footprint.

    PubMed

    Eccles, Jordan; Pratson, Lincoln F; Chandel, Munish Kumar

    2012-04-17

    Geological storage studies thus far have not evaluated the scale and cost of the network of distribution pipelines that will be needed to move CO₂ from a central receiving point at a storage site to injection wells distributed about the site. Using possible injection rates for deep-saline sandstone aquifers, we estimate that the footprint of a sequestration site could range from <100 km² to >100,000 km², and that distribution costs could range from <$0.10/tonne to >$10/tonne. Our findings are based on two models for determining well spacing: one that minimizes spacing in order to maximize use of the volumetric capacity of the reservoir, and a second that sets spacing to minimize subsurface pressure interference between injection wells. The interference model, which we believe more accurately reflects reservoir dynamics, produces wider well spacings and a counterintuitive relationship whereby total injection site footprint, and thus distribution cost, declines with decreasing permeability for a given reservoir thickness. This implies that volumetric capacity estimates should be reexamined to include well spacing constraints, since wells will need to be spaced further apart than void space calculations might suggest. We conclude that site-selection criteria should include thick, low-permeability reservoirs to minimize distribution costs and site footprint.

  9. Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.

    PubMed

    Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T

    2017-07-01

    Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. The objective is to combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The two rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
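
    The decision rule itself is straightforward to sketch: integrate each action's loss over the posterior of the safety signal and choose the action with the smallest posterior (Bayes) risk. The posterior and loss shapes below are invented placeholders, not the paper's elicited values:

        import numpy as np

        rng = np.random.default_rng(0)
        signal = rng.normal(loc=0.5, scale=0.3, size=10_000)   # posterior draws of the safety signal

        losses = {                                             # loss per action, as a function of the signal
            "no_action":   lambda s: 10.0 * np.maximum(s, 0.0),        # ignoring a real problem is costly
            "investigate": lambda s: 2.0 + np.abs(s),                  # fixed cost plus residual risk
            "recall":      lambda s: 8.0 - 5.0 * np.minimum(s, 0.8),   # expensive unless the signal is large
        }

        bayes_risk = {action: loss(signal).mean() for action, loss in losses.items()}
        print(bayes_risk, "->", min(bayes_risk, key=bayes_risk.get))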

  10. Effects of drilling variables on burr properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillespie, L.K.

    1976-09-01

    An investigation utilizing 303Se stainless steel, 17-4PH stainless steel, 1018 steel, and 6061-T6 aluminum was conducted to determine the influence of drilling variables in controlling burr size to minimize burr-removal cost and improve the quality and reliability of parts for small precision mechanisms. Burr thickness can be minimized by reducing feedrate and cutting velocity, and by using drills having high helix angles. High helix angles reduce burr thickness, length, and radius, while most other variables reduce only one of these properties. Radial-lip drills minimize burrs from 303Se stainless steel when large numbers of holes are drilled; this material stretches 10 percent before drill breakthrough. Entrance burrs can be minimized by the use of subland drills at a greatly increased tool cost. Backup rods used in cross-drilled holes may be difficult to remove and may scratch the hole walls.

  11. The Economics of NASA Mission Cost Reserves

    NASA Technical Reports Server (NTRS)

    Whitley, Sally; Shinn, Stephen

    2012-01-01

    Increases in NASA mission costs are well-noted but not well-understood, and there is little evidence that they are decreasing in frequency or amount over time. The need to control spending has led to analysis of the causes and magnitude of historical mission overruns, and many program control efforts are being implemented to attempt to prevent or mitigate the problem (NPR 7120). However, cost overruns have not abated, and while some direct causes of increased spending may be obvious (requirements creep, launch delays, directed changes, etc.), the underlying impetus to spend past the original budget may be more subtle. Gaining better insight into the causes of cost overruns will help NASA and its contracting organizations to avoid them. This paper hypothesizes that one cause of NASA mission cost overruns is that the availability of reserves gives project team members an incentive to make decisions and behave in ways that increase costs. We theorize that the presence of reserves is a contributing factor to cost overruns because it causes organizations to use their funds less efficiently or to control spending less effectively. We draw a comparison to the insurance industry concept of moral hazard, the phenomenon that the presence of insurance causes insureds to have more frequent and higher insurance losses, and we attempt to apply actuarial techniques to quantify the increase in the expected cost of a mission due to the availability of reserves. We create a theoretical model of reserve spending motivation by defining a variable ReserveSpending as a function of total reserves. This function has a positive slope; for every dollar of reserves available, there is a positive probability of spending it. The function should also be concave down; the probability of spending each incremental dollar of reserves decreases progressively. We test the model against available NASA CADRe data by examining missions with reserve dollars initially available and testing whether they are more likely to spend those dollars, and whether larger levels of reserves lead to higher cost overruns. Finally, we address the question of how to prevent reserves from increasing mission spending without increasing cost risk to projects budgeted without any reserves. Is there a "sweet spot"? How can we derive the maximum benefit associated with risk reduction from reserves while minimizing the effects of reserve spending motivation?
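
    One concrete functional form with the stated properties (positive slope, concave down) is a saturating exponential; the parameters below are hypothetical, not fitted to CADRe data:

        import numpy as np

        def expected_reserve_spending(reserves, s_max=0.8, k=0.05):
            """A ReserveSpending-style curve: spending rises with available
            reserves R, but with diminishing increments (the slope
            s_max*k*exp(-k*R) is positive, the second derivative negative)."""
            r = np.asarray(reserves, dtype=float)
            return s_max * (1.0 - np.exp(-k * r))

        print(expected_reserve_spending([0, 10, 20, 40]))   # each extra block of reserves adds less spending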

  12. The perverse impact of external reference pricing (ERP): a comparison of orphan drugs affordability in 12 European countries. A call for policy change

    PubMed Central

    Young, K. E.; Soussi, I.; Toumi, M.

    2017-01-01

    Objective: The study compared the relative cost differences of similar orphan drugs among high and low GDP countries in Europe: Bulgaria, France, Germany, Greece, Hungary, Italy, Norway, Poland, Romania, Spain, Sweden, UK. Methods: Annual treatment costs per patient were calculated. Relative costs were computed by dividing the costs by each economic parameter: nominal GDP per capita, GDP in PPP per capita, % GDP contributed by the government, government budget per inhabitant, % GDP spent on healthcare, % GDP spent on pharmaceuticals, and average annual salary. An international comparison of the relative costs was done using UK as the reference country and results were analysed descriptively. Results: 120 orphan drugs were included. The median annual costs of orphan drugs in all countries varied minimally (cost ratios: 0.87 to 1.08). When the costs were adjusted using GDP per capita, the EU-5 and Nordic countries maintained minimal difference in median cost. However, the lower GDP countries showed three to six times higher relative costs. The same pattern was evident when costs were adjusted using the other economic parameters. Conclusion: When the country’s ability to pay is taken into consideration, lower GDP countries pay relatively higher costs for similarly available orphan drugs in Europe. PMID:29081920
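
    The adjustment itself is a one-line calculation: divide each country's annual treatment cost by the economic parameter, then index against the UK. A worked toy example (figures invented, not the study's data):

        costs = {"UK": 100_000, "Bulgaria": 90_000}   # annual cost per patient
        gdp_pc = {"UK": 40_000, "Bulgaria": 8_000}    # nominal GDP per capita

        relative = {c: costs[c] / gdp_pc[c] for c in costs}
        indexed = {c: relative[c] / relative["UK"] for c in costs}
        print(indexed)   # Bulgaria's GDP-adjusted burden is 4.5x the UK's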

  13. Low-Cost Learning Systems: The General Concept and Some Specific Examples.

    ERIC Educational Resources Information Center

    Nichols, Daryl G.

    1982-01-01

    Discusses Low Cost Learning (LCL), an instructional system that maximizes available resources while minimizing cost per pupil, and its use in primary education in Southeast Asia. Short descriptions of LCL programs in the Philippines, Indonesia, Thailand, and Liberia are provided. (JJD)

  14. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

    We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of water and sediment transportation through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations that closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.

  15. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, a related approach computes the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
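
    For the scalar-weighted case, the resulting algorithm is commonly implemented as an eigendecomposition: the average quaternion is the eigenvector, associated with the largest eigenvalue, of the weighted sum of outer products M = sum_i w_i q_i q_i^T. A minimal NumPy sketch (unit quaternions as rows; the outer product makes the result insensitive to the sign ambiguity between q and -q):

        import numpy as np

        def average_quaternions(quats, weights=None):
            """Weighted quaternion average via the dominant eigenvector of
            M = sum_i w_i * outer(q_i, q_i); quats is an (N, 4) array of
            unit quaternions."""
            quats = np.asarray(quats, dtype=float)
            weights = np.ones(len(quats)) if weights is None else np.asarray(weights)
            M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
            vals, vecs = np.linalg.eigh(M)     # eigenvalues in ascending order
            return vecs[:, -1]                 # eigenvector of the largest eigenvalue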

  16. Health costs of reproduction are minimal despite high fertility, mortality and subsistence lifestyle.

    PubMed

    Gurven, Michael; Costa, Megan; Trumble, Ben; Stieglitz, Jonathan; Beheim, Bret; Eid Rodriguez, Daniel; Hooper, Paul L; Kaplan, Hillard

    2016-07-20

    Women exhibit greater morbidity than men despite higher life expectancy. An evolutionary life history framework predicts that energy invested in reproduction trades off against investments in maintenance and survival. Direct costs of reproduction may therefore contribute to higher morbidity, especially for women given their greater direct energetic contributions to reproduction. We explore multiple indicators of somatic condition among Tsimane forager-horticulturalist women (Total Fertility Rate = 9.1; n = 592 aged 15-44 years, n = 277 aged 45+). We test whether cumulative live births and the pace of reproduction are associated with nutritional status and immune function using longitudinal data spanning 10 years. Higher parity and faster reproductive pace are associated with lower nutritional status (indicated by weight, body mass index, body fat) in a cross-section, but longitudinal analyses show improvements in women's nutritional status with age. Biomarkers of immune function and anemia vary little with parity or pace of reproduction. Our findings demonstrate that even under energy-limited and infectious conditions, women are buffered from the potential depleting effects of rapid reproduction and the compound offspring dependency characteristic of human life histories.

  17. Minimum energy control for a two-compartment neuron to extracellular electric fields

    NASA Astrophysics Data System (ADS)

    Yi, Guo-Sheng; Wang, Jiang; Li, Hui-Yan; Wei, Xi-Le; Deng, Bin

    2016-11-01

    The energy optimization of an extracellular electric field (EF) stimulus for a neuron is considered in this paper. We employ optimal control theory to design a low-energy EF input for a reduced two-compartment model. It works by driving the neuron to closely track a prescriptive spike train. A cost function is introduced to balance the contradictory objectives, i.e., tracking errors and EF stimulus energy. By using the calculus of variations, we transform the minimization of the cost function into a six-dimensional two-point boundary value problem (BVP). Through solving the obtained BVP in the cases of three fundamental bifurcations, it is shown that the control method is able to provide an optimal EF stimulus of reduced energy for the neuron to effectively track a prescriptive spike train. Further, the feasibility of the adopted method is interpreted from the point of view of the biophysical basis of spike initiation. These investigations are conducive to designing stimulation doses for extracellular neural stimulation and are also helpful for interpreting the effects of extracellular fields on neural activity.
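
    The reduction from a variational problem to a two-point BVP is easy to see on a toy analog (a one-state stand-in, not the paper's six-dimensional neuron problem): minimize the control energy, the integral of u^2, subject to xdot = u, x(0) = 0, x(1) = 1. Stationarity of the Hamiltonian gives u = -lam/2 with a constant costate lam, which scipy's collocation BVP solver handles directly:

        import numpy as np
        from scipy.integrate import solve_bvp

        def rhs(t, y):
            x, lam = y
            u = -lam / 2.0                               # dH/du = 0 for H = u^2 + lam*u
            return np.vstack([u, np.zeros_like(lam)])    # xdot = u, lamdot = 0

        def bc(ya, yb):
            return np.array([ya[0] - 0.0, yb[0] - 1.0])  # x(0) = 0, x(1) = 1

        t = np.linspace(0.0, 1.0, 11)
        sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
        print(sol.status, sol.y[0, -1])                  # status 0: converged; x(1) = 1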

  18. Biokinetic model-based multi-objective optimization of Dunaliella tertiolecta cultivation using elitist non-dominated sorting genetic algorithm with inheritance.

    PubMed

    Sinha, Snehal K; Kumar, Mithilesh; Guria, Chandan; Kumar, Anup; Banerjee, Chiranjib

    2017-10-01

    Algal-model-based multi-objective optimization using an elitist non-dominated sorting genetic algorithm with inheritance was carried out for batch cultivation of Dunaliella tertiolecta using NPK fertilizer. Optimization problems involving two and three objective functions were solved simultaneously. The objective functions are: maximization of algal biomass and lipid productivity, with minimization of cultivation time and cost. Time-variant light intensity and temperature, together with NPK-fertilizer, NaCl and NaHCO3 loadings, are the important decision variables. An algal model involving Monod/Andrews adsorption kinetics and the Droop model with an internal nutrient cell quota was used for the optimization studies. Sets of non-dominated (equally good) Pareto-optimal solutions were obtained for the problems studied. It was observed that the time-variant optimal light intensity and temperature trajectories, together with the optimal NPK-fertilizer, NaCl and NaHCO3 concentrations, significantly improve biomass and lipid productivity while minimizing cultivation time and cost. The proposed optimization studies may be helpful for implementing control strategies in scale-up operation.
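
    The backbone of any elitist non-dominated sorting scheme is the dominance test itself. The sketch below extracts the first Pareto front for generic minimization objectives; it is a didactic stand-in, not the cultivation model or the full NSGA-II with inheritance:

        import numpy as np

        def pareto_front(points):
            """Indices of non-dominated rows, all objectives minimized."""
            pts = np.asarray(points, dtype=float)
            front = []
            for i, p in enumerate(pts):
                # p is dominated if some point is <= p everywhere and < p somewhere
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    front.append(i)
            return front

        objectives = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
        print(pareto_front(objectives))   # -> [0, 1, 3]; (3, 4) is dominated by (2, 3)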

  19. Molecular approaches to solar energy conversion: the energetic cost of charge separation from molecular-excited states.

    PubMed

    Durrant, James R

    2013-08-13

    This review starts with a brief overview of the technological potential of molecular-based solar cell technologies. It then goes on to focus on the core scientific challenge associated with using molecular light-absorbing materials for solar energy conversion, namely the separation of short-lived, molecular-excited states into sufficiently long-lived, energetic, separated charges capable of generating an external photocurrent. Comparisons are made between different molecular-based solar cell technologies, with particular focus on the function of dye-sensitized photoelectrochemical solar cells as well as parallels with the function of photosynthetic reaction centres. The core theme of this review is that generating charge carriers with sufficient lifetime and a high quantum yield from molecular-excited states comes at a significant energetic cost, such that the energy stored in these charge-separated states is typically substantially less than the energy of the initially generated excited state. The role of this energetic loss in limiting the efficiency of solar energy conversion by such devices is emphasized, and strategies to minimize this energy loss are compared and contrasted.

  20. The role of retinal bipolar cell in early vision: an implication with analogue networks and regularization theory.

    PubMed

    Yagi, T; Ohshima, S; Funahashi, Y

    1997-09-01

    A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed in terms of physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights into how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for the transition between photopic and mesopic vision.
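
    For reference, the Laplacian-of-Gaussian operator invoked above has the standard closed form (a textbook identity, with sigma the Gaussian width and r^2 = x^2 + y^2):

        \nabla^2 G(x, y) = \frac{1}{\pi \sigma^4} \left( \frac{r^2}{2\sigma^2} - 1 \right) e^{-r^2 / (2\sigma^2)},

    whose center-surround profile is the point of comparison with the bipolar cell receptive field.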
