Sample records for linear cost functions

  1. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023

  2. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
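
    A minimal sketch of the kind of budget-constrained allocation this record describes, restricted to a two-stage model (subjects x repeated measurements) with a purely linear cost function; all unit costs, variance components and the budget below are hypothetical illustrations, not values from the study.

    ```python
    # Minimal sketch: optimal allocation of repeated measurements under a fixed
    # budget with a LINEAR two-stage cost model (the simplest case treated in
    # this line of work).  All numbers below are hypothetical illustrations.
    budget = 10_000.0   # total monetary budget (hypothetical units)
    c_subj = 120.0      # cost of recruiting one subject
    c_meas = 15.0       # cost of one measurement occasion on a subject
    var_b  = 4.0        # between-subjects variance component (hypothetical)
    var_w  = 9.0        # within-subject variance component (hypothetical)

    best = None
    for k in range(1, 51):                        # measurements per subject
        n = int(budget // (c_subj + k * c_meas))  # subjects affordable at this k
        if n < 2:
            break
        var_mean = var_b / n + var_w / (n * k)    # precision of the exposure mean
        if best is None or var_mean < best[2]:
            best = (n, k, var_mean)

    n, k, v = best
    print(f"optimal: {n} subjects x {k} measurements each, Var(mean) = {v:.4f}")
    ```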

  3. A study of the use of linear programming techniques to improve the performance in design optimization problems

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    This project has two objectives. The first is to determine whether linear programming techniques can improve performance when handling design optimization problems with a large number of design variables and constraints relative to the feasible directions algorithm. The second purpose is to determine whether using the Kreisselmeier-Steinhauser (KS) function to replace the constraints with one constraint will reduce the cost of total optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method or in using the KS function to replace constraints.

  4. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application for this paper is centered about the design of controls for nominally linear systems but where the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.

  5. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space, using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
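
    A minimal sketch of the discrete-state LMDP machinery referred to above (exponentiated value function obtained from an eigenvalue problem, following Todorov's formulation in the average-cost setting); the five-state chain, its passive dynamics and state costs are made-up toy values, not the robot tasks of the paper.

    ```python
    # Toy discrete-state LMDP (average-cost setting), after Todorov: with passive
    # dynamics P and state costs q, the exponentiated value function z = exp(-v)
    # is the principal eigenvector of diag(exp(-q)) @ P, and the optimal controlled
    # transition probabilities are proportional to p(s'|s) * z(s').
    import numpy as np

    n = 5
    P = np.zeros((n, n))                 # passive dynamics: random walk on a chain
    for s in range(n):
        P[s, max(s - 1, 0)] += 0.5
        P[s, min(s + 1, n - 1)] += 0.5
    q = np.array([2.0, 1.0, 0.0, 1.0, 2.0])   # state costs (cheapest in the middle)

    G = np.diag(np.exp(-q)) @ P
    w, V = np.linalg.eig(G)
    z = np.abs(V[:, np.argmax(w.real)].real)  # Perron eigenvector, z = exp(-v)

    u = P * z[None, :]                        # optimal controlled dynamics
    u /= u.sum(axis=1, keepdims=True)
    print("value function v (up to a constant):", np.round(-np.log(z / z.max()), 3))
    print("controlled transitions:\n", np.round(u, 3))
    ```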

  6. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, i.e. job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e. the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  7. Piecewise linear approximation for hereditary control problems

    NASA Technical Reports Server (NTRS)

    Propst, Georg

    1987-01-01

    Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.

  8. Locally optimal control under unknown dynamics with learnt cost function: application to industrial robot positioning

    NASA Astrophysics Data System (ADS)

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    Recent Reinforcement Learning methods have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method through simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, consisting in writing the cost function as a state variable.

  9. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible ones. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  10. Cost drivers and resource allocation in military health care systems.

    PubMed

    Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R

    2007-03-01

    This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
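
    A hedged illustration of fitting a logarithmic-linear cost model of the general form used in this record, cost = a * volume^b1 * complexity^b2 * efficiency^b3, by ordinary least squares on logs; the data are synthetic and the coefficients are not those of the study.

    ```python
    # Synthetic illustration of a log-linear hospital cost model:
    # log(cost) = b0 + b1*log(volume) + b2*log(complexity) + b3*log(efficiency).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 24 * 3                                   # e.g. 24 hospitals x 3 years
    volume     = rng.uniform(5e3, 5e4, n)
    complexity = rng.uniform(0.8, 1.6, n)
    efficiency = rng.uniform(0.6, 1.0, n)
    cost = 40.0 * volume**0.95 * complexity**1.2 * efficiency**-0.5 \
           * np.exp(rng.normal(0.0, 0.05, n))    # multiplicative noise

    X = np.column_stack([np.ones(n), np.log(volume),
                         np.log(complexity), np.log(efficiency)])
    beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    resid = np.log(cost) - X @ beta
    r2 = 1.0 - resid.var() / np.log(cost).var()
    print("estimated elasticities (volume, complexity, efficiency):", np.round(beta[1:], 3))
    print("R^2 =", round(r2, 3))
    ```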

  11. Grouping in decomposition method for multi-item capacitated lot-sizing problem with immediate lost sales and joint and item-dependent setup cost

    NASA Astrophysics Data System (ADS)

    Narenji, M.; Fatemi Ghomi, S. M. T.; Nooraie, S. V. R.

    2011-03-01

    This article examines a dynamic and discrete multi-item capacitated lot-sizing problem in a completely deterministic production or procurement environment with limited production/procurement capacity where lost sales (the loss of customer demand) are permitted. There is no inventory space capacity and the production activity incurs a fixed charge linear cost function. Similarly, the inventory holding cost and the cost of lost demand are both associated with a linear no-fixed charge function. For the sake of simplicity, a unit of each item is assumed to consume one unit of production/procurement capacity. We analyse a different version of setup costs incurred by a production or procurement activity in a given period of the planning horizon. In this version, called the joint and item-dependent setup cost, an additional item-dependent setup cost is incurred separately for each produced or ordered item on top of the joint setup cost.

  12. Effect of the shape of the exposure-response function on estimated hospital costs in a study on non-elective pneumonia hospitalizations related to particulate matter.

    PubMed

    Devos, Stefanie; Cox, Bianca; van Lier, Tom; Nawrot, Tim S; Putman, Koen

    2016-09-01

    We used log-linear and log-log exposure-response (E-R) functions to model the association between PM2.5 exposure and non-elective hospitalizations for pneumonia, and estimated the attributable hospital costs by using the effect estimates obtained from both functions. We used hospital discharge data on 3519 non-elective pneumonia admissions from UZ Brussels between 2007 and 2012 and we combined a case-crossover design with distributed lag models. The annual averted pneumonia hospitalization costs for a reduction in PM2.5 exposure from the mean (21.4 μg/m³) to the WHO guideline for annual mean PM2.5 (10 μg/m³) were estimated and extrapolated for Belgium. Non-elective hospitalizations for pneumonia were significantly associated with PM2.5 exposure in both models. Using a log-linear E-R function, the estimated risk reduction for pneumonia hospitalization associated with a decrease in mean PM2.5 exposure to 10 μg/m³ was 4.9%. The corresponding estimate for the log-log model was 10.7%. These estimates translate to an annual pneumonia hospital cost saving in Belgium of €15.5 million and almost €34 million for the log-linear and log-log E-R function, respectively. Although further research is required to assess the shape of the association between PM2.5 exposure and pneumonia hospitalizations, we demonstrated that estimates for health effects and associated costs heavily depend on the assumed E-R function. These results are important for policy making, as supra-linear E-R associations imply that significant health benefits may still be obtained from additional pollution control measures in areas where PM levels have already been reduced. Copyright © 2016 Elsevier Ltd. All rights reserved.
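
    A small worked computation of how the two E-R shapes translate the stated PM2.5 reduction (21.4 to 10 μg/m³) into a relative risk reduction; the slope and elasticity below are hypothetical values chosen only to roughly reproduce the 4.9% and 10.7% figures quoted, not the study's fitted estimates.

    ```python
    # Hypothetical coefficients chosen only to roughly reproduce the quoted
    # percentages; they are NOT the study's fitted estimates.
    import numpy as np

    c0, c1 = 21.4, 10.0                  # mean exposure and WHO guideline, ug/m^3

    beta_loglin = 0.0044                 # hypothetical slope per ug/m^3 (log-linear)
    beta_loglog = 0.149                  # hypothetical elasticity (log-log)

    # log-linear: log(risk) changes linearly with concentration
    reduction_loglin = 1.0 - np.exp(-beta_loglin * (c0 - c1))
    # log-log: log(risk) changes linearly with log(concentration)
    reduction_loglog = 1.0 - (c1 / c0) ** beta_loglog

    print(f"log-linear E-R: {100 * reduction_loglin:.1f}% fewer pneumonia admissions")
    print(f"log-log    E-R: {100 * reduction_loglog:.1f}% fewer pneumonia admissions")
    ```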

  13. Development of Regional Supply Functions and a Least-Cost Model for Allocating Water Resources in Utah: A Parametric Linear Programming Approach.

    DTIC Science & Technology

    SYSTEMS ANALYSIS, * WATER SUPPLIES, MATHEMATICAL MODELS, OPTIMIZATION, ECONOMICS, LINEAR PROGRAMMING, HYDROLOGY, REGIONS, ALLOCATIONS, RESTRAINT, RIVERS, EVAPORATION, LAKES, UTAH, SALVAGE, MINES(EXCAVATIONS).

  14. Active control of panel vibrations induced by boundary-layer flow

    NASA Technical Reports Server (NTRS)

    Chow, Pao-Liu

    1991-01-01

    Some problems in active control of panel vibration excited by a boundary layer flow over a flat plate are studied. In the first phase of the study, the optimal control problem of vibrating elastic panel induced by a fluid dynamical loading was studied. For a simply supported rectangular plate, the vibration control problem can be analyzed by a modal analysis. The control objective is to minimize the total cost functional, which is the sum of a vibrational energy and the control cost. By means of the modal expansion, the dynamical equation for the plate and the cost functional are reduced to a system of ordinary differential equations and the cost functions for the modes. For the linear elastic plate, the modes become uncoupled. The control of each modal amplitude reduces to the so-called linear regulator problem in control theory. Such problems can then be solved by the method of adjoint state. The optimality system of equations was solved numerically by a shooting method. The results are summarized.

  15. Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches

    NASA Astrophysics Data System (ADS)

    Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo

    This paper presents an optimal production and distribution management scheme for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear program (MILP) whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.

  16. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    PubMed

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
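
    A hedged sketch of one common software linearization technique of the kind compared in this record: a calibration lookup table with piecewise linear interpolation. It is written in Python for readability rather than for an integer microcontroller, and the optical-sensor transfer function is invented for illustration.

    ```python
    # Lookup-table linearization with piecewise linear interpolation.  The sensor
    # model is invented; in firmware the table would live in flash and the
    # interpolation would use integer arithmetic.
    import numpy as np

    def sensor_raw(distance_cm):
        """Made-up nonlinear transfer function of an optical distance sensor."""
        return 4096.0 / (distance_cm + 4.0)

    # calibration table: raw readings (ascending) and the true distances they map to
    cal_dist = np.linspace(5.0, 80.0, 16)
    cal_raw  = sensor_raw(cal_dist)[::-1]
    cal_out  = cal_dist[::-1]

    def linearize(raw):
        """Interpolate linearly between the calibration points."""
        return np.interp(raw, cal_raw, cal_out)

    d = np.linspace(5.0, 80.0, 200)                  # dense test grid
    err = linearize(sensor_raw(d)) - d
    print("max linearization error: %.3f cm" % np.max(np.abs(err)))
    ```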

  17. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  18. Estimating cost of large-fire suppression for three Forest Service regions

    Treesearch

    Eric L. Smith; González-Cabán, Armando

    1987-01-01

    The annual costs attributable to large fire suppression in three Forest Service Regions (1970-1981) were estimated as a function of fire perimeters using linear regression. Costs calculated on a per-chain-of-perimeter basis were highest for the Pacific Northwest Region, next highest for the Northern Region, and lowest for the Intermountain Region. Recent costs in real...

  19. Toward a low-cost, low-power, low-complexity DAC-based multilevel (M-ary QAM) coherent transmitter using compact linear optical field modulator

    NASA Astrophysics Data System (ADS)

    Dingel, Benjamin

    2017-01-01

    In this invited paper, we summarize current developments in linear optical field modulators (LOFMs) for coherent multilevel optical transmitters. Our focus is the presentation of a novel LOFM design that provides beneficial and necessary features such as the lowest hardware component count, lower insertion loss, smaller RF power consumption, smaller footprint, simple structure, and lower cost. We refer to this modulator as the Double-Pass LOFM (DP-LOFM); it becomes the building block for a high-performance, linear Dual-Polarization In-Phase/Quadrature-Phase (DP-IQ) modulator. We analyze its performance in terms of slope linearity and present one of its unique features: a built-in compensation functionality that no other linear modulator has possessed until now.

  20. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low activity prelinear region, the moderate activity linear region, and high activity power-law region. The overall form of the cost analysis function appears to robustly fit the power law form.

  1. X-ray spectrometer with a low-cost SiC photodiode

    NASA Astrophysics Data System (ADS)

    Zhao, S.; Lioliou, G.; Barnett, A. M.

    2018-04-01

    A low-cost Commercial-Off-The-Shelf (COTS) 4H-SiC 0.06 mm² UV p-n photodiode was coupled to a low-noise charge-sensitive preamplifier and used as a photon-counting X-ray spectrometer. The photodiode/spectrometer was investigated at X-ray energies from 4.95 keV to 21.17 keV: a Mo cathode X-ray tube was used to fluoresce eight high-purity metal foils to produce characteristic X-ray emission lines which were used to characterise the instrument. The energy resolution (full width at half maximum, FWHM) of the spectrometer was found to be 1.6 keV to 1.8 keV, across the energy range. The energy linearity of the detector/spectrometer (i.e. the detector's charge output per photon as a function of incident photon energy across the 4.95 keV to 21.17 keV energy range), as well as the count rate linearity of the detector/spectrometer (i.e. number of detected photons as a function of photon fluence at a specific energy) were investigated. The energy linearity of the detector/spectrometer was linear with an error < ± 0.7 %; the count rate linearity of the detector/spectrometer was linear with an error < ± 2 %. The use of COTS SiC photodiodes as detectors for X-ray spectrometers is attractive for nanosatellite/CubeSat applications (including solar flare monitoring), and for cost sensitive industrial uses.

  2. Design and performance evaluation of a dispersion compensation unit using several chirping functions in a tanh apodized FBG and comparison with dispersion compensation fiber.

    PubMed

    Mohammed, Nazmi A; Solaiman, Mohammad; Aly, Moustafa H

    2014-10-10

    In this work, various dispersion compensation methods are designed and evaluated to search for a cost-effective technique with remarkable dispersion compensation and a good pulse shape. The techniques consist of different chirp functions applied to a tanh fiber Bragg grating (FBG), a dispersion compensation fiber (DCF), and a DCF merged with an optimized linearly chirped tanh FBG (joint technique). The techniques are evaluated using a standard 10 Gb/s optical link over a 100 km long haul. The linear chirp function is the most appropriate choice of chirping function, with a pulse width reduction percentage (PWRP) of 75.15%, lower price, and poor pulse shape. The DCF yields an enhanced PWRP of 93.34% with a better pulse quality; however, it is the most costly of the evaluated techniques. Finally, the joint technique achieved the optimum PWRP (96.36%) among all the evaluated techniques and exhibited a remarkable pulse shape; it is less costly than the DCF, but more expensive than the chirped tanh FBG.

  3. Single-machine common/slack due window assignment problems with linear decreasing processing times

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia

    2017-08-01

    This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.

  4. Losses from effluent taxes and quotas under uncertainty

    USGS Publications Warehouse

    Watson, W.D.; Ridker, R.G.

    1984-01-01

    Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.

  5. Oligopolies with contingent workforce and unemployment insurance systems

    NASA Astrophysics Data System (ADS)

    Matsumoto, Akio; Merlone, Ugo; Szidarovszky, Ferenc

    2015-10-01

    In the recent literature the introduction of modified cost functions has added reality into the classical oligopoly analysis. Furthermore, the market evolution requires much more flexibility to firms, and in several countries contingent workforce plays an important role in the production choices by the firms. Therefore, an analysis of dynamic adjustment costs is in order to understand oligopoly dynamics. In this paper, dynamic single-product oligopolies without product differentiation are first examined with the additional consideration of production adjustment costs. Linear inverse demand and cost functions are considered and it is assumed that the firms adjust their outputs partially toward best response. The set of the steady states is characterized by a system of linear inequalities and there are usually infinitely many steady states. The asymptotic behavior of the output trajectories is examined by using computer simulation. The numerical results indicate that the resulting dynamics is richer than in the case of the classical Cournot model. This model and results are then compared to oligopolies with unemployment insurance systems when the additional cost is considered if firms do not use their maximum capacities.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.

  7. Monopoly Output and Welfare: The Role of Curvature of the Demand Function.

    ERIC Educational Resources Information Center

    Malueg, David A.

    1994-01-01

    Discusses linear demand functions and constant marginal costs related to a monopoly in a market economy. Illustrates the demand function by using a curve. Includes an appendix with two figures and accompanying mathematical formulae illustrating the concepts presented in the article. (CFR)

  8. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    PubMed Central

    Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María

    2016-01-01

    Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
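
    A hedged sketch of the retransformation issue that motivates the Manning and Mullahy model-selection algorithm mentioned above: an OLS fit on log(cost) needs a correction (here Duan's smearing factor) before predictions are returned to the monetary scale. The covariates, coefficients and data are synthetic, not MEPS or Baix Empordà data.

    ```python
    # Synthetic data: OLS on log(cost), then naive retransformation vs Duan's
    # smearing estimator on the monetary scale.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5_000
    age  = rng.uniform(20, 90, n)
    morb = rng.poisson(1.5, n)                         # crude morbidity burden
    cost = np.exp(4.0 + 0.01 * age + 0.5 * morb + rng.normal(0.0, 0.8, n))

    X = np.column_stack([np.ones(n), age, morb])
    beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    resid = np.log(cost) - X @ beta

    naive    = np.exp(X @ beta)                        # biased on the cost scale
    smearing = naive * np.mean(np.exp(resid))          # Duan's smearing correction

    print("mean observed cost:", round(cost.mean(), 1))
    print("naive exp(Xb)     :", round(naive.mean(), 1))
    print("with smearing     :", round(smearing.mean(), 1))
    ```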

  9. Scilab software as an alternative low-cost computing in solving the linear equations problem

    NASA Astrophysics Data System (ADS)

    Agus, Fahrul; Haviluddin

    2017-02-01

    Numerical computation packages are widely used both in teaching and research. These packages include licensed (proprietary) and open source (non-proprietary) software. One reason to use such packages is the complexity of mathematical functions (e.g., linear problems); moreover, the number of variables in linear or non-linear functions has increased. The aim of this paper was to reflect on key aspects related to method, didactics and creative praxis in the teaching of linear equations in higher education. If implemented, this could contribute to better learning in mathematics (i.e., solving simultaneous linear equations), which is essential for future engineers. The focus of this study was to introduce the numerical computation package Scilab as an alternative low-cost computing tool. In this paper, Scilab was used for a set of activities related to the mathematical models. In the experiment, four numerical methods, namely Gaussian Elimination, Gauss-Jordan, Inverse Matrix, and Lower-Upper (LU) Decomposition, were implemented. The results showed that routines for these numerical methods could be created and explored using Scilab procedures, and that these routines could then be exploited as teaching material for the course.
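
    A small sketch of the sort of routine the record describes, written here in Python rather than Scilab to keep the examples in this listing in one language: naive Gaussian elimination with partial pivoting, checked against a library solver on a made-up 3x3 teaching system.

    ```python
    # Naive Gaussian elimination with partial pivoting, checked against numpy's solver.
    import numpy as np

    def gauss_solve(A, b):
        """Gaussian elimination with partial pivoting and back-substitution."""
        A, b = A.astype(float), b.astype(float)        # work on copies
        n = len(b)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))        # pivot row
            A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):                 # back-substitution
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
    b = np.array([8.0, -11.0, -3.0])
    print("Gaussian elimination:", gauss_solve(A, b))
    print("numpy.linalg.solve  :", np.linalg.solve(A, b))
    ```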

  10. Steering of Frequency Standards by the Use of Linear Quadratic Gaussian Control Theory

    NASA Technical Reports Server (NTRS)

    Koppang, Paul; Leland, Robert

    1996-01-01

    Linear quadratic Gaussian control is a technique that uses Kalman filtering to estimate a state vector used for input into a control calculation. A control correction is calculated by minimizing a quadratic cost function that is dependent on both the state vector and the control amount. Different penalties, chosen by the designer, are assessed by the controller as the state vector and control amount vary from given optimal values. With this feature controllers can be designed to force the phase and frequency differences between two standards to zero either more or less aggressively depending on the application. Data will be used to show how using different parameters in the cost function analysis affects the steering and the stability of the frequency standards.
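
    A hedged sketch of the control side of such LQG steering: for a simple two-state (phase, frequency) clock model, the steady-state feedback gain is obtained by iterating the discrete-time Riccati equation, and the Q/R weights play the role of the cost-function penalties discussed above. The model and weights are hypothetical, not those of the paper.

    ```python
    # Discrete-time LQR gains for a toy phase/frequency clock model; varying the
    # control penalty R makes the steering more or less aggressive.
    import numpy as np

    # state = [phase error, frequency error], in units of the control interval
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])              # free-running clock propagation
    B = np.array([[0.0],
                  [1.0]])                   # control input adjusts frequency

    def dlqr(A, B, Q, R, iters=2000):
        """Steady-state discrete-time LQR gain via fixed-point Riccati iteration."""
        P = Q.copy()
        for _ in range(iters):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    Q = np.diag([1.0, 10.0])                # penalties on phase and frequency error
    for r in (0.1, 100.0):                  # small vs large penalty on control effort
        K = dlqr(A, B, Q, np.array([[r]]))
        print(f"control penalty R = {r:>6}:  feedback gain K = {np.round(K, 4)}")
    ```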

  11. Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Piunovskiy, A. B., E-mail: piunov@liv.ac.uk

    2016-08-15

    In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.

  12. Assessment of Health-Cost Externalities of Air Pollution at the National Level using the EVA Model System

    NASA Astrophysics Data System (ADS)

    Brandt, Jørgen; Silver, Jeremy David; Heile Christensen, Jesper; Skou Andersen, Mikael; Geels, Camilla; Gross, Allan; Buus Hansen, Ayoe; Mantzius Hansen, Kaj; Brandt Hedegaard, Gitte; Ambelas Skjøth, Carsten

    2010-05-01

    Air pollution has significant negative impacts on human health and well-being, which entail substantial economic consequences. We have developed an integrated model system, EVA (External Valuation of Air pollution), to assess health-related economic externalities of air pollution resulting from specific emission sources/sectors. The EVA system was initially developed to assess externalities from power production, but in this study it is extended to evaluate costs at the national level. The EVA system integrates a regional-scale atmospheric chemistry transport model (DEHM), address-level population data, exposure-response functions and monetary values applicable for Danish/European conditions. Traditionally, systems that assess economic costs of health impacts from air pollution assume linear approximations in the source-receptor relationships. However, atmospheric chemistry is non-linear and therefore the uncertainty involved in the linear assumption can be large. The EVA system has been developed to take into account the non-linear processes by using a comprehensive, state-of-the-art chemical transport model when calculating how specific changes to emissions affect air pollution levels and the subsequent impacts on human health and cost. Furthermore, we present a new "tagging" method, developed to examine how specific emission sources influence air pollution levels without assuming linearity of the non-linear behaviour of atmospheric chemistry. This method is more precise than the traditional approach based on taking the difference between two concentration fields. Using the EVA system, we have estimated the total external costs from the main emission sectors in Denmark, representing the ten major SNAP codes. Finally, we assess the impacts and external costs of emissions from international ship traffic around Denmark, since there is a high volume of ship traffic in the region.

  13. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints, with respect to the digital control law design variables, are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.

  14. Topology Trivialization and Large Deviations for the Minimum in the Simplest Random Optimization

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.; Le Doussal, Pierre

    2014-01-01

    Finding the global minimum of a cost function given by the sum of a quadratic and a linear form in N real variables over the (N-1)-dimensional sphere is one of the simplest, yet paradigmatic problems in Optimization Theory, known as the "trust region subproblem" or "constrained least squares problem". When both terms in the cost function are random this amounts to studying the ground state energy of the simplest spherical spin glass in a random magnetic field. We first identify and study two distinct large-N scaling regimes in which the linear term (magnetic field) leads to a gradual topology trivialization, i.e. reduction in the total number N_tot of critical (stationary) points in the cost function landscape. In the first regime N_tot remains of the order N and the cost function (energy) has generically two almost degenerate minima with the Tracy-Widom (TW) statistics. In the second regime the number of critical points is of the order of unity with a finite probability for a single minimum. In that case the mean total number of extrema (minima and maxima) of the cost function is given by the Laplace transform of the TW density, and the distribution of the global minimum energy is expected to take a universal scaling form generalizing the TW law. Though the full form of that distribution is not yet known to us, one of its far tails can be inferred from the large deviation theory for the global minimum. In the rest of the paper we show how to use the replica method to obtain the probability density of the minimum energy in the large-deviation approximation by finding both the rate function and the leading pre-exponential factor.
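
    A hedged numerical sketch of the underlying "trust region subproblem": minimizing 0.5*x'Jx + h'x over the sphere |x|² = N by solving the secular equation for the Lagrange multiplier in the eigenbasis of J. The random coupling matrix and field below are a toy stand-in for the spherical spin glass setting.

    ```python
    # Minimize 0.5*x'Jx + h'x on the sphere |x|^2 = N via the secular equation:
    # find mu < lambda_min(J) with |(mu*I - J)^{-1} h|^2 = N, by bisection.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 200
    M = rng.normal(size=(N, N)) / np.sqrt(N)
    J = (M + M.T) / np.sqrt(2)                  # GOE-like random coupling matrix
    h = 0.3 * rng.normal(size=N)                # random "magnetic field"

    lam, V = np.linalg.eigh(J)
    hp = V.T @ h                                # field components in the eigenbasis

    def norm2(mu):                              # |x(mu)|^2 for x = (mu*I - J)^{-1} h
        return np.sum(hp**2 / (lam - mu)**2)

    lo, hi = lam[0] - 10.0, lam[0] - 1e-12      # bracket: norm2 is increasing in mu
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if norm2(mid) < N else (lo, mid)
    mu = 0.5 * (lo + hi)

    x = V @ (hp / (mu - lam))                   # the constrained minimizer
    print("ground-state energy per variable:", (0.5 * x @ J @ x + h @ x) / N)
    print("constraint check |x|^2 / N      :", x @ x / N)
    ```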

  15. Active distribution network planning considering linearized system loss

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Wang, Mingqiang; Xu, Hao

    2018-02-01

    In this paper, various distribution network planning techniques with DGs are reviewed, and a new distribution network planning method is proposed. It assumes that the locations of DGs and the topology of the network are fixed. The proposed model optimizes the capacities of DGs and the optimal distribution line capacity simultaneously by a cost/benefit analysis, where the benefit is quantified by the reduction of the expected interruption cost. Besides, the network loss is explicitly analyzed in the paper. For simplicity, the network loss is appropriately simplified as a quadratic function of the difference of voltage phase angles. It is then further piecewise linearized. In this paper, a piecewise linearization technique with different segment lengths is proposed. To validate its effectiveness and superiority, the proposed distribution network planning model with the elaborate linearization technique is tested on the IEEE 33-bus distribution network system.
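
    A hedged sketch of the linearization step mentioned above: a quadratic loss term loss(d) ≈ a·d² (d being the voltage angle difference) is replaced by chords between breakpoints. For a quadratic, the worst-case chord error on a segment of length h is a·h²/4, so segment lengths directly trade model size against accuracy; the paper's contribution of unequal segment lengths is not reproduced here, and the numbers are illustrative.

    ```python
    # Chord (piecewise linear) approximation of a quadratic loss term; the
    # worst-case error on a segment of length h is a*h^2/4.
    import numpy as np

    a, d_max = 5.0, 0.5                          # loss coefficient and angle range

    def loss(d):
        return a * d ** 2

    d = np.linspace(0.0, d_max, 1001)
    for n_seg in (2, 4, 8, 16):
        bp = np.linspace(0.0, d_max, n_seg + 1)  # breakpoints
        approx = np.interp(d, bp, loss(bp))      # piecewise linear chords
        h = d_max / n_seg
        print(f"{n_seg:2d} segments: max error = {np.max(np.abs(approx - loss(d))):.5f}"
              f"  (bound a*h^2/4 = {a * h * h / 4:.5f})")
    ```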

  16. Using Excel's Solver Function to Facilitate Reciprocal Service Department Cost Allocations

    ERIC Educational Resources Information Center

    Leese, Wallace R.

    2013-01-01

    The reciprocal method of service department cost allocation requires linear equations to be solved simultaneously. These computations are often so complex as to cause the abandonment of the reciprocal method in favor of the less sophisticated and theoretically incorrect direct or step-down methods. This article illustrates how Excel's Solver…
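
    A minimal sketch of the reciprocal method itself: the total cost of each service department satisfies a small system of simultaneous linear equations, solved here with NumPy in place of Excel's Solver. The usage percentages and direct costs are a made-up textbook-style example.

    ```python
    # Reciprocal (simultaneous-equation) service department cost allocation.
    import numpy as np

    direct = np.array([60_000.0, 40_000.0])      # direct costs of services S1, S2
    use = np.array([[0.00, 0.10],                # S1 consumes 10% of S2's output
                    [0.20, 0.00]])               # S2 consumes 20% of S1's output

    # total_i = direct_i + sum_j use[i, j] * total_j  =>  (I - use) @ total = direct
    total = np.linalg.solve(np.eye(2) - use, direct)
    print("reciprocal service-department costs:", np.round(total, 2))

    to_prod = np.array([[0.50, 0.30],            # shares of S1's output to P1, P2
                        [0.45, 0.45]])           # shares of S2's output to P1, P2
    alloc = to_prod.T @ total                    # cost landing on each production dept
    print("allocated to P1, P2:", np.round(alloc, 2), " (sums to the direct costs)")
    ```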

  17. Nonlocal kinetic energy functional from the jellium-with-gap model: Applications to orbital-free density functional theory

    NASA Astrophysics Data System (ADS)

    Constantin, Lucian A.; Fabiano, Eduardo; Della Sala, Fabio

    2018-05-01

    Orbital-free density functional theory (OF-DFT) promises to describe the electronic structure of very large quantum systems, since its computational cost is linear in the system size. However, the OF-DFT accuracy strongly depends on the approximation made for the kinetic energy (KE) functional. To date, the most accurate KE functionals are nonlocal functionals based on the linear-response kernel of the homogeneous electron gas, i.e., the jellium model. Here, we use the linear-response kernel of the jellium-with-gap model to construct a simple nonlocal KE functional (named KGAP) which depends on the band-gap energy. In the limit of vanishing energy gap (i.e., in the case of metals), the KGAP is equivalent to the Smargiassi-Madden (SM) functional, which is accurate for metals. For a series of semiconductors (with different energy gaps), the KGAP performs much better than SM, and results are close to the state-of-the-art functionals with sophisticated density-dependent kernels.

  18. A class of solution-invariant transformations of cost functions for minimum cost flow phase unwrapping.

    PubMed

    Hubig, Michael; Suchandt, Steffen; Adam, Nico

    2004-10-01

    Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.

  19. 2D/3D registration using a rotation-invariant cost function based on Zernike moments

    NASA Astrophysics Data System (ADS)

    Birkfellner, Wolfgang; Yang, Xinhui; Burgstaller, Wolfgang; Baumann, Bernard; Jacob, Augustinus L.; Niederer, Peter F.; Regazzoni, Pietro; Messmer, Peter

    2004-05-01

    We present a novel in-plane rotation invariant cost function for 2D/3D registration utilizing projection-invariant transformation properties and the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments. As a result, only five dof have to be optimized, and the number of iterations necessary for registration can be significantly reduced. Results in a phantom study show that an accuracy of approximately 0.7° and 2 mm can be achieved using this method. We conclude that the reduction of coupled dof and the usage of linearly independent coefficients for cost function evaluation provide interesting new perspectives for the field of 2D/3D registration.

  20. Optimal dual-fuel propulsion for minimum inert weight or minimum fuel cost

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1973-01-01

    An analytical investigation of single-stage vehicles with multiple propulsion phases has been conducted with the phasing optimized to minimize a general cost function. Some results are presented for linearized sizing relationships which indicate that single-stage-to-orbit, dual-fuel rocket vehicles can have lower inert weight than similar single-fuel rocket vehicles and that the advantage of dual-fuel vehicles can be increased if a dual-fuel engine is developed. The results also indicate that the optimum split can vary considerably with the choice of cost function to be minimized.

  1. Train repathing in emergencies based on fuzzy linear programming.

    PubMed

    Meng, Xuelei; Cui, Bingmou

    2014-01-01

    Train pathing is a typical problem in which train trips are assigned to sets of rail segments, such as rail tracks and links. This paper focuses on the train pathing problem of determining the paths of train trips in emergencies. We analyze the influencing factors of train pathing, such as transfer cost, running cost, and social adverse effect cost. Taking overall account of segment and station capacity constraints, we build a fuzzy linear programming model to solve the train pathing problem. We design fuzzy membership functions to describe the fuzzy coefficients. Furthermore, contraction-expansion factors are introduced to contract or expand the value ranges of the fuzzy coefficients, coping with the uncertainty of those ranges. We propose a method based on triangular fuzzy coefficients and transform the train pathing model (a fuzzy linear programming model) into a deterministic linear model to solve the fuzzy linear programming problem. An emergency scenario is constructed based on real data from the Beijing-Shanghai Railway. The model was solved, and the computational results demonstrate the validity of the model and the efficiency of the algorithm.
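
    A hedged sketch of the general idea: triangular fuzzy cost coefficients (low, mode, high) are reduced to crisp values (here with the common (l + 2m + u)/4 rule, which is not necessarily the paper's exact transformation) and the resulting deterministic LP is solved. The two-path routing instance is a toy stand-in for the train pathing model.

    ```python
    # Triangular fuzzy costs defuzzified with the (l + 2m + u)/4 rule, then a
    # crisp LP is solved with scipy; the instance is a two-path toy example.
    from scipy.optimize import linprog

    def defuzzify(l, m, u):
        """Crisp representative of a triangular fuzzy number (l, m, u)."""
        return (l + 2 * m + u) / 4.0

    c = [defuzzify(8, 10, 14), defuzzify(11, 12, 13)]   # fuzzy running costs per trip
    demand = 10.0                                       # trips to be routed
    caps = [6.0, 8.0]                                   # path capacities

    res = linprog(c,
                  A_eq=[[1.0, 1.0]], b_eq=[demand],
                  bounds=[(0.0, caps[0]), (0.0, caps[1])])
    print("crisp costs:", c)
    print("trips per path:", res.x, " total cost:", res.fun)
    ```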

  2. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function. In such a context, the registration problem is formulated using a discrete Markov random field objective function. First, towards dimensionality reduction on the variables we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. Then, the registration cost is expressed using a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, and a smoothness term that penalizes local deviations on the deformation field according to a neighborhood system on the grid. Towards a discrete approach, the search space is quantized resulting in a fully discrete model. In order to account for large deformations and produce results on a high resolution level, a multi-scale incremental approach is considered where the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using the primal dual principles is considered to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potentials of our approach.

  3. A flexible model for correlated medical costs, with application to medical expenditure panel survey data.

    PubMed

    Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A

    2016-03-15

    We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to Akaike information criterion is proposed for selecting knots in spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.

  4. FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.

    PubMed

    Li, Pu; Chen, Bing

    2011-04-01

    Although many studies on municipal solid waste management (MSW management) were conducted under uncertain conditions of fuzzy, stochastic, and interval coexistence, the solution to the conventional linear programming problems of integrating the fuzzy method with the other two was inefficient. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method can also improve upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, with advantageous capabilities that are easily achieved with fewer constraints and significantly reduced computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. Device with Functions of Linear Motor and Non-contact Power Collector for Wireless Drive

    NASA Astrophysics Data System (ADS)

    Fujii, Nobuo; Mizuma, Tsuyoshi

    The authors propose a new apparatus with functions of propulsion and non-contact power collection for a future vehicle that can run like an electric vehicle supplied from an onboard battery source over most of the route except near stations. The batteries or power capacitors are charged without contact from a ground-side winding connected to the commercial power supply at stations, etc. The apparatus has the functions of both a linear motor and a transformer, and its basic configuration is a wound-secondary type linear induction motor (LIM). In the paper, the wound-type LIM with a concentrated single-phase winding for the ground-side primary member is treated from the viewpoint of a low-cost arrangement. The secondary winding is switched to a single-phase connection for zero thrust during transformer operation and to a two-phase connection for linear-motor operation. The connection change is performed by a special on-board converter for charging and linear drive. The characteristics are studied analytically.

  6. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control.

    PubMed

    Brunton, Steven L; Brunton, Bingni W; Proctor, Joshua L; Kutz, J Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control.

  7. A flexible model for the mean and variance functions, with application to medical cost data.

    PubMed

    Chen, Jinsong; Liu, Lei; Zhang, Daowen; Shih, Ya-Chen T

    2013-10-30

    Medical cost data are often skewed to the right and heteroscedastic, having a nonlinear relation with covariates. To tackle these issues, we consider an extension to generalized linear models by assuming nonlinear associations of covariates in the mean function and allowing the variance to be an unknown but smooth function of the mean. We make no further assumption on the distributional form. The unknown functions are described by penalized splines, and the estimation is carried out using nonparametric quasi-likelihood. Simulation studies show the flexibility and advantages of our approach. We apply the model to the annual medical costs of heart failure patients in the clinical data repository at the University of Virginia Hospital System. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Linearized self-consistent GW approach satisfying the Ward identity

    NASA Astrophysics Data System (ADS)

    Kuwahara, Riichi; Ohno, Kaoru

    2014-09-01

    We propose a linearized self-consistent GW approach satisfying the Ward identity. The vertex function derived from the Ward-Takahashi identity in the limit of q = 0 and ω - ω' = 0 is included in the self-energy and the polarization function as a consequence of the linearization of the quasiparticle equation. Due to the energy dependence of the self-energy, the Hamiltonian is a non-Hermitian operator and quasiparticle states are nonorthonormal and linearly dependent. However, the linearized quasiparticle states recover orthonormality and fulfill the completeness condition. This approach is very efficient, and the resulting quasiparticle energies are greatly improved compared to the nonlinearized self-consistent GW approach, although its computational cost is not much increased. We show the results for atoms and dimers of Li and Na compared with other approaches. We also propose convenient ways to calculate the Luttinger-Ward functional Φ based on a plasmon-pole model and calculate the total energy for the ground state. As a result, we conclude that the linearization improves the overall behavior of the self-consistent GW approach.

  9. Optimal estimation and scheduling in aquifer management using the rapid feedback control method

    NASA Astrophysics Data System (ADS)

    Ghorbanidehno, Hojat; Kokkinaki, Amalia; Kitanidis, Peter K.; Darve, Eric

    2017-12-01

    Management of water resources systems often involves a large number of parameters, as in the case of large, spatially heterogeneous aquifers, and a large number of "noisy" observations, as in the case of pressure observation in wells. Optimizing the operation of such systems requires both searching among many possible solutions and utilizing new information as it becomes available. However, the computational cost of this task increases rapidly with the size of the problem to the extent that textbook optimization methods are practically impossible to apply. In this paper, we present a new computationally efficient technique as a practical alternative for optimally operating large-scale dynamical systems. The proposed method, which we term Rapid Feedback Controller (RFC), provides a practical approach for combined monitoring, parameter estimation, uncertainty quantification, and optimal control for linear and nonlinear systems with a quadratic cost function. For illustration, we consider the case of a weakly nonlinear uncertain dynamical system with a quadratic objective function, specifically a two-dimensional heterogeneous aquifer management problem. To validate our method, we compare our results with the linear quadratic Gaussian (LQG) method, which is the basic approach for feedback control. We show that the computational cost of the RFC scales only linearly with the number of unknowns, a great improvement compared to the basic LQG control with a computational cost that scales quadratically. We demonstrate that the RFC method can obtain the optimal control values at a greatly reduced computational cost compared to the conventional LQG algorithm with small and controllable losses in the accuracy of the state and parameter estimation.
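
    The LQG baseline mentioned in the abstract rests on the standard discrete-time LQR gain computed from the algebraic Riccati equation; a minimal sketch of that baseline computation (not of the Rapid Feedback Controller itself) is given below, with toy system matrices.

    ```python
    # Sketch of the standard discrete-time LQR gain that underlies the LQG
    # baseline the paper compares against (not the RFC method itself); the
    # system matrices here are toy placeholders.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1], [0.0, 0.95]])   # toy linear(ized) dynamics
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2)                              # state penalty in the quadratic cost
    R = np.array([[0.1]])                      # control penalty

    P = solve_discrete_are(A, B, Q, R)         # cost-to-go matrix
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K x_k
    print("LQR feedback gain:", K)
    ```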

  10. Population dynamics and mutualism: Functional responses of benefits and costs

    USGS Publications Warehouse

    Holland, J. Nathaniel; DeAngelis, Donald L.; Bronstein, Judith L.

    2002-01-01

    We develop an approach for studying population dynamics resulting from mutualism by employing functional responses based on density‐dependent benefits and costs. These functional responses express how the population growth rate of a mutualist is modified by the density of its partner. We present several possible dependencies of gross benefits and costs, and hence net effects, to a mutualist as functions of the density of its partner. Net effects to mutualists are likely a monotonically saturating or unimodal function of the density of their partner. We show that fundamental differences in the growth, limitation, and dynamics of a population can occur when net effects to that population change linearly, unimodally, or in a saturating fashion. We use the mutualism between senita cactus and its pollinating seed‐eating moth as an example to show the influence of different benefit and cost functional responses on population dynamics and stability of mutualisms. We investigated two mechanisms that may alter this mutualism's functional responses: distribution of eggs among flowers and fruit abortion. Differences in how benefits and costs vary with density can alter the stability of this mutualism. In particular, fruit abortion may allow for a stable equilibrium where none could otherwise exist.

  11. A Study of Alternative Quantile Estimation Methods in Newsboy-Type Problems

    DTIC Science & Technology

    1980-03-01

    decision maker selects to have on hand. The newsboy cost equation may be formulated as a two-piece continuous linear function in the following manner. C(S...number of observations, some approximations may be possible. Three points which are near each other can be assumed to be linear and some estimator using...respectively. Define the value r as: r = [nq + 0.5] , (6) where [X] denotes the largest integer of X. Let us consider an estimate of X as the linear

  12. Testing the dose-response specification in epidemiology: public health and policy consequences for lead.

    PubMed

    Rothenberg, Stephen J; Rothenberg, Jesse C

    2005-09-01

    Statistical evaluation of the dose-response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose-response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear-linear dose response) and natural-log-transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose-response relationship. We found that a log-linear lead-IQ relationship was a significantly better fit than was a linear-linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead-IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 microg/dL to 2.0 microg/dL) was 2.2 times (319 billion dollars) that calculated using a linear-linear dose-response function (149 billion dollars). The Centers for Disease Control and Prevention action limit of 10 microg/dL for children fails to protect against most damage and economic cost attributable to lead exposure.
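
    The model comparison in the abstract amounts to fitting the same regression with a linear versus a log-transformed lead term and checking which specification fits better; the sketch below does this on synthetic data (not the pooled study data) using ordinary least squares.

    ```python
    # Illustrative comparison of a linear-linear versus a log-linear blood-lead
    # term in an OLS model of IQ, on synthetic data; the generating model here
    # is log-linear, so the log specification should fit better.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    lead = rng.uniform(1, 30, n)                     # blood lead, ug/dL
    iq = 105 - 4.0 * np.log(lead) + rng.normal(0, 5, n)

    def rss(design, y):
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        return np.sum((y - design @ beta) ** 2)

    X_lin = np.column_stack([np.ones(n), lead])
    X_log = np.column_stack([np.ones(n), np.log(lead)])
    print("RSS linear-linear:", rss(X_lin, iq))
    print("RSS log-linear   :", rss(X_log, iq))
    ```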

  13. Performances of One-Round Walks in Linear Congestion Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Fanelli, Angelo; Flammini, Michele; Moscardelli, Luca

    We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players’ costs, and Max, defined as the maximum cost per player, as measures of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2+√5 ≈ 4.24 given in [8] and the lower bound of 4 derived in [4] by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^{3/4}) (resp. Θ(n√n)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.

  14. Functional Impairment: An Unmeasured Marker of Medicare Costs for Postacute Care of Older Adults.

    PubMed

    Greysen, S Ryan; Stijacic Cenzer, Irena; Boscardin, W John; Covinsky, Kenneth E

    2017-09-01

    To assess the effects of preadmission functional impairment on Medicare costs of postacute care up to 365 days after hospital discharge. Longitudinal cohort study. Health and Retirement Study (HRS). Nationally representative sample of 16,673 Medicare hospitalizations of 8,559 community-dwelling older adults from 2000 to 2012. The main outcome was total Medicare costs in the year after hospital discharge, assessed according to Medicare claims data. The main predictor was functional impairment (level of difficulty or dependence in activities of daily living (ADLs)), determined from HRS interview preceding hospitalization. Multivariable linear regression was performed, adjusted for age, race, sex, income, net worth, and comorbidities, with clustering at the individual level to characterize the association between functional impairment and costs of postacute care. Unadjusted mean Medicare costs for 1 year after discharge increased with severity of impairment in a dose-response fashion (P < .001 for trend); 68% had no functional impairment ($25,931), 17% had difficulty with one ADL ($32,501), 7% had dependency in one ADL ($39,928), and 8% had dependency in two or more ADLs ($45,895). The most severely impaired participants cost 77% more than those with no impairment; adjusted analyses showed attenuated effect size (33% more) but no change in trend. Considering costs attributable to comorbidities, only three conditions were more expensive than severe functional impairment (lymphoma, metastatic cancer, paralysis). Functional impairment is associated with greater Medicare costs for postacute care and may be an unmeasured but important marker of long-term costs that cuts across conditions. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.

  15. More memory under evolutionary learning may lead to chaos

    NASA Astrophysics Data System (ADS)

    Diks, Cees; Hommes, Cars; Zeppini, Paolo

    2013-02-01

    We show that an increase of memory of past strategy performance in a simple agent-based innovation model, with agents switching between costly innovation and cheap imitation, can be quantitatively stabilising while at the same time qualitatively destabilising. As memory in the fitness measure increases, the amplitude of price fluctuations decreases, but at the same time a bifurcation route to chaos may arise. The core mechanism leading to the chaotic behaviour in this model with strategy switching is that the map obtained for the system with memory is a convex combination of an increasing linear function and a decreasing non-linear function.

  16. Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.

    PubMed

    Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir

    2018-04-01

    In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.

  17. Cellular Manufacturing System with Dynamic Lot Size Material Handling

    NASA Astrophysics Data System (ADS)

    Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.

    2016-02-01

    Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several CMS design studies, material handling was assumed to occur per piece or with a constant lot size. In real industrial practice, the lot size may change during the rolling period to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer Linear Programming is used to solve the problem. The objective function of this model minimizes the total expected cost, consisting of machinery depreciation cost, operating cost, inter-cell material handling cost, intra-cell material handling cost, machine relocation cost, setup cost, and production planning cost. The model determines the optimum cell formation and the optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.

  18. Optimizing conjunctive use of surface water and groundwater resources with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2014-05-01

    Optimal management of conjunctive use of surface water and groundwater has been attempted with different algorithms in the literature. In this study, a hydro-economic modelling approach to optimize conjunctive use of scarce surface water and groundwater resources under uncertainty is presented. A stochastic dynamic programming (SDP) approach is used to minimize the basin-wide total costs arising from water allocations and water curtailments. Dynamic allocation problems with inclusion of groundwater resources proved to be more complex to solve with SDP than pure surface water allocation problems due to head-dependent pumping costs. These dynamic pumping costs strongly affect the total costs and can lead to non-convexity of the future cost function. The water user groups (agriculture, industry, domestic) are characterized by inelastic demands and fixed water allocation and water supply curtailment costs. As in traditional SDP approaches, one step-ahead sub-problems are solved to find the optimal management at any time knowing the inflow scenario and reservoir/aquifer storage levels. These non-linear sub-problems are solved using a genetic algorithm (GA) that minimizes the sum of the immediate and future costs for given surface water reservoir and groundwater aquifer end storages. The immediate cost is found by solving a simple linear allocation sub-problem, and the future costs are assessed by interpolation in the total cost matrix from the following time step. Total costs for all stages, reservoir states, and inflow scenarios are used as future costs to drive a forward moving simulation under uncertain water availability. The use of a GA to solve the sub-problems is computationally more costly than a traditional SDP approach with linearly interpolated future costs. However, in a two-reservoir system the future cost function would have to be represented by a set of planes, and strict convexity in both the surface water and groundwater dimension cannot be maintained. The optimization framework based on the GA is still computationally feasible and represents a clean and customizable method. The method has been applied to the Ziya River basin, China. The basin is located on the North China Plain and is subject to severe water scarcity, which includes surface water droughts and groundwater over-pumping. The head-dependent groundwater pumping costs will enable assessment of the long-term effects of increased electricity prices on the groundwater pumping. The coupled optimization framework is used to assess realistic alternative development scenarios for the basin. In particular the potential for using electricity pricing policies to reach sustainable groundwater pumping is investigated.
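
    A minimal sketch of the backward SDP recursion described above, reduced to a single reservoir with enumerated releases in place of the paper's GA-solved sub-problems; storages, inflow scenarios, demand, and costs are invented for illustration.

    ```python
    # Toy backward stochastic-dynamic-programming recursion over a single
    # reservoir: at each stage and storage state, the optimal release minimizes
    # immediate curtailment cost plus an interpolated future cost.
    import numpy as np

    storages = np.linspace(0, 100, 21)         # discretized reservoir storage
    inflows = np.array([10.0, 30.0])           # inflow scenarios
    probs = np.array([0.5, 0.5])
    demand, curtail_cost = 25.0, 2.0           # fixed demand, cost per unit unmet
    T = 12                                     # planning stages

    future = np.zeros_like(storages)           # terminal future-cost function
    for t in range(T):
        new_future = np.zeros_like(storages)
        for i, s in enumerate(storages):
            exp_cost = 0.0
            for q, p in zip(inflows, probs):
                best = np.inf
                for release in np.linspace(0, s + q, 26):
                    s_next = min(max(s + q - release, 0.0), storages[-1])
                    immediate = curtail_cost * max(demand - release, 0.0)
                    total = immediate + np.interp(s_next, storages, future)
                    best = min(best, total)
                exp_cost += p * best
            new_future[i] = exp_cost
        future = new_future
    print("expected cost-to-go at half-full storage:", np.interp(50, storages, future))
    ```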

  19. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator that involves linear and nonlinear estimators for fractional and bilinear terms which provide a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method only requires few nodes in the branch and bound search.

  20. Koopman Invariant Subspaces and Finite Linear Representations of Nonlinear Dynamical Systems for Control

    PubMed Central

    Brunton, Steven L.; Brunton, Bingni W.; Proctor, Joshua L.; Kutz, J. Nathan

    2016-01-01

    In this work, we explore finite-dimensional linear representations of nonlinear dynamical systems by restricting the Koopman operator to an invariant subspace spanned by specially chosen observable functions. The Koopman operator is an infinite-dimensional linear operator that evolves functions of the state of a dynamical system. Dominant terms in the Koopman expansion are typically computed using dynamic mode decomposition (DMD). DMD uses linear measurements of the state variables, and it has recently been shown that this may be too restrictive for nonlinear systems. Choosing the right nonlinear observable functions to form an invariant subspace where it is possible to obtain linear reduced-order models, especially those that are useful for control, is an open challenge. Here, we investigate the choice of observable functions for Koopman analysis that enable the use of optimal linear control techniques on nonlinear problems. First, to include a cost on the state of the system, as in linear quadratic regulator (LQR) control, it is helpful to include these states in the observable subspace, as in DMD. However, we find that this is only possible when there is a single isolated fixed point, as systems with multiple fixed points or more complicated attractors are not globally topologically conjugate to a finite-dimensional linear system, and cannot be represented by a finite-dimensional linear Koopman subspace that includes the state. We then present a data-driven strategy to identify relevant observable functions for Koopman analysis by leveraging a new algorithm to determine relevant terms in a dynamical system by ℓ1-regularized regression of the data in a nonlinear function space; we also show how this algorithm is related to DMD. Finally, we demonstrate the usefulness of nonlinear observable subspaces in the design of Koopman operator optimal control laws for fully nonlinear systems using techniques from linear optimal control. PMID:26919740

  1. Costs explained by function rather than diagnosis--results from the SNAC Nordanstig elderly cohort in Sweden.

    PubMed

    Lindholm, C; Gustavsson, A; Jönsson, L; Wimo, A

    2013-05-01

    Because the prevalence of many brain disorders rises with age, and brain disorders are costly, the economic burden of brain disorders will increase markedly during the next decades. The purpose of this study is to analyze how the costs to society vary with different levels of functioning and with the presence of a brain disorder. Resource utilization and costs from a societal viewpoint were analyzed versus cognition, activities of daily living (ADL), instrumental activities of daily living (IADL), brain disorder diagnosis and age in a population-based cohort of people aged 65 years and older in Nordanstig in Northern Sweden. Descriptive statistics, non-parametric bootstrapping and a generalized linear model (GLM) were used for the statistical analyses. Most people were zero users of care. Societal costs of dementia were by far the highest, ranging from SEK 262,000 (mild) to SEK 519,000 per year (severe dementia). In univariate analysis, all measures of functioning were significantly related to costs. When controlling for ADL and IADL in the multivariate GLM, cognition did not have a statistically significant effect on total cost. The presence of a brain disorder did not impact total cost when controlling for function. The greatest shift in costs was seen when comparing no dependency in ADL and dependency in one basic ADL function. It is the level of functioning, rather than the presence of a brain disorder diagnosis, which predicts costs. ADLs are better explanatory variables of costs than the Mini-Mental State Examination. Most people in a population-based cohort are zero users of care. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Cost decomposition of linear systems with application to model reduction

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.

    1980-01-01

    A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
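
    One simple way to make the idea concrete is to apportion the steady-state quadratic cost of a stable, noise-driven linear system among its state components; the sketch below uses the diagonal contributions of Q·X (X the steady-state covariance) purely as an illustration, not as the exact component-cost formula of the paper.

    ```python
    # Minimal sketch of apportioning a steady-state quadratic cost among state
    # components of a stable linear system driven by white noise. The diagonal
    # contributions of Q @ X are one simple apportionment, offered only as an
    # illustration of the idea, not as Skelton's exact component-cost formula.
    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # stable toy dynamics
    W = np.diag([1.0, 0.5])                    # process-noise intensity
    Q = np.diag([2.0, 1.0])                    # quadratic cost weights

    X = solve_continuous_lyapunov(A, -W)       # steady-state covariance: AX + XA' + W = 0
    total_cost = np.trace(Q @ X)
    component_costs = np.diag(Q @ X)           # per-state contributions
    print("total cost:", total_cost, "components:", component_costs)
    ```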

  3. An Alternative Procedure for Estimating Unit Learning Curves,

    DTIC Science & Technology

    1985-09-01

    the model accurately describes the real-life situation, i.e., when the model is properly applied to the data, it can be a powerful tool for...predicting unit production costs. There are, however, some unique estimation problems inherent in the model . The usual method of generating predicted unit...production costs attempts to extend properties of least squares estimators to non- linear functions of these estimators. The result is biased estimates of

  4. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. Full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  5. Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; Lin, Lin; Shao, Meiyue

    We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.

  6. Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Tangpatiphan, Kritsana; Yokoyama, Akihiko

    This paper presents an Improved Evolutionary Programming (IEP) approach for solving the Optimal Power Flow (OPF) problem, which is considered a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is regarded as the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions, namely the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve superimposed by a sine component. These three cost curves represent the generator fuel cost functions of a simplified model and of more accurate models of a combined-cycle generating unit and a thermal unit with the valve-point loading effect, respectively. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are observed and compared. The simulation results indicate that IEP requires less computing time than PEP and gives better solutions in some cases. Moreover, the influences of important IEP parameters on the OPF solution are described in detail.

  7. Cointegration of output, capital, labor, and energy

    NASA Astrophysics Data System (ADS)

    Stresing, R.; Lindenberger, D.; Kümmel, R.

    2008-11-01

    Cointegration analysis is applied to the linear combinations of the time series of (the logarithms of) output, capital, labor, and energy for Germany, Japan, and the USA since 1960. The computed cointegration vectors represent the output elasticities of the aggregate energy-dependent Cobb-Douglas function. The output elasticities give the economic weights of the production factors capital, labor, and energy. We find that they are for labor much smaller and for energy much larger than the cost shares of these factors. In standard economic theory output elasticities equal cost shares. Our heterodox findings support results obtained with LINEX production functions.

  8. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  9. Accounting for the cost of scaling-up health interventions.

    PubMed

    Johns, Benjamin; Baltussen, Rob

    2004-11-01

    Recent studies such as the Commission on Macroeconomics and Health have highlighted the need for expanding the coverage of services for HIV/AIDS, malaria, tuberculosis, immunisations and other diseases. In order for policy makers to plan for these changes, they need to analyse the change in costs when interventions are 'scaled-up' to cover greater percentages of the population. Previous studies suggest that applying current unit costs to an entire population can misconstrue the true costs of an intervention. This study presents the methodology used in WHO-CHOICE's generalised cost effectiveness analysis, which includes non-linear cost functions for health centres, transportation and supervision costs, as well as the presence of fixed costs of establishing a health infrastructure. Results show changing marginal costs as predicted by economic theory. 2004 John Wiley & Sons, Ltd.
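
    The key point is that with a fixed infrastructure cost and a non-linear (power-function) variable cost, marginal and average costs change with coverage instead of staying at the current unit cost; the toy sketch below illustrates this with made-up numbers, not WHO-CHOICE estimates.

    ```python
    # Illustrative non-linear scale-up cost function with a fixed infrastructure
    # cost and a power term, showing how average and marginal costs change with
    # coverage (the exponent, fixed cost, and unit cost are invented).
    fixed, unit, b = 1_000_000.0, 12.0, 1.15   # fixed cost, unit cost, exponent

    def total_cost(people_covered):
        return fixed + unit * people_covered ** b

    for coverage in (100_000, 500_000, 1_000_000):
        marginal = total_cost(coverage + 1) - total_cost(coverage)
        average = total_cost(coverage) / coverage
        print(f"coverage {coverage:>9,}: average {average:6.2f}, marginal {marginal:6.2f}")
    ```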

  10. Dikin-type algorithms for dextrous grasping force optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buss, M.; Faybusovich, L.; Moore, J.B.

    1998-08-01

    One of the central issues in dextrous robotic hand grasping is to balance external forces acting on the object and at the same time achieve grasp stability and minimum grasping effort. A companion paper shows that the nonlinear friction-force limit constraints on grasping forces are equivalent to the positive definiteness of a certain matrix subject to linear constraints. Further, compensation of the external object force is also a linear constraint on this matrix. Consequently, the task of grasping force optimization can be formulated as a problem with semidefinite constraints. In this paper, two versions of strictly convex cost functions, one of them self-concordant, are considered. These are twice-continuously differentiable functions that tend to infinity at the boundary of positive definiteness. For the general class of such cost functions, Dikin-type algorithms are presented. It is shown that the proposed algorithms guarantee convergence to the unique solution of the semidefinite programming problem associated with dextrous grasping force optimization. Numerical examples demonstrate the simplicity of implementation, the good numerical properties, and the optimality of the approach.

  11. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.

  12. Fixed order dynamic compensation for multivariable linear systems

    NASA Technical Reports Server (NTRS)

    Kramer, F. S.; Calise, A. J.

    1986-01-01

    This paper considers the design of fixed order dynamic compensators for multivariable time invariant linear systems, minimizing a linear quadratic performance cost functional. Attention is given to robustness issues in terms of multivariable frequency domain specifications. An output feedback formulation is adopted by suitably augmenting the system description to include the compensator states. Either a controller or observer canonical form is imposed on the compensator description to reduce the number of free parameters to its minimal number. The internal structure of the compensator is prespecified by assigning a set of ascending feedback invariant indices, thus forming a Brunovsky structure for the nominal compensator.

  13. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.

  14. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  15. Developing integrated parametric planning models for budgeting and managing complex projects

    NASA Technical Reports Server (NTRS)

    Etnyre, Vance A.; Black, Ken U.

    1988-01-01

    The applicability of integrated parametric models for the budgeting and management of complex projects is investigated. Methods for building a very flexible, interactive prototype for a project planning system, and software resources available for this purpose, are discussed and evaluated. The prototype is required to be sensitive to changing objectives, changing target dates, changing cost relationships, and changing budget constraints. To achieve the integration of costs and project and task durations, parametric cost functions are defined by a process of trapezoidal segmentation, where the total cost for the project is the sum of the various project cost segments, and each project cost segment is the integral of a linearly segmented cost loading function over a specific interval. The cost can thus be expressed algebraically. The prototype was designed using Lotus-123 as the primary software tool. This prototype implements a methodology for interactive project scheduling that provides a model of a system that meets most of the goals for the first phase of the study and some of the goals for the second phase.
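
    Under the trapezoidal-segmentation idea, each project cost segment is the integral of a linearly segmented cost-loading rate over its interval, i.e. a trapezoid area; the sketch below computes segment and total costs for an invented loading profile.

    ```python
    # Sketch of the segment-cost idea: a piecewise-linear cost-loading rate over
    # time, with each project cost segment taken as the integral of that rate
    # over its interval (a trapezoid area). Breakpoints and rates are invented.
    import numpy as np

    t = np.array([0.0, 2.0, 6.0, 9.0, 12.0])       # segment boundaries (months)
    rate = np.array([0.0, 40.0, 40.0, 15.0, 0.0])  # cost-loading rate at boundaries ($k/month)

    def segment_cost(i):
        """Trapezoid area of segment i: integral of the linear rate over [t[i], t[i+1]]."""
        return 0.5 * (rate[i] + rate[i + 1]) * (t[i + 1] - t[i])

    segments = [segment_cost(i) for i in range(len(t) - 1)]
    print("segment costs ($k):", segments, "total:", sum(segments))
    ```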

  16. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
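
    A tiny worked example of the string-versus-symbol cost distinction: with a handful of hypothesized strings and made-up posterior probabilities, the MAP decision (0-1 cost) and the minimum-expected-symbol-error decision can differ, as the sketch below shows.

    ```python
    # Toy illustration of how the Bayes decision can differ under a 0-1 (string)
    # cost versus a symbol-level (here Hamming) cost; the candidate strings and
    # their posterior probabilities are invented.
    hypotheses = {"abc": 0.4, "abd": 0.35, "xbd": 0.25}

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    # 0-1 cost: pick the MAP string.
    map_decision = max(hypotheses, key=hypotheses.get)

    # Symbol-error cost: pick the string minimizing expected Hamming distance.
    def expected_cost(w):
        return sum(p * hamming(w, v) for v, p in hypotheses.items())

    bayes_symbol = min(hypotheses, key=expected_cost)
    print("MAP (0-1 cost):", map_decision, "| min expected symbol error:", bayes_symbol)
    ```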

  17. Testing the Dose–Response Specification in Epidemiology: Public Health and Policy Consequences for Lead

    PubMed Central

    Rothenberg, Stephen J.; Rothenberg, Jesse C.

    2005-01-01

    Statistical evaluation of the dose–response function in lead epidemiology is rarely attempted. Economic evaluation of health benefits of lead reduction usually assumes a linear dose–response function, regardless of the outcome measure used. We reanalyzed a previously published study, an international pooled data set combining data from seven prospective lead studies examining contemporaneous blood lead effect on IQ (intelligence quotient) of 7-year-old children (n = 1,333). We constructed alternative linear multiple regression models with linear blood lead terms (linear–linear dose response) and natural-log–transformed blood lead terms (log-linear dose response). We tested the two lead specifications for nonlinearity in the models, compared the two lead specifications for significantly better fit to the data, and examined the effects of possible residual confounding on the functional form of the dose–response relationship. We found that a log-linear lead–IQ relationship was a significantly better fit than was a linear–linear relationship for IQ (p = 0.009), with little evidence of residual confounding of included model variables. We substituted the log-linear lead–IQ effect in a previously published health benefits model and found that the economic savings due to U.S. population lead decrease between 1976 and 1999 (from 17.1 μg/dL to 2.0 μg/dL) was 2.2 times ($319 billion) that calculated using a linear–linear dose–response function ($149 billion). The Centers for Disease Control and Prevention action limit of 10 μg/dL for children fails to protect against most damage and economic cost attributable to lead exposure. PMID:16140626

  18. Reconstruction of the unknown optimization cost functions from experimental recordings during static multi-finger prehension

    PubMed Central

    Niu, Xun; Terekhov, Alexander V.; Latash, Mark L.; Zatsiorsky, Vladimir M.

    2013-01-01

    The goal of the research is to reconstruct the unknown cost (objective) function(s) presumably used by the neural controller for sharing the total force among individual fingers in multi-finger prehension. The cost function was determined from experimental data by applying the recently developed Analytical Inverse Optimization (ANIO) method (Terekhov et al 2010). The core of the ANIO method is the Theorem of Uniqueness that specifies conditions for unique (with some restrictions) estimation of the objective functions. In the experiment, subjects (n=8) grasped an instrumented handle and maintained it at rest in the air with various external torques, loads, and target grasping forces applied to the object. The experimental data recorded from 80 trials showed a tendency to lie on a 2-dimensional hyperplane in the 4-dimensional finger-force space. Because the constraints in each trial were different, such a propensity is a manifestation of a neural mechanism (not the task mechanics). In agreement with the Lagrange principle for the inverse optimization, the plane of experimental observations was close to the plane resulting from the direct optimization. The latter plane was determined using the ANIO method. The unknown cost function was reconstructed successfully for each performer, as well as for the group data. The cost functions were found to be quadratic with non-zero linear terms. The cost functions obtained with the ANIO method yielded more accurate results than other optimization methods. The ANIO method has an evident potential for addressing the problem of optimization in motor control. PMID:22104742

  19. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
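
    The upper bound mentioned above can be obtained by solving an assignment problem over vertex substitution costs; the sketch below does this with the Hungarian algorithm on an invented cost matrix (the full binary linear program and the lower bound are not reproduced).

    ```python
    # Sketch of the assignment-based upper bound: match vertices of two small
    # attributed graphs by minimizing total substitution cost with the Hungarian
    # algorithm; the cost matrix here is purely illustrative.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # attribute mismatch cost between vertices of graph G1 (rows) and G2 (cols)
    cost = np.array([[0.0, 2.0, 3.0],
                     [2.0, 0.5, 2.0],
                     [3.0, 2.0, 0.0]])

    rows, cols = linear_sum_assignment(cost)
    print("vertex mapping:", list(zip(rows, cols)),
          "upper bound on edit cost:", cost[rows, cols].sum())
    ```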

  20. Novel RF and microwave components employing ferroelectric and solid-state tunable capacitors for multi-functional wireless communication systems

    NASA Astrophysics Data System (ADS)

    Tombak, Ali

    The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.

  1. Resource utilisation and direct costs in patients with recently diagnosed fibromyalgia who are offered one of three different interventions in a randomised pragmatic trial.

    PubMed

    van Eijk-Hustings, Yvonne; Kroese, Mariëlle; Creemers, An; Landewé, Robert; Boonen, Annelies

    2016-05-01

    The purpose of this study is to understand the course of costs over a 2-year period in a cohort of recently diagnosed fibromyalgia (FM) patients receiving different treatment strategies. Following the diagnosis, patients were randomly assigned to a multidisciplinary programme (MD), aerobic exercise (AE) or usual care (UC) without being aware of alternative interventions. Time between diagnosis and start of treatment varied between patients. Resource utilisation, health care costs and costs for patients and families were collected through cost diaries. Mixed linear model analyses (MLM) examined the course of costs over time. Linear regression was used to explore predictors of health care costs in the post-intervention period. Two hundred three participants, 90 % women, mean (SD) age 41.7 (9.8) years, were included in the cohort. Intervention costs per patient varied from €864 to 1392 for MD and were €121 for AE. Health care costs (excluding intervention costs) decreased after diagnosis, but before the intervention in each group, and increased again afterwards to the level close to the diagnostic phase. In contrast, patient and family costs slightly increased over time in all groups without initial decrease immediately after diagnosis. Annualised health care costs post-intervention varied between €1872 and 2310 per patient and were predicted by worse functioning and high health care costs at diagnosis. In patients with FM, health care costs decreased following the diagnosis by a rheumatologist. Offering patients a specific intervention after diagnosis incurred substantial costs while having only marginal effects on costs.

  2. A new approach to approximating the linear quadratic optimal control law for hereditary systems with control delays

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.

  3. Experimental and Theoretical Results in Output Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Leang, K.; Devasia, S.

    1998-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve tracking of the required output may cause excessive vibrations in the structure. We pose and solve this problem, in the context of linear systems, as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  4. Cyber-Physical Attacks With Control Objectives

    DOE PAGES

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2017-08-18

    This work studies attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.

  5. Cyber-Physical Attacks With Control Objectives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    This study examines attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.

  6. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  7. Exploring Duopoly Markets with Conjectural Variations

    ERIC Educational Resources Information Center

    Julien, Ludovic A.; Musy, Olivier; Saïdi, Aurélien W.

    2014-01-01

    In this article, the authors investigate competitive firm behaviors in a two-firm environment assuming linear cost and demand functions. By introducing conjectural variations, they capture the different market structures as specific configurations of a more general model. Conjectural variations are based on the assumption that each firm believes…

  8. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    NASA Astrophysics Data System (ADS)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment problems (SCUC) within the Lagrangian framework: one is to obtain feasible units' states (UC), the other is power economic dispatch (ED) for each unit. For fixed feasible unit states, an accurate ED solution is important for enhancing the efficiency of the overall SCUC solution. Two novel linear-programming-based methods for solving ED, named the Convex Combinatorial Coefficient Method and the Power Increment Method, are proposed based on a piecewise linear approximation of the nonlinear convex fuel cost functions. Numerical testing results show that the methods are effective and efficient.
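
    To make the piecewise-linearization idea concrete, the sketch below dispatches two hypothetical units with convex quadratic fuel costs by splitting each unit's output range into linear segments and solving the resulting LP with scipy. The unit data and segment counts are invented for illustration, and this is a generic piecewise-linear ED formulation rather than the paper's Convex Combinatorial Coefficient or Power Increment method.

    import numpy as np
    from scipy.optimize import linprog

    demand = 300.0
    units = [
        {"pmin": 50.0, "pmax": 200.0, "cost": lambda p: 0.002 * p**2 + 10 * p},
        {"pmin": 30.0, "pmax": 150.0, "cost": lambda p: 0.004 * p**2 + 8 * p},
    ]
    n_seg = 4
    c, bounds, col_unit = [], [], []
    for i, u in enumerate(units):
        bp = np.linspace(u["pmin"], u["pmax"], n_seg + 1)
        for a, b in zip(bp[:-1], bp[1:]):
            # marginal cost of each segment (increasing, since the cost is convex)
            c.append((u["cost"](b) - u["cost"](a)) / (b - a))
            bounds.append((0.0, b - a))
            col_unit.append(i)
    # each unit runs at least at pmin; the segments cover the remaining demand
    A_eq = [[1.0] * len(c)]
    b_eq = [demand - sum(u["pmin"] for u in units)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    dispatch = [u["pmin"] for u in units]
    for x, i in zip(res.x, col_unit):
        dispatch[i] += x
    print("dispatch:", dispatch)

    Because the fuel costs are convex, the segment slopes within each unit are increasing and the LP naturally fills the cheaper segments first, so no extra ordering constraints are needed.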

  9. Use of nonlinear programming to optimize performance response to energy density in broiler feed formulation.

    PubMed

    Guevara, V R

    2004-02-01

    A nonlinear programming optimization model was developed to maximize margin over feed cost in broiler feed formulation and is described in this paper. The model identifies the optimal feed mix that maximizes profit margin. Optimum metabolizable energy level and performance were found by using Excel Solver nonlinear programming. Data from an energy density study with broilers were fitted to quadratic equations to express weight gain, feed consumption, and the objective function income over feed cost in terms of energy density. Nutrient:energy ratio constraints were transformed into equivalent linear constraints. National Research Council nutrient requirements and feeding program were used for examining changes in variables. The nonlinear programming feed formulation method was used to illustrate the effects of changes in different variables on the optimum energy density, performance, and profitability and was compared with conventional linear programming. To demonstrate the capabilities of the model, I determined the impact of variation in prices. Prices for broiler, corn, fish meal, and soybean meal were increased and decreased by 25%. Formulations were identical in all other respects. Energy density, margin, and diet cost changed compared with conventional linear programming formulation. This study suggests that nonlinear programming can be more useful than conventional linear programming to optimize performance response to energy density in broiler feed formulation because an energy level does not need to be set.
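
    A minimal sketch of the margin-maximization idea, using scipy in place of Excel Solver: quadratic response curves for weight gain and feed intake are assumed (the coefficients and prices below are invented, not the paper's fitted values), and the energy density that maximizes income over feed cost is found numerically.

    from scipy.optimize import minimize_scalar

    def gain(e):   # weight gain (kg/bird) vs dietary energy density e (Mcal/kg)
        return -0.35 * e**2 + 2.4 * e - 2.1

    def feed(e):   # feed intake (kg/bird) vs dietary energy density
        return 0.10 * e**2 - 1.1 * e + 5.2

    def margin(e, broiler_price=1.2, feed_price_per_mcal=0.05):
        # income over feed cost; feed price assumed to rise with energy density
        return broiler_price * gain(e) - feed_price_per_mcal * e * feed(e)

    res = minimize_scalar(lambda e: -margin(e), bounds=(2.8, 3.4), method="bounded")
    print("optimal energy density (Mcal/kg):", round(res.x, 3))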

  10. Aggression and Adaptive Functioning: The Bright Side to Bad Behavior.

    ERIC Educational Resources Information Center

    Hawley, Patricia H.; Vaughn, Brian E.

    2003-01-01

    Asserts that effective children and adolescents can engage in socially undesirable behavior to attain personal goals at relatively little personal or interpersonal cost, implying that relations between adjustment and aggression may not be optimally described by standard linear models. Suggests that if researchers recognize that some aggression…

  11. Application of Output Predictive Algorithmic Control to a Terrain Following Aircraft System.

    DTIC Science & Technology

    1982-03-01

    non-linear regime the results from an optimal control solution may be questionable. ... strongly influenced by two other factors as well - the sample time T and the least-squares cost function Q, unlike the deadbeat control law of Ref. ... design of aircraft control systems since these methods offer tremendous insight into the dynamic behavior of the system at relatively low cost. However ...

  12. Multi-objective possibilistic model for portfolio selection with transaction cost

    NASA Astrophysics Data System (ADS)

    Jana, P.; Roy, T. K.; Mazumder, S. K.

    2009-06-01

    In this paper, we introduce the possibilistic mean value and variance of continuous distributions, rather than probability distributions. We propose a multi-objective portfolio-based model and add an entropy objective function to generate a well diversified asset portfolio within an optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.

  13. The Impact of Biomass Feedstock Supply Variability on the Delivered Price to a Biorefinery in the Peace River Region of Alberta, Canada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephen, Jamie; Sokhansanj, Shahabaddine; Bi, X.T.

    2010-01-01

    Agricultural residue feedstock availability in a given region can vary significantly over the 20-25 year lifetime of a biorefinery. Since the delivered price of biomass feedstock to a biorefinery is related to the distance travelled and equipment optimization, and transportation distance increases as productivity decreases, productivity is a primary determinant of feedstock price. Using the Integrated Biomass Supply Analysis and Logistics (IBSAL) modeling environment and a standard round bale harvest and delivery scenario, harvest and delivery price were modelled for minimum, average, and maximum yields at four potential biorefinery sites in the Peace River region of Alberta, Canada. Biorefinery capacities ranged from 50,000 to 500,000 tonnes per year. Delivery cost is a linear function of transportation distance and can be combined with a polynomial harvest function to create a generalized delivered cost function for agricultural residues. The range in delivered cost is substantial and is an important consideration for the operating costs of a biorefinery.

  14. Cost Estimation of Naval Ship Acquisition.

    DTIC Science & Technology

    1983-12-01

    one a 9-subsystem model, the other a single total cost model. The models were developed using the linear least squares regression technique with ... to Linear Statistical Models, McGraw-Hill, 1961. 11. Helmer, F. T., Bibliography on Pricing Methodology and Cost Estimating, Dept. of Economics and ... Keywords: cost estimation; acquisition; parametric cost estimate; linear

  15. A life cycle cost economics model for projects with uniformly varying operating costs. [management planning

    NASA Technical Reports Server (NTRS)

    Remer, D. S.

    1977-01-01

    A mathematical model is developed for calculating the life cycle costs for a project where the operating costs increase or decrease in a linear manner with time. The life cycle cost is shown to be a function of the investment costs, initial operating costs, operating cost gradient, project life time, interest rate for capital and salvage value. The results show that the life cycle cost for a project can be grossly underestimated (or overestimated) if the operating costs increase (or decrease) uniformly over time rather than being constant as is often assumed in project economic evaluations. The following range of variables is examined: (1) project life from 2 to 30 years; (2) interest rate from 0 to 15 percent per year; and (3) operating cost gradient from 5 to 90 percent of the initial operating costs. A numerical example plus tables and graphs is given to help calculate project life cycle costs over a wide range of variables.
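
    A small sketch of this kind of computation, using the standard engineering-economics present-value factors for a uniform cost gradient; it assumes end-of-year cash flows, a positive interest rate, and the first gradient increment at the end of year two, and is not taken from the cited report.

    def life_cycle_cost(investment, op_cost0, gradient, salvage, i, n):
        # Present value of: the initial investment, an annuity at the first-year
        # operating cost, a uniform gradient that changes the operating cost by
        # a fixed amount each year, minus the discounted salvage value.
        pv_annuity = op_cost0 * (1 - (1 + i) ** -n) / i
        pv_gradient = gradient * ((1 + i) ** n - i * n - 1) / (i ** 2 * (1 + i) ** n)
        pv_salvage = salvage / (1 + i) ** n
        return investment + pv_annuity + pv_gradient - pv_salvage

    # hypothetical project: $1M investment, $50k first-year operating cost,
    # costs rising $5k per year, $100k salvage, 8% interest, 15-year life
    print(life_cycle_cost(1e6, 50e3, 5e3, 1e5, 0.08, 15))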

  16. Convert a low-cost sensor to a colorimeter using an improved regression method

    NASA Astrophysics Data System (ADS)

    Wu, Yifeng

    2008-01-01

    Closed loop color calibration is a process to maintain consistent color reproduction for color printers. To perform closed loop color calibration, a pre-designed color target should be printed and automatically measured by a color measuring instrument. A low cost sensor has been embedded in the printer to perform the color measurement. A series of sensor calibration and color conversion methods have been developed. The purpose is to get accurate colorimetric measurements from the data measured by the low cost sensor. In order to get high accuracy colorimetric measurements, we need to carefully calibrate the sensor and minimize all possible errors during the color conversion. After comparing several classical color conversion methods, a regression based color conversion method has been selected. Regression is a powerful method for estimating the color conversion functions. But the main difficulty in using this method is finding an appropriate function to describe the relationship between the input and the output data. In this paper, we propose to use 1D pre-linearization tables to improve the linearity between the input sensor measuring data and the output colorimetric data. Using this method, we can increase the accuracy of the regression method, so as to improve the accuracy of the color conversion.
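
    The sketch below illustrates the general shape of such a pipeline: per-channel 1D pre-linearization followed by a polynomial regression from sensor RGB to CIE XYZ. The gamma curves, the second-order polynomial terms, and the random stand-in data are assumptions for illustration; the paper's actual lookup tables and regression form may differ.

    import numpy as np

    rgb_raw = np.random.rand(50, 3)   # stand-in for raw sensor readings of patches
    xyz_ref = np.random.rand(50, 3)   # stand-in for reference colorimetric values

    def prelinearize(rgb, gamma=(2.2, 2.2, 2.2)):
        # 1D per-channel tables; a simple gamma curve stands in for measured LUTs
        return np.column_stack([rgb[:, k] ** (1.0 / g) for k, g in enumerate(gamma)])

    def design_matrix(rgb):
        r, g, b = rgb.T
        return np.column_stack([np.ones_like(r), r, g, b,
                                r * g, r * b, g * b, r * r, g * g, b * b])

    # least-squares fit of a second-order polynomial from linearized RGB to XYZ
    X = design_matrix(prelinearize(rgb_raw))
    coeffs, *_ = np.linalg.lstsq(X, xyz_ref, rcond=None)

    def sensor_to_xyz(rgb):
        return design_matrix(prelinearize(np.atleast_2d(rgb))) @ coeffs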

  17. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Pade series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.

  18. On real-space Density Functional Theory for non-orthogonal crystal systems: Kronecker product formulation of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Sharma, Abhiraj; Suryanarayana, Phanish

    2018-05-01

    We present an accurate and efficient real-space Density Functional Theory (DFT) framework for the ab initio study of non-orthogonal crystal systems. Specifically, employing a local reformulation of the electrostatics, we develop a novel Kronecker product formulation of the real-space kinetic energy operator that significantly reduces the number of operations associated with the Laplacian-vector multiplication, the dominant cost in practical computations. In particular, we reduce the scaling with respect to finite-difference order from quadratic to linear, thereby significantly bridging the gap in computational cost between non-orthogonal and orthogonal systems. We verify the accuracy and efficiency of the proposed methodology through selected examples.

  19. Supervised Variational Relevance Learning, An Analytic Geometric Feature Selection with Applications to Omic Datasets.

    PubMed

    Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor

    2015-01-01

    We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance based similarity in pattern classification, inspired by relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by doing linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko

    Long-range corrected density functional theory (LC-DFT) attracts many chemists' attention as a quantum chemical method applicable to large molecular systems and their property calculations. However, the expensive time cost of evaluating the long-range HF exchange is a major obstacle that must be overcome before the method can be applied to large molecular systems and solid state materials. To address this problem, we propose a linear-scaling method for the HF exchange integration, in particular for the LC-DFT hybrid functional.

  1. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.

  2. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

    An algorithm for design via parameter optimization has been developed for linear-time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to nominal, which involves in general the error between the system and nominal response, its derivatives and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for stability of linear invariant systems.

  3. On the Inefficiency of Equilibria in Linear Bottleneck Congestion Games

    NASA Astrophysics Data System (ADS)

    de Keijzer, Bart; Schäfer, Guido; Telelis, Orestis A.

    We study the inefficiency of equilibrium outcomes in bottleneck congestion games. These games model situations in which strategic players compete for a limited number of facilities. Each player allocates his weight to a (feasible) subset of the facilities with the goal to minimize the maximum (weight-dependent) latency that he experiences on any of these facilities. We derive upper and (asymptotically) matching lower bounds on the (strong) price of anarchy of linear bottleneck congestion games for a natural load balancing social cost objective (i.e., minimize the maximum latency of a facility). We restrict our studies to linear latency functions. Linear bottleneck congestion games still constitute a rich class of games and generalize, for example, load balancing games with identical or uniformly related machines with or without restricted assignments.

  4. Multicategory nets of single-layer perceptrons: complexity and sample-size issues.

    PubMed

    Raudys, Sarunas; Kybartas, Rimantas; Zavadskas, Edmundas Kazimieras

    2010-05-01

    The standard cost function of multicategory single-layer perceptrons (SLPs) does not minimize the classification error rate. In order to reduce classification error, it is necessary to: 1) abandon the traditional cost function, 2) obtain near-optimal pairwise linear classifiers by specially organized SLP training and optimal stopping, and 3) fuse their decisions properly. To obtain better classification in unbalanced training set situations, we introduce an unbalance correcting term. It was found that fusion based on the Kullback-Leibler (K-L) distance and the Wu-Lin-Weng (WLW) method result in approximately the same performance in situations where sample sizes are relatively small. This observation is explained by the theoretically known fact that excessive minimization of inexact criteria becomes harmful at times. Comprehensive comparative investigations of six real-world pattern recognition (PR) problems demonstrated that employment of SLP-based pairwise classifiers is comparable to, and as often as not outperforms, the linear support vector (SV) classifiers in moderate dimensional situations. The colored noise injection used to design pseudovalidation sets proves to be a powerful tool for facilitating finite sample problems in moderate-dimensional PR tasks.

  5. The role of retinal bipolar cell in early vision: an implication with analogue networks and regularization theory.

    PubMed

    Yagi, T; Ohshima, S; Funahashi, Y

    1997-09-01

    A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.

  6. On a stochastic control method for weakly coupled linear systems. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kwong, R. H.

    1972-01-01

    The stochastic control of two weakly coupled linear systems with different controllers is considered. Each controller only makes measurements about his own system; no information about the other system is assumed to be available. Based on the noisy measurements, the controllers are to generate independently suitable control policies which minimize a quadratic cost functional. To account for the effects of weak coupling directly, an approximate model, which involves replacing the influence of one system on the other by a white noise process, is proposed. A simple suboptimal control problem for calculating the covariances of these noises is solved using the matrix minimum principle. The overall system performance based on this scheme is analyzed as a function of the degree of intersystem coupling.

  7. Stochastic Control of Energy Efficient Buildings: A Semidefinite Programming Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiao; Dong, Jin; Djouadi, Seddik M

    2015-01-01

    The key goal in energy efficient buildings is to reduce energy consumption of Heating, Ventilation, and Air-Conditioning (HVAC) systems while maintaining a comfortable temperature and humidity in the building. This paper proposes a novel stochastic control approach for achieving joint performance and power control of HVAC. We employ a constrained Stochastic Linear Quadratic Control (cSLQC) by minimizing a quadratic cost function with a disturbance assumed to be Gaussian. The problem is formulated to minimize the expected cost subject to a linear constraint and a probabilistic constraint. By using cSLQC, the problem is reduced to a semidefinite optimization problem, where the optimal control can be computed efficiently by semidefinite programming (SDP). Simulation results are provided to demonstrate the effectiveness and power efficiency of the proposed control approach.

  8. Parkinsonian Balance Deficits Quantified Using a Game Industry Board and a Specific Battery of Four Paradigms.

    PubMed

    Darbin, Olivier; Gubler, Coral; Naritoku, Dean; Dees, Daniel; Martino, Anthony; Adams, Elizabeth

    2016-01-01

    This study describes a cost-effective screening protocol for parkinsonism based on combined objective and subjective monitoring of balance function. Objective evaluation of balance function was performed using a game industry balance board and automated analysis of the dynamics of the center of pressure in the time, frequency, and non-linear domains, collected during short series of stand-up tests with different modalities and severities of sensorial deprivation. The subjective measurement of balance function was performed using the Dizziness Handicap Inventory questionnaire. Principal component analyses on both objective and subjective measurements of balance function yielded a specificity and selectivity for parkinsonian patients (vs. healthy subjects) of 0.67 and 0.71, respectively. The findings are discussed regarding the relevance of a cost-effective balance-based screening system as a strategy to meet the needs of broader and earlier screening for parkinsonism in communities with limited access to healthcare.

  9. Investigation and appreciation of optimal output feedback. Volume 1: A convergent algorithm for the stochastic infinite-time discrete optimal output feedback problem

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The stochastic, infinite time, discrete output feedback problem for time invariant linear systems is examined. Two sets of sufficient conditions for the existence of a stable, globally optimal solution are presented. An expression for the total change in the cost function due to a change in the feedback gain is obtained. This expression is used to show that a sequence of gains can be obtained by an algorithm, so that the corresponding cost sequence is monotonically decreasing and the corresponding sequence of the cost gradient converges to zero. The algorithm is guaranteed to obtain a critical point of the cost function. The computational steps necessary to implement the algorithm on a computer are presented. The results are applied to a digital outer loop flight control problem. The numerical results for this 13th order problem indicate a rate of convergence considerably faster than two other algorithms used for comparison.

  10. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  11. Waste management under multiple complexities: Inexact piecewise-linearization-based fuzzy flexible programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2

    2012-06-15

    Highlights: Inexact piecewise-linearization-based fuzzy flexible programming is proposed. It is the first application to waste management under multiple complexities. It tackles nonlinear economies-of-scale effects in interval-parameter constraints. It estimates costs more accurately than the linear-regression-based model. Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transformed from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities.

  12. Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function

    NASA Astrophysics Data System (ADS)

    Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian

    2010-06-01

    In this paper, the S-Curve membership function methodology is used in a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts simultaneously to minimize the total order costs, the number of rejected items and the number of late delivered items with reference to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in VS problems, with linear membership functions.

  13. Graph-cut based discrete-valued image reconstruction.

    PubMed

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  14. Identifying factors of activities of daily living important for cost and caregiver outcomes in Alzheimer's disease.

    PubMed

    Reed, Catherine; Belger, Mark; Vellas, Bruno; Andrews, Jeffrey Scott; Argimon, Josep M; Bruno, Giuseppe; Dodel, Richard; Jones, Roy W; Wimo, Anders; Haro, Josep Maria

    2016-02-01

    We aimed to obtain a better understanding of how different aspects of patient functioning affect key cost and caregiver outcomes in Alzheimer's disease (AD). Baseline data from a prospective observational study of community-living AD patients (GERAS) were used. Functioning was assessed using the Alzheimer's Disease Cooperative Study-Activities of Daily Living Scale. Generalized linear models were conducted to analyze the relationship between scores for total activities of daily living (ADL), basic ADL (BADL), instrumental ADL (IADL), ADL subdomains (confirmed through factor analysis) and individual ADL questions, and total societal costs, patient healthcare and social care costs, total and supervision caregiver time, and caregiver burden. Four distinct ADL subdomains were confirmed: basic activities, domestic/household activities, communication, and outside activities. Higher total societal costs were associated with impairments in all aspects of ADL, including all subdomains; patient costs were associated with total ADL and BADL, and basic activities subdomain scores. Both total and supervision caregiver hours were associated with total ADL and IADL scores, and domestic/household and outside activities subdomain scores (greater hours associated with greater functional impairments). There was no association between caregiver burden and BADL or basic activities subdomain scores. The relationship between total ADL, IADL, and the outside activities subdomain and outcomes differed between patients with mild and moderate-to-severe AD. Identification of ADL subdomains may lead to a better understanding of the association between patient function and costs and caregiver outcomes at different stages of AD, in particular the outside activities subdomain within mild AD.

  15. An EOQ model for weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

    In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where holding costs are a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by finding the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples, and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.

  16. Pulsed-focusing recirculating linacs for muon acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Rolland

    2014-12-31

    Since the muon has a short lifetime, fast acceleration is essential for high-energy applications such as muon colliders, Higgs factories, or neutrino factories. The best one can do is to make a linear accelerator with the highest possible accelerating gradient to make the accelerating time as short as possible. However, the cost of such a single linear accelerator is prohibitively large due to expensive power sources, cavities, tunnels, and related infrastructure. As was demonstrated in the Thomas Jefferson Accelerator Facility (Jefferson Lab) Continuous Electron Beam Accelerator Facility (CEBAF), an elegant solution to reduce cost is to use magnetic return arcs to recirculate the beam through the accelerating RF cavities many times, where they gain energy on each pass. In such a Recirculating Linear Accelerator (RLA), the magnetic focusing strength diminishes as the beam energy increases in a conventional linac that has constant strength quadrupoles. After some number of passes the focusing strength is insufficient to keep the beam from going unstable and being lost. In this project, the use of fast pulsed quadrupoles in the linac sections was considered for stronger focusing as a function of time to allow more successive passes of a muon beam in a recirculating linear accelerator. In one simulation, it was shown that the number of passes could be increased from 8 to 12 using pulsed magnet designs that have been developed and tested. This could reduce the cost of linac sections of a muon RLA by a factor of 8/12, where more improvement is still possible. The expense of a greater number of passes and corresponding number of return arcs was also addressed in this project by exploring the use of ramped or FFAG-style magnets in the return arcs. A better solution, invented in this project, is to use combined-function dipole-quadrupole magnets to simultaneously transport two beams of different energies through one magnet string to reduce the cost of return arcs by almost a factor of two. A patent application was filed for this invention and a detailed report published in Physical Review Special Topics. A scaled model using an electron beam was developed and proposed to test the concept of a dog bone RLA with combined-function return arcs. The efforts supported by this grant were reported in a series of contributions to particle accelerator conferences that are reproduced in the appendices and summarized in the body of this report.

  17. Minimization for conditional simulation: Relationship to optimal transport

    NASA Astrophysics Data System (ADS)

    Oliver, Dean S.

    2014-05-01

    In this paper, we consider the problem of generating independent samples from a conditional distribution when independent samples from the prior distribution are available. Although there are exact methods for sampling from the posterior (e.g. Markov chain Monte Carlo or acceptance/rejection), these methods tend to be computationally demanding when evaluation of the likelihood function is expensive, as it is for most geoscience applications. As an alternative, in this paper we discuss deterministic mappings of variables distributed according to the prior to variables distributed according to the posterior. Although any deterministic mappings might be equally useful, we will focus our discussion on a class of algorithms that obtain implicit mappings by minimization of a cost function that includes measures of data mismatch and model variable mismatch. Algorithms of this type include quasi-linear estimation, randomized maximum likelihood, perturbed observation ensemble Kalman filter, and ensemble of perturbed analyses (4D-Var). When the prior pdf is Gaussian and the observation operators are linear, we show that these minimization-based simulation methods solve an optimal transport problem with a nonstandard cost function. When the observation operators are nonlinear, however, the mapping of variables from the prior to the posterior obtained from those methods is only approximate. Errors arise from neglect of the Jacobian determinant of the transformation and from the possibility of discontinuous mappings.

  18. The Non-Linear Relationship between BMI and Health Care Costs and the Resulting Cost Fraction Attributable to Obesity.

    PubMed

    Laxy, Michael; Stark, Renée; Peters, Annette; Hauner, Hans; Holle, Rolf; Teuner, Christina M

    2017-08-30

    This study aims to analyse the non-linear relationship between Body Mass Index (BMI) and direct health care costs, and to quantify the resulting cost fraction attributable to obesity in Germany. Five cross-sectional surveys of cohort studies in southern Germany were pooled, resulting in data of 6757 individuals (31-96 years old). Self-reported information on health care utilisation was used to estimate direct health care costs for the year 2011. The relationship between measured BMI and annual costs was analysed using generalised additive models, and the cost fraction attributable to obesity was calculated. We found a non-linear association of BMI and health care costs with a continuously increasing slope for increasing BMI without any clear threshold. Under the consideration of the non-linear BMI-cost relationship, a shift in the BMI distribution so that the BMI of each individual is lowered by one point is associated with a 2.1% reduction of mean direct costs in the population. If obesity was eliminated, and the BMI of all obese individuals were lowered to 29.9 kg/m², this would reduce the mean direct costs by 4.0% in the population. Results show a non-linear relationship between BMI and health care costs, with very high costs for a few individuals with high BMI. This indicates that population-based interventions in combination with selective measures for very obese individuals might be the preferred strategy.
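
    As a toy illustration of the attributable-fraction calculation (with simulated data and a cubic polynomial standing in for the generalised additive model; none of the numbers correspond to the study's data):

    import numpy as np

    rng = np.random.default_rng(2)
    bmi = rng.normal(27, 4, 5000).clip(18, 45)
    cost = 800 + 15 * (bmi - 22).clip(0) ** 2 + rng.gamma(2, 300, bmi.size)

    coef = np.polyfit(bmi, cost, deg=3)        # crude stand-in for a GAM smoother
    curve = np.poly1d(coef)

    # fraction of mean costs attributable to obesity: re-evaluate the fitted
    # curve after capping every BMI above 29.9 at 29.9
    mean_observed = curve(bmi).mean()
    mean_capped = curve(np.minimum(bmi, 29.9)).mean()
    print(f"attributable fraction: {(mean_observed - mean_capped) / mean_observed:.1%}")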

  19. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package.

    PubMed

    Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-28

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  20. Self-consistent implementation of meta-GGA functionals for the ONETEP linear-scaling electronic structure package

    NASA Astrophysics Data System (ADS)

    Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton

    2016-11-01

    Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.

  1. Comparing Linear Discriminant Function with Logistic Regression for the Two-Group Classification Problem.

    ERIC Educational Resources Information Center

    Fan, Xitao; Wang, Lin

    The Monte Carlo study compared the performance of predictive discriminant analysis (PDA) and that of logistic regression (LR) for the two-group classification problem. Prior probabilities were used for classification, but the cost of misclassification was assumed to be equal. The study used a fully crossed three-factor experimental design (with…

  2. Entry of new pharmacies in the deregulated Norwegian pharmaceuticals market--consequences for costs and availability.

    PubMed

    Rudholm, Niklas

    2008-08-01

    The objective of this study is to analyze the impact of the new regulation concerning entry of pharmacies into the Norwegian pharmaceuticals market in 2001 on the cost and availability of pharmaceutical products. In order to study costs, a translog cost function is estimated using data from the annual reports of a sample of Norwegian pharmacies before and after the deregulation of the market. Linear regression models for the number of pharmacies in each region in Norway are also estimated. The results show that the costs of the individual pharmacies have not decreased as a consequence of the deregulation of the Norwegian pharmaceuticals market. The deregulation of the market did, however, increase the availability of pharmacy services substantially. Increased availability of pharmacy services can be achieved by deregulating pharmaceutical markets as in Norway, but at the expense of increased costs for the pharmacies.
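
    For readers unfamiliar with translog cost functions, the sketch below fits one by OLS on synthetic data with one output and two input prices; the coefficients, variables, and data are invented and do not reproduce the specification estimated in the study.

    import numpy as np
    import statsmodels.api as sm

    def translog_design(y, w1, w2):
        # ln C regressed on ln y, ln w, their (half-)squares and cross terms
        ly, lw1, lw2 = np.log(y), np.log(w1), np.log(w2)
        X = np.column_stack([ly, lw1, lw2,
                             0.5 * ly**2, 0.5 * lw1**2, 0.5 * lw2**2,
                             ly * lw1, ly * lw2, lw1 * lw2])
        return sm.add_constant(X)

    rng = np.random.default_rng(0)
    y = rng.uniform(1e3, 1e4, 200)            # output, e.g. prescriptions filled
    w1 = rng.uniform(20, 40, 200)             # input price 1, e.g. wages
    w2 = rng.uniform(5, 15, 200)              # input price 2, e.g. rent
    cost = np.exp(2 + 0.8 * np.log(y) + 0.3 * np.log(w1) + 0.2 * np.log(w2)
                  + rng.normal(0, 0.1, 200))

    model = sm.OLS(np.log(cost), translog_design(y, w1, w2)).fit()
    print(model.params)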

  3. APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musson, John C.; Seaton, Chad; Spata, Mike F.

    2012-11-01

    Stripline BPM sensors contain inherent non-linearities as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an activation layer, is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.

  4. A Method for Scheduling Air Traffic with Uncertain En Route Capacity Constraints

    NASA Technical Reports Server (NTRS)

    Arneson, Heather; Bloem, Michael

    2009-01-01

    A method for scheduling ground delay and airborne holding for flights scheduled to fly through airspace with uncertain capacity constraints is presented. The method iteratively solves linear programs for departure rates and airborne holding as new probabilistic information about future airspace constraints becomes available. The objective function is the expected value of the weighted sum of ground and airborne delay. In order to limit operationally costly changes to departure rates, they are updated only when such an update would lead to a significant cost reduction. Simulation results show a 13% cost reduction over a rough approximation of current practices. Comparison between the proposed as-needed replanning method and a similar method that uses fixed-frequency replanning shows a typical cost reduction of 1% to 2%, and up to a 20% cost reduction in some cases.
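
    A toy two-scenario sketch of the weighted trade-off in the objective: choose how much ground delay to assign to 10 scheduled flights, with airborne holding absorbing whatever exceeds the realized en route capacity in each scenario. The weights, capacities, and probabilities are invented, and this is far simpler than the paper's iteratively re-solved formulation.

    from scipy.optimize import linprog

    flights = 10
    w_ground, w_air = 1.0, 3.0                 # airborne delay weighted heavier
    scenarios = [(0.6, 8), (0.4, 5)]           # (probability, en route capacity)

    # variables: [g, a_1, a_2] = ground delay, airborne holding per scenario
    c = [w_ground] + [p * w_air for p, _ in scenarios]
    # in each scenario, delayed flights must cover the capacity shortfall:
    # g + a_s >= flights - capacity_s
    A_ub = [[-1, -1, 0], [-1, 0, -1]]
    b_ub = [cap - flights for _, cap in scenarios]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, flights)] * 3)
    print(dict(zip(["ground", "hold_s1", "hold_s2"], res.x)))

    In this toy instance ground delay is cheaper per unit and covers both scenarios, so the optimum places all delay on the ground, which illustrates how the cost weights steer the plan.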

  5. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

    Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high and low resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varying registration parameters are cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), interpolation method (linear, windowed-sinc and B-spline) and sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828 +/- 0.021 and 0.25 +/- 0.09 mm, respectively). Therefore, B-spline registration with cost function NC, interpolation B-spline and sampling percentage 100% can be the foundation for developing an optimized atlas-based segmentation algorithm for intracochlear structures in clinical CT images.

  6. A comparison of methods to handle skew distributed cost variables in the analysis of the resource consumption in schizophrenia treatment.

    PubMed

    Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C

    2002-03-01

    Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. The study compares the advantages and disadvantages of different methods to estimate regression based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model and a generalized linear model with a log-link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator by White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSE were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs. Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE if the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. As a result of the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM showed the weakest model fit again. None of the differences between the RMSE resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSE were not significant. Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSE for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a lack of sample size adequate to detect important differences among the estimators employed. Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimation of standard errors and confidence intervals by non-parametric methods, which are robust against deviations from the normal distribution and from homoscedasticity of residuals, are suitable alternatives to transformation of the skew-distributed dependent variable. Further studies with more adequate case numbers are needed to confirm the results.
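
    The modelling choices being compared can be sketched in a few lines using synthetic skewed cost data, a single covariate, and statsmodels in place of the software actually used; Duan's smearing factor is shown as one common retransformation and may differ from the bias corrections applied in the study.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 254
    symptoms = rng.normal(0, 1, n)
    X = sm.add_constant(symptoms)
    costs = np.exp(8 + 0.5 * symptoms + rng.normal(0, 0.8, n))   # right-skewed

    ols = sm.OLS(costs, X).fit(cov_type="HC1")                   # robust (White) SEs
    log_ols = sm.OLS(np.log(costs), X).fit()
    glm = sm.GLM(costs, X,
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    # retransform log-OLS predictions to the cost scale with Duan's smearing factor
    smear = np.mean(np.exp(log_ols.resid))
    pred_log_ols = np.exp(log_ols.fittedvalues) * smear
    print(ols.params, log_ols.params, glm.params, sep="\n")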

  7. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.

  8. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates (ε̇). The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns viscoelastic constants similar to those of the original epsilon dot method, which is based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.

  9. Simple Procedure to Compute the Inductance of a Toroidal Ferrite Core from the Linear to the Saturation Regions

    PubMed Central

    Salas, Rosa Ana; Pleite, Jorge

    2013-01-01

    We propose a specific procedure to compute the inductance of a toroidal ferrite core as a function of the excitation current. The study includes the linear, intermediate and saturation regions. The procedure combines the use of Finite Element Analysis in 2D and experimental measurements. Through the two dimensional (2D) procedure we are able to achieve convergence, a reduction of computational cost and equivalent results to those computed by three dimensional (3D) simulations. The validation is carried out by comparing 2D, 3D and experimental results. PMID:28809283

  10. DQM: Decentralized Quadratically Approximated Alternating Direction Method of Multipliers

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Shi, Wei; Ling, Qing; Ribeiro, Alejandro

    2016-10-01

    This paper considers decentralized consensus optimization problems where nodes of a network have access to different summands of a global objective function. Nodes cooperate to minimize the global objective by exchanging information with neighbors only. A decentralized version of the alternating directions method of multipliers (DADMM) is a common method for solving this category of problems. DADMM exhibits linear convergence rate to the optimal objective but its implementation requires solving a convex optimization problem at each iteration. This can be computationally costly and may result in large overall convergence times. The decentralized quadratically approximated ADMM algorithm (DQM), which minimizes a quadratic approximation of the objective function that DADMM minimizes at each iteration, is proposed here. The consequent reduction in computational time is shown to have minimal effect on convergence properties. Convergence still proceeds at a linear rate with a guaranteed constant that is asymptotically equivalent to the DADMM linear convergence rate constant. Numerical results demonstrate advantages of DQM relative to DADMM and other alternatives in a logistic regression problem.

  11. A generalized fuzzy linear programming approach for environmental management problem under uncertainty.

    PubMed

    Fan, Yurui; Huang, Guohe; Veawab, Amornvadee

    2012-01-01

    In this study, a generalized fuzzy linear programming (GFLP) method was developed to deal with uncertainties expressed as fuzzy sets that exist in the constraints and objective function. A stepwise interactive algorithm (SIA) was advanced to solve the GFLP model and generate solutions expressed as fuzzy sets. To demonstrate its application, the developed GFLP method was applied to a regional sulfur dioxide (SO2) control planning model to identify effective SO2 mitigation policies with a minimized system performance cost under uncertainty. The results were obtained to represent the amount of SO2 allocated to different control measures from different sources. Compared with the conventional interval-parameter linear programming (ILP) approach, the solutions obtained through GFLP were expressed as fuzzy sets, which can provide intervals for the decision variables and objective function, as well as related possibilities. Therefore, the decision makers can make a tradeoff between model stability and plausibility based on the solutions obtained through GFLP and then identify desired policies for SO2-emission control under uncertainty.

  12. Reliability enhancement through optimal burn-in

    NASA Astrophysics Data System (ADS)

    Kuo, W.

    1984-06-01

    A numerical reliability and cost model is defined for production line burn-in tests of electronic components. The necessity of burn-in is governed by upper and lower bounds: burn-in is mandatory for operation-critical or nonrepairable components; no burn-in is needed when failure effects are insignificant or easily repairable. The model considers electronic systems in terms of a series of components connected by a single black box. The infant mortality rate is described with a Weibull distribution. Performance reaches a steady state after burn-in, and the cost of burn-in is a linear function for each component. A minimum cost is calculated over the costs and total time of burn-in, shop repair, and field repair, with attention given to possible losses in future sales from inadequate burn-in testing.
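
    As a rough illustration of this trade-off (not the paper's model), the sketch below minimizes a linear burn-in cost plus expected repair costs under a decreasing-hazard Weibull distribution; every number and the cost decomposition are hypothetical assumptions.

      # Illustrative sketch: choose a burn-in time t minimizing burn-in cost plus
      # expected shop- and field-repair costs, with Weibull infant mortality.
      import numpy as np
      from scipy.optimize import minimize_scalar

      beta, theta = 0.5, 2000.0      # hypothetical Weibull shape (<1) and scale, hours
      c_burnin_per_hour = 0.02       # linear burn-in cost per component per hour
      c_shop, c_field = 5.0, 50.0    # repair cost during burn-in vs. in the field
      mission_time = 8760.0          # one year of field operation

      def weibull_cdf(t):
          return 1.0 - np.exp(-(t / theta) ** beta)

      def expected_cost(t):
          p_shop = weibull_cdf(t)    # probability of failing (cheaply) during burn-in
          # probability of failing in the field, given survival of burn-in
          p_field = (weibull_cdf(t + mission_time) - weibull_cdf(t)) / (1.0 - weibull_cdf(t))
          return c_burnin_per_hour * t + c_shop * p_shop + c_field * p_field

      res = minimize_scalar(expected_cost, bounds=(0.0, 500.0), method="bounded")
      print(f"optimal burn-in time ~ {res.x:.1f} h, expected cost ~ {res.fun:.2f}")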

  13. Quantifying the conservation gains from shared access to linear infrastructure.

    PubMed

    Runge, Claire A; Tulloch, Ayesha I T; Gordon, Ascelin; Rhodes, Jonathan R

    2017-12-01

    The proliferation of linear infrastructure such as roads and railways is a major global driver of cumulative biodiversity loss. One strategy for reducing habitat loss associated with development is to encourage linear infrastructure providers and users to share infrastructure networks. We quantified the reductions in biodiversity impact and capital costs under linear infrastructure sharing of a range of potential mine to port transportation links for 47 mine locations operated by 28 separate companies in the Upper Spencer Gulf Region of South Australia. We mapped transport links based on least-cost pathways for different levels of linear-infrastructure sharing and used expert-elicited impacts of linear infrastructure to estimate the consequences for biodiversity. Capital costs were calculated based on estimates of construction costs, compensation payments, and transaction costs. We evaluated proposed mine-port links by comparing biodiversity impacts and capital costs across 3 scenarios: an independent scenario, where no infrastructure is shared; a restricted-access scenario, where the largest mining companies share infrastructure but exclude smaller mining companies from sharing; and a shared scenario where all mining companies share linear infrastructure. Fully shared development of linear infrastructure reduced overall biodiversity impacts by 76% and reduced capital costs by 64% compared with the independent scenario. However, there was considerable variation among companies. Our restricted-access scenario showed only modest biodiversity benefits relative to the independent scenario, indicating that reductions are likely to be limited if the dominant mining companies restrict access to infrastructure, which often occurs without policies that promote sharing of infrastructure. Our research helps illuminate the circumstances under which infrastructure sharing can minimize the biodiversity impacts of development. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  14. A minimal cost function method for optimizing the age-Depth relation of deep-sea sediment cores

    NASA Astrophysics Data System (ADS)

    Brüggemann, Wolfgang

    1992-08-01

    The question of an optimal age-depth relation for deep-sea sediment cores has been raised frequently. The data from such cores (e.g., δ18O values) are used to test the astronomical theory of ice ages as established by Milankovitch in 1938. In this work, we use a minimal cost function approach to find simultaneously an optimal age-depth relation and a linear model that optimally links solar insolation or other model input with global ice volume. Thus a general tool for the calibration of deep-sea cores to arbitrary tuning targets is presented. In this inverse modeling type approach, an objective function is minimized that penalizes: (1) the deviation of the data from the theoretical linear model (whose transfer function can be computed analytically for a given age-depth relation) and (2) the violation of a set of plausible assumptions about the model, the data and the obtained correction of a first guess age-depth function. These assumptions have been suggested before but are now quantified and incorporated explicitly into the objective function as penalty terms. We formulate an optimization problem that is solved numerically by conjugate gradient type methods. Using this direct approach, we obtain high coherences in the Milankovitch frequency bands (over 90%). Not only the data time series but also the derived correction to a first guess linear age-depth function (and therefore the sedimentation rate) itself contains significant energy in a broad frequency band around 100 kyr. The use of a sedimentation rate which varies continuously on ice age time scales results in a shift of energy from 100 kyr in the original data spectrum to 41, 23, and 19 kyr in the spectrum of the corrected data. However, a large proportion of the data variance remains unexplained, particularly in the 100 kyr frequency band, where there is no significant input by orbital forcing. The presented method is applied to a real sediment core and to the SPECMAP stack, and results are compared with those obtained in earlier investigations.

  15. Electrical cable utilization for wave energy converters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bull, Diana; Baca, Michael; Schenkman, Benjamin

    Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine electrical export cable rating based on expected energy delivery as opposed to device rating as small decreases in energy delivery can result in cost savings.

  16. Electrical cable utilization for wave energy converters

    DOE PAGES

    Bull, Diana; Baca, Michael; Schenkman, Benjamin

    2018-04-27

    Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine electrical export cable rating based on expected energy delivery as opposed to device rating as small decreases in energy delivery can result in cost savings.

  17. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
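
    A minimal demonstration of the underlying idea appears below; it is not the paper's aggregation, activation, or learning rule, and the weights, phase encoding, and threshold are hand-picked assumptions. It only shows that a single unit with phase-encoded inputs and complex weights, thresholded on the magnitude of the weighted sum, can represent XOR, which a conventional real-valued perceptron cannot.

      # Phase-encode bits (0 -> 1, 1 -> i), use complex weights, threshold |w . x|.
      import numpy as np

      def encode(bit):
          return np.exp(1j * np.pi / 2 * bit)   # 0 -> 1, 1 -> i

      w = np.array([1.0, -1.0])                 # hand-picked weights (illustrative)

      def cvn_output(bits, threshold=0.7):
          x = np.array([encode(b) for b in bits])
          return int(abs(np.dot(w, x)) > threshold)

      for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
          print(bits, "->", cvn_output(bits))   # prints the XOR truth table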

  18. Fuzzy Multi-Objective Transportation Planning with Modified S-Curve Membership Function

    NASA Astrophysics Data System (ADS)

    Peidro, D.; Vasant, P.

    2009-08-01

    In this paper, the S-Curve membership function methodology is used in a transportation planning decision (TPD) problem. An interactive method for solving multi-objective TPD problems with fuzzy goals, available supply and forecast demand is developed. The proposed method attempts simultaneously to minimize the total production and transportation costs and the total delivery time with reference to budget constraints and available supply, machine capacities at each source, as well as forecast demand and warehouse space constraints at each destination. We compare in an industrial case the performance of S-curve membership functions, representing uncertainty goals and constraints in TPD problems, with linear membership functions.
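
    The sketch below contrasts a generic logistic-type S-curve membership function with a linear one for a fuzzy cost goal. The functional form, the steepness parameter, and the cost bounds are illustrative assumptions and not necessarily the exact "modified S-curve" parameterisation used in the cited paper.

      # Generic S-curve vs. linear membership for a fuzzy cost goal on [lo, hi].
      import numpy as np

      def s_curve_membership(x, lo, hi, alpha=13.8):
          x = np.clip(x, lo, hi)
          z = (x - lo) / (hi - lo)               # normalise cost to [0, 1]
          return 1.0 / (1.0 + np.exp(alpha * (z - 0.5)))

      def linear_membership(x, lo, hi):
          return np.clip((hi - x) / (hi - lo), 0.0, 1.0)

      for cost in [100_000, 150_000, 200_000]:
          print(cost,
                round(s_curve_membership(cost, 100_000, 200_000), 3),
                round(linear_membership(cost, 100_000, 200_000), 3))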

  19. Waste management under multiple complexities: inexact piecewise-linearization-based fuzzy flexible programming.

    PubMed

    Sun, Wei; Huang, Guo H; Lv, Ying; Li, Gongchen

    2012-06-01

    To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both capacities of waste treatment facilities and waste generation rates can be reflected; and the nonlinear EOS effects transformed from objective function to constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the IPFP's advantages, two alternative models are developed to compare their performances. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2) and the other is an IPFP model with all right-hand-sides of fuzzy constraints being the corresponding interval numbers (IPFP3). The comparison results between IPFP and IPFP2 indicate that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, the IPFP2 may underestimate the net system costs while the IPFP can estimate the costs more accurately. The comparison results between IPFP and IPFP3 indicate that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness for providing more satisfactory interval solutions than IPFP3. Following its first application to waste management, the IPFP can be potentially applied to other environmental problems under multiple complexities. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Daubechies wavelets for linear scaling density functional theory.

    PubMed

    Mohr, Stephan; Ratcliff, Laura E; Boulanger, Paul; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry; Goedecker, Stefan

    2014-05-28

    We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10,000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.

  1. Effect of educational preparation on the accuracy of linear growth measurement in pediatric primary care practices: results of a multicenter nursing study.

    PubMed

    Hench, Karen D; Shults, Justine; Benyi, Terri; Clow, Cheryl; Delaune, Joanne; Gilluly, Kathy; Johnson, Lydia; Johnson, Maryann; Rossiter, Katherine; McKnight-Menci, Heather; Shorkey, Doris; Waite, Fran; Weber, Colleen; Lipman, Terri H

    2005-04-01

    Consistently monitoring a child's linear growth is one of the least invasive, most sensitive tools to identify normal physiologic functioning and a healthy lifestyle. However, studies, mostly from the United Kingdom, indicate that children are frequently measured incorrectly. Inaccurate linear measurements may result in some children having undetected growth disorders, while others with normal growth are referred for costly, unwarranted specialty evaluations. This study presents the secondary analysis of a primary study that used a randomized controlled design to demonstrate that a didactic educational intervention resulted in significantly more children being measured accurately within eight pediatric practices. The secondary analysis explored the influence of the measurer's educational level on the outcome of accurate linear measurement. Results indicated that RNs were twice as likely as non-RNs to measure children accurately.

  2. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.
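
    The core identity the paper exploits is that, once a discrete-time impulse response has been identified, the response to any input (indicial, harmonic, or gust-like random) follows from a single discrete convolution. The sketch below illustrates this with a hypothetical impulse response; it is not the CAP-TSD or CFL3D workflow.

      # Responses to step, harmonic, and random inputs from one impulse response.
      import numpy as np

      dt = 0.01
      t = np.arange(0, 2, dt)
      h = np.exp(-3 * t) * np.sin(20 * t)       # hypothetical identified impulse response

      def response(u):
          # discrete convolution approximating the continuous convolution integral
          return np.convolve(u, h)[: len(u)] * dt

      step = np.ones_like(t)                                    # indicial (step) input
      harmonic = np.sin(8 * t)                                  # forced harmonic input
      gust = np.random.default_rng(0).normal(size=t.size)      # random (gust-like) input

      for name, u in [("step", step), ("harmonic", harmonic), ("gust", gust)]:
          print(name, "final response value:", round(response(u)[-1], 4))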

  3. Identification of Linear and Nonlinear Aerodynamic Impulse Responses Using Digital Filter Techniques

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.

    1997-01-01

    This paper discusses the mathematical existence and the numerically-correct identification of linear and nonlinear aerodynamic impulse response functions. Differences between continuous-time and discrete-time system theories, which permit the identification and efficient use of these functions, will be detailed. Important input/output definitions and the concept of linear and nonlinear systems with memory will also be discussed. It will be shown that indicial (step or steady) responses (such as Wagner's function), forced harmonic responses (such as Theodorsen's function or those from doublet lattice theory), and responses to random inputs (such as gusts) can all be obtained from an aerodynamic impulse response function. This paper establishes the aerodynamic impulse response function as the most fundamental, and, therefore, the most computationally efficient, aerodynamic function that can be extracted from any given discrete-time, aerodynamic system. The results presented in this paper help to unify the understanding of classical two-dimensional continuous-time theories with modern three-dimensional, discrete-time theories. First, the method is applied to the nonlinear viscous Burger's equation as an example. Next the method is applied to a three-dimensional aeroelastic model using the CAP-TSD (Computational Aeroelasticity Program - Transonic Small Disturbance) code and then to a two-dimensional model using the CFL3D Navier-Stokes code. Comparisons of accuracy and computational cost savings are presented. Because of its mathematical generality, an important attribute of this methodology is that it is applicable to a wide range of nonlinear, discrete-time problems.

  4. Does Targeting Higher Health Risk Employees or Increasing Intervention Intensity Yield Savings in a Workplace Wellness Program?

    PubMed

    Kapinos, Kandice A; Caloyeras, John P; Liu, Hangsheng; Mattke, Soeren

    2015-12-01

    This article aims to test whether a workplace wellness program reduces health care cost for higher risk employees or employees with greater participation. The program effect on costs was estimated using a generalized linear model with a log-link function using a difference-in-difference framework with a propensity score matched sample of employees using claims and program data from a large US firm from 2003 to 2011. The program targeting higher risk employees did not yield cost savings. Employees participating in five or more sessions aimed at encouraging more healthful living had about $20 lower per member per month costs relative to matched comparisons (P = 0.002). Our results add to the growing evidence base that workplace wellness programs aimed at primary prevention do not reduce health care cost, with the exception of those employees who choose to participate more actively.

  5. Stochasticity Favoring the Effects of the R&D Strategies of the Firms

    NASA Astrophysics Data System (ADS)

    Pinto, Alberto A.; Oliveira, Bruno M. P. M.; Ferreira, Fernanda A.; Ferreira, Flávio

    We present stochastic dynamics on the production costs of Cournot competitions, based on perfect Nash equilibria of nonlinear R&D investment strategies to reduce the production costs of the firms at every period of the game. We analyse the effects that the R&D investment strategies can have on the profits of the firms over time. We observe that, in certain cases, the uncertainty can improve the effects of the R&D strategies on the profits of the firms due to the non-linearity of the profit functions and also of the R&D parameters.

  6. Relations between Housing Characteristics and the Well-Being of Low-Income Children and Adolescents

    PubMed Central

    Coley, Rebekah Levine; Leventhal, Tama; Lynch, Alicia Doyle; Kull, Melissa

    2013-01-01

    Extant research has highlighted the importance of multiple characteristics of housing, but has not comprehensively assessed a broad range of housing characteristics and their relative contributions to children's well-being. Using a representative, longitudinal sample of low-income children and adolescents from low-income urban neighborhoods (N = 2,437, ages 2 through 21 years) from the Three-City Study, this study assessed housing quality, stability, type (i.e., ownership status and subsidy status), and cost simultaneously to delineate their unique associations with children's development. Hierarchical linear models found that poor housing quality was most consistently associated with children's and adolescents’ development, including worse emotional and behavioral functioning and lower cognitive skills. These associations operated in part through mothers’ psychological functioning. Residential instability showed mixed links with functioning, whereas housing cost and type were not consistently predictive. Results suggest that housing contexts are associated with functioning across the developmental span from early childhood through late adolescence, with some differences in patterns by child age. PMID:23244408

  7. Using block pulse functions for seismic vibration semi-active control of structures with MR dampers

    NASA Astrophysics Data System (ADS)

    Rahimi Gendeshmin, Saeed; Davarnia, Daniel

    2018-03-01

    This article applies the idea of block pulse (BP) functions to the semi-active control of structures. BP functions provide effective tools for approximating complex problems. The applied control algorithm has a major effect on the performance of the controlled system and on the requirements of the control devices. In control problems, it is important to devise an accurate analytical technique with low computational cost. BP functions are proven, fundamental tools in approximation problems and have been applied in disparate areas in recent decades. This study focuses on the employment of BP functions in the control algorithm with the aim of reducing computational cost. Magneto-rheological (MR) dampers are well-known semi-active devices that can be used to control the response of civil structures during earthquakes. For validation purposes, numerical simulations of a 5-story shear building frame with MR dampers are presented. The results of the suggested method were compared with those obtained by controlling the frame with the optimal control method based on linear quadratic regulator theory. The simulation results show that the suggested method can help reduce seismic structural responses; in addition, it has acceptable accuracy and agrees with the optimal control method at lower computational cost.

  8. Simple, Defensible Sample Sizes Based on Cost Efficiency

    PubMed Central

    Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.

    2009-01-01

    Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
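
    Both rules are easy to apply numerically. The sketch below uses hypothetical cost structures (fixed overhead, per-subject cost, and a capacity term), none of which come from the paper; for a purely linear total cost C(n) = f + c·n, the second rule has the closed-form minimum n = f/c.

      # Two cost-efficiency based sample size rules on hypothetical cost functions.
      import numpy as np

      n = np.arange(1, 2001, dtype=float)
      fixed, c1 = 50_000.0, 400.0

      # Rule 2: with linear total cost C(n) = fixed + c1*n, minimise C(n)/sqrt(n).
      cost_linear = fixed + c1 * n
      n_rule2 = n[np.argmin(cost_linear / np.sqrt(n))]
      print("rule 2 (cost / sqrt(n)) minimised at n =", int(n_rule2),
            " analytic n =", fixed / c1)

      # Rule 1: minimising average cost per subject is informative when per-subject
      # costs eventually rise (e.g., recruiting beyond local capacity); the convex
      # term beyond 300 subjects is a hypothetical illustration.
      capacity, c2 = 300.0, 2.0
      cost_convex = fixed + c1 * n + c2 * np.maximum(0.0, n - capacity) ** 2
      n_rule1 = n[np.argmin(cost_convex / n)]
      print("rule 1 (average cost per subject) minimised at n =", int(n_rule1))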

  9. Excess costs from functional somatic syndromes in Germany - An analysis using entropy balancing.

    PubMed

    Grupp, Helen; Kaufmann, Claudia; König, Hans-Helmut; Bleibler, Florian; Wild, Beate; Szecsenyi, Joachim; Herzog, Wolfgang; Schellberg, Dieter; Schäfert, Rainer; Konnopka, Alexander

    2017-06-01

    The aim of this study was to calculate disorder-specific excess costs in patients with functional somatic syndromes (FSS). We compared 6-month direct and indirect costs in a patient group with FSS (n=273) to a control group of the general adult population in Germany without FSS (n=2914). Data on the patient group were collected between 2007 and 2009 in a randomized controlled trial (speciAL). Data on the control group were obtained from a telephone survey, representative for the general German population, conducted in 2014. Covariate balance between the patient group and the control group was achieved using entropy balancing. Excess costs were calculated by estimating generalized linear models and two-part models for direct costs and indirect costs. Further, we estimated excess costs according to the level of somatic symptom severity (SSS). FSS patients differed significantly from the control group regarding 6-month costs of outpatient physicians (+€280) and other outpatient providers (+€74). According to SSS, significantly higher outpatient physician costs were found for mild (+€151), moderate (+€306) and severe (+€376) SSS. We also found significantly higher costs of other outpatient providers in patients with mild, moderate and severe SSS. Regarding costs of rehabilitation and hospital treatments, FSS patients did not differ significantly from the control group for any level of SSS. Indirect costs were significantly higher in patients with severe SSS (+€760). FSS were of major importance in the outpatient sector. Further, we found significantly higher indirect costs in patients with severe SSS. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Functional form and risk adjustment of hospital costs: Bayesian analysis of a Box-Cox random coefficients model.

    PubMed

    Hollenbeak, Christopher S

    2005-10-15

    While risk-adjusted outcomes are often used to compare the performance of hospitals and physicians, the most appropriate functional form for the risk adjustment process is not always obvious for continuous outcomes such as costs. Semi-log models are used most often to correct skewness in cost data, but there has been limited research to determine whether the log transformation is sufficient or whether another transformation is more appropriate. This study explores the most appropriate functional form for risk-adjusting the cost of coronary artery bypass graft (CABG) surgery. Data included patients undergoing CABG surgery at four hospitals in the midwest and were fit to a Box-Cox model with random coefficients (BCRC) using Markov chain Monte Carlo methods. Marginal likelihoods and Bayes factors were computed to perform model comparison of alternative model specifications. Rankings of hospital performance were created from the simulation output and the rankings produced by Bayesian estimates were compared to rankings produced by standard models fit using classical methods. Results suggest that, for these data, the most appropriate functional form is not logarithmic, but corresponds to a Box-Cox transformation of -1. Furthermore, Bayes factors overwhelmingly rejected the natural log transformation. However, the hospital ranking induced by the BCRC model was not different from the ranking produced by maximum likelihood estimates of either the linear or semi-log model. Copyright (c) 2005 John Wiley & Sons, Ltd.
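
    The transformation family at issue can be explored quickly with a standard Box-Cox routine. The sketch below uses hypothetical skewed cost data rather than the CABG data, and simple maximum likelihood for lambda rather than the paper's Bayesian random-coefficients model.

      # Compare the log transformation with a data-driven Box-Cox transformation.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      costs = rng.lognormal(mean=9.5, sigma=0.7, size=500)   # hypothetical skewed costs

      transformed, lam = stats.boxcox(costs)                 # maximum-likelihood lambda
      print("estimated Box-Cox lambda:", round(lam, 2))      # 0 ~ log, 1 ~ identity, -1 ~ reciprocal
      print("skewness raw / log / Box-Cox:",
            round(stats.skew(costs), 2),
            round(stats.skew(np.log(costs)), 2),
            round(stats.skew(transformed), 2))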

  11. An approximation of herd effect due to vaccinating children against seasonal influenza - a potential solution to the incorporation of indirect effects into static models.

    PubMed

    Van Vlaenderen, Ilse; Van Bellinghen, Laure-Anne; Meier, Genevieve; Nautrup, Barbara Poulsen

    2013-01-22

    Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses.
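
    The fitting step described above amounts to a least-squares line through a handful of (coverage, risk reduction) points, which is then used to scale the baseline risk in a static model. The sketch below shows this mechanically; the point estimates are hypothetical placeholders, not the values extracted in the cited review.

      # Fit a linear herd-effect approximation and apply it to a baseline risk.
      import numpy as np

      coverage = np.array([0.20, 0.45, 0.80])    # hypothetical effective coverage in children
      reduction = np.array([0.15, 0.35, 0.60])   # hypothetical relative risk reduction

      slope, intercept = np.polyfit(coverage, reduction, 1)   # least-squares line
      print(f"herd effect ~ {intercept:.2f} + {slope:.2f} * coverage")

      # In a static model, adjust the annual baseline risk of the unvaccinated group
      baseline_risk = 0.10
      cov = 0.50
      adjusted_risk = baseline_risk * (1 - np.clip(intercept + slope * cov, 0, 1))
      print("adjusted annual risk for the unvaccinated group:", round(adjusted_risk, 4))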

  12. How humans integrate the prospects of pain and reward during choice

    PubMed Central

    Talmi, Deborah; Dayan, Peter; Kiebel, Stefan J.; Frith, Chris D.; Dolan, Raymond J.

    2010-01-01

    The maxim “no pain, no gain” summarises scenarios where an action leading to reward also entails a cost. Although we know a substantial amount about how the brain represents pain and reward separately, we know little about how they are integrated during goal directed behaviour. Two theoretical models might account for the integration of reward and pain. An additive model specifies that the disutility of costs is summed linearly with the utility of benefits, while an interactive model suggests that cost and benefit utilities interact so that the sensitivity to benefits is attenuated as costs become increasingly aversive. Using a novel task that required integration of physical pain and monetary reward, we examined the mechanism underlying cost-benefit integration in humans. We provide evidence in support of an interactive model in behavioural choice. Using functional neuroimaging we identify a neural signature for this interaction such that when the consequences of actions embody a mixture of reward and pain, there is an attenuation of a predictive reward-signal in both ventral anterior cingulate cortex and ventral striatum. We conclude that these regions subserve integration of action costs and benefits in humans, a finding that suggests a cross-species similarity in neural substrates that implement this function and illuminates mechanisms that underlie altered decision making under aversive conditions. PMID:19923294

  13. An algorithm for the solution of dynamic linear programs

    NASA Technical Reports Server (NTRS)

    Psiaki, Mark L.

    1989-01-01

    The algorithm's objective is to efficiently solve Dynamic Linear Programs (DLP) by taking advantage of their special staircase structure. This algorithm constitutes a stepping stone to an improved algorithm for solving Dynamic Quadratic Programs, which, in turn, would make the nonlinear programming method of Successive Quadratic Programs more practical for solving trajectory optimization problems. The ultimate goal is to bring trajectory optimization solution speeds into the realm of real-time control. The algorithm exploits the staircase nature of the large constraint matrix of the equality-constrained DLPs encountered when solving inequality-constrained DLPs by an active set approach. A numerically-stable, staircase QL factorization of the staircase constraint matrix is carried out starting from its last rows and columns. The resulting recursion is like the time-varying Riccati equation from multi-stage LQR theory. The resulting factorization increases the efficiency of all of the typical LP solution operations over that of a dense matrix LP code. At the same time numerical stability is ensured. The algorithm also takes advantage of dynamic programming ideas about the cost-to-go by relaxing active pseudo constraints in a backwards sweeping process. This further decreases the cost per update of the LP rank-1 updating procedure, although it may result in more changes of the active set than if pseudo constraints were relaxed in a non-stagewise fashion. The usual stability of closed-loop Linear/Quadratic optimally-controlled systems, if it carries over to strictly linear cost functions, implies that the saving due to reduced factor update effort may outweigh the cost of an increased number of updates. An aerospace example is presented in which a ground-to-ground rocket's distance is maximized. This example demonstrates the applicability of this class of algorithms to aerospace guidance. It also sheds light on the efficacy of the proposed pseudo constraint relaxation scheme.

  14. Price dynamics and market power in an agent-based power exchange

    NASA Astrophysics Data System (ADS)

    Cincotti, Silvano; Guerci, Eric; Raberto, Marco

    2005-05-01

    This paper presents an agent-based model of a power exchange. Supply of electric power is provided by competing generating companies, whereas demand is assumed to be inelastic with respect to price and is constant over time. The transmission network topology is assumed to be a fully connected graph and no transmission constraints are taken into account. The price formation process follows a common scheme for real power exchanges: a clearing house mechanism with uniform price, i.e., with price set equal across all matched buyer-seller pairs. A single class of generating companies is considered, characterized by linear cost function for each technology. Generating companies compete for the sale of electricity through repeated rounds of the uniform auction and determine their supply functions according to production costs. However, an individual reinforcement learning algorithm characterizes generating companies behaviors in order to attain the expected maximum possible profit in each auction round. The paper investigates how the market competitive equilibrium is affected by market microstructure and production costs.
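
    The clearing-house mechanism described above can be sketched in a few lines: generating companies with linear cost functions (hence constant marginal costs) submit offers, and the market clears against an inelastic demand at a uniform price. The learning behaviour is omitted, and all offer and demand figures below are hypothetical.

      # Uniform-price (merit-order) clearing against inelastic demand.
      offers = [(18.0, 300.0), (25.0, 250.0), (32.0, 200.0), (60.0, 150.0)]  # (marginal cost, capacity)
      demand = 600.0                                 # inelastic, constant over time

      def clear_uniform_auction(offers, demand):
          dispatched, remaining, price = [], demand, None
          for mc, cap in sorted(offers):             # merit order: cheapest offers first
              q = min(cap, remaining)
              if q > 0:
                  dispatched.append((mc, q))
                  price = mc                         # uniform price = last accepted offer
                  remaining -= q
          return price, dispatched

      price, dispatch = clear_uniform_auction(offers, demand)
      print("clearing price:", price)
      for mc, q in dispatch:
          print(f"  offer at {mc:5.1f} dispatched {q:5.1f} MW, profit {q * (price - mc):8.1f}")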

  15. Linear polarizer local characterizations by polarimetric imaging for applications to polarimetric sensors for torque measurement for hybrid cars

    NASA Astrophysics Data System (ADS)

    Georges, F.; Remouche, M.; Meyrueis, P.

    2011-06-01

    Manufacturers' specifications usually do not address the ability of linear sheet polarizers to maintain a constant transmittance function over their geometric area. This parameter is fundamental for developing low-cost polarimetric sensors (for instance for rotation, torque, or displacement), specifically for hybrid cars (thermal + electric power). It is therefore necessary to characterize commercial polarizer sheets specifically to determine whether they are suited to this kind of application. In this paper, we present the measuring methods and the bench developed for this purpose, and some preliminary characterization results. We state conclusions for effective applications to hybrid car gearbox control and monitoring.

  16. Analysis of an inventory model for both linearly decreasing demand and holding cost

    NASA Astrophysics Data System (ADS)

    Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.

    2016-03-01

    This study analyses an inventory model with linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The model focuses on commodities with linearly decreasing demand and no shortages. The holding cost does not remain uniform over time due to variation in the time value of money; here we consider a holding cost that decreases with time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is illustrated with a numerical example, and a sensitivity analysis is included.
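
    A heavily simplified numerical sketch of such a model is given below: demand D(t) = a - b·t and holding cost h(t) = h0 - h1·t both decrease linearly, deterioration is ignored, and the cycle length is optimized numerically. The formulation and all parameter values are illustrative assumptions, not the paper's exact model.

      # Optimize the cycle length T for profit per unit time under linearly
      # decreasing demand and holding cost (no shortages, no deterioration).
      import numpy as np
      from scipy.optimize import minimize_scalar
      from scipy.integrate import quad

      a, b = 100.0, 5.0            # demand rate parameters
      h0, h1 = 2.0, 0.1            # holding cost per unit per time
      K, c, p = 500.0, 4.0, 9.0    # ordering cost, purchase cost, selling price

      def demand(t):
          return max(a - b * t, 0.0)

      def profit_per_unit_time(T):
          Q, _ = quad(demand, 0, T)                       # order quantity covers cycle demand
          inventory = lambda t: quad(demand, t, T)[0]     # on-hand stock at time t
          holding, _ = quad(lambda t: (h0 - h1 * t) * inventory(t), 0, T)
          return ((p - c) * Q - K - holding) / T

      res = minimize_scalar(lambda T: -profit_per_unit_time(T),
                            bounds=(0.5, 15.0), method="bounded")
      T_opt = res.x
      print(f"optimal cycle length ~ {T_opt:.2f}, order quantity ~ {quad(demand, 0, T_opt)[0]:.1f}")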

  17. Linear parameter varying representations for nonlinear control design

    NASA Astrophysics Data System (ADS)

    Carter, Lance Huntington

    Linear parameter varying (LPV) systems are investigated as a framework for gain-scheduled control design and optimal hybrid control. An LPV system is defined as a linear system whose dynamics depend upon an a priori unknown but measurable exogenous parameter. A gain-scheduled autopilot design is presented for a bank-to-turn (BTT) missile. The method is novel in that the gain-scheduled design does not involve linearizations about operating points. Instead, the missile dynamics are brought to LPV form via a state transformation. This idea is applied to the design of a coupled longitudinal/lateral BTT missile autopilot. The pitch and yaw/roll dynamics are separately transformed to LPV form, where the cross axis states are treated as "exogenous" parameters. These are actually endogenous variables, so such a plant is called "quasi-LPV." Once in quasi-LPV form, a family of robust controllers using mu synthesis is designed for both the pitch and yaw/roll channels, using angle-of-attack and roll rate as the scheduling variables. The closed-loop time response is simulated using the original nonlinear model and also using perturbed aerodynamic coefficients. Modeling and control of engine idle speed is investigated using LPV methods. It is shown how generalized discrete nonlinear systems may be transformed into quasi-LPV form. A discrete nonlinear engine model is developed and expressed in quasi-LPV form with engine speed as the scheduling variable. An example control design is presented using linear quadratic methods. Simulations are shown comparing the LPV based controller performance to that using PID control. LPV representations are also shown to provide a setting for hybrid systems. A hybrid system is characterized by control inputs consisting of both analog signals and discrete actions. A solution is derived for the optimal control of hybrid systems with generalized cost functions. This is shown to be computationally intensive, so a suboptimal strategy is proposed that neglects a subset of possible parameter trajectories. A computational algorithm is constructed for this suboptimal solution applied to a class of linear non-quadratic cost functions.

  18. Modelling and genetic algorithm based optimisation of inverse supply chain

    NASA Astrophysics Data System (ADS)

    Bányai, T.

    2009-04-01

    The design and control of recycling systems for products with environmental risk have been discussed worldwide for a long time. The main reasons to address this subject are the following: reduction of waste volume, intensification of material recycling, closing the loop, use of fewer resources, and reduction of environmental risk [1, 2]. The development of recycling systems is based on the integrated solution of technological and logistic resources and know-how [3]. The financial conditions of recycling systems are partly based on the recovery, disassembly and remanufacturing options of the used products [4, 5, 6], but the investment and operating costs of recycling systems are characterised by high logistics costs caused by a geographically wide collection system with several collection levels and a large number of operation points in the inverse supply chain. The reduction of these costs is a popular area of logistics research. This research includes the design and implementation of comprehensive environmental waste and recycling programs to suit business strategies (global system), the design and supply of all equipment for production line collection (external system), and the design of logistics processes to suit economic and ecological requirements (external system) [7]. To the knowledge of the author, there has been no research work on supply chain design problems whose purpose is the logistics-oriented optimisation of an inverse supply chain in the case of a non-linear total cost function consisting not only of operating costs but also of environmental risk costs. The antecedent of this research is that the author has taken part in several research projects in the field of the closed-loop economy ("Closing the loop of electr(on)ic products and domestic appliances from product planning to end-of-life technologies"), environmentally friendly disassembly ("Concept for logistical and environmental disassembly technologies") and the design of recycling systems for household appliances ("Recycling of household appliances with emphasis on reuse options"). The purpose of this paper is to present a possible method for avoiding the unnecessary environmental risk and landscape use caused by an oversized collection supply chain in recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to the waste of electric and electronic products), and in the second part a genetic algorithm based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistics objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that technical conditions are constant. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material handling related costs.
The most important factors determining the level of the environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes. The objective function is the minimisation of the total cost, taking the constraints into consideration. Although a lot of research work has discussed the design of supply chains [8], most of it concentrates on linear cost functions. In this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a suitable solution method. The problem cannot be solved by analytical methods, so a genetic algorithm based heuristic optimisation method was chosen to find the optimal solution (a simplified sketch of this idea is shown after this entry). The input parameters of the optimisation are the following: specific fixed, unit and environmental risk costs of the collection points of the inverse supply chain, specific warehousing and transportation costs, and environmental risk costs of transportation. The output parameters are the following: the number of objects in the different hierarchical levels of the collection system, infrastructure costs, logistics costs and environmental risk costs arising from the used infrastructure, transportation and the number of products recycled out of time. The next step of the research work was the application of the above-mentioned method. The developed application makes it possible to define the input parameters of the real system, to display graphically the chosen optimal solution for the given input parameters and the cost structure of the optimal solution, and to set the parameters of the algorithm (e.g. number of individuals, operators and termination conditions). The sensitivity analysis of the objective function and the test results showed that the structure of the inverse supply chain depends on the proportions of the specific costs. In particular, the proportion of the specific environmental risk costs influences the structure of the system and the number of objects at each hierarchical level of the collection system. The sensitivity analysis of the total cost function was performed in three cases. In the first case the effect of the proportion of specific infrastructure and logistics costs was analysed. If the infrastructure costs are significantly lower than the total costs of warehousing and transportation, then almost all objects of the first hierarchical level of the collection (collection directly from the users) are set up. In the opposite case the first level of the collection is not necessary, because it can be replaced by the more expensive transportation directly to the objects of the second or lower hierarchical level. In the second case the effect of the proportion of the logistics and environmental risk costs was analysed. This analysis led to the following: if the logistics costs are significantly higher than the total environmental risk costs, then, because of the constant infrastructure costs, the preference for logistics operations depends on the proportion of the environmental risk costs caused by out-of-time recycled products and transportation. In the third case the effect of the proportion of infrastructure and environmental risk costs was examined.
If the infrastructure costs are significantly lower than the environmental risk costs, then almost all objects of the first hierarchical level of the collection (collection directly from the users) are set up. In the opposite case the first collection phase is shifted near to the last hierarchical level of the supply chain to avoid very high infrastructure set-up and operation costs. The advantages of the presented model and solution method can be summarised as follows: the model makes it possible to decide the structure of the inverse supply chain (which objects to open or close); it reduces infrastructure cost, especially for supply chains with high specific fixed costs; it reduces the environmental risk cost by finding an optimal balance between the number of objects in the system and the number of out-of-time recycled products; and it reduces the logistics costs by determining the optimal quantitative parameters of the material flow operations. The future of this research work is the use of differentiated lead times, which makes it possible to take into consideration the above-mentioned non-linear infrastructure, transportation, warehousing and environmental risk costs in the case of a given product portfolio segmented by lead time. This publication was supported by the National Office for Research and Technology within the frame of the Pázmány Péter programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Office for Research and Technology. Literature: [1] H. F. Lund: McGraw-Hill Recycling Handbook. McGraw-Hill. 2000. [2] P. T. Williams: Waste Treatment and Disposal. John Wiley and Sons Ltd. 2005. [3] M. Christopher: Logistics & Supply Chain Management: creating value-adding networks. Pearson Education. [4] A. Gungor, S. M. Gupta: Issues in environmentally conscious manufacturing and product recovery: a survey. Computers & Industrial Engineering. Volume 36. Issue 4. 1999. pp. 811-853. [5] H. C. Zhang, T. C. Kuo, H. Lu, S. H. Huang: Environmentally conscious design and manufacturing: A state-of-the-art survey. Journal of Manufacturing Systems. Volume 16. Issue 5. 1997. pp. 352-371. [6] P. Veerakamolmal, S. Gupta: Design for Disassembly, Reuse, and Recycling. Green Electronics/Green Bottom Line. 2000. pp. 69-82. [7] A. Rushton, P. Croucher, P. Baker: The Handbook of Logistics and Distribution Management. Kogan Page Limited. 2006. [8] H. Stadtler, C. Kilger: Supply Chain Management and Advanced Planning: Concepts, Models, Software, and Case Studies. Springer. 2005.
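
    The sketch referenced above is a highly simplified illustration of the genetic-algorithm idea: a binary chromosome encodes which first-level collection points are opened, and the fitness is a non-linear total cost combining infrastructure, transport, and environmental risk terms. All data, the cost form, and the GA settings are hypothetical and much coarser than the model described in the entry.

      # Toy GA for selecting collection points under a non-linear total cost.
      import numpy as np

      rng = np.random.default_rng(0)
      n_sites = 20
      fixed_cost = rng.uniform(5_000, 20_000, n_sites)       # set-up cost per collection point
      user_distance = rng.uniform(1, 50, (200, n_sites))     # users x candidate sites, km

      def total_cost(genome):
          open_idx = np.flatnonzero(genome)
          if open_idx.size == 0:
              return np.inf
          nearest = user_distance[:, open_idx].min(axis=1)
          transport = 30.0 * nearest.sum()                   # transport cost, linear in distance
          risk = 2.0 * (nearest ** 2).sum()                  # environmental risk, non-linear
          return fixed_cost[open_idx].sum() + transport + risk

      def ga(pop_size=60, generations=200, p_mut=0.05):
          pop = rng.integers(0, 2, (pop_size, n_sites))
          for _ in range(generations):
              fitness = np.array([total_cost(g) for g in pop])
              parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
              children = []
              for _ in range(pop_size - parents.shape[0]):
                  a, b = parents[rng.integers(parents.shape[0], size=2)]
                  cut = rng.integers(1, n_sites)
                  child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
                  mask = rng.random(n_sites) < p_mut                # bit-flip mutation
                  child[mask] ^= 1
                  children.append(child)
              pop = np.vstack([parents, children])
          best = min(pop, key=total_cost)
          return best, total_cost(best)

      best, cost = ga()
      print("open sites:", np.flatnonzero(best), " total cost ~", round(cost))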

  19. Very Low-Cost Nutritious Diet Plans Designed by Linear Programming.

    ERIC Educational Resources Information Center

    Foytik, Jerry

    1981-01-01

    Provides procedural details of Linear Programming, developed by the U.S. Department of Agriculture to devise a dietary guide for consumers that minimizes food costs without sacrificing nutritional quality. Compares Linear Programming with the Thrifty Food Plan, which has been a basis for allocating coupons under the Food Stamp Program. (CS)
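
    The classic diet problem behind this approach is a small linear program: minimize total food cost subject to nutrient minimums. The sketch below is illustrative only; the foods, prices, and nutrient values are hypothetical placeholders, not the USDA data behind the Thrifty Food Plan.

      # Least-cost diet as a linear program.
      import numpy as np
      from scipy.optimize import linprog

      foods = ["beans", "rice", "milk", "spinach"]
      cost = np.array([0.80, 0.50, 0.60, 1.20])        # $ per serving
      # nutrients per serving: rows = protein (g), calories (kcal), iron (mg)
      nutrients = np.array([
          [  9.0,   4.0,   8.0,  3.0],
          [220.0, 200.0, 150.0, 40.0],
          [  2.0,   1.0,   0.1,  3.0],
      ])
      requirements = np.array([50.0, 2000.0, 14.0])    # daily minimums

      # linprog minimises c @ x subject to A_ub @ x <= b_ub, so flip signs for minimums
      res = linprog(c=cost, A_ub=-nutrients, b_ub=-requirements, bounds=(0, None))
      for f, x in zip(foods, res.x):
          print(f"{f:8s} {x:5.2f} servings/day")
      print("minimum daily cost: $", round(res.fun, 2))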

  20. Development of pollution reduction strategies for Mexico City: Estimating cost and ozone reduction effectiveness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thayer, G.R.; Hardie, R.W.; Barrera-Roldan, A.

    1993-12-31

    This report describes the collection and preparation of data (costs and air quality improvement) for the strategic evaluation portion of the Mexico City Air Quality Research Initiative (MARI). Reports written for the Mexico City government by various international organizations were used to identify proposed options along with estimates of cost and emission reductions. Information on appropriate options identified by SCAQMD for Southern California was also used in the analysis. A linear optimization method was used to select a group of options, or a strategy, to be evaluated by decision analysis. However, the reduction of ozone levels is not a linear function of the reduction of hydrocarbon and NOx emissions; therefore, a more detailed analysis was required for ozone. An equation for a plane on an isopleth calculated with a trajectory model was obtained using two endpoints that bracket the expected total ozone precursor reductions plus the starting concentrations for hydrocarbons and NOx. The relationship between ozone levels and the hydrocarbon and NOx concentrations was assumed to lie on this plane. This relationship was used in the linear optimization program to select the options comprising a strategy.

  1. Total direct cost, length of hospital stay, institutional discharges and their determinants from rehabilitation settings in stroke patients.

    PubMed

    Saxena, S K; Ng, T P; Yong, D; Fong, N P; Gerald, K

    2006-11-01

    Length of hospital stay (LOHS) is the largest determinant of direct cost for stroke care. Institutional discharges (acute care and nursing homes) from rehabilitation settings add to the direct cost. It is important to identify potentially preventable medical and non-medical reasons determining LOHS and institutional discharges to reduce the direct cost of stroke care. The aim of the study was to ascertain the total direct cost, LOHS, frequency of institutional discharges and their determinants from rehabilitation settings. An observational study was conducted on 200 stroke patients in two rehabilitation settings. The patients were examined for various socio-demographic, neurological and clinical variables upon admission to the rehabilitation hospitals. Information on total direct cost and medical complications during hospitalization was also recorded. The outcome variables measured were total direct cost, LOHS and discharges to institutions (acute care and nursing home facility) and their determinants. The mean and median LOHS in our study were 34 days (SD = 18) and 32 days respectively. LOHS and the cost of hospital stay were significantly correlated. The significant variables associated with LOHS on multiple linear regression analysis were: (i) severe functional impairment/functional dependence (Barthel Index ≤ 50), (ii) medical complications, (iii) first-time stroke, (iv) unplanned discharges and (v) discharges to nursing homes. Of the stroke patients, 19.5% had institutional discharges (22 to acute care and 17 to nursing homes). On multivariate analysis, the significant predictors of discharges to institutions from rehabilitation hospitals were medical complications (OR = 4.37; 95% CI 1.01-12.53) and severe functional impairment/functional dependence (OR = 5.90; 95% CI 2.32-14.98). Length of hospital stay and discharges to institutions from rehabilitation settings are significantly determined by medical complications. The importance of adhering to a clinical pathway/protocol for stroke care is further discussed.

  2. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.

    PubMed

    Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C

    2014-03-01

    In recent years the number of active controllable joints in electrically powered hand-prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate as they require separate and sequential control of each degree-of-freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for an independent, simultaneous and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), multilayer-perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve performance similar to KRR at much lower computational cost. In particular, ME, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
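
    The linear-versus-kernel comparison described above can be reproduced in outline with off-the-shelf regressors; the data below are synthetic stand-ins for EMG features and wrist kinematics, so the scores carry no clinical meaning.

      # Sketch: linear regression vs. kernel ridge regression (KRR) on
      # synthetic "EMG features -> wrist angle" data.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 8))                  # 8 surrogate EMG channels
      y = np.tanh(X[:, 0] + 0.5 * X[:, 1]) + 0.1 * rng.normal(size=500)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      lin = LinearRegression().fit(X_tr, y_tr)
      krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_tr, y_tr)

      print("linear R2:", r2_score(y_te, lin.predict(X_te)))
      print("KRR R2:   ", r2_score(y_te, krr.predict(X_te)))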

  3. Rehand: Realistic electric prosthetic hand created with a 3D printer.

    PubMed

    Yoshikawa, Masahiro; Sato, Ryo; Higashihara, Takanori; Ogasawara, Tsukasa; Kawashima, Noritaka

    2015-01-01

    Myoelectric prosthetic hands provide a five-fingered appearance and a grasping function to forearm amputees. However, they have problems in weight, appearance, and cost. This paper reports on the Rehand, a realistic electric prosthetic hand created with a 3D printer. It provides a grasping function and a realistic appearance that is the same as a cosmetic prosthetic hand. A simple link mechanism with one linear actuator for grasping and 3D-printed parts achieve low cost, light weight, and ease of maintenance. An operating system based on a distance sensor provides natural operability equivalent to a myoelectric control system. A supporter socket allows amputees to put on the prosthetic hand easily. An evaluation using the Southampton Hand Assessment Procedure (SHAP) demonstrated that an amputee was able to manipulate various objects and perform everyday activities with the Rehand.

  4. A game theoretic approach to a finite-time disturbance attenuation problem

    NASA Technical Reports Server (NTRS)

    Rhee, Ihnseok; Speyer, Jason L.

    1991-01-01

    A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variation technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H(infinity) norm bound.

  5. A quasi-Monte-Carlo comparison of parametric and semiparametric regression methods for heavy-tailed and non-normal data: an application to healthcare costs.

    PubMed

    Jones, Andrew M; Lomas, James; Moore, Peter T; Rice, Nigel

    2016-10-01

    We conduct a quasi-Monte-Carlo comparison of the recent developments in parametric and semiparametric regression methods for healthcare costs, both against each other and against standard practice. The population of English National Health Service hospital in-patient episodes for the financial year 2007-2008 (summed for each patient) is randomly divided into two equally sized subpopulations to form an estimation set and a validation set. Evaluating out-of-sample using the validation set, a conditional density approximation estimator shows considerable promise in forecasting conditional means, performing best for accuracy of forecasting and among the best four for bias and goodness of fit. The best performing model for bias is linear regression with square-root-transformed dependent variables, whereas a generalized linear model with square-root link function and Poisson distribution performs best in terms of goodness of fit. Commonly used models utilizing a log-link are shown to perform badly relative to other models considered in our comparison.
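
    Two of the better-performing specifications named above, OLS on square-root-transformed costs and a GLM with square-root link and Poisson distribution, can be fitted roughly as follows; the skewed "cost" data are simulated, and older statsmodels releases spell the link class links.sqrt rather than links.Sqrt.

      # Sketch of two cost-regression specifications on synthetic skewed data.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 2000
      age = rng.uniform(20, 90, n)
      sev = rng.integers(0, 5, n)                    # hypothetical severity score
      mu = np.exp(5.0 + 0.01 * age + 0.3 * sev)      # positive, right-skewed mean
      cost = rng.gamma(shape=2.0, scale=mu / 2.0)

      X = sm.add_constant(np.column_stack([age, sev]))

      # (a) OLS with square-root-transformed dependent variable
      ols_sqrt = sm.OLS(np.sqrt(cost), X).fit()

      # (b) GLM with Poisson variance function and square-root link
      glm_sqrt = sm.GLM(cost, X,
                        family=sm.families.Poisson(link=sm.families.links.Sqrt())).fit()

      print(ols_sqrt.params)
      print(glm_sqrt.params)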

  6. Linear-scaling generation of potential energy surfaces using a double incremental expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk

    We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl examples show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach, presented here, represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.

  7. Low-rank separated representation surrogates of high-dimensional stochastic functions: Application in Bayesian inference

    NASA Astrophysics Data System (ADS)

    Validi, AbdoulAhad

    2014-03-01

    This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternating least-squares regression with Tikhonov regularization, using a roughening matrix that computes the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The number of random realizations required to achieve a successful approximation depends linearly on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.

  8. SU-E-T-163: Thin-Film Organic Photocell (OPV) Properties in MV and KV Beams for Dosimetry Applications.

    PubMed

    Ng, S K; Hesser, J; Zhang, H; Gowrisanker, S; Yakushevich, S; Shulhevich, Y; Abkai, C; Wack, L; Zygmanski, P

    2012-06-01

    To characterize the dosimetric properties of low-cost thin-film organic photovoltaic (OPV) cells exposed to kV and MV x-ray beams for their use as large-area dosimeters for QA and patient safety monitoring. A series of thin-film OPV cells of various areas and thicknesses were irradiated with MV beams to evaluate the stability and reproducibility of their response, linearity and sensitivity to absorbed dose. The OPV response to x-rays of various linac energies was also characterized. Furthermore, the practical (clinical) sensitivity of the cells was determined using IMRT sweeping gap tests generated with various gap sizes. To evaluate their potential use in the development of a low-cost kV imaging device, the OPV cells were irradiated with kV beams (60-120 kVp) from a fluoroscopy unit. Photocell response to the absorbed dose was characterized as a function of the organic thin-film thickness and size, beam energy and exposure for kV beams as well. In addition, photocell response was determined with and without a thin plastic scintillator. The response of the OPV cells to the absorbed dose from kV and MV beams is stable and reproducible. The photocell response was linearly proportional to the size of the organic thin film and decreased slightly with its thickness, which agrees with the general performance of the photocells in visible light. The photocell response increases as a linear function of absorbed dose and x-ray energy. The sweeping gap tests showed that OPV cells have sufficient practical sensitivity to measure MV x-ray delivery with gap sizes as small as 1 mm. With proper calibration, the OPV cells could be used for online radiation dose measurement for quality assurance and patient safety purposes. Their response to kV beams shows promising potential for the development of low-cost kV radiation detection devices. © 2012 American Association of Physicists in Medicine.

  9. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background The limits of TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim We, therefore, propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions The results are obtained from an algorithm that minimizes an ad-hoc cost function, and then compared to an equivalent dose computed using standard calculators in seven French radiotherapy centers. PMID:24936319
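
    The linear-quadratic bookkeeping behind such calculators can be illustrated with the standard biologically effective dose (BED) and 2 Gy equivalent dose (EQD2) formulas; the fractionation schedule and alpha/beta ratio below are arbitrary examples, not outputs of the described software.

      # Linear-quadratic model: BED and EQD2 for a fractionated schedule.
      def bed(n_fractions, dose_per_fraction, alpha_beta):
          """Biologically effective dose: BED = n * d * (1 + d / (alpha/beta))."""
          return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

      def eqd2(n_fractions, dose_per_fraction, alpha_beta):
          """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))."""
          return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

      # Example: 20 fractions of 2.75 Gy, late-responding tissue (alpha/beta = 3 Gy)
      print(bed(20, 2.75, 3.0))    # ~105.4 Gy
      print(eqd2(20, 2.75, 3.0))   # ~63.3 Gy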

  10. Extending the accuracy of the SNAP interatomic potential form

    NASA Astrophysics Data System (ADS)

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-06-01

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similar to artificial neural network potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting. The quality of this new potential form is measured through a robust cross-validation analysis.
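
    The step from a linear to a quadratic model in a fixed set of descriptors can be mimicked with ordinary ridge regression on synthetic "bispectrum-like" features; this is only a structural analogy under invented data, not the SNAP fitting pipeline itself.

      # Linear vs. quadratic fits in a fixed descriptor set (synthetic data).
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      B = rng.normal(size=(400, 5))                       # 5 descriptors per "atom"
      E = B @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) \
          + 0.4 * B[:, 0] * B[:, 1] + 0.05 * rng.normal(size=400)

      linear = Ridge(alpha=1e-3)
      quadratic = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                                Ridge(alpha=1e-3))

      # Cross-validation guards against the overfitting a larger quadratic basis invites.
      print("linear    R2:", cross_val_score(linear, B, E, cv=5).mean())
      print("quadratic R2:", cross_val_score(quadratic, B, E, cv=5).mean())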

  11. CALiPER Report 21.3: Cost-Effectiveness of Linear (T8) LED Lamps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.

    2014-05-27

    Meeting performance expectations is important for driving adoption of linear LED lamps, but cost-effectiveness may be an overriding factor in many cases. Linear LED lamps cost more initially than fluorescent lamps, but energy and maintenance savings may mean that the life-cycle cost is lower. This report details a series of life-cycle cost simulations that compared a two-lamp troffer using LED lamps (38 W total power draw) or fluorescent lamps (51 W total power draw) over a 10-year study period. Variables included LED system cost ($40, $80, or $120), annual operating hours (2,000 hours or 4,000 hours), LED installation time (15 minutes or 30 minutes), and melded electricity rate ($0.06/kWh, $0.12/kWh, $0.18/kWh, or $0.24/kWh). A full factorial of simulations allows users to interpolate between these values to aid in making rough estimates of economic feasibility for their own projects. In general, while their initial cost premium remains high, linear LED lamps are more likely to be cost-effective when electric utility rates are higher than average and hours of operation are long, and if their installation time is shorter.
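
    A minimal version of this life-cycle cost comparison can be written directly from the variables listed above; the system costs, labor rate and electricity rate below are placeholders chosen within the simulated ranges, and lamp replacement costs are omitted for brevity.

      # Simplified 10-year life-cycle cost sketch for a two-lamp troffer
      # (illustrative inputs; maintenance/relamping costs are ignored here).
      def life_cycle_cost(power_w, system_cost, install_hours, annual_hours,
                          elec_rate_per_kwh, labor_rate=50.0, years=10):
          energy_cost = power_w / 1000.0 * annual_hours * years * elec_rate_per_kwh
          return system_cost + install_hours * labor_rate + energy_cost

      led  = life_cycle_cost(power_w=38, system_cost=80.0, install_hours=0.5,
                             annual_hours=4000, elec_rate_per_kwh=0.12)
      fluo = life_cycle_cost(power_w=51, system_cost=10.0, install_hours=0.25,
                             annual_hours=4000, elec_rate_per_kwh=0.12)
      print(f"LED: ${led:.0f}, fluorescent: ${fluo:.0f}")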

  12. CALiPER Report 21.3. Cost Effectiveness of Linear (T8) LED Lamps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2014-05-01

    Meeting performance expectations is important for driving adoption of linear LED lamps, but cost-effectiveness may be an overriding factor in many cases. Linear LED lamps cost more initially than fluorescent lamps, but energy and maintenance savings may mean that the life-cycle cost is lower. This report details a series of life-cycle cost simulations that compared a two-lamp troffer using LED lamps (38 W total power draw) or fluorescent lamps (51 W total power draw) over a 10-year study period. Variables included LED system cost ($40, $80, or $120), annual operating hours (2,000 hours or 4,000 hours), LED installation time (15 minutes or 30 minutes), and melded electricity rate ($0.06/kWh, $0.12/kWh, $0.18/kWh, or $0.24/kWh). A full factorial of simulations allows users to interpolate between these values to aid in making rough estimates of economic feasibility for their own projects. In general, while their initial cost premium remains high, linear LED lamps are more likely to be cost-effective when electric utility rates are higher than average and hours of operation are long, and if their installation time is shorter.

  13. Maximum Principle in the Optimal Design of Plates with Stratified Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roubicek, Tomas

    2005-03-15

    An optimal design problem for a plate governed by a linear, elliptic equation with bounded thickness varying only in a single prescribed direction and with unilateral isoperimetrical-type constraints is considered. Using Murat-Tartar's homogenization theory for stratified plates and Young-measure relaxation theory, smoothness of the extended cost and constraint functionals is proved, and then the maximum principle necessary for an optimal relaxed design is derived.

  14. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    NASA Astrophysics Data System (ADS)

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As regards the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among them a positive topological entropy.

  15. Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Rendon, A.; Beck, J. C.; Lilge, Lothar

    2008-02-01

    Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties. Moreover, we compare two particular instances of these algorithms that have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
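
    For readers unfamiliar with linear feasibility methods, a bare-bones Cimmino iteration for a system of inequalities A x >= b with non-negative source powers is sketched below; the dose matrix and bounds are made up, and the clinical algorithms add constraint weights and upper-dose limits on healthy tissue.

      # Bare-bones Cimmino iteration for the feasibility problem A x >= b, x >= 0.
      import numpy as np

      def cimmino(A, b, n_iter=2000):
          m, n = A.shape
          x = np.zeros(n)
          row_norm2 = (A ** 2).sum(axis=1)
          for _ in range(n_iter):
              residual = np.clip(b - A @ x, 0.0, None)   # only violated rows act
              if residual.max() < 1e-9:
                  break
              # Equal-weight average of orthogonal projections onto each halfspace.
              x = x + (residual / row_norm2) @ A / m
              x = np.clip(x, 0.0, None)                  # powers cannot be negative
          return x

      A = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8]])  # dose per unit source power
      b = np.array([1.0, 1.0, 1.0])                       # minimum required dose
      print(cimmino(A, b))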

  16. A topological proof of chaos for two nonlinear heterogeneous triopoly game models.

    PubMed

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As regards the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among them a positive topological entropy.

  17. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.

  18. Linear Array Ambient Noise Adjoint Tomography Reveals Intense Crust-Mantle Interactions in North China Craton

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Yao, Huajian; Liu, Qinya; Zhang, Ping; Yuan, Yanhua O.; Feng, Jikun; Fang, Lihua

    2018-01-01

    We present a 2-D ambient noise adjoint tomography technique for a linear array with a significant reduction in computational cost and show its application to an array in North China. We first convert the observed data for 3-D media, i.e., surface-wave empirical Green's functions (EGFs), to the reconstructed EGFs (REGFs) for 2-D media using a 3-D/2-D transformation scheme. Unlike the conventional steps of measuring phase dispersion, this technique refines 2-D shear wave speeds along the profile directly from the REGFs. With an initial model based on traditional ambient noise tomography, adjoint tomography updates the model by minimizing the frequency-dependent Rayleigh wave traveltime delays between the REGFs and synthetic Green's functions calculated by the spectral-element method. The multitaper traveltime difference measurement is applied in four period bands: 20-35 s, 15-30 s, 10-20 s, and 6-15 s. The recovered model shows detailed crustal structures, including pronounced low-velocity anomalies in the lower crust and a gradual crust-mantle transition zone beneath the northern Trans-North China Orogen, which suggests possible intense thermo-chemical interactions between mantle-derived upwelling melts and the lower crust, probably associated with magmatic underplating during the Mesozoic to Cenozoic evolution of this region. To our knowledge, this is the first time that ambient noise adjoint tomography has been implemented for a 2-D medium. Compared with the intensive computational cost and storage requirements of 3-D adjoint tomography, this method offers a computationally efficient and inexpensive alternative for imaging fine-scale crustal structures beneath linear arrays.

  19. Resource use and costs of hospitalizations for heart failure: a multicenter retrospective study in Argentina.

    PubMed

    Augustovski, Federico; Caporale, Joaquín; Fosco, Matías; Alcaraz, Andrea; Diez, Mirta; Thierer, Jorge; Peradejordi, Margarita; Pichon Riviere, Andrés

    2017-12-01

    Heart failure has a great impact on the health budget, mainly due to the cost of hospitalizations. Our aim was to describe health resource use and the costs of heart failure admissions in three important institutions in Argentina. This was a multicenter retrospective cohort study, with descriptive and analytical analyses by subgroups of ejection fraction, blood pressure and renal function at admission. Generalized linear models were used to assess the association of independent variables with the main outcomes. We included 301 subjects; age 75.3±11.8 years; 37% women; 57% with depressed ejection fraction; 46% of coronary etiology. Blood pressure at admission was 129.8±29.7 mmHg; renal function 57.9±26.2 ml/min/1.73 m2. Overall mortality was 7%. Average length of stay was 7.82±7.06 days (median 5.69), and was significantly longer in patients with renal impairment (8.9 vs. 8.18; p=0.03) and shorter in those with high initial blood pressure (6.08±4.03; p=0.009). Mean cost per patient was AR$68,861±96,066 (US$=8,071; 1US$=AR$8.532); 71% attributable to hospital stay, 20% to interventional procedures and 6.7% to diagnostic studies. Variables independently associated with higher costs were depressed ejection fraction, presence of valvular disease, and impaired renal function. Resource use and costs associated with hospitalizations for heart failure are high, and the highest proportion is attributable to the costs related to hospital stay. Copyright © 2017. Published by Elsevier Inc.

  20. Resting-State Functional Connectivity Underlying Costly Punishment: A Machine-Learning Approach.

    PubMed

    Feng, Chunliang; Zhu, Zhiyuan; Gu, Ruolei; Wu, Xia; Luo, Yue-Jia; Krueger, Frank

    2018-06-08

    A large number of studies have demonstrated costly punishment to unfair events across human societies. However, individuals exhibit a large heterogeneity in costly punishment decisions, whereas the neuropsychological substrates underlying the heterogeneity remain poorly understood. Here, we addressed this issue by applying a multivariate machine-learning approach to compare topological properties of resting-state brain networks as a potential neuromarker between individuals exhibiting different punishment propensities. A linear support vector machine classifier obtained an accuracy of 74.19% employing the features derived from resting-state brain networks to distinguish two groups of individuals with different punishment tendencies. Importantly, the most discriminative features that contributed to the classification were those regions frequently implicated in costly punishment decisions, including dorsal anterior cingulate cortex (dACC) and putamen (salience network), dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (mentalizing network), and lateral prefrontal cortex (central-executive network). These networks are previously implicated in encoding norm violation and intentions of others and integrating this information for punishment decisions. Our findings thus demonstrated that resting-state functional connectivity (RSFC) provides a promising neuromarker of social preferences, and bolster the assertion that human costly punishment behaviors emerge from interactions among multiple neural systems. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  1. Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.

    PubMed

    Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko

    2010-06-28

    We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed instead, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions as done in the GFC. To minimize the error in the gradient, and not just in the energy, the derived functions of the original auxiliary functions of the GFC are additionally used for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the approach maintains the near-linear scaling of the GFC and does not affect its overall efficiency.

  2. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch attacks on components, and hence must be accounted for to ensure infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure, as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  3. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background Support vector machines (SVM) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on the voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) Voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) Considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance The present work provides the first empirical result of linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computation time, RBF SVM with a relatively small set of voxels, where part of the principal components are kept as features, is a better choice. PMID:21359184
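
    The voxel-selection-plus-SVM workflow can be outlined with scikit-learn on synthetic data, as in the sketch below; the ANOVA F-score filter stands in for just one of the six selection schemes compared in the study, and all dimensions and signal strengths are invented.

      # Outline: voxel selection followed by linear vs. RBF SVM classification
      # (synthetic data standing in for fMRI voxel patterns, 4 categories).
      import numpy as np
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      n_samples, n_voxels = 160, 2000
      X = rng.normal(size=(n_samples, n_voxels))
      y = rng.integers(0, 4, n_samples)
      X[np.arange(n_samples), y] += 1.5        # make a few voxels weakly informative

      for kernel, k in [("linear", 500), ("rbf", 50)]:
          clf = make_pipeline(SelectKBest(f_classif, k=k),
                              StandardScaler(),
                              SVC(kernel=kernel, C=1.0))
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{kernel} SVM, {k} voxels: accuracy ~ {acc:.2f}")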

  4. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

    Support vector machines (SVM) have been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over the linear one. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike traditional studies, which focused either merely on the evaluation of different types of SVM or on the voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. Then the overall performances of the voxel selection and classification methods were compared. Results showed that: (1) Voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high-dimensional space, linear SVM performed better than its counterpart; (2) Considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. The present work provides the first empirical result of linear and RBF SVM in the classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computation time, RBF SVM with a relatively small set of voxels, where part of the principal components are kept as features, is a better choice.

  5. SYNTHESIS OF NOVEL ALL-DIELECTRIC GRATING FILTERS USING GENETIC ALGORITHMS

    NASA Technical Reports Server (NTRS)

    Zuffada, Cinzia; Cwik, Tom; Ditchman, Christopher

    1997-01-01

    We are concerned with the design of inhomogeneous, all dielectric (lossless) periodic structures which act as filters. Dielectric filters made as stacks of inhomogeneous gratings and layers of materials are being used in optical technology, but are not common at microwave frequencies. The problem is then finding the periodic cell's geometric configuration and permittivity values which correspond to a specified reflectivity/transmittivity response as a function of frequency/illumination angle. This type of design can be thought of as an inverse-source problem, since it entails finding a distribution of sources which produce fields (or quantities derived from them) of given characteristics. Electromagnetic sources (electric and magnetic current densities) in a volume are related to the outside fields by a well-known linear integral equation. Additionally, the sources are related to the fields inside the volume by a constitutive equation, involving the material properties. Then, the relationship linking the fields outside the source region to those inside is non-linear, in terms of material properties such as permittivity, permeability and conductivity. The solution of the non-linear inverse problem is cast here as a combination of two linear steps, by explicitly introducing the electromagnetic sources in the computational volume as a set of unknowns in addition to the material unknowns. This allows us to solve for material parameters and related electric fields in the source volume which are consistent with Maxwell's equations. Solutions are obtained iteratively by decoupling the two steps. First, we invert for the permittivity only in the minimization of a cost function and second, given the materials, we find the corresponding electric fields through direct solution of the integral equation in the source volume. The sources thus computed are used to generate the far fields and the synthesized filter response. The cost function is obtained by calculating the deviation between the synthesized value of reflectivity/transmittivity and the desired one. Solution geometries for the periodic cell are sought as gratings (ensembles of columns of different heights and widths), or combinations of homogeneous layers of different dielectric materials and gratings. Hence the explicit unknowns of the inversion step are the material permittivities and the relative boundaries separating homogeneous parcels of the periodic cell.

  6. A green vehicle routing problem with customer satisfaction criteria

    NASA Astrophysics Data System (ADS)

    Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.

    2016-12-01

    This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. The model also provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, some new factors evaluate the greening of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into the other linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. The model enriches managerial insight by providing trade-offs between customers' satisfaction, total costs and emission levels. Finally, we provide a numerical study showing the applicability of the model.
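
    A piecewise linear satisfaction function of service start time relative to a customer's time window, of the kind used to linearize the fuzzy interval, can be sketched as follows; the breakpoints are illustrative, and in the MILP itself each PLF is encoded with auxiliary variables and SOS2-type constraints.

      # Piecewise linear customer-satisfaction function of service start time
      # (illustrative breakpoints, in hours).
      import numpy as np

      # Full satisfaction inside the preferred window [10, 12], dropping
      # linearly to zero at the hard limits 8 and 15.
      breakpoints  = [8.0, 10.0, 12.0, 15.0]
      satisfaction = [0.0, 1.0, 1.0, 0.0]

      def satisfaction_of(t):
          return np.interp(t, breakpoints, satisfaction)

      print(satisfaction_of(9.0), satisfaction_of(11.0), satisfaction_of(14.0))
      # 0.5  1.0  0.333...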

  7. Stiffness optimization of non-linear elastic structures

    DOE PAGES

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    2017-11-13

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified; in this work, conventional compliance (secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  8. Development of a new linearly variable edge filter (LVEF)-based compact slit-less mini-spectrometer

    NASA Astrophysics Data System (ADS)

    Mahmoud, Khaled; Park, Seongchong; Lee, Dong-Hoon

    2018-02-01

    This paper presents the development of a compact charge-coupled device (CCD) spectrometer. We describe the design, concept and characterization of a VNIR linear variable edge filter (LVEF)-based mini-spectrometer. The new instrument has been realized for operation in the 300 nm to 850 nm wavelength range and consists of a linear variable edge filter in front of a CCD array. Small size, light weight and low cost are achieved using linearly variable filters, with no need for the moving parts used for wavelength selection in commercial spectrometers on the market. This overview discusses the characteristics of the main components and the main concept, along with the reported advantages and limitations. Experimental characteristics of the LVEFs are described. The mathematical approach for obtaining the position-dependent slit function of the presented prototype spectrometer, and its numerical deconvolution solution for spectrum reconstruction, is described. The performance of our prototype instrument is demonstrated by measuring the spectrum of a reference light source.

  9. Stiffness optimization of non-linear elastic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified; in this work, conventional compliance (secant stiffness) designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  10. A novel heuristic for optimization aggregate production problem: Evidence from flat panel display in Malaysia

    NASA Astrophysics Data System (ADS)

    Al-Kuhali, K.; Hussain M., I.; Zain Z., M.; Mullenix, P.

    2015-05-01

    Aim: This paper contributes to the flat panel display industry in terms of aggregate production planning. Methodology: To minimize the total production cost of LCD manufacturing, a linear programming model was applied. The decision variables are general production costs, additional costs incurred for overtime production, additional costs incurred for subcontracting, inventory carrying cost, backorder costs and adjustments for changes in labour levels. The model was developed for a manufacturer with several product types (up to N) over a total planning horizon of T periods. Results: An industrial case study from Malaysia is presented to test and validate the developed linear programming model for aggregate production planning. Conclusion: The developed model is suitable under stable environmental conditions. Overall, the proven linear programming model can be recommended for production planning in the Malaysian flat panel display industry.
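
    A stripped-down, single-product version of such an aggregate planning LP is sketched below; the demands, capacities and unit costs are invented, and the labour-level and backorder variables of the full model are omitted.

      # Aggregate production planning LP: one product, 4 periods, with regular
      # production P_t, overtime O_t and end-of-period inventory I_t (toy data).
      import numpy as np
      from scipy.optimize import linprog

      T = 4
      demand = [100, 150, 120, 180]
      c_reg, c_ot, c_inv = 10.0, 15.0, 2.0        # unit costs
      cap_reg, cap_ot = 130, 40                   # per-period capacities

      # Variable layout: x = [P_1..P_T, O_1..O_T, I_1..I_T]
      c = [c_reg] * T + [c_ot] * T + [c_inv] * T

      # Inventory balance: I_t - I_{t-1} - P_t - O_t = -demand_t  (I_0 = 0)
      A_eq = np.zeros((T, 3 * T))
      for t in range(T):
          A_eq[t, t] = -1.0                       # -P_t
          A_eq[t, T + t] = -1.0                   # -O_t
          A_eq[t, 2 * T + t] = 1.0                # +I_t
          if t > 0:
              A_eq[t, 2 * T + t - 1] = -1.0       # -I_{t-1}
      b_eq = [-d for d in demand]

      bounds = [(0, cap_reg)] * T + [(0, cap_ot)] * T + [(0, None)] * T
      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print(res.x.round(1), res.fun)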

  11. Investigating the linearity assumption between lumber grade mix and yield using design of experiments (DOE)

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas

    2004-01-01

    Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...

  12. Climate Intervention as an Optimization Problem

    NASA Astrophysics Data System (ADS)

    Caldeira, Ken; Ban-Weiss, George A.

    2010-05-01

    Typically, climate models simulations of intentional intervention in the climate system have taken the approach of imposing a change (eg, in solar flux, aerosol concentrations, aerosol emissions) and then predicting how that imposed change might affect Earth's climate or chemistry. Computations proceed from cause to effect. However, humans often proceed from "What do I want?" to "How do I get it?" One approach to thinking about intentional intervention in the climate system ("geoengineering") is to ask "What kind of climate do we want?" and then ask "What pattern of radiative forcing would come closest to achieving that desired climate state?" This involves defining climate goals and a cost function that measures how closely those goals are attained. (An important next step is to ask "How would we go about producing these desired patterns of radiative forcing?" However, this question is beyond the scope of our present study.) We performed a variety of climate simulations in NCAR's CAM3.1 atmospheric general circulation model with a slab ocean model and thermodynamic sea ice model. We then evaluated, for a specific set of climate forcing basis functions (ie, aerosol concentration distributions), the extent to which the climate response to a linear combination of those basis functions was similar to a linear combination of the climate response to each basis function taken individually. We then developed several cost functions (eg, relative to the 1xCO2 climate, minimize rms difference in zonal and annual mean land temperature, minimize rms difference in zonal and annual mean runoff, minimize rms difference in a combination of these temperature and runoff indices) and then predicted optimal combinations of our basis functions that would minimize these cost functions. Lastly, we produced forward simulations of the predicted optimal radiative forcing patterns and compared these with our expected results. Obviously, our climate model is much simpler than reality and predictions from individual models do not provide a sound basis for action; nevertheless, our model results indicate that the general approach outlined here can lead to patterns of radiative forcing that make the zonal annual mean climate of a high CO2 world markedly more similar to that of a low CO2 world simultaneously for both temperature and hydrological indices, where degree of similarity is measured using our explicit cost functions. We restricted ourselves to zonally uniform aerosol concentrations distributions that can be defined in terms of a positive-definite quadratic equation on the sine of latitude. Under this constraint, applying an aerosol distribution in a 2xCO2 climate that minimized a combination of rms difference in zonal and annual mean land temperature and runoff relative to the 1xCO2 climate, the rms difference in zonal and annual mean temperatures was reduced by ~90% and the rms difference in zonal and annual mean runoff was reduced by ~80%. This indicates that there may be potential for stratospheric aerosols to diminish simultaneously both temperature and hydrological cycle changes caused by excess CO2 in the atmosphere. Clearly, our model does not include many factors (eg, socio-political consequences, chemical consequences, ocean circulation changes, aerosol transport and microphysics) so we do not argue strongly for our specific climate model results, however, we do argue strongly in favor of our methodological approach. 
The proposed approach is general, in the sense that cost functions can be developed that represent different valuations. While the choice of appropriate cost functions is inherently a value judgment, evaluating those functions for a specific climate simulation is a quantitative exercise. Thus, the use of explicit cost functions in evaluating model results for climate intervention scenarios is a clear way of separating value judgments from purely scientific and technical issues.
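
    If the climate response is approximately linear in the forcing basis functions, as the simulations above were designed to test, choosing the mix of basis functions reduces to a small constrained least-squares problem; the sketch below assumes precomputed zonal-mean response patterns (random placeholders here) and solves for non-negative weights that minimize the mismatch with a target pattern.

      # Choose non-negative weights on forcing basis functions so the combined
      # (assumed linear) response best matches a target zonal-mean change.
      # The response patterns and target are synthetic placeholders.
      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(4)
      lat_bands = 32
      R = rng.normal(size=(lat_bands, 3))     # response to each basis function
      target = rng.normal(size=lat_bands)     # pattern of change to be offset

      weights, resid_norm = nnls(R, target)   # minimizes ||R w - target||, w >= 0
      print("weights:", weights.round(3), "residual norm:", round(resid_norm, 3))

    A cost function combining temperature and hydrological indices would simply stack the corresponding response matrices and targets before the fit.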

  13. Nonlinear functional for solvation in Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Gunceler, Deniz; Sundararaman, Ravishankar; Schwarz, Kathleen; Letchworth-Weaver, Kendra; Arias, T. A.

    2013-03-01

    Density functional calculations of molecules and surfaces in a liquid can accelerate the development of many technologies ranging from solar energy harvesting to lithium batteries. Such studies require the development of robust functionals describing the liquid. Polarizable continuum models (PCM's) have been applied to some solvated systems; but they do not sufficiently capture solvation effects to describe highly polar systems like surfaces of ionic solids. In this work, we present a nonlinear fluid functional within the framework of Joint Density Functional Theory. The fluid is treated not as a linear dielectric, but as a distribution of dipoles that responds to the solute, which we describe starting from the exact free energy functional for point dipoles. We also show PCM's can be recovered as the linear limit of our functional. Our description is of similar computational cost to PCM's, and captures complex solvation effects like dielectric saturation without requiring new fit parameters. For polar and nonpolar molecules, it achieves millihartree level agreement with experimental solvation energies. Furthermore, our functional now makes it possible to investigate chemistry on the surface of lithium battery materials, which PCM's predict to be unstable. Supported as part of the Energy Materials Center at Cornell, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001086

  14. An approximation of herd effect due to vaccinating children against seasonal influenza – a potential solution to the incorporation of indirect effects into static models

    PubMed Central

    2013-01-01

    Background Indirect herd effect from vaccination of children offers potential for improving the effectiveness of influenza prevention in the remaining unvaccinated population. Static models used in cost-effectiveness analyses cannot dynamically capture herd effects. The objective of this study was to develop a methodology to allow herd effect associated with vaccinating children against seasonal influenza to be incorporated into static models evaluating the cost-effectiveness of influenza vaccination. Methods Two previously published linear equations for approximation of herd effects in general were compared with the results of a structured literature review undertaken using PubMed searches to identify data on herd effects specific to influenza vaccination. A linear function was fitted to point estimates from the literature using the sum of squared residuals. Results The literature review identified 21 publications on 20 studies for inclusion. Six studies provided data on a mathematical relationship between effective vaccine coverage in subgroups and reduction of influenza infection in a larger unvaccinated population. These supported a linear relationship when effective vaccine coverage in a subgroup population was between 20% and 80%. Three studies evaluating herd effect at a community level, specifically induced by vaccinating children, provided point estimates for fitting linear equations. The fitted linear equation for herd protection in the target population for vaccination (children) was slightly less conservative than a previously published equation for herd effects in general. The fitted linear equation for herd protection in the non-target population was considerably less conservative than the previously published equation. Conclusions This method of approximating herd effect requires simple adjustments to the annual baseline risk of influenza in static models: (1) for the age group targeted by the childhood vaccination strategy (i.e. children); and (2) for other age groups not targeted (e.g. adults and/or elderly). Two approximations provide a linear relationship between effective coverage and reduction in the risk of infection. The first is a conservative approximation, recommended as a base-case for cost-effectiveness evaluations. The second, fitted to data extracted from a structured literature review, provides a less conservative estimate of herd effect, recommended for sensitivity analyses. PMID:23339290
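
    Fitting a linear herd-effect approximation to literature point estimates by minimizing the sum of squared residuals is a one-line least-squares fit, as sketched below; the coverage/effect pairs are placeholders, not the values extracted in the review.

      # Least-squares fit of a linear herd-effect approximation:
      # risk_reduction ~= a * effective_coverage + b  (placeholder data points).
      import numpy as np

      coverage  = np.array([0.2, 0.4, 0.6, 0.8])      # effective vaccine coverage
      reduction = np.array([0.15, 0.33, 0.48, 0.70])  # observed risk reduction

      a, b = np.polyfit(coverage, reduction, deg=1)   # minimizes squared residuals
      print(f"reduction ~ {a:.2f} * coverage + {b:.2f}")
      print("predicted reduction at 50% coverage:", round(a * 0.5 + b, 3))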

  15. From functional structure to packaging: full-printing fabrication of a microfluidic chip.

    PubMed

    Zheng, Fengyi; Pu, Zhihua; He, Enqi; Huang, Jiasheng; Yu, Bocheng; Li, Dachao; Li, Zhihong

    2018-05-24

    This paper presents a concept of a full-printing methodology aiming at convenient and fast fabrication of microfluidic devices. For the first time, we achieved a microfluidic biochemical sensor with all functional structures fabricated by inkjet printing, including electrodes, immobilized enzymes, microfluidic components and packaging. With this cost-effective and rapid process, the method makes quick model validation of a novel lab-on-chip system possible. In this study, a three-electrode electrochemical system was integrated successfully with glucose oxidase immobilization gel and sealed in an ice channel, forming a disposable microfluidic sensor for glucose detection. This fully printed chip was characterized and showed good sensitivity and a linear response at low glucose concentrations (0-10 mM). With the aid of automatic equipment, the fully printed sensor can be mass-produced at low cost.

  16. Health information technology impact on productivity.

    PubMed

    Eastaugh, Steven R

    2012-01-01

    Managers seek the greatest output for the least input, balancing all factors of care delivery to accomplish the most with the smallest resource effort. Documentation of actual health information technology (HIT) cost savings has been elusive. Information technology and linear programming help to control hospital costs without harming service quality or staff morale. This study presents production function results from a study of hospital output during the period 2008-2011. The results suggest that productivity varies widely among the 58 hospitals studied as a function of staffing patterns, methods of organization, and the degree of reliance on information support systems. Financial incentives help to enhance productivity: incentive pay for staff based on actual productivity gains is associated with improved productivity. HIT can enhance the marginal value product of nurses and staff, so that they concentrate their workday on patient care activities. The implementation of electronic health records (EHR) was associated with a 1.6 percent improvement in productivity.

  17. An Efficient Scheduling Scheme on Charging Stations for Smart Transportation

    NASA Astrophysics Data System (ADS)

    Kim, Hye-Jin; Lee, Junghoon; Park, Gyung-Leen; Kang, Min-Jae; Kang, Mikyung

    This paper proposes a reservation-based scheduling scheme by which a charging station decides the service order of multiple requests, aiming to improve the satisfaction of electric vehicle users. The proposed scheme makes it possible for a customer to reduce the charging cost and waiting time, while a station can increase the number of clients it can serve. A linear rank function is defined based on estimated arrival time, waiting time bound, and the amount of needed power, reducing the scheduling complexity. On receiving requests from clients, the charging station decides the charging order using the rank function and then replies to each requester with the waiting time and cost it can guarantee. Each requester can then decide whether to charge at that station or try another one. This scheduler can evolve to integrate new pricing policies and services, enriching the electric vehicle transport system.
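
    A minimal sketch of such a rank-based ordering is shown below; the weights of the linear rank function and the request data are placeholders chosen for illustration, since the paper's actual coefficients are not reproduced here.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ChargeRequest:
        vehicle_id: str
        est_arrival: float    # estimated arrival time (hours from now)
        wait_bound: float     # maximum waiting time the customer accepts (hours)
        energy_needed: float  # requested energy (kWh)

    # Placeholder weights: smaller values of each attribute lower the rank,
    # and a lower rank means the request is served earlier.
    W_ARRIVAL, W_WAIT, W_ENERGY = 1.0, 0.5, 0.02

    def rank(r: ChargeRequest) -> float:
        """Linear rank function over arrival time, waiting bound and needed power."""
        return (W_ARRIVAL * r.est_arrival
                + W_WAIT * r.wait_bound
                + W_ENERGY * r.energy_needed)

    requests = [
        ChargeRequest("EV-1", est_arrival=0.5, wait_bound=1.0, energy_needed=20.0),
        ChargeRequest("EV-2", est_arrival=0.2, wait_bound=2.0, energy_needed=35.0),
        ChargeRequest("EV-3", est_arrival=0.8, wait_bound=0.5, energy_needed=10.0),
    ]

    # The station serves requests in increasing rank order and can quote each
    # requester the waiting time implied by its position in the queue.
    for r in sorted(requests, key=rank):
        print(f"{r.vehicle_id}: rank = {rank(r):.2f}")
    ```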

  18. Analysis, design, and testing of a low cost, direct force command linear proof mass actuator for structural control

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Shelley, Stuart; Jacobson, Mark

    1993-01-01

    In this paper, the design, analysis, and test of a low cost, linear proof mass actuator for vibration control is presented. The actuator is based on a linear induction coil from a large computer disk drive. Such disk drives are readily available and provide the linear actuator, current feedback amplifier, and power supply for a highly effective, yet inexpensive, experimental laboratory actuator. The device is implemented as a force command input system, and the performance is virtually the same as other, more sophisticated, linear proof mass systems.

  19. The Design-To-Cost Manifold

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1990-01-01

    Design-to-cost is a popular technique for controlling costs. Although qualitative techniques exist for implementing design to cost, quantitative methods are sparse. In the launch vehicle and spacecraft engineering process, whether to minimize mass is usually an issue, and the lack of quantification leads to arguments on both sides. This paper presents a mathematical technique which quantifies both the design-to-cost process and the mass/complexity issue. Parametric cost analysis generates and applies mathematical formulas called cost estimating relationships. In their most common forms, they are continuous and differentiable. This property permits the application of the mathematics of differentiable manifolds. Although the terminology sounds formidable, the application of the techniques requires only a knowledge of linear algebra and ordinary differential equations, common subjects in undergraduate scientific and engineering curricula. When the cost c is expressed as a differentiable function of n system metrics, setting the cost c to be a constant generates an (n-1)-dimensional subspace of the space of system metrics such that any set of metric values in that space satisfies the constant design-to-cost criterion. This space is a differentiable manifold upon which all mathematical properties of a differentiable manifold may be applied. One important property is that an easily implemented system of ordinary differential equations exists which permits optimization of any function of the system metrics, mass for example, over the design-to-cost manifold. A dual set of equations defines the directions of maximum and minimum cost change. A simplified approximation of the PRICE H(TM) production cost model is used to generate this set of differential equations over [mass, complexity] space. The equations are solved in closed form to obtain the one-dimensional design-to-cost trade and design-for-cost spaces. Preliminary results indicate that cost is relatively insensitive to changes in mass and that the reduction of complexity, both in the manufacturing process and of the spacecraft, is dominant in reducing cost.
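
    The idea of moving along a constant-cost manifold while reducing mass can be illustrated with a toy power-law cost estimating relationship; this is not the PRICE H model, and the exponents and step sizes below are arbitrary.

    ```python
    import numpy as np

    # Toy cost estimating relationship over [mass, complexity] space (NOT PRICE H):
    # c(m, k) = a * m**b1 * k**b2, continuous and differentiable.
    a, b1, b2 = 50.0, 0.7, 1.4

    def cost(x):
        m, k = x
        return a * m**b1 * k**b2

    def grad_cost(x):
        m, k = x
        return np.array([a * b1 * m**(b1 - 1.0) * k**b2,
                         a * b2 * m**b1 * k**(b2 - 1.0)])

    x = np.array([1000.0, 3.0])     # initial design: mass (kg), complexity (dimensionless)
    target_cost = cost(x)
    d_mass = np.array([-1.0, 0.0])  # direction that reduces mass

    # Trade mass against complexity at constant cost: project the mass-reduction
    # direction onto the tangent of the level set c(m, k) = target_cost, then apply
    # a small Newton correction back onto the level set.
    for _ in range(200):
        g = grad_cost(x)
        tangent = d_mass - (d_mass @ g) / (g @ g) * g
        x = x + 0.5 * tangent
        g = grad_cost(x)
        x = x - (cost(x) - target_cost) / (g @ g) * g

    print(f"mass = {x[0]:.1f}, complexity = {x[1]:.3f}, "
          f"cost = {cost(x):.1f} (target {target_cost:.1f})")
    ```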

  20. Controlling for endogeneity in attributable costs of vancomycin-resistant enterococci from a Canadian hospital.

    PubMed

    Lloyd-Smith, Patrick

    2017-12-01

    Decisions regarding the optimal provision of infection prevention and control resources depend on accurate estimates of the attributable costs of health care-associated infections. This is challenging given the skewed nature of health care cost data and the endogeneity of health care-associated infections. The objective of this study is to determine the hospital costs attributable to vancomycin-resistant enterococci (VRE) while accounting for endogeneity. This study builds on an attributable cost model from a retrospective cohort study of 1,292 patients admitted to an urban hospital in Vancouver, Canada. Attributable hospital costs were estimated with multivariate generalized linear models (GLMs). To account for endogeneity, a control function approach was used. The analysis sample included 217 patients with health care-associated VRE. In the standard GLM, the costs attributable to VRE are $17,949 (SEM, $2,993). However, accounting for endogeneity, the attributable costs were estimated to range from $14,706 (SEM, $7,612) to $42,101 (SEM, $15,533). Across all model specifications, attributable costs are 76% higher on average when controlling for endogeneity. VRE was independently associated with increased hospital costs, and controlling for endogeneity led to higher attributable cost estimates.
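
    A rough sketch of the two-stage control function idea, using simulated data and the statsmodels library, is given below; the instrument, the data-generating process, and the Gamma/log-link choice are assumptions made for illustration, not details taken from the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical data: an instrument z (e.g. ward-level colonization pressure),
    # a binary exposure vre, and skewed hospital costs. Purely illustrative.
    z = rng.normal(size=n)
    u = rng.normal(size=n)                       # unobserved severity (source of endogeneity)
    vre = (0.8 * z + u + rng.normal(size=n) > 0).astype(float)
    cost = np.exp(8.5 + 0.4 * vre + 0.6 * u + rng.normal(scale=0.3, size=n))

    # Stage 1: model the endogenous exposure and keep the residual.
    stage1 = sm.OLS(vre, sm.add_constant(z)).fit()
    cf_resid = vre - stage1.fittedvalues

    # Stage 2 (control function): add the stage-1 residual as a regressor in a
    # Gamma GLM with log link, a common choice for skewed cost data.
    X = sm.add_constant(pd.DataFrame({"vre": vre, "cf_resid": cf_resid}))
    glm = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    print(glm.summary())
    ```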

  1. Mitigation of epidemics in contact networks through optimal contact adaptation *

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2013-01-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing the total number of infection cases and minimizing the reduction of contact weights. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamically weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show the importance of awareness of the infection level at which mitigation strategies are effectively applied to the contact weights. PMID:23906209

  2. Mitigation of epidemics in contact networks through optimal contact adaptation.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2013-08-01

    This paper presents an optimal control problem formulation to minimize the total number of infection cases during the spread of susceptible-infected-recovered (SIR) epidemics in contact networks. In the new approach, contact weights are reduced among nodes and a global minimum contact level is preserved in the network. In addition, the infection cost and the cost associated with the contact reduction are linearly combined in a single objective function. Hence, the optimal control formulation addresses the tradeoff between minimizing the total number of infection cases and minimizing the reduction of contact weights. Using Pontryagin's theorem, the obtained solution is a unique candidate representing the dynamically weighted contact network. To find a near-optimal solution in a decentralized way, we propose two heuristics, based on a Bang-Bang control function and on a piecewise nonlinear control function, respectively. We perform extensive simulations to evaluate the two heuristics on different networks. Our results show that the piecewise nonlinear control function outperforms the well-known Bang-Bang control function in minimizing both the total number of infection cases and the reduction of contact weights. Finally, our results show the importance of awareness of the infection level at which mitigation strategies are effectively applied to the contact weights.

  3. Cost analysis of incidental durotomy in spine surgery.

    PubMed

    Nandyala, Sreeharsha V; Elboghdady, Islam M; Marquez-Lara, Alejandro; Noureldin, Mohamed N B; Sankaranarayanan, Sriram; Singh, Kern

    2014-08-01

    Retrospective database analysis. To characterize the consequences of an incidental durotomy with regard to perioperative complications and total hospital costs. There is a paucity of data regarding how an incidental durotomy and its associated complications may relate to total hospital costs. The Nationwide Inpatient Sample database was queried from 2008 to 2011. Patients who underwent cervical or lumbar decompression and/or fusion procedures were identified, stratified by approach, and separated into cohorts based on a documented intraoperative incidental durotomy. Patient demographics, comorbidities (Charlson Comorbidity Index), length of hospital stay, perioperative outcomes, and costs were assessed. Analysis of covariance and multivariate linear regression were used to assess the adjusted mean costs of hospitalization as a function of durotomy. The incidental durotomy rates in cervical and lumbar spine surgery are 0.4% and 2.9%, respectively. Patients with an incidental durotomy incurred a longer hospitalization and a greater incidence of perioperative complications including hematoma and neurological injury (P < 0.001). Regression analysis demonstrated that a cervical durotomy and its postoperative sequelae contributed an additional adjusted $7638 (95% confidence interval, 6489-8787; P < 0.001) to the total hospital costs. Similarly, a lumbar durotomy contributed an additional adjusted $2412 (95% confidence interval, 1920-2902; P < 0.001) to the total hospital costs. The approach-specific procedural groups demonstrated similar discrepancies in the mean total hospital costs as a function of durotomy. This analysis of the Nationwide Inpatient Sample database demonstrates that incidental durotomies increase hospital resource utilization and costs. In addition, it seems that a cervical durotomy and its associated complications carry a greater financial burden than a lumbar durotomy. Further studies are warranted to investigate the long-term financial implications of incidental durotomies in spine surgery and to reduce the costs associated with this complication. Level of Evidence: 3.

  4. Ride comfort control in large flexible aircraft. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, M. E.

    1971-01-01

    The problem of ameliorating the discomfort of passengers on a large air transport subject to flight disturbances is examined. The longitudinal dynamics of the aircraft, including the effects of body flexing, are developed in terms of linear, constant-coefficient differential equations in state variables. A cost functional, penalizing the rigid-body displacements and flexure accelerations over the surface of the aircraft, is formulated as a quadratic form. The resulting control problem, to minimize the cost subject to the state equation constraints, is of a class whose solutions are well known. The feedback gains for the optimal controller are calculated digitally, and the resulting autopilot is simulated on an analog computer and its performance evaluated.
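
    The quadratic-cost/linear-dynamics structure described above is the standard LQR setting, which can be sketched in a few lines; the system matrices and weights below are a toy two-state example, not the flexible-aircraft model from the thesis.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy linear model x' = A x + B u (values are illustrative only).
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    B = np.array([[0.0],
                  [1.0]])

    # Quadratic cost J = integral of (x'Qx + u'Ru) dt, penalizing displacement
    # more heavily than rate, with a small control-effort penalty.
    Q = np.diag([10.0, 1.0])
    R = np.array([[0.1]])

    # Solve the continuous-time algebraic Riccati equation and form the optimal gain.
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    print("optimal feedback gain K =", K)

    # Closed-loop dynamics x' = (A - B K) x; eigenvalues should lie in the left half-plane.
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```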

  5. Pricing policy for declining demand using item preservation technology.

    PubMed

    Khedlekar, Uttam Kumar; Shukla, Diwakar; Namdeo, Anubhav

    2016-01-01

    We have designed an inventory model for seasonal products in which deterioration can be controlled by investment in item preservation technology. Demand for the product is considered price sensitive and decreases linearly. This study has shown that the profit is a concave function of the optimal selling price, the replenishment time and the preservation cost parameter. We simultaneously determined the optimal selling price of the product, the replenishment cycle and the cost of item preservation technology. Additionally, this study has shown that there exist an optimal selling price and an optimal preservation investment that maximize the profit for every business set-up. Finally, the model is illustrated by numerical examples and a sensitivity analysis of the optimal solution with respect to the major parameters.

  6. Determination of optimum values for maximizing the profit in bread production: Daily bakery Sdn Bhd

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Sim, Raymond

    2015-02-01

    An integer programming problem is a mathematical optimization or feasibility problem in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integrality constraints) are linear. ILP has many applications in industrial production, including job-shop modelling. A typical objective is to maximize total production without exceeding the available resources. In some cases this can be expressed as a linear program, but the variables must be constrained to be integers. It is concerned with optimizing a linear function subject to a set of linear equality and inequality constraints, and has been used to solve optimization problems in many industries such as banking, nutrition, agriculture, and baking. The main purpose of this study is to formulate the best combination of ingredients for producing different types of bread at Daily Bakery in order to gain maximum profit. This study also examines the sensitivity of the solution to changes in the profit and the cost of each ingredient. The optimum result obtained from QM software is RM 65,377.29 per day. This study will benefit Daily Bakery and other similar industries: by formulating the ingredient combination explicitly, the bakery can easily determine its total daily profit from bread production.
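
    A minimal mixed-integer linear program of this kind can be set up with SciPy; the profits, ingredient usages and stock levels below are invented for illustration and are unrelated to the Daily Bakery data.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Illustrative data only: profit per batch of two bread types and the amount
    # of three shared ingredients (flour, sugar, yeast) each batch consumes.
    profit = np.array([12.0, 18.0])            # RM per batch of bread A, bread B
    usage = np.array([[2.0, 3.0],              # flour (kg) per batch
                      [0.5, 1.0],              # sugar (kg) per batch
                      [0.1, 0.2]])             # yeast (kg) per batch
    stock = np.array([200.0, 60.0, 12.0])      # ingredients available per day

    # milp minimizes, so negate the profit; batches must be non-negative integers.
    res = milp(c=-profit,
               constraints=LinearConstraint(usage, ub=stock),
               integrality=np.ones(2),
               bounds=Bounds(lb=0))
    print("batches per day:", res.x, "max daily profit (RM):", -res.fun)
    ```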

  7. Communication Avoiding and Overlapping for Numerical Linear Algebra

    DTIC Science & Technology

    2012-05-08

    To scale numerical linear algebra problems to future exascale systems, communication cost must be avoided or overlapped, since it will continue to grow relative to the cost of computation. Communication-avoiding 2.5D algorithms improve scalability by reducing communication. With exascale computing as the long-term goal, the community needs to develop techniques for avoiding and overlapping communication.

  8. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, K runs of the forward problem are required, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best. PMID:25558113
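
    The key computational trick, avoiding explicit formation of the Jacobian by supplying Jacobian-vector products to an iterative solver, can be sketched with a finite-difference product; the toy forward model below is a stand-in, not the PCGA implementation.

    ```python
    import numpy as np

    def jvp(forward, x, v, eps=1e-6):
        """Matrix-free Jacobian-vector product J(x) @ v via a forward difference,
        so the m-by-n Jacobian never has to be formed or stored."""
        return (forward(x + eps * v) - forward(x)) / eps

    # Toy nonlinear forward model standing in for, e.g., a tomography solver.
    def forward(x):
        return np.array([x[0]**2 + x[1],
                         np.sin(x[1]) + 0.5 * x[0]])

    x = np.array([1.0, 2.0])
    v = np.array([0.3, -0.7])
    print("J(x) @ v  ~=", jvp(forward, x, v))

    # Inside a Gauss-Newton iteration, an iterative linear solver (e.g. CG or LSQR)
    # only needs such products, so each iteration costs a small number K of forward
    # runs instead of the m*n work of building the full Jacobian.
    ```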

  9. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that utilizes a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. For each iteration, K runs of the forward problem are required, where K is not just much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems of very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which this method performs best.

  10. Spectral simplicity of apparent complexity. I. The nondiagonalizable metadynamics of prediction

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    Virtually all questions that one can ask about the behavioral and structural complexity of a stochastic process reduce to a linear algebraic framing of a time evolution governed by an appropriate hidden-Markov process generator. Each type of question—correlation, predictability, predictive cost, observer synchronization, and the like—induces a distinct generator class. Answers are then functions of the class-appropriate transition dynamic. Unfortunately, these dynamics are generically nonnormal, nondiagonalizable, singular, and so on. Tractably analyzing these dynamics relies on adapting the recently introduced meromorphic functional calculus, which specifies the spectral decomposition of functions of nondiagonalizable linear operators, even when the function poles and zeros coincide with the operator's spectrum. Along the way, we establish special properties of the spectral projection operators that demonstrate how they capture the organization of subprocesses within a complex system. Circumventing the spurious infinities of alternative calculi, this leads in the sequel, Part II [P. M. Riechers and J. P. Crutchfield, Chaos 28, 033116 (2018)], to the first closed-form expressions for complexity measures, couched either in terms of the Drazin inverse (negative-one power of a singular operator) or the eigenvalues and projection operators of the appropriate transition dynamic.

  11. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
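
    A linear program with a consistency-style constraint can be sketched as below; the ingredient prices, nutrient contents, requirements, and the starch ceiling standing in for the swelling-based consistency constraint are all invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical ingredient data (per 100 g): cost, energy (kcal), protein (g),
    # and starch (g). The real study uses local Mozambican ingredients and a
    # swelling-based consistency constraint; the numbers here are purely illustrative.
    ingredients = ["maize flour", "bean flour", "oil", "sugar"]
    cost    = np.array([0.05, 0.12, 0.20, 0.08])   # currency units per 100 g
    energy  = np.array([360., 340., 880., 400.])
    protein = np.array([9., 22., 0., 0.])
    starch  = np.array([70., 45., 0., 0.])

    # Minimize cost subject to nutrient floors and a starch ceiling that stands in
    # for the porridge-consistency constraint (too much starch makes it too thick).
    A_ub = np.vstack([-energy, -protein, starch])
    b_ub = np.array([-900., -15., 60.])            # >=900 kcal, >=15 g protein, <=60 g starch
    res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")

    for name, x in zip(ingredients, res.x):
        print(f"{name}: {100 * x:.0f} g")
    print(f"minimum cost: {res.fun:.3f}")
    ```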

  12. Analysis of Seasonal Chlorophyll-a Using An Adjoint Three-Dimensional Ocean Carbon Cycle Model

    NASA Astrophysics Data System (ADS)

    Tjiputra, J.; Winguth, A.; Polzin, D.

    2004-12-01

    The misfit between numerical ocean model and observations can be reduced using data assimilation. This can be achieved by optimizing the model parameter values using adjoint model. The adjoint model minimizes the model-data misfit by estimating the sensitivity or gradient of the cost function with respect to initial condition, boundary condition, or parameters. The adjoint technique was used to assimilate seasonal chlorophyll-a data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) satellite to a marine biogeochemical model HAMOCC5.1. An Identical Twin Experiment (ITE) was conducted to test the robustness of the model and the non-linearity level of the forward model. The ITE experiment successfully recovered most of the perturbed parameter to their initial values, and identified the most sensitive ecosystem parameters, which contribute significantly to model-data bias. The regional assimilations of SeaWiFS chlorophyll-a data into the model were able to reduce the model-data misfit (i.e. the cost function) significantly. The cost function reduction mostly occurred in the high latitudes (e.g. the model-data misfit in the northern region during summer season was reduced by 54%). On the other hand, the equatorial regions appear to be relatively stable with no strong reduction in cost function. The optimized parameter set is used to forecast the carbon fluxes between marine ecosystem compartments (e.g. Phytoplankton, Zooplankton, Nutrients, Particulate Organic Carbon, and Dissolved Organic Carbon). The a posteriori model run using the regional best-fit parameterization yields approximately 36 PgC/yr of global net primary productions in the euphotic zone.

  13. Extension of suboptimal control theory for flow around a square cylinder

    NASA Astrophysics Data System (ADS)

    Fujita, Yosuke; Fukagata, Koji

    2017-11-01

    We extend the suboptimal control theory to the control of flow around a square cylinder, which has no point symmetry in the impulse response from the wall, in contrast to the circular cylinders and spheres previously studied. The cost functions examined are the pressure drag (J1), the friction drag (J2), the squared difference between target pressure and wall pressure (J3) and the time-averaged dissipation (J4). The control input is assumed to be continuous blowing and suction on the cylinder wall, and the feedback sensors are assumed to cover the entire wall surface. The control law is derived so as to minimize the cost function under the constraint of the linearized Navier-Stokes equation, and the impulse response fields to be convolved with the instantaneous flow quantities are obtained numerically. The amplitude of the control input is fixed so that the maximum blowing/suction velocity is 40% of the freestream velocity. When J2 is used as the cost function, the friction drag is reduced as expected but the mean drag is found to increase. In contrast, when J1, J3, and J4 are used, the mean drag decreases by 21%, 12%, and 22%, respectively; in addition, vortex shedding is suppressed, which leads to a reduction of lift fluctuations.

  14. Adaptive control of stochastic linear systems with unknown parameters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ku, R. T.

    1972-01-01

    The problem of optimal control of linear discrete-time stochastic dynamical system with unknown and, possibly, stochastically varying parameters is considered on the basis of noisy measurements. It is desired to minimize the expected value of a quadratic cost functional. Since the simultaneous estimation of the state and plant parameters is a nonlinear filtering problem, the extended Kalman filter algorithm is used. Several qualitative and asymptotic properties of the open loop feedback optimal control and the enforced separation scheme are discussed. Simulation results via Monte Carlo method show that, in terms of the performance measure, for stable systems the open loop feedback optimal control system is slightly better than the enforced separation scheme, while for unstable systems the latter scheme is far better.

  15. An optimal policy for deteriorating items with time-proportional deterioration rate and constant and time-dependent linear demand rate

    NASA Astrophysics Data System (ADS)

    Singh, Trailokyanath; Mishra, Pandit Jagatananda; Pattanayak, Hadibandhu

    2017-12-01

    In this paper, an economic order quantity (EOQ) inventory model for a deteriorating item is developed with the following characteristics: (i) The demand rate is deterministic and two-staged, i.e., it is constant in first part of the cycle and linear function of time in the second part. (ii) Deterioration rate is time-proportional. (iii) Shortages are not allowed to occur. The optimal cycle time and the optimal order quantity have been derived by minimizing the total average cost. A simple solution procedure is provided to illustrate the proposed model. The article concludes with a numerical example and sensitivity analysis of various parameters as illustrations of the theoretical results.
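
    Numerically minimizing a total average cost over the cycle time, the basic computation behind such EOQ models, can be sketched as follows; the cost structure and parameter values are simplified placeholders rather than the paper's exact two-stage demand formulation.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    # Illustrative cost parameters (not the paper's model).
    K = 100.0      # ordering cost per cycle
    h = 0.5        # holding cost per unit per unit time
    c_d = 2.0      # cost per deteriorated unit
    D = 40.0       # demand rate (units per unit time)
    theta = 0.05   # deterioration-rate coefficient (time-proportional: theta * t)

    def total_average_cost(T):
        """Approximate cost per unit time for a cycle of length T: ordering cost,
        holding cost of the average inventory, plus losses to deterioration."""
        order_qty = D * T
        avg_inventory = order_qty / 2.0
        deteriorated = theta * avg_inventory * T        # rough first-order estimate
        return (K + h * avg_inventory * T + c_d * deteriorated) / T

    res = minimize_scalar(total_average_cost, bounds=(0.01, 10.0), method="bounded")
    print(f"optimal cycle time T* = {res.x:.3f}, optimal order quantity Q* = {D * res.x:.1f}")
    print(f"minimum total average cost = {res.fun:.2f}")
    ```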

  16. Grid-based lattice summation of electrostatic potentials by assembled rank-structured tensor approximation

    NASA Astrophysics Data System (ADS)

    Khoromskaia, Venera; Khoromskij, Boris N.

    2014-12-01

    Our recent method for low-rank tensor representation of sums of arbitrarily positioned electrostatic potentials discretized on a 3D Cartesian grid reduces the 3D tensor summation to operations involving only 1D vectors, while retaining linear complexity scaling in the number of potentials. Here, we introduce and study a novel tensor approach for fast and accurate assembled summation of a large number of lattice-allocated potentials represented on a 3D N × N × N grid, with computational requirements only weakly dependent on the number of summed potentials. It is based on assembled low-rank canonical tensor representations of the collected potentials using pointwise sums of shifted canonical vectors representing a single generating function, say the Newton kernel. For a sum of electrostatic potentials over an L × L × L lattice embedded in a box, the required storage scales linearly in the 1D grid size, O(N), while the numerical cost is estimated by O(NL). For periodic boundary conditions, the storage demand remains proportional to the 1D grid size of a unit cell, n = N/L, while the numerical cost reduces to O(N), which outperforms the FFT-based Ewald-type summation algorithms of complexity O(N³ log N). The complexity in the grid parameter N can be reduced even to the logarithmic scale O(log N) by using a data-sparse representation of canonical N-vectors via the quantics tensor approximation. As justification, we prove an upper bound on the quantics ranks for the canonical vectors in the overall lattice sum. The presented approach is beneficial in applications which require further functional calculus with the lattice potential, say, scalar products with a function, integration or differentiation, which can be performed easily in tensor arithmetic on large 3D grids with 1D cost. Numerical tests illustrate the performance of the tensor summation method and confirm the estimated bounds on the tensor ranks.

  17. Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity

    NASA Astrophysics Data System (ADS)

    Cheon, Jung Hee; Jarecki, Stanislaw; Seo, Jae Hong

    Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n > 2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection practical only for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both communication and computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n²k) bits of communication and Õ(n²k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust: it outputs the desired result even if some corrupted players leave during the execution of the protocol.

  18. An information-theoretic approach to motor action decoding with a reconfigurable parallel architecture.

    PubMed

    Craciun, Stefan; Brockmeier, Austin J; George, Alan D; Lam, Herman; Príncipe, José C

    2011-01-01

    Methods for decoding movements from neural spike counts using adaptive filters often rely on minimizing the mean-squared error. However, for non-Gaussian distribution of errors, this approach is not optimal for performance. Therefore, rather than using probabilistic modeling, we propose an alternate non-parametric approach. In order to extract more structure from the input signal (neuronal spike counts) we propose using minimum error entropy (MEE), an information-theoretic approach that minimizes the error entropy as part of an iterative cost function. However, the disadvantage of using MEE as the cost function for adaptive filters is the increase in computational complexity. In this paper we present a comparison between the decoding performance of the analytic Wiener filter and a linear filter trained with MEE, which is then mapped to a parallel architecture in reconfigurable hardware tailored to the computational needs of the MEE filter. We observe considerable speedup from the hardware design. The adaptation of filter weights for the multiple-input, multiple-output linear filters, necessary in motor decoding, is a highly parallelizable algorithm. It can be decomposed into many independent computational blocks with a parallel architecture readily mapped to a field-programmable gate array (FPGA) and scales to large numbers of neurons. By pipelining and parallelizing independent computations in the algorithm, the proposed parallel architecture has sublinear increases in execution time with respect to both window size and filter order.
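
    A batch version of MEE training for a linear filter can be sketched with a Parzen-window estimate of the error distribution; the data, kernel width, and step size below are illustrative choices, and the O(n²) pairwise sums show where the extra computational cost mentioned above arises.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative data: a linear mapping from neural features (e.g. spike counts)
    # to a movement signal, corrupted by heavy-tailed, non-Gaussian noise.
    n, d = 200, 5
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = X @ w_true + 0.3 * rng.standard_t(df=2, size=n)

    def information_potential_grad(w, X, y, sigma=1.5):
        """Gradient of the quadratic information potential of the errors;
        ascending it minimizes Renyi's quadratic error entropy (the MEE criterion)."""
        e = y - X @ w
        de = e[:, None] - e[None, :]                   # pairwise error differences e_i - e_j
        G = np.exp(-de**2 / (2 * sigma**2))            # Parzen (Gaussian) kernel
        dX = X[:, None, :] - X[None, :, :]             # x_i - x_j
        return ((G * de)[:, :, None] * dX).sum(axis=(0, 1)) / (n**2 * sigma**2)

    # Batch gradient ascent on the information potential; each step costs O(n^2).
    w = np.zeros(d)
    for _ in range(300):
        w += 0.5 * information_potential_grad(w, X, y)

    w_ls, *_ = np.linalg.lstsq(X, y, rcond=None)       # analytic least-squares (Wiener-like) solution
    print("true weights :", np.round(w_true, 3))
    print("MEE weights  :", np.round(w, 3))
    print("LS  weights  :", np.round(w_ls, 3))
    ```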

  19. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pireddu, Marina, E-mail: marina.pireddu@unimib.it

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called “Stretching Along the Paths” technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which is a positive topological entropy.

  20. Functional design of electrolytic biosensor

    NASA Astrophysics Data System (ADS)

    Gamage Preethichandra, D. M.; Mala Ekanayake, E. M. I.; Onoda, M.

    2017-11-01

    A novel amperometric biosensor based on conjugated polypyrrole (PPy) deposited on a Pt-modified ITO (indium tin oxide) conductive glass substrate and its performance are described. We present a method for developing a highly sensitive and low-cost nano-biosensor for blood glucose measurements. The proposed fabrication method decreases the cost of production significantly, as the amount of noble metals used is minimized. A nano-corrugated PPy substrate was developed through pulsed electrochemical deposition. The sensitivity achieved was 325 mA/(M·cm²) and the linear range of the developed sensor was 50-60 mmol/l. Electrophoresis is then applied to help load glucose oxidase (GOx) onto the PPy substrate. The main reason behind this high enzyme loading is the high electric field applied between the sensor surface (working electrode) and the counter electrode, which pushes the nano-scale enzyme particles floating in the phosphate buffer solution towards the substrate. The novel technique provides extremely high sensitivity and a very wide linear range for the enzyme (GOx), and it can therefore be concluded that this is a very good technique for loading enzymes onto conducting polymer substrates.

  1. Design and Validation of a Ten-Port Waveguide Reflectometer Sensor: Application to Efficiency Measurement and Optimization of Microwave-Heating Ovens

    PubMed Central

    Pedreño-Molina, Juan L.; Monzó-Cabrera, Juan; Lozano-Guerrero, Antonio; Toledo-Moreo, Ana

    2008-01-01

    This work presents the design, manufacturing process, calibration and validation of a new microwave ten-port waveguide reflectometer based on the use of neural networks. This low-cost novel device solves some of the shortcomings of previous reflectometers such as non-linear behavior of power sensors, noise presence and the complexity of the calibration procedure, which is often based on complex mathematical equations. These problems, which imply the reduction of the reflection coefficient measurement accuracy, have been overcome by using a higher number of probes than usual six-port configurations and by means of the use of Radial Basis Function (RBF) neural networks in order to reduce the influence of noise and non-linear processes over the measurements. Additionally, this sensor can be reconfigured whenever some of the eight coaxial power detectors fail, still providing accurate values in real time. The ten-port performance has been compared against a high-cost measurement instrument such as a vector network analyzer and applied to the measurement and optimization of energy efficiency of microwave ovens, with good results. PMID:27873961

  2. What linear programming contributes: world food programme experience with the "cost of the diet" tool.

    PubMed

    Frega, Romeo; Lanfranco, Jose Guerra; De Greve, Sam; Bernardini, Sara; Geniez, Perrine; Grede, Nils; Bloem, Martin; de Pee, Saskia

    2012-09-01

    Linear programming has been used for analyzing children's complementary feeding diets, for optimizing nutrient adequacy of dietary recommendations for a population, and for estimating the economic value of fortified foods. To describe and apply a linear programming tool ("Cost of the Diet") with data from Mozambique to determine what could be cost-effective fortification strategies. Based on locally assessed average household dietary needs, seasonal market prices of available food products, and food composition data, the tool estimates the lowest-cost diet that meets almost all nutrient needs. The results were compared with expenditure data from Mozambique to establish the affordability of this diet by quintiles of the population. Three different applications were illustrated: identifying likely "limiting nutrients," comparing cost effectiveness of different fortification interventions at the household level, and assessing economic access to nutritious foods. The analysis identified iron, vitamin B2, and pantothenic acid as "limiting nutrients." Under the Mozambique conditions, vegetable oil was estimated as a more cost-efficient vehicle for vitamin A fortification than sugar; maize flour may also be an effective vehicle to provide other constraining micronutrients. Multiple micronutrient fortification of maize flour could reduce the cost of the "lowest-cost nutritious diet" by 18%, but even this diet can be afforded by only 20% of the Mozambican population. Within the context of fortification, linear programming can be a useful tool for identifying likely nutrient inadequacies, for comparing fortification options in terms of cost effectiveness, and for illustrating the potential benefit of fortification for improving household access to a nutritious diet.

  3. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth order WENO scheme with adaptive order is well suited for solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. Note that, on a uniform mesh, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable with the sum of those for the three third order linear reconstructions, and is thus too heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is much heavier than on a uniform mesh. In order to overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.

  4. Flexible polyurethane foam modelling and identification of viscoelastic parameters for automotive seating applications

    NASA Astrophysics Data System (ADS)

    Deng, R.; Davies, P.; Bajaj, A. K.

    2003-05-01

    A hereditary model and a fractional derivative model for the dynamic properties of flexible polyurethane foams used in automotive seat cushions are presented. Non-linear elastic and linear viscoelastic properties are incorporated into these two models. A polynomial function of compression is used to represent the non-linear elastic behavior. The viscoelastic property is modelled by a hereditary integral with a relaxation kernel consisting of two exponential terms in the hereditary model and by a fractional derivative term in the fractional derivative model. The foam is used as the only viscoelastic component in a foam-mass system undergoing uniaxial compression. One-term harmonic balance solutions are developed to approximate the steady state response of the foam-mass system to the harmonic base excitation. System identification procedures based on the direct non-linear optimization and a sub-optimal method are formulated to estimate the material parameters. The effects of the choice of the cost function, frequency resolution of data and imperfections in experiments are discussed. The system identification procedures are also applied to experimental data from a foam-mass system. The performances of the two models for data at different compression and input excitation levels are compared, and modifications to the structure of the fractional derivative model are briefly explored. The role of the viscous damping term in both types of model is discussed.

  5. Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model

    NASA Astrophysics Data System (ADS)

    Toaha, S.; Azis, M. I.

    2018-03-01

    This paper studies a modified Leslie-Gower predator-prey population model. The model is stated as a system of first order differential equations and consists of one predator and one prey. A Holling type II predation function is considered in this model. The predator and prey populations are assumed to be beneficial, and the two populations are harvested with constant efforts. The existence and stability of the interior equilibrium point are analysed. The linearization method is used to obtain the linearized model, and the eigenvalues are used to determine the stability of the interior equilibrium point. From the analysis, we show that under certain conditions the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting efforts, a cost function, a revenue function, and a profit function are considered. The stable interior equilibrium point is then related to the maximum profit problem as well as to the net present value of revenues problem. We show that there exists a value of the efforts that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time while the benefit is maximized even though the populations are harvested with constant efforts.
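
    The linearize-and-check-eigenvalues step can be sketched numerically; the functional form and parameter values below are an illustrative modified Leslie-Gower system with Holling type II predation and constant harvesting, not the exact model or parameters of the paper.

    ```python
    import numpy as np
    from scipy.optimize import fsolve

    # Illustrative parameter values (made up for the demo).
    r1, r2 = 1.0, 0.8      # intrinsic growth rates of prey and predator
    K = 10.0               # prey carrying capacity
    a, b = 1.2, 1.5        # Holling type II attack rate and half-saturation constant
    s, k2 = 2.0, 2.0       # modified Leslie-Gower predator density dependence
    E1, E2 = 0.1, 0.05     # constant harvesting efforts

    def rhs(z):
        x, y = z
        dx = r1 * x * (1 - x / K) - a * x * y / (b + x) - E1 * x
        dy = r2 * y * (1 - s * y / (k2 + x)) - E2 * y
        return np.array([dx, dy])

    def jacobian(f, z, eps=1e-6):
        """Numerical Jacobian of f at z by central differences."""
        J = np.zeros((len(z), len(z)))
        for j in range(len(z)):
            e = np.zeros(len(z)); e[j] = eps
            J[:, j] = (f(z + e) - f(z - e)) / (2 * eps)
        return J

    # Locate an interior equilibrium (both populations positive) and linearize there.
    eq = fsolve(rhs, x0=[3.0, 2.0])
    eigs = np.linalg.eigvals(jacobian(rhs, eq))
    print("interior equilibrium:", np.round(eq, 3))
    print("eigenvalues:", np.round(eigs, 3))
    print("locally asymptotically stable:", bool(np.all(eigs.real < 0)))
    ```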

  6. Method for Household Refrigerators Efficiency Increasing

    NASA Astrophysics Data System (ADS)

    Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.

    2017-11-01

    The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research is performed using the simulation modelling method. The parameter optimization criteria are considered, the analysis of the objective functions is given, and the key factors of technical and economic optimization are discussed. The optimal solution in the multi-objective optimization of the system is found by minimizing a two-component objective vector, constructed by the Pareto method of linear weighted compromises from the objective functions of total capital costs and total operating costs. The problems are solved in the MathCAD environment. The results show that the values of the technical and economic parameters of air conditioning systems in regions adjacent to the optimal-solution regions already deviate considerably from the minimum values, and these deviations grow significantly, in both capital investment and operating costs, as the technical parameters move further from their optimal values. Producing and operating conditioners with parameters that deviate considerably from the optimal values leads to increased material and power costs. The research makes it possible to establish the boundaries of the region of optimal values for the technical and economic parameters in the design of air conditioning systems.

  7. Hospital costs estimation and prediction as a function of patient and admission characteristics.

    PubMed

    Ramiarina, Robert; Almeida, Renan Mvr; Pereira, Wagner Ca

    2008-01-01

    The present work analyzed the association between hospital costs and patient admission characteristics in a general public hospital in the city of Rio de Janeiro, Brazil. The unit costs method was used to estimate inpatient-day costs associated with specific hospital clinics. With this aim, three "cost centers" were defined in order to group direct and indirect expenses pertaining to the clinics. After the costs were estimated, a standard linear regression model was developed for correlating cost units and their putative predictors (the patient's gender and age, the admission type (urgent/elective), ICU admission (yes/no), blood transfusion (yes/no), the admission outcome (death/no death), the complexity of the medical procedures performed, and a risk-adjustment index). Data were collected for 3100 patients, January 2001-January 2003. Average inpatient costs across clinics ranged from (US$) 1135 [Orthopedics] to 3101 [Cardiology]. Costs increased with increases in the risk-adjustment index in all clinics, and the index was statistically significant in all clinics except Urology, General surgery, and Clinical medicine. The occupation rate was inversely correlated with costs, and age had no association with costs. The adjusted percentage of explained variance varied between 36.3% [Clinical medicine] and 55.1% [Thoracic surgery clinic]. The estimates are an important step towards the standardization of hospital cost calculation, especially for countries that lack formal hospital accounting systems.
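
    The regression step can be sketched with statsmodels on simulated admission-level data; the variables mirror the predictors listed above, but the values and coefficients are fabricated purely to show the mechanics, not to reproduce the study's estimates.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 300

    # Hypothetical admission-level data mimicking the predictors described above.
    df = pd.DataFrame({
        "age":         rng.integers(18, 90, n),
        "female":      rng.integers(0, 2, n),
        "urgent":      rng.integers(0, 2, n),
        "icu":         rng.integers(0, 2, n),
        "transfusion": rng.integers(0, 2, n),
        "died":        rng.binomial(1, 0.1, n),
        "complexity":  rng.uniform(0.5, 3.0, n),
        "risk_index":  rng.uniform(0.0, 1.0, n),
    })
    # Simulated inpatient-day cost rising with ICU stay, urgency and the risk index.
    df["cost_usd"] = np.exp(7.0 + 0.5 * df.icu + 0.3 * df.urgent
                            + 0.8 * df.risk_index + rng.normal(0, 0.3, n))

    model = smf.ols(
        "cost_usd ~ age + female + urgent + icu + transfusion + died"
        " + complexity + risk_index",
        data=df,
    ).fit()
    print(model.params.round(1))
    print("adjusted R-squared:", round(model.rsquared_adj, 3))
    ```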

  8. Four-Component Damped Density Functional Response Theory Study of UV/Vis Absorption Spectra and Phosphorescence Parameters of Group 12 Metal-Substituted Porphyrins.

    PubMed

    Fransson, Thomas; Saue, Trond; Norman, Patrick

    2016-05-10

    The influence of group 12 (Zn, Cd, Hg) metal substitution on the valence spectra and phosphorescence parameters of porphyrins (P) has been investigated in a relativistic setting. In order to obtain valence spectra, this study reports the first application of the damped linear response function, or complex polarization propagator, in the four-component density functional theory framework [as formulated in Villaume et al. J. Chem. Phys. 2010, 133, 064105]. It is shown that the steep increase in the density of states due to the inclusion of spin-orbit coupling yields only minor changes in the overall computational cost involved in the solution of the set of linear response equations. Comparing single-frequency to multifrequency spectral calculations, the number of iterations in the iterative linear equation solver per frequency grid point decreases monotonically from 30 to 0.74 as the number of frequency points goes from one to 19. The main heavy-atom effect on the UV/vis absorption spectra is indirect and attributed to the change of point group symmetry due to metal substitution, and it is noted that substitution with heavier atoms yields small red-shifts of the intense Soret band. Concerning phosphorescence parameters, the adoption of a four-component relativistic setting enables the calculation of such properties at linear order of response theory, so that higher-order response functions do not need to be considered; a real, conventional form of linear response theory has been used for the calculation of these parameters. For the substituted porphyrins, electronic coupling between the lowest triplet states is strong and results in theoretical estimates of lifetimes that are sensitive to the wave function and electron density parametrization. With this in mind, we report our best estimates of the phosphorescence lifetimes to be 460, 13.8, 11.2, and 0.00155 s for H2P, ZnP, CdP, and HgP, respectively, with the corresponding transition energies being 1.46, 1.50, 1.38, and 0.89 eV.

  9. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications.

    PubMed

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-05-12

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array.

  10. Low-Cost Nested-MIMO Array for Large-Scale Wireless Sensor Applications

    PubMed Central

    Zhang, Duo; Wu, Wen; Fang, Dagang; Wang, Wenqin; Cui, Can

    2017-01-01

    In modern communication and radar applications, large-scale sensor arrays have increasingly been used to improve the performance of a system. However, the hardware cost and circuit power consumption scale linearly with the number of sensors, which makes the whole system expensive and power-hungry. This paper presents a low-cost nested multiple-input multiple-output (MIMO) array, which is capable of providing O(2N²) degrees of freedom (DOF) with O(N) physical sensors. The sensor locations of the proposed array have closed-form expressions. Thus, the aperture size and number of DOF can be predicted as a function of the total number of sensors. Additionally, with the help of time-sequence-phase-weighting (TSPW) technology, only one receiver channel is required for sampling the signals received by all of the sensors, which is conducive to reducing the hardware cost and power consumption. Numerical simulation results demonstrate the effectiveness and superiority of the proposed array. PMID:28498329

  11. Reduced-cost linear-response CC2 method based on natural orbitals and natural auxiliary functions

    PubMed Central

    Mester, Dávid

    2017-01-01

    A reduced-cost density fitting (DF) linear-response second-order coupled-cluster (CC2) method has been developed for the evaluation of excitation energies. The method is based on the simultaneous truncation of the molecular orbital (MO) basis and the auxiliary basis set used for the DF approximation. For the reduction of the size of the MO basis, state-specific natural orbitals (NOs) are constructed for each excited state using the average of the second-order Møller–Plesset (MP2) and the corresponding configuration interaction singles with perturbative doubles [CIS(D)] density matrices. After removing the NOs of low occupation number, natural auxiliary functions (NAFs) are constructed [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], and the NAF basis is also truncated. Our results show that, for a triple-zeta basis set, about 60% of the virtual MOs can be dropped, while the size of the fitting basis can be reduced by a factor of five. This results in a dramatic reduction of the computational costs of the solution of the CC2 equations, which are in our approach about as expensive as the evaluation of the MP2 and CIS(D) density matrices. All in all, an average speedup of more than an order of magnitude can be achieved at the expense of a mean absolute error of 0.02 eV in the calculated excitation energies compared to the canonical CC2 results. Our benchmark calculations demonstrate that the new approach enables the efficient computation of CC2 excitation energies for excited states of all types of medium-sized molecules composed of up to 100 atoms with triple-zeta quality basis sets. PMID:28527453

  12. A rapid learning and dynamic stepwise updating algorithm for flat neural networks and the application to time-series prediction.

    PubMed

    Chen, C P; Wan, J Z

    1999-01-01

    A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly both when a new pattern is added and when a new enhancement node is added. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models which require more complex architectures and more costly training. The results indicate that the proposed model is very attractive for real-time processes.
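
    The essential point, that a functional-link (flat) network is linear in its output weights and can therefore be trained by one linear least-squares solve, can be sketched as follows; the expansion functions and data are arbitrary illustrative choices, and incremental updates (e.g. recursive least squares) follow from the same linear formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy nonlinear target: the functional-link idea expands the input with fixed
    # nonlinear "enhancement" features, after which the output weights are linear.
    x = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * x[:, 0]) + 0.1 * rng.normal(size=200)

    def expand(x):
        """Fixed functional expansion (illustrative choice of basis functions)."""
        return np.hstack([x, np.sin(np.pi * x), np.cos(np.pi * x), x**2, np.ones_like(x)])

    H = expand(x)
    w, *_ = np.linalg.lstsq(H, y, rcond=None)     # one-shot linear least-squares solve

    x_test = np.linspace(-1, 1, 5).reshape(-1, 1)
    print("predictions:", expand(x_test) @ w)
    print("targets    :", np.sin(3 * x_test[:, 0]))
    ```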

  13. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  14. Tire Force Estimation using a Proportional Integral Observer

    NASA Astrophysics Data System (ADS)

    Farhat, Ahmad; Koenig, Damien; Hernandez-Alcantara, Diana; Morales-Menendez, Ruben

    2017-01-01

    This paper addresses a method for detecting critical stability situations in the lateral vehicle dynamics by estimating the non-linear part of the tire forces. These forces indicate the road-holding performance of the vehicle. The estimation method is based on a robust fault detection and estimation approach which minimizes the sensitivity of the residual to disturbances and uncertainties. It consists in the design of a Proportional Integral Observer (PIO) that minimizes the well-known H∞ norm for worst-case uncertainty and disturbance attenuation, combined with a transient response specification. This multi-objective problem is formulated as a Linear Matrix Inequality (LMI) feasibility problem in which a cost function subject to LMI constraints is minimized. This approach is employed to generate a set of switched robust observers for uncertain switched systems, where the convergence of the observer is ensured using a Multiple Lyapunov Function (MLF). Since the forces to be estimated cannot be physically measured, a simulation scenario with CarSim is presented to illustrate the developed method.

  15. Optimization model of vaccination strategy for dengue transmission

    NASA Astrophysics Data System (ADS)

    Widayani, H.; Kallista, M.; Nuraini, N.; Sari, M. Y.

    2014-02-01

    Dengue fever is an emerging tropical and subtropical disease caused by dengue virus infection. Vaccination should be carried out to prevent epidemics in a population. The host-vector model is modified to include a vaccination factor that prevents the occurrence of epidemic dengue in a population. An optimal vaccination strategy using a non-linear objective function is proposed. Genetic algorithm programming techniques are combined with the fourth-order Runge-Kutta method to construct the optimal vaccination. In this paper, an appropriate vaccination strategy, obtained by minimizing the cost function so as to reduce the size of the epidemic, is analyzed. Numerical simulations for some specific cases of the vaccination strategy are shown.
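
    A minimal sketch of the numerical machinery described above: a classical fourth-order Runge-Kutta integrator applied to a strongly simplified SIR-type model with a constant vaccination rate, and a non-linear cost that a genetic algorithm would minimize over candidate strategies. The model, parameter values, and cost weights are illustrative assumptions and are not the host-vector model or costs used in the paper.

```python
import numpy as np

def rk4_step(f, t, x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2 * k1)
    k3 = f(t + h/2, x + h/2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

beta, gamma = 0.4, 0.1          # transmission and recovery rates (assumed)

def sir_vacc(t, x, v):
    """Simplified SIR host model with vaccination rate v (fraction of susceptibles per day)."""
    S, I, R = x
    dS = -beta * S * I - v * S
    dI = beta * S * I - gamma * I
    dR = gamma * I + v * S
    return np.array([dS, dI, dR])

def cost(v, w_inf=1.0, w_vacc=50.0, T=120, h=0.5):
    """Non-linear objective: infection burden plus quadratic vaccination cost."""
    x = np.array([0.99, 0.01, 0.0])
    J, t = 0.0, 0.0
    while t < T:
        x = rk4_step(lambda tt, xx: sir_vacc(tt, xx, v), t, x, h)
        J += h * (w_inf * x[1] + w_vacc * v**2)
        t += h
    return J

# a genetic algorithm would search over v; here we just scan a coarse grid
for v in [0.0, 0.01, 0.02, 0.05]:
    print(f"v = {v:.2f}  cost = {cost(v):.3f}")
```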

  16. Analog "neuronal" networks in early vision.

    PubMed Central

    Koch, C; Marroquin, J; Yuille, A

    1986-01-01

    Many problems in early vision can be formulated in terms of minimizing a cost function. Examples are shape from shading, edge detection, motion analysis, structure from motion, and surface interpolation. As shown by Poggio and Koch [Poggio, T. & Koch, C. (1985) Proc. R. Soc. London, Ser. B 226, 303-323], quadratic variational problems, an important subset of early vision tasks, can be "solved" by linear, analog electrical, or chemical networks. However, in the presence of discontinuities, the cost function is nonquadratic, raising the question of designing efficient algorithms for computing the optimal solution. Recently, Hopfield and Tank [Hopfield, J. J. & Tank, D. W. (1985) Biol. Cybern. 52, 141-152] have shown that networks of nonlinear analog "neurons" can be effective in computing the solution of optimization problems. We show how these networks can be generalized to solve the nonconvex energy functionals of early vision. We illustrate this approach by implementing a specific analog network, solving the problem of reconstructing a smooth surface from sparse data while preserving its discontinuities. These results suggest a novel computational strategy for solving early vision problems in both biological and real-time artificial vision systems. PMID:3459172

  17. Short-term benefits from central unit commitment and dispatch: Application to the Southern African Power Pool

    NASA Astrophysics Data System (ADS)

    Bowen, Brian Hugh

    1998-12-01

    Electricity utilities in the Southern African region are conscious that gains could be made from more economically efficient trading but have had no tools with which to analyze the effects of a change in policy. This research is the first to provide transparent quantitative techniques to quantify the impacts of new trading arrangements in this region. The study poses a model of the recently formed Southern African Power Pool, built with the collaboration of the region's national utilities to represent each country's demand and generation/transmission system. The multi-region model includes commitment and dispatch from diverse hydrothermal sources over a vast area. Economic gains are determined by comparing the total costs under free-trade conditions with those from the existing fixed-trade bilateral arrangements. The objective function minimizes production costs needed to meet total demand, subject to each utility's constraints for thermal and hydro generation, transmission, load balance and losses. Linearized thermal cost functions are used along with linearized input output hydropower plant curves and hydrothermal on/off status variables to formulate a mixed-integer programming problem. Results from the modeling show that moving to optimal trading patterns could save between 70 million and 130 million per year. With free-trade policies the quantity of power flow between utilities is doubled and maximum usage is made of the hydropower stations thus reducing costs and fuel use. In electricity exporting countries such as Zambia and Mozambique gains from increased trade are achieved which equal 16% and 18% respectively of the value of their total manufactured exports. A sensitivity analysis is conducted on the possible effects of derating generation, derating transmission and reducing water inflows but gains remain large. Maximum economic gains from optimal trading patterns can be achieved by each country allowing centralized control through the newly founded SAPP coordination center. Using standard mixed integer programming solvers makes the cost of such modeling activity easily affordable to each utility in the Southern African pool. This research provides the utilities with the modeling tools to quantify the gains from increased trade and thereby furthers a move towards greater efficiency, faster economic growth and reduced use of fossil fuels.

  18. A systematic way for the cost reduction of density fitting methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kállay, Mihály, E-mail: kallay@mail.bme.hu

    2014-12-28

    We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting out of the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
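
    The construction described above can be sketched in a few lines of linear algebra: the natural auxiliary functions are the eigenvectors of the Gram matrix of the three-center integrals (equivalently, the left singular vectors of the integral matrix), and truncation keeps only those with large eigenvalues. The matrix sizes, the synthetic integral values, and the truncation threshold below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

naux, npair = 300, 2000                      # sizes are illustrative
# J[P, mu*nu] stands in for the three-center integrals (P | mu nu) in some DF basis;
# a decaying scale mimics fitting functions of decreasing importance
J = rng.normal(size=(naux, npair)) * np.geomspace(1.0, 1e-4, naux)[:, None]

W = J @ J.T                                  # naux x naux Gram matrix, positive semi-definite
eps, U = np.linalg.eigh(W)                   # eigenvectors = candidate NAFs
order = np.argsort(eps)[::-1]                # sort by decreasing eigenvalue
eps, U = eps[order], U[:, order]

tau = 1e-2 * eps[0]                          # truncation threshold (assumed)
keep = eps > tau
print(f"kept {keep.sum()} of {naux} auxiliary functions")

J_naf = U[:, keep].T @ J                     # integrals re-expressed in the truncated NAF basis
```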

  19. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

    In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Following this idea, an identification procedure is framed as an optimization problem in which the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, in which the solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple yet effective penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic materials. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  20. Parabolic discounting of monetary rewards by physical effort.

    PubMed

    Hartmann, Matthias N; Hager, Oliver M; Tobler, Philippe N; Kaiser, Stefan

    2013-11-01

    When humans and other animals make decisions in their natural environments prospective rewards have to be weighed against costs. It is well established that increasing costs lead to devaluation or discounting of reward. While our knowledge about discount functions for time and probability costs is quite advanced, little is known about how physical effort discounts reward. In the present study we compared three different models in a binary choice task in which human participants had to squeeze a handgrip to earn monetary rewards: a linear, a hyperbolic, and a parabolic model. On the group as well as the individual level, the concave parabolic model explained most variance of the choice data, thus contrasting with the typical hyperbolic discounting of reward value by delay. Research on effort discounting is not only important to basic science but also holds the potential to quantify aberrant motivational states in neuropsychiatric disorders. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. A mechatronics platform to study prosthetic hand control using EMG signals.

    PubMed

    Geethanjali, P

    2016-09-01

    In this paper, a low-cost mechatronics platform for the design and development of robotic hands as well as a surface electromyogram (EMG) pattern recognition system is proposed. This paper also explores various EMG classification techniques using a low-cost electronics system in prosthetic hand applications. The proposed platform involves the development of a four channel EMG signal acquisition system; pattern recognition of acquired EMG signals; and development of a digital controller for a robotic hand. Four-channel surface EMG signals, acquired from ten healthy subjects for six different movements of the hand, were used to analyse pattern recognition in prosthetic hand control. Various time domain features were extracted and grouped into five ensembles to compare the influence of features in feature-selective classifiers (SLR) with widely considered non-feature-selective classifiers, such as neural networks (NN), linear discriminant analysis (LDA) and support vector machines (SVM) applied with different kernels. The results divulged that the average classification accuracy of the SVM, with a linear kernel function, outperforms other classifiers with feature ensembles, Hudgin's feature set and auto regression (AR) coefficients. However, the slight improvement in classification accuracy of SVM incurs more processing time and memory space in the low-level controller. The Kruskal-Wallis (KW) test also shows that there is no significant difference in the classification performance of SLR with Hudgin's feature set to that of SVM with Hudgin's features along with AR coefficients. In addition, the KW test shows that SLR was found to be better in respect to computation time and memory space, which is vital in a low-level controller. Similar to SVM, with a linear kernel function, other non-feature selective LDA and NN classifiers also show a slight improvement in performance using twice the features but with the drawback of increased memory space requirement and time. This prototype facilitated the study of various issues of pattern recognition and identified an efficient classifier, along with a feature ensemble, in the implementation of EMG controlled prosthetic hands in a laboratory setting at low-cost. This platform may help to motivate and facilitate prosthetic hand research in developing countries.

  2. AQMAN; linear and quadratic programming matrix generator using two-dimensional ground-water flow simulation for aquifer management modeling

    USGS Publications Warehouse

    Lefkoff, L.J.; Gorelick, S.M.

    1987-01-01

    AQMAN is a FORTRAN-77 computer program that helps solve a variety of aquifer management problems involving the control of groundwater hydraulics. It is intended for use with any standard mathematical programming package that uses Mathematical Programming System input format. The computer program creates the input files to be used by the optimization program. These files contain all the hydrologic information and management objectives needed to solve the management problem. Used in conjunction with a mathematical programming code, the computer program identifies the pumping or recharge strategy that achieves a user's management objective while maintaining groundwater hydraulic conditions within desired limits. The objective may be linear or quadratic, and may involve the minimization of pumping and recharge rates or of variable pumping costs. The problem may contain constraints on groundwater heads, gradients, and velocities for a complex, transient hydrologic system. Linear superposition of solutions to the transient, two-dimensional groundwater flow equation is used by the computer program in conjunction with the response matrix optimization method. A unit stress is applied at each decision well, and transient responses at all control locations are computed using a modified version of the U.S. Geological Survey two-dimensional aquifer simulation model. The program also computes discounted cost coefficients for the objective function and accounts for transient aquifer conditions. (Author's abstract)

  3. A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction

    PubMed Central

    Fu, Fang; Zhang, Tao

    2016-01-01

    Poor quality negatively affects project makespan and total costs, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem that takes repair works into account. In order to obtain satisfactory quality without a large increase of project cost, the objective is to minimize the total quality cost, which consists of the prevention cost and the failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely, the coordinative and the precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and an adjusted activity list. In the algorithm, the frog-leaping step combines the crossover operator of a genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm performance, and it assists decision making in the search for an appropriate makespan and quality threshold at minimal cost. PMID:27911939

  4. From Data to Images:. a Shape Based Approach for Fluorescence Tomography

    NASA Astrophysics Data System (ADS)

    Dorn, O.; Prieto, K. E.

    2012-12-01

    Fluorescence tomography is treated as a shape reconstruction problem for a coupled system of two linear transport equations in 2D. The shape evolution is designed in order to minimize the least squares data misfit cost functional either in the excitation frequency or in the emission frequency. Furthermore, a level set technique is employed for numerically modelling the evolving shapes. Numerical results are presented which demonstrate the performance of this novel technique in the situation of noisy simulated data in 2D.

  5. Using Parametric Cost Models to Estimate Engineering and Installation Costs of Selected Electronic Communications Systems

    DTIC Science & Technology

    1994-09-01


  6. A Factorization Approach to the Linear Regulator Quadratic Cost Problem

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach to the linear regulator quadratic cost problem is developed. This approach makes some new connections between optimal control, factorization, Riccati equations and certain Wiener-Hopf operator equations. Applications of the theory to systems describable by evolution equations in Hilbert space and differential delay equations in Euclidean space are presented.

  7. A nearly-linear computational-cost scheme for the forward dynamics of an N-body pendulum

    NASA Technical Reports Server (NTRS)

    Chou, Jack C. K.

    1989-01-01

    The dynamic equations of motion of an n-body pendulum with spherical joints are derived to be a mixed system of differential and algebraic equations (DAE's). The DAE's are kept in implicit form to save arithmetic and preserve the sparsity of the system and are solved by the robust implicit integration method. At each solution point, the predicted solution is corrected to its exact solution within given tolerance using Newton's iterative method. For each iteration, a linear system of the form J ΔX = E has to be solved. The computational cost for solving this linear system directly by LU factorization is O(n^3), and it can be reduced significantly by exploring the structure of J. It is shown that by recognizing the recursive patterns and exploiting the sparsity of the system the multiplicative and additive computational costs for solving J ΔX = E are O(n) and O(n^2), respectively. The formulation and solution method for an n-body pendulum is presented. The computational cost is shown to be nearly linearly proportional to the number of bodies.

  8. Heavy Metal Adsorption onto Kappaphycus sp. from Aqueous Solutions: The Use of Error Functions for Validation of Isotherm and Kinetics Models

    PubMed Central

    Rahman, Md. Sayedur; Sathasivam, Kathiresan V.

    2015-01-01

    Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb2+, Cu2+, Fe2+, and Zn2+ onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment. PMID:26295032
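
    As an illustration of how such error functions are used, the following sketch fits a Langmuir isotherm both non-linearly and through its common linearised form, then evaluates three representative error functions for each fit. The synthetic equilibrium data, parameter values, and the particular subset of error functions are assumptions for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, b):
    """Langmuir isotherm: qe = qm*b*Ce / (1 + b*Ce)."""
    return qm * b * Ce / (1.0 + b * Ce)

# synthetic equilibrium data (Ce in mg/L, qe in mg/g); values are illustrative only
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
qe = langmuir(Ce, qm=25.0, b=0.05) + np.random.default_rng(2).normal(0, 0.3, Ce.size)

# non-linear fit
(qm_nl, b_nl), _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.1])

# common linearisation: Ce/qe = 1/(qm*b) + Ce/qm
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_lin, b_lin = 1.0 / slope, slope / intercept

def error_functions(q_obs, q_pred):
    """Three of the error functions commonly used to validate isotherm fits."""
    sse = np.sum((q_obs - q_pred) ** 2)                                # sum of squared errors
    are = 100.0 / q_obs.size * np.sum(np.abs(q_obs - q_pred) / q_obs)  # average relative error
    chi2 = np.sum((q_obs - q_pred) ** 2 / q_pred)                      # non-linear chi-square
    return sse, are, chi2

for name, (qm_, b_) in [("non-linear", (qm_nl, b_nl)), ("linearised", (qm_lin, b_lin))]:
    print(name, error_functions(qe, langmuir(Ce, qm_, b_)))
```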

  9. Heavy Metal Adsorption onto Kappaphycus sp. from Aqueous Solutions: The Use of Error Functions for Validation of Isotherm and Kinetics Models.

    PubMed

    Rahman, Md Sayedur; Sathasivam, Kathiresan V

    2015-01-01

    Biosorption process is a promising technology for the removal of heavy metals from industrial wastes and effluents using low-cost and effective biosorbents. In the present study, adsorption of Pb(2+), Cu(2+), Fe(2+), and Zn(2+) onto dried biomass of red seaweed Kappaphycus sp. was investigated as a function of pH, contact time, initial metal ion concentration, and temperature. The experimental data were evaluated by four isotherm models (Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich) and four kinetic models (pseudo-first-order, pseudo-second-order, Elovich, and intraparticle diffusion models). The adsorption process was feasible, spontaneous, and endothermic in nature. Functional groups in the biomass involved in metal adsorption process were revealed as carboxylic and sulfonic acids and sulfonate by Fourier transform infrared analysis. A total of nine error functions were applied to validate the models. We strongly suggest the analysis of error functions for validating adsorption isotherm and kinetic models using linear methods. The present work shows that the red seaweed Kappaphycus sp. can be used as a potentially low-cost biosorbent for the removal of heavy metal ions from aqueous solutions. Further study is warranted to evaluate its feasibility for the removal of heavy metals from the real environment.

  10. A linear programming manual

    NASA Technical Reports Server (NTRS)

    Tuey, R. C.

    1972-01-01

    Computer solutions of linear programming problems are outlined. Information covers vector spaces, convex sets, and matrix algebra elements for solving simultaneous linear equations. Dual problems, reduced cost analysis, ranges, and error analysis are illustrated.
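
    A small worked example of the kind of problem such a manual covers, sketched here with SciPy's linprog (an assumption; the manual itself predates modern solvers): a two-variable linear program is solved and the dual values of its constraints are inspected, which is the information used in reduced-cost and ranging analysis.

```python
# linprog minimizes c @ x subject to A_ub @ x <= b_ub, with x >= 0 by default
from scipy.optimize import linprog

c = [-3.0, -5.0]                 # maximize 3*x1 + 5*x2  ->  minimize the negative
A_ub = [[1.0, 0.0],
        [0.0, 2.0],
        [3.0, 2.0]]
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print("optimal x:", res.x, "objective:", -res.fun)

# with the HiGHS-based solvers in recent SciPy, dual values (shadow prices) of the
# inequality constraints are reported as marginals
print("constraint duals:", res.ineqlin.marginals)
```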

  11. [Electronic versus paper-based patient records: a cost-benefit analysis].

    PubMed

    Neubauer, A S; Priglinger, S; Ehrt, O

    2001-11-01

    The aim of this study is to compare the costs and benefits of electronic, paperless patient records with conventional paper-based charts. Costs and benefits of planned electronic patient records are calculated for a university eye hospital with 140 beds. Benefit is determined by the direct costs saved by electronic records. In the example shown, the additional benefits of electronic patient records, as far as they can be quantified, total 192,000 DM per year. The costs of the necessary investments are 234,000 DM per year when using linear depreciation over 4 years. In total, there are additional annual costs for electronic patient records of 42,000 DM. Different scenarios were analyzed. By increasing the depreciation period to 6 years, the cost deficit is reduced to only approximately 9,000 DM. Increased wages reduce the deficit further, while the deficit increases with a loss of functions of the electronic patient record. However, several benefits of electronic records regarding research, teaching, quality control and better data access cannot be easily quantified and would greatly increase the benefit-to-cost ratio. Only part of the advantages of electronic patient records can easily be quantified in terms of directly saved costs. The small cost deficit calculated in this example is overcompensated by several benefits, which can only be enumerated qualitatively due to problems in quantification.

  12. Multi-objective and Perishable Fuzzy Inventory Models Having Weibull Life-time With Time Dependent Demand, Demand Dependent Production and Time Varying Holding Cost: A Possibility/Necessity Approach

    NASA Astrophysics Data System (ADS)

    Pathak, Savita; Mondal, Seema Sarkar

    2010-10-01

    A multi-objective inventory model of deteriorating items has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production, and time-varying holding cost, allowing shortages in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromise solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different crisp/fuzzy cardinal weights have been assigned. The models are illustrated with numerical examples, and the results of the models with crisp and fuzzy weights are compared. The result for the model treated as an integrated business is obtained by using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory costs is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function, using a credibility measure of the fuzzy event obtained by taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples, and the results are compared.

  13. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks

    PubMed Central

    Burroni, Javier; Taylor, P.; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T.

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications. PMID:28289370

  14. Quadratic Programming for Allocating Control Effort

    NASA Technical Reports Server (NTRS)

    Singh, Gurkirpal

    2005-01-01

    A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
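
    A minimal sketch of the allocation idea, assuming a quadratic cost that stacks control residuals and control effort, with box bounds standing in for actuator limits; it is solved here with SciPy's bounded least-squares routine rather than the program's own iterative quadratic-programming algorithm. The actuator mapping, weights, and bounds are illustrative.

```python
import numpy as np
from scipy.optimize import lsq_linear

# 3 commanded moments, 5 redundant actuators; B maps actuator commands to moments
B = np.array([[1.0, 0.5, 0.0, -0.5, -1.0],
              [0.0, 1.0, 1.0,  1.0,  0.0],
              [0.2, 0.0, 0.8,  0.0,  0.2]])
d = np.array([0.6, -0.3, 0.4])               # commanded moments

lam = 0.1                                    # weight on control effort (assumed)
A = np.vstack([B, np.sqrt(lam) * np.eye(5)]) # stack residual and effort terms
b = np.concatenate([d, np.zeros(5)])

# minimize ||B u - d||^2 + lam*||u||^2 subject to actuator saturation limits
res = lsq_linear(A, b, bounds=(-1.0, 1.0))
print("allocated commands:", res.x)
print("residual moments:  ", B @ res.x - d)
```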

  15. Modeling personnel turnover in the parametric organization

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1991-01-01

    A model is developed for simulating the dynamics of a newly formed organization that remains credible during all phases of organizational development. The model development process is broken down into the activities of determining the tasks required for parametric cost analysis (PCA), determining the skills required for each PCA task, determining the skills available in the applicant marketplace, determining the structure of the model, implementing the model, and testing it. The model, parameterized by the likelihood of job function transition, has demonstrated the capability to represent the transition of personnel across functional boundaries within a parametric organization using a linear dynamical system, and the ability to predict the staffing profiles required to meet functional needs at the desired time. The model can be extended by revising the state and transition structure to provide refinements in functional definition for the parametric and extended organization.

  16. Mathematical programming models for the economic design and assessment of wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Reinert, K. A.

    The use of linear decision rules (LDR) and chance constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size and optimize a storage facility with a bypass. Chance constraints are introduced to explicitly treat reliability in terms of an appropriate value from an inverse cumulative distribution function. Details of deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed for optimizing the generator choice and the storage configuration for base load and peak operating conditions. Deficiencies in ability to predict reliability and to account for serial correlations are noted in the model, which is concluded useful for narrowing WECS design options.

  17. Optimal inventories for overhaul of repairable redundant systems - A Markov decision model

    NASA Technical Reports Server (NTRS)

    Schaefer, M. K.

    1984-01-01

    A Markovian decision model was developed to calculate the optimal inventory of repairable spare parts for an avionics control system for commercial aircraft. Total expected shortage costs, repair costs, and holding costs are minimized for a machine containing a single system of redundant parts. Transition probabilities are calculated for each repair state and repair rate, and optimal spare parts inventory and repair strategies are determined through linear programming. The linear programming solutions are given in a table.

  18. Discrete-time Markovian-jump linear quadratic optimal control

    NASA Technical Reports Server (NTRS)

    Chizeck, H. J.; Willsky, A. S.; Castanon, D.

    1986-01-01

    This paper is concerned with the optimal control of discrete-time linear systems that possess randomly jumping parameters described by finite-state Markov processes. For problems having quadratic costs and perfect observations, the optimal control laws and expected costs-to-go can be precomputed from a set of coupled Riccati-like matrix difference equations. Necessary and sufficient conditions are derived for the existence of optimal constant control laws which stabilize the controlled system as the time horizon becomes infinite, with finite optimal expected cost.
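
    The coupled Riccati-like recursion mentioned above can be sketched directly: for each Markov mode, the expected cost-to-go over the next mode is formed from the transition probabilities, and a mode-dependent gain is computed from it. The two-mode system matrices, transition probabilities, and horizon below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# two-mode, scalar-input example; matrices and transition probabilities are illustrative
A = [np.array([[1.0, 0.1], [0.0, 1.0]]), np.array([[1.0, 0.1], [0.0, 0.8]])]
B = [np.array([[0.0], [0.1]]),           np.array([[0.0], [0.05]])]
Q = [np.eye(2), np.eye(2)]
R = [np.array([[0.1]]), np.array([[0.1]])]
Ptrans = np.array([[0.9, 0.1],
                   [0.2, 0.8]])                        # mode transition probabilities

N = 50                                                 # horizon
P = [np.zeros((2, 2)), np.zeros((2, 2))]               # terminal costs-to-go
gains = []

for k in range(N):                                     # backward recursion
    K, Pnew = [], []
    for i in range(2):
        M = sum(Ptrans[i, j] * P[j] for j in range(2)) # expected cost-to-go over the next mode
        Ki = np.linalg.solve(R[i] + B[i].T @ M @ B[i], B[i].T @ M @ A[i])
        K.append(Ki)
        Pnew.append(Q[i] + A[i].T @ M @ (A[i] - B[i] @ Ki))
    P = Pnew
    gains.append(K)

# the optimal control at each step is u = -K[mode] @ x, with K taken from the
# precomputed gain sequence for the current Markov mode
print("gains at the start of the horizon:", [g.round(3) for g in gains[-1]])
```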

  19. Structure and properties of fullerene molecular crystals with linear-scaling van der Waals density functional theory

    NASA Astrophysics Data System (ADS)

    Mostofi, Arash; Andrinopoulos, Lampros; Hine, Nicholas

    2014-03-01

    Fullerene molecular crystals are of technological promise for their use in heterojunction photovoltaic cells. An improved theoretical understanding of their structure and properties would be a step towards the rational design of new devices. Simulations based on density-functional theory (DFT) are invaluable for developing such insight, but standard semi-local functionals do not capture the important inter-molecular van der Waals (vdW) interactions in fullerene crystals. Furthermore the computational cost associated with the large unit cells needed are at the limit or beyond the capabilities of traditional DFT methods. In this work we overcome these limitations by using our implementation of a number of vdW-DFs in the ONETEP linear-scaling DFT code to study the structural properties of C60 molecular crystals. Powder neutron diffraction shows that the low-temperature Pa-3 phase is orientationally ordered with individual C60 units rotated around the [111] direction. We fully explore the energy landscape associated with the rotation angle and find two stable structures that are energetically very close, one of which corresponds to the experimentally observed structure. We further consider the effect of orientational disorder in very large supercells of thousands of atoms.

  20. Dispersion interactions with linear scaling DFT: a study of planar molecules on charged polar surfaces

    NASA Astrophysics Data System (ADS)

    Andrinopoulos, Lampros; Hine, Nicholas; Haynes, Peter; Mostofi, Arash

    2010-03-01

    The placement of organic molecules such as CuPc (copper phthalocyanine) on wurtzite ZnO (zinc oxide) charged surfaces has been proposed as a way of creating photovoltaic solar cells [G.D. Sharma et al., Solar Energy Materials & Solar Cells 90, 933 (2006)]; optimising their performance may be aided by computational simulation. Electronic structure calculations provide high accuracy at modest computational cost but two challenges are encountered for such layered systems. First, the system size is at or beyond the limit of traditional cubic-scaling Density Functional Theory (DFT). Second, traditional exchange-correlation functionals do not account for van der Waals (vdW) interactions, crucial for determining the structure of weakly bonded systems. We present an implementation of recently developed approaches [P.L. Silvestrelli, P.R.L. 100, 102 (2008)] to include vdW in DFT within ONETEP [C.-K. Skylaris, P.D. Haynes, A.A. Mostofi and M.C. Payne, J.C.P. 122, 084119 (2005)], a linear-scaling package for performing DFT calculations using a basis of localised functions. We have applied this methodology to simple planar organic molecules, such as benzene and pentacene, on ZnO surfaces.

  1. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  2. Accelerating molecular property calculations with nonorthonormal Krylov space methods

    DOE PAGES

    Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...

    2016-05-03

    Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.

  3. The Use of Efficient Broadcast Protocols in Asynchronous Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Schmuck, Frank Bernhard

    1988-01-01

    Reliable broadcast protocols are important tools in distributed and fault-tolerant programming. They are useful for sharing information and for maintaining replicated data in a distributed system. However, a wide range of such protocols has been proposed. These protocols differ in their fault tolerance and delivery ordering characteristics. There is a tradeoff between the cost of a broadcast protocol and how much ordering it provides. It is, therefore, desirable to employ protocols that support only a low degree of ordering whenever possible. This dissertation presents techniques for deciding how strongly ordered a protocol is necessary to solve a given application problem. It is shown that there are two distinct classes of application problems: problems that can be solved with efficient, asynchronous protocols, and problems that require global ordering. The concept of a linearization function that maps partially ordered sets of events to totally ordered histories is introduced. How to construct an asynchronous implementation that solves a given problem if a linearization function for it can be found is shown. It is proved that in general the question of whether a problem has an asynchronous solution is undecidable. Hence there exists no general algorithm that would automatically construct a suitable linearization function for a given problem. Therefore, an important subclass of problems that have certain commutativity properties are considered. Techniques for constructing asynchronous implementations for this class are presented. These techniques are useful for constructing efficient asynchronous implementations for a broad range of practical problems.

  4. Study of high-performance canonical molecular orbitals calculation for proteins

    NASA Astrophysics Data System (ADS)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2017-11-01

    Canonical molecular orbital (CMO) calculations can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we carry out research and development on high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for solving the SCF problem, which is more advanced than the FILE and direct methods. Our method is based on a Cholesky decomposition for the two-electron integral calculation and a modified grid-free method for evaluating the pure-XC term. Using the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be obtained by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, one must not only overcome the expensive computational cost but also provide a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both localized and canonical orbitals in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.

  5. SCM: A method to improve network service layout efficiency with network evolution.

    PubMed

    Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng

    2017-01-01

    Network services are an important component of the Internet, which are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, network service layout may become inefficient. Regarding this problem, this paper proposes a service chain migration (SCM) method with the framework of "software defined network + network function virtualization" (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM could reduce the network traffic cost and energy consumption efficiently.

  6. The costs of heparin-induced thrombocytopenia: a patient-based cost of illness analysis.

    PubMed

    Wilke, T; Tesch, S; Scholz, A; Kohlmann, T; Greinacher, A

    2009-05-01

    SUMMARY BACKGROUND AND OBJECTIVES: Due to the complexity of heparin-induced thrombocytopenia (HIT), currently available cost analyses are rough estimates. The objectives of this study were quantification of costs involved in HIT and identification of main cost drivers based on a patient-oriented approach. Patients diagnosed with HIT (1995-2004, University-hospital Greifswald, Germany) based on a positive functional assay (HIPA test) were retrieved from the laboratory records and scored (4T-score) by two medical experts using the patient file. For cost of illness analysis, predefined HIT-relevant cost parameters (medication costs, prolonged in-hospital stay, diagnostic and therapeutic interventions, laboratory tests, blood transfusions) were retrieved from the patient files. The data were analysed by linear regression estimates with the log of costs and a gamma regression model. Mean length of stay data of non-HIT patients were obtained from the German Federal Statistical Office, adjusted for patient characteristics, comorbidities and year of treatment. Hospital costs were provided by the controlling department. One hundred and thirty HIT cases with a 4T-score ≥ 4 and a positive HIPA test were analyzed. Mean additional costs of a HIT case were 9008 euro. The main cost drivers were prolonged in-hospital stay (70.3%) and costs of alternative anticoagulants (19.7%). HIT was more costly in surgical patients compared with medical patients and in patients with thrombosis. Early start of alternative anticoagulation did not increase HIT costs despite the high medication costs indicating prevention of costly complications. An HIT cost calculator is provided, allowing online calculation of HIT costs based on local cost structures and different currencies.
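
    The core of the regression step ("linear regression estimates with the log of costs") amounts to ordinary least squares on log-transformed costs, with exponentiated coefficients read as multiplicative cost ratios. The sketch below uses synthetic data and invented covariates (surgical status, thrombosis) purely to illustrate the mechanics; it is not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# illustrative covariates: surgical (0/1), thrombosis (0/1)
surgical = rng.integers(0, 2, n)
thrombosis = rng.integers(0, 2, n)
# synthetic right-skewed costs with multiplicative effects
costs = np.exp(8.5 + 0.4 * surgical + 0.3 * thrombosis + rng.normal(0, 0.6, n))

X = np.column_stack([np.ones(n), surgical, thrombosis])
beta, *_ = np.linalg.lstsq(X, np.log(costs), rcond=None)

# on the log scale, exponentiated coefficients are multiplicative cost ratios
print("baseline cost:", np.exp(beta[0]).round(0))
print("surgical vs medical cost ratio:", np.exp(beta[1]).round(2))
print("thrombosis cost ratio:", np.exp(beta[2]).round(2))
```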

  7. The business case for the reduction of surgical complications in VA hospitals.

    PubMed

    Vaughan-Sarrazin, Mary; Bayman, Levent; Rosenthal, Gary; Henderson, William; Hendricks, Ann; Cullen, Joseph J

    2011-04-01

    Surgical complications contribute substantially to costs. Most important, surgical complications contribute to morbidity and mortality, and some may be preventable. This study estimates costs of specific surgical complications for patients undergoing general surgery in VA hospitals using merged data from the VA Surgical Quality Improvement Program and VA Decision Support System. Costs associated with 19 potentially preventable complications within 6 broader categories were estimated using generalized, linear mixed regression models to control for patient-level determinants of costs (eg, type of operation, demographics, comorbidity, severity) and hospital-level variation in costs. Costs included costs of the index hospitalization and subsequent 30-day readmissions. In 14,639 patients undergoing general surgical procedures from 10/2005 through 9/2006, 20% of patients developed postoperative surgical complications. The presence of any complication significantly increased unadjusted costs nearly 3-fold ($61,083 vs $22,000), with the largest cost differential attributed to respiratory complications. Patients who developed complications had several markers for greater preoperative severity, including increased age and a lesser presurgery functional health status. After controlling for differences in patient severity, costs for patients with any complication were 1.89 times greater compared to costs for patients with no complications (P < .0001). Within major complication categories, adjusted costs were significantly greater for patients with respiratory, cardiac, central nervous system, urinary, wound, or other complications. Surgical complications contribute markedly to costs of inpatient operations. Investment in quality improvement that decreases the incidence of surgical complications could decrease costs. Copyright © 2011 Mosby, Inc. All rights reserved.

  8. A comparison of direct and indirect methods for the estimation of health utilities from clinical outcomes.

    PubMed

    Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb

    2014-10-01

    Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.

  9. Increased decision thresholds enhance information gathering performance in juvenile Obsessive-Compulsive Disorder (OCD)

    PubMed Central

    Iannaccone, Reto; Brem, Silvia; Walitza, Susanne

    2017-01-01

    Patients with obsessive-compulsive disorder (OCD) can be described as cautious and hesitant, manifesting an excessive indecisiveness that hinders efficient decision making. However, excess caution in decision making may also lead to better performance in specific situations where the cost of extended deliberation is small. We compared 16 juvenile OCD patients with 16 matched healthy controls whilst they performed a sequential information gathering task under different external cost conditions. We found that patients with OCD outperformed healthy controls, winning significantly more points. The groups also differed in the number of draws required prior to committing to a decision, but not in decision accuracy. A novel Bayesian computational model revealed that subjective sampling costs arose as a non-linear function of sampling, closely resembling an escalating urgency signal. Group difference in performance was best explained by a later emergence of these subjective costs in the OCD group, also evident in an increased decision threshold. Our findings present a novel computational model and suggest that enhanced information gathering in OCD can be accounted for by a higher decision threshold arising out of an altered perception of costs that, in some specific contexts, may be advantageous. PMID:28403139

  10. A rotor optimization using regression analysis

    NASA Technical Reports Server (NTRS)

    Giansante, N.

    1984-01-01

    The design and development of helicopter rotors is governed by many design variables and their interactions that affect rotor operation. Until recently, selection of rotor design variables to achieve specified rotor operational qualities has been a costly, time-consuming, repetitive task. For the past several years, Kaman Aerospace Corporation has successfully applied multiple linear regression analysis, coupled with optimization and sensitivity procedures, in the analytical design of rotor systems. It is concluded that approximating equations can be developed rapidly for a multiplicity of objective and constraint functions, and optimizations can be performed in a rapid and cost-effective manner; the number and/or range of design variables can be increased by expanding the database and developing approximating functions to reflect the expanded design space; the order of the approximating equations can be expanded easily to improve correlation between analyzer results and the approximating equations; gradients of the approximating equations can be calculated easily, and these gradients are smooth functions, reducing the risk of numerical problems in the optimization; the use of approximating functions allows the problem to be started easily and rapidly from various initial designs to enhance the probability of finding a global optimum; and the approximating equations are independent of the analysis or optimization codes used.
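
    The workflow described (fit approximating equations to sampled designs by multiple linear regression, then optimize the cheap surrogate using its smooth analytic gradients) can be sketched as follows. The stand-in analysis function, the quadratic regression basis, and the sample sizes are assumptions for illustration; the real application uses rotor analysis codes and many more design variables.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

def expensive_analysis(x):
    """Stand-in for a costly rotor analysis code (illustrative objective)."""
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] + 0.5) ** 2 + 0.5 * x[0] * x[1]

def features(x):
    """Quadratic regression basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1**2, x2**2, x1 * x2])

# sample the design space and fit the approximating equation by least squares
X = rng.uniform(-1, 1, size=(40, 2))
y = np.array([expensive_analysis(x) for x in X])
Phi = np.array([features(x) for x in X])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

surrogate = lambda x: features(x) @ coef

def surrogate_grad(x):
    """Smooth analytic gradient of the approximating equation."""
    x1, x2 = x
    dphi_dx1 = np.array([0.0, 1.0, 0.0, 2 * x1, 0.0, x2])
    dphi_dx2 = np.array([0.0, 0.0, 1.0, 0.0, 2 * x2, x1])
    return np.array([dphi_dx1 @ coef, dphi_dx2 @ coef])

res = minimize(surrogate, x0=np.array([0.8, 0.8]), jac=surrogate_grad)
print("surrogate optimum:", res.x, "true analysis there:", expensive_analysis(res.x))
```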

  11. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are the need for a fully connected communications network, the total operations performed over all the nodes are greater than for a centralized controller, and the approach is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.

  12. Flow assignment model for quantitative analysis of diverting bulk freight from road to railway

    PubMed Central

    Liu, Chang; Wang, Jiaxi; Xiao, Jie; Liu, Siqi; Wu, Jianping; Li, Jian

    2017-01-01

    Since railway transport offers the advantages of high volume and low carbon emissions, diverting some freight from road to railway will help reduce the negative environmental impacts associated with transport. This paper develops a flow assignment model for quantitative analysis of diverting truck freight to railway. First, a general network which considers road transportation, railway transportation, handling and transferring is established according to all the steps in the whole transportation process. Then generalized cost functions are formulated which embody the factors that shippers pay attention to when choosing a mode and path. These generalized functions incorporate the congestion cost on roads and the capacity constraints of railways and freight stations. Based on the general network and generalized cost functions, a user equilibrium flow assignment model is developed to simulate the flow distribution on the general network under the condition that all shippers choose their transportation mode and path independently. Since the model is nonlinear and challenging, we adopt a method that uses tangent lines to construct an envelope curve to linearize it. Finally, a numerical example is presented to test the model and to show how to make a quantitative analysis of bulk freight modal shift between road and railway. PMID:28771536

  13. Blind separation of positive sources by globally convergent gradient search.

    PubMed

    Oja, Erkki; Plumbley, Mark

    2004-09-01

    The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
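    A compact numerical sketch of the idea for two sources: whiten the observations, rotate them with an orthogonal matrix, and penalize negative outputs. The axis-angle parameterization and the finite-difference gradient step are simplifications for illustration, not the exact Stiefel-manifold flow analyzed in the letter:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    S = rng.exponential(size=(2, 2000))               # nonnegative, well-grounded unit-variance sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])
    X = A @ S                                         # observed mixtures

    # Whiten using the covariance, applied to the raw (uncentered) observations.
    d, E = np.linalg.eigh(np.cov(X))
    V = E @ np.diag(d**-0.5) @ E.T
    Z = V @ X

    def cost(theta):
        W = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])   # orthogonal separating matrix
        Y = W @ Z
        return np.sum(np.minimum(Y, 0.0)**2) / Z.shape[1] # penalize negative outputs

    # Gradient descent on the rotation angle (the 2-D orthogonal group is one-dimensional).
    theta, step = 0.3, 0.5
    for _ in range(200):
        g = (cost(theta + 1e-5) - cost(theta - 1e-5)) / 2e-5
        theta -= step * g
    print("final nonnegativity cost:", cost(theta))
    ```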

  14. Predicting equipment needs of children with cerebral palsy using the Gross Motor Function Classification System: a cross-sectional study.

    PubMed

    Novak, Iona; Smithers-Sheedy, Hayley; Morgan, Cathy

    2012-01-01

    Children with cerebral palsy (CP) routinely use assistive equipment to improve their independence. Specialist equipment is expensive and therefore not always available to the child when needed. The aim of this study was to determine whether the assistive equipment needs of children with CP and the associated costs could be predicted. A cross-sectional study using a chart audit was completed. Two hundred forty-two children met eligibility criteria and were included in the study. Data abstracted from files pertained to the child's CP, associated impairments, and assistive equipment prescribed. The findings were generated using linear regression modelling. Gross Motor Function Classification System (GMFCS) level [B = 3.01 (95% CI, 2.36-3.57), p < 0.001] and the presence of epilepsy [B = 2.35 (95% CI, 0.64-4.06), p = 0.008] predicted the prescription of assistive equipment. The more severe the gross motor function impairment, the more equipment was required and the more it cost. The equipment needs of children with CP can be predicted for the duration of childhood. This information may be useful for families and for budget and service planning.

  15. Reliable Adaptive Data Aggregation Route Strategy for a Trade-off between Energy and Lifetime in WSNs

    PubMed Central

    Guo, Wenzhong; Hong, Wei; Zhang, Bin; Chen, Yuzhong; Xiong, Naixue

    2014-01-01

    Mobile security is one of the most fundamental problems in Wireless Sensor Networks (WSNs). The data transmission path can be compromised when some nodes become disabled. To construct a secure and reliable network, it is important to design an adaptive route strategy that jointly optimizes the energy consumption and network lifetime components of the aggregation cost. In this paper, we address the reliable data aggregation route problem for WSNs. First, to ensure nodes work properly, we propose a data aggregation route algorithm that improves energy efficiency in the WSN; the route construction, carried out with discrete particle swarm optimization (DPSO), reduces node energy costs. Then, to balance the network load and establish a reliable network, an adaptive route algorithm with minimal energy and maximum lifetime is proposed. Since this is a non-linear constrained multi-objective optimization problem, we propose a DPSO whose multi-objective fitness function combines a phenotype sharing function and a penalty function to find feasible routes. Experimental results show that, compared with other tree routing algorithms, our algorithm can effectively reduce energy consumption and trade off energy consumption against network lifetime. PMID:25215944

  16. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  17. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  18. Suboptimal LQR-based spacecraft full motion control: Theory and experimentation

    NASA Astrophysics Data System (ADS)

    Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.

    2016-05-01

    This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
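    A schematic of the "recompute the LQR gain at every sample" strategy on a toy pendulum-like system written in state-dependent coefficient form; this generic SDRE-style loop is only an illustration, not the six-degree-of-freedom spacecraft model or the experimental controller:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    Q = np.diag([10.0, 1.0])
    R = np.array([[1.0]])
    B = np.array([[0.0], [1.0]])

    def A_of_x(x):
        # State-dependent coefficient form of a toy pendulum: sin(x1) = (sin(x1)/x1) * x1,
        # so A(x) varies with the state and is re-linearized at each sample.
        s = np.sinc(x[0] / np.pi)          # sin(x1)/x1, safe at x1 = 0
        return np.array([[0.0, 1.0], [-s, -0.1]])

    x, dt = np.array([1.5, 0.0]), 0.01
    for _ in range(1000):
        A = A_of_x(x)
        P = solve_continuous_are(A, B, Q, R)      # Riccati equation re-solved every sample time
        K = np.linalg.solve(R, B.T @ P)
        u = -(K @ x)
        x = x + dt * (A @ x + B @ u)              # simple Euler propagation of the toy dynamics
    print("final state:", x)
    ```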

  19. Boundary Control of Linear Uncertain 1-D Parabolic PDE Using Approximate Dynamic Programming.

    PubMed

    Talaei, Behzad; Jagannathan, Sarangapani; Singler, John

    2018-04-01

    This paper develops a near-optimal boundary control method for distributed parameter systems governed by uncertain linear 1-D parabolic partial differential equations (PDE) by using approximate dynamic programming. A quadratic surface integral is proposed to express the optimal cost functional for the infinite-dimensional state space. Accordingly, the Hamilton-Jacobi-Bellman (HJB) equation is formulated in the infinite-dimensional domain without using any model reduction. Subsequently, a neural network identifier is developed to estimate the unknown spatially varying coefficient in the PDE dynamics. A novel tuning law is proposed to guarantee the boundedness of the identifier approximation error in the PDE domain. A radial basis network (RBN) is subsequently proposed to generate an approximate solution for the optimal surface kernel function online. The tuning law for the near-optimal RBN weights is designed such that the HJB equation error is minimized while the dynamics are identified and the closed-loop system remains stable. Ultimate boundedness (UB) of the closed-loop system is verified by using Lyapunov theory. The performance of the proposed controller is successfully confirmed by simulation on an unstable diffusion-reaction process.

  20. Modified Chebyshev Picard Iteration for Efficient Numerical Integration of Ordinary Differential Equations

    NASA Astrophysics Data System (ADS)

    Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.

    2013-09-01

    Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike step-by-step differential equation solvers such as the Runge-Kutta family of numerical integrators, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least-squares approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of the discrete sampling and weighting adopted for the inner product definition, Runge phenomenon errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be computed simultaneously in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of outperforming the state of practice in terms of computational cost and accuracy.
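    A bare-bones illustration of the Picard-plus-Chebyshev idea on a scalar ODE, using NumPy's Chebyshev utilities in place of the vector-matrix MCPI machinery and the library interface described in the paper:

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Solve x' = -x + sin(t), x(0) = 1 on [0, 2] by Picard iteration in a Chebyshev basis.
    t0, t1, N = 0.0, 2.0, 32
    k = np.arange(N + 1)
    tau = -np.cos(np.pi * k / N)                    # Chebyshev-Lobatto nodes on [-1, 1]
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0          # mapped to [t0, t1]

    f = lambda t, x: -x + np.sin(t)
    x = np.ones_like(t)                              # initial guess: constant trajectory
    for _ in range(30):                              # Picard iterations over the whole arc
        c = C.chebfit(tau, f(t, x), deg=N)           # fit the integrand in the Chebyshev basis
        ci = C.chebint(c) * 0.5 * (t1 - t0)          # analytic integration, rescaled to [t0, t1]
        x_new = 1.0 + C.chebval(tau, ci) - C.chebval(-1.0, ci)   # enforce x(t0) = 1
        if np.max(np.abs(x_new - x)) < 1e-12:
            break
        x = x_new

    exact = 1.5 * np.exp(-t) + 0.5 * (np.sin(t) - np.cos(t))
    print("max error:", np.max(np.abs(x - exact)))
    ```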

  1. Temperature measurement method using temperature coefficient timing for resistive or capacitive sensors

    DOEpatents

    Britton, Jr., Charles L.; Ericson, M. Nance

    1999-01-01

    A method and apparatus for temperature measurement especially suited for low-cost, low-power, moderate-accuracy implementation. It uses a sensor whose resistance varies in a known manner, either linearly or nonlinearly, with temperature, and produces a digital output that is proportional to the temperature of the sensor. The method is based on performing a zero-crossing time measurement of a step input signal that is doubly differentiated using two differentiators with respective first and second time constants: one temperature-stable, the other varying with the sensor temperature.

  2. Experimental and Theoretical Results in Output-Trajectory Redesign for Flexible Structures

    NASA Technical Reports Server (NTRS)

    Dewey, J. S.; Devasia, Santosh

    1996-01-01

    In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve the required output may cause excessive vibrations in the structure. A trade-off is then required between tracking and vibration reduction. We pose and solve this problem as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.

  3. DLP NIRscan Nano: an ultra-mobile DLP-based near-infrared Bluetooth spectrometer

    NASA Astrophysics Data System (ADS)

    Gelabert, Pedro; Pruett, Eric; Perrella, Gavin; Subramanian, Sreeram; Lakshminarayanan, Aravind

    2016-02-01

    The DLP NIRscan Nano is an ultra-portable spectrometer evaluation module utilizing DLP technology to achieve lower cost, smaller size, and higher performance than traditional architectures. The replacement of a linear array detector with a DLP digital micromirror device (DMD) in conjunction with a single-point detector adds the functionality of programmable spectral filters and sampling techniques that were not previously available on NIR spectrometers. This paper presents the hardware, software, and optical systems of the DLP NIRscan Nano and its design considerations for the implementation of a DLP-based spectrometer.

  4. High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices

    NASA Astrophysics Data System (ADS)

    Dunham, Benjamin Z.

    This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data-type or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., an N x N x N rank-three tensor can be found in O(N^2). Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches. As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.
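    A toy demonstration of the core trick (not the full NMD stencil machinery): representing a scalar a carrying derivative seed b as the 2x2 block [[a, b], [0, a]] lets an unmodified linear solver return both the solution of K(p)u = f and du/dp; the matrix and parameter below are invented for illustration:

    ```python
    import numpy as np

    def block(a, b):
        # 2x2 block isomorphic to the dual number a + b*eps (with eps**2 = 0).
        return np.array([[a, b], [0.0, a]])

    # Solve K(p) u = f and obtain du/dp through an unmodified dense solver.
    p, dp = 3.0, 1.0                               # seed the derivative direction d/dp
    K_blocks = [[block(2.0 + p, dp), block(1.0, 0.0)],
                [block(1.0, 0.0),    block(4.0, 0.0)]]
    f_blocks = [[block(1.0, 0.0)], [block(2.0, 0.0)]]

    K = np.block(K_blocks)                         # 4x4 augmented matrix
    F = np.block(f_blocks)                         # 4x2 augmented right-hand side
    U = np.linalg.solve(K, F)                      # third-party solver, completely unchanged

    u    = U[0::2, 0]                              # value part of the solution
    dudp = U[0::2, 1]                              # derivative part, exact (no truncation error)

    # Check against the analytic derivative of u = K^{-1} f with respect to p.
    Kv = np.array([[2.0 + p, 1.0], [1.0, 4.0]])
    dK = np.array([[1.0, 0.0], [0.0, 0.0]])
    uv = np.linalg.solve(Kv, np.array([1.0, 2.0]))
    print(np.allclose(u, uv), np.allclose(dudp, -np.linalg.solve(Kv, dK @ uv)))
    ```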

  5. Integration of a Decentralized Linear-Quadratic-Gaussian Control into GSFC's Universal 3-D Autonomous Formation Flying Algorithm

    NASA Technical Reports Server (NTRS)

    Folta, David C.; Carpenter, J. Russell

    1999-01-01

    A decentralized control is investigated for applicability to the autonomous formation flying control algorithm developed by GSFC for the New Millennium Program Earth Observer-1 (EO-1) mission. This decentralized framework has the following characteristics: the approach is non-hierarchical, and coordination by a central supervisor is not required; detected failures degrade the system performance gracefully; each node in the decentralized network processes only its own measurement data, in parallel with the other nodes; although the total computational burden over the entire network is greater than it would be for a single, centralized controller, fewer computations are required locally at each node; requirements for data transmission between nodes are limited to only the dimension of the control vector, at the cost of maintaining a local additional data vector (the data vector compresses all past measurement history from all the nodes into a single vector of the dimension of the state); and the approach is optimal with respect to standard cost functions. The current approach is valid for linear time-invariant systems only. Similar to the GSFC formation flying algorithm, the extension to linear time-varying LQG systems requires that each node propagate its filter covariance forward (navigation) and its controller Riccati matrix backward (guidance) at each time step. Extension of the GSFC algorithm to non-linear systems can also be accomplished via linearization about a reference trajectory in the standard fashion, or linearization about the current state estimate as with the extended Kalman filter. To investigate the feasibility of the decentralized integration with the GSFC algorithm, an existing centralized LQG design for a single spacecraft orbit control problem is adapted to the decentralized framework while using the GSFC algorithm's state transition matrices and framework. The existing GSFC design uses reference trajectories for each spacecraft in the formation and, by appropriate choice of coordinates and simplified measurement modeling, is formulated as a linear time-invariant system. Results for improvements to the GSFC algorithm and for a multiple-satellite formation will be addressed. The goal of this investigation is to progressively relax the assumptions that result in linear time-invariance, ultimately to the point of linearization of the non-linear dynamics about the current state estimate as in the extended Kalman filter. An assessment will then be made about the feasibility of the decentralized approach to the realistic formation flying application of the EO-1/Landsat 7 formation flying experiment.

  6. Group physical therapy for veterans with knee osteoarthritis: study design and methodology.

    PubMed

    Allen, Kelli D; Bongiorni, Dennis; Walker, Tessa A; Bartle, John; Bosworth, Hayden B; Coffman, Cynthia J; Datta, Santanu K; Edelman, David; Hall, Katherine S; Hansen, Gloria; Jennings, Caroline; Lindquist, Jennifer H; Oddone, Eugene Z; Senick, Margaret J; Sizemore, John C; St John, Jamie; Hoenig, Helen

    2013-03-01

    Physical therapy (PT) is a key component of treatment for knee osteoarthritis (OA) and can decrease pain and improve function. Given the expected rise in prevalence of knee OA and the associated demand for treatment, there is a need for models of care that cost-effectively extend PT services for patients with this condition. This manuscript describes a randomized clinical trial of a group-based physical therapy program that can potentially extend services to more patients with knee OA, providing a greater number of sessions per patient, at lower staffing costs compared to traditional individual PT. Participants with symptomatic knee OA (n = 376) are randomized to either a 12-week group-based PT program (six 1 h sessions, eight patients per group, led by a physical therapist and physical therapist assistant) or usual PT care (two individual visits with a physical therapist). Participants in both PT arms receive instruction in an exercise program, information on joint care and protection, and individual consultations with a physical therapist to address specific functional and therapeutic needs. The primary outcome is the Western Ontario and McMasters Universities Osteoarthritis Index (self-reported pain, stiffness, and function), and the secondary outcome is the Short Physical Performance Test Protocol (objective physical function). Outcomes are assessed at baseline and 12-week follow-up, and the primary outcome is also assessed via telephone at 24-week follow-up to examine sustainability of effects. Linear mixed models will be used to compare outcomes for the two study arms. An economic cost analysis of the PT interventions will also be conducted. Published by Elsevier Inc.

  7. Use of Linear Programming to Develop Cost-Minimized Nutritionally Adequate Health Promoting Food Baskets.

    PubMed

    Parlesak, Alexandr; Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen

    2016-01-01

    Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized nutritionally adequate health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at lowest cost. Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased through limiting the relative share of individual foods. The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while cultural acceptability (CAN) tripled it. Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable.
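    A miniature version of the linear program described above, with made-up prices and nutrient contents for three foods and two nutrient constraints rather than the 312-food Copenhagen data; the per-food upper bound plays the role of the share limits used to increase basket variety:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Columns: bread, milk, beans (quantities in 100 g units); all numbers are illustrative.
    price   = np.array([0.8, 0.5, 1.2])      # DKK per 100 g
    protein = np.array([9.0, 3.4, 21.0])     # g per 100 g
    energy  = np.array([250., 64., 340.])    # kcal per 100 g

    # Minimize cost subject to minimum daily protein and energy (lower bounds negated into <= form).
    A_ub = -np.vstack([protein, energy])
    b_ub = -np.array([60.0, 2000.0])
    res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 15)] * 3)   # cap each food at 1.5 kg

    print("cheapest basket (100 g units):", np.round(res.x, 2), "cost:", round(res.fun, 2), "DKK")
    ```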

  8. Chance of Vulnerability Reduction in Application-Specific NoC through Distance Aware Mapping Algorithm

    NASA Astrophysics Data System (ADS)

    Janidarmian, Majid; Fekr, Atena Roshan; Bokharaei, Vahhab Samadi

    2011-08-01

    The mapping algorithm, which determines which core should be linked to which router, is one of the key issues in the network-on-chip design flow. To achieve an application-specific NoC design procedure that minimizes communication cost and improves fault tolerance, a heuristic mapping algorithm that produces a set of different mappings in a reasonable time is first presented. This algorithm allows designers to identify the most promising solutions in a large design space; these solutions have low communication costs and reach the optimum communication cost in some cases. Another evaluated parameter, the vulnerability index, is then used to estimate the fault-tolerance property of all produced mappings. Finally, in order to yield a mapping that trades off these two parameters, a linear function is defined and introduced. It is also observed that more flexibility in prioritizing solutions within the design space is possible by adjusting a set of if-then rules in fuzzy logic.

  9. Thermodynamics of quasideterministic digital computers

    NASA Astrophysics Data System (ADS)

    Chu, Dominique

    2018-02-01

    A central result of stochastic thermodynamics is that irreversible state transitions of Markovian systems entail a cost in terms of an infinite entropy production. A corollary of this is that strictly deterministic computation is not possible. Using a thermodynamically consistent model, we show that quasideterministic computation can be achieved at finite, and indeed modest, cost with accuracies that are indistinguishable from deterministic behavior for all practical purposes. Concretely, we consider the entropy production of stochastic (Markovian) systems that behave like AND and NOT gates. Combinations of these gates can implement any logical function. We require that these gates return the correct result with a probability that is very close to 1, and additionally, that they do so within finite time. The central component of the model is a machine that can read and write binary tapes. We find that the error probability of the computation of these gates falls with a power of the system size, whereas the cost only increases linearly with the system size.

  10. Two-year effects and cost-effectiveness of pelvic floor muscle training in mild pelvic organ prolapse: a randomised controlled trial in primary care.

    PubMed

    Panman, Cmcr; Wiegersma, M; Kollen, B J; Berger, M Y; Lisman-Van Leeuwen, Y; Vermeulen, K M; Dekker, J H

    2017-02-01

    To compare effects and cost-effectiveness of pelvic floor muscle training (PFMT) and watchful waiting in women with pelvic organ prolapse. Randomised controlled trial. Dutch general practice. Women (≥55 years) with symptomatic mild prolapse, identified by screening. Linear multilevel analysis. The primary outcome was change in pelvic floor symptoms (Pelvic-Floor-Distress-Inventory-20 [PFDI-20]) during 24 months. Secondary outcomes were condition-specific and general quality of life, costs, sexual functioning, prolapse stage, pelvic floor muscle function, and women's perceived improvement of symptoms. PFMT (n = 145) resulted in a 12.2-point (95% CI 7.2-17.2, P < 0.001) greater improvement in PFDI-20 score during 24 months compared with watchful waiting (n = 142). Participants randomised to PFMT more often reported improved symptoms (43% versus 14% for watchful waiting). Direct medical costs per person were €330 for PFMT and €91 for watchful waiting, but costs for absorbent pads were lower in the PFMT group (€40 versus €77). Other secondary outcomes did not differ between groups. Post-hoc subgroup analysis demonstrated that PFMT was more effective in women experiencing higher pelvic floor symptom distress at baseline. PFMT resulted in greater pelvic floor symptom improvement compared with watchful waiting. The difference was statistically significant, but below the presumed level of clinical relevance (15 points). PFMT more often led to women's perceived improvement of symptoms, lowered absorbent pad costs, and was more effective in women experiencing higher pelvic floor symptom distress. Therefore, PFMT could be advised in women with bothersome symptoms of mild prolapse. Pelvic floor muscle training can be effective in women with bothersome symptoms of mild prolapse. © 2016 Royal College of Obstetricians and Gynaecologists.

  11. Chaotic simulated annealing by a neural network with a variable delay: design and application.

    PubMed

    Chen, Shyan-Shiou

    2011-10-01

    In this paper, we have three goals: the first is to delineate the advantages of a variably delayed system, the second is to find a more intuitive Lyapunov function for a delayed neural network, and the third is to design a delayed neural network for a quadratic cost function. For delayed neural networks, most researchers construct a Lyapunov function based on the linear matrix inequality (LMI) approach. However, that approach is not intuitive. We provide an alternative candidate Lyapunov function for a delayed neural network. On the other hand, if we are first given a quadratic cost function, we can construct a delayed neural network by suitably dividing the second-order term into two parts: a self-feedback connection weight and a delayed connection weight. To demonstrate the advantage of a variably delayed neural network, we propose a transiently chaotic neural network with variable delay and show numerically that the model should possess a better searching ability than Chen-Aihara's model, Wang's model, and Zhao's model. We discuss both the chaotic and the convergent phases. During the chaotic phase, we simply present bifurcation diagrams for a single neuron with a constant delay and with a variable delay. We show that the variably delayed model possesses the stochastic property and chaotic wandering. During the convergent phase, we not only provide a novel Lyapunov function for neural networks with a delay (the Lyapunov function is independent of the LMI approach) but also establish a correlation between the Lyapunov function for a delayed neural network and an objective function for the traveling salesman problem. © 2011 IEEE

  12. Using linear programming to minimize the cost of nurse personnel.

    PubMed

    Matthews, Charles H

    2005-01-01

    Nursing personnel costs make up a major portion of most hospital budgets. This report evaluates and optimizes the utility of the nurse personnel at the Internal Medicine Outpatient Clinic of Wake Forest University Baptist Medical Center. Linear programming (LP) was employed to determine the effective combination of nurses that would allow all weekly clinic tasks to be covered while providing the lowest possible cost to the department. Linear programming can be run in standard spreadsheet software, which allows the operator to establish the variables to be optimized and then enter a series of constraints, each of which affects the ultimate outcome. The application is therefore able to quantify and stratify the nurses necessary to execute the tasks. With the report, a specific sensitivity analysis can be performed to assess how sensitive the outcome is to adding a nurse to, or removing a nurse from, the payroll. The nurse employee cost structure in this study consisted of five certified nursing assistants (CNAs), three licensed practical nurses (LPNs), and five registered nurses (RNs). The LP revealed that the outpatient clinic should staff four RNs, three LPNs, and four CNAs with 95 percent confidence of covering nurse demand on the floor. This combination of nurses would enable the clinic to: 1. Reduce annual staffing costs by 16 percent; 2. Force each level of nurse to be optimally productive by focusing on tasks specific to their expertise; 3. Assign accountability more efficiently as the nurses adhere to their specific duties; and 4. Ultimately provide a competitive advantage to the clinic as it relates to nurse employee and patient satisfaction. Linear programming can be used to solve capacity problems for just about any staffing situation, provided the model is indeed linear.
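    A toy staffing model in the same spirit, with invented wages and task-hour requirements; the integrality flag (SciPy >= 1.9 with the HiGHS backend) requests whole numbers of nurses, and the task tiers are hypothetical rather than the clinic's actual duty list:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Variables: number of RNs, LPNs, CNAs on staff. All figures are illustrative.
    wage_week = np.array([2400.0, 1700.0, 1200.0])       # weekly cost per nurse type
    hours_per_week = 36.0

    # Weekly demand (hours) by task tier:
    # row 0: tasks only RNs can do; row 1: tasks RNs or LPNs can do; row 2: any level.
    A_ub = -np.array([[1.0, 0.0, 0.0],
                      [1.0, 1.0, 0.0],
                      [1.0, 1.0, 1.0]]) * hours_per_week
    b_ub = -np.array([120.0, 210.0, 380.0])

    res = linprog(wage_week, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 6)] * 3, integrality=[1, 1, 1])
    print("RN/LPN/CNA head count:", res.x, "weekly cost:", res.fun)
    ```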

  13. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and non-linear approach (S-criterion). Both approaches generate almost the same input mixture, however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further enhanced when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position one labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $ 120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design should stimulate its application within the field of (13)C-based metabolic flux analysis. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Are larger dental practices more efficient? An analysis of dental services production.

    PubMed Central

    Lipscomb, J; Douglass, C W

    1986-01-01

    Whether cost-efficiency in dental services production increases with firm size is investigated through application of an activity analysis production function methodology to data from a national survey of dental practices. Under this approach, service delivery in a dental practice is modeled as a linear programming problem that acknowledges distinct input-output relationships for each service. These service-specific relationships are then combined to yield projections of overall dental practice productivity, subject to technical and organizational constraints. The activity analysis reported here represents arguably the most detailed evaluation yet of the relationship between dental practice size and cost-efficiency, controlling for such confounding factors as fee and service-mix differences across firms. We conclude that cost-efficiency does increase with practice size, over the range from solo to four-dentist practices. Largely because of data limitations, we were unable to test satisfactorily for scale economies in practices with five or more dentists. Within their limits, our findings are generally consistent with results from the neoclassical production function literature. From the standpoint of consumer welfare, the critical question raised (but not resolved) here is whether these apparent production efficiencies of group practice are ultimately translated by the market into lower fees, shorter queues, or other nonprice benefits. PMID:3102404

  15. CC2 oscillator strengths within the local framework for calculating excitation energies (LoFEx).

    PubMed

    Baudin, Pablo; Kjærgaard, Thomas; Kristensen, Kasper

    2017-04-14

    In a recent work [P. Baudin and K. Kristensen, J. Chem. Phys. 144, 224106 (2016)], we introduced a local framework for calculating excitation energies (LoFEx), based on second-order approximated coupled cluster (CC2) linear-response theory. LoFEx is a black-box method in which a reduced excitation orbital space (XOS) is optimized to provide coupled cluster (CC) excitation energies at a reduced computational cost. In this article, we present an extension of the LoFEx algorithm to the calculation of CC2 oscillator strengths. Two different strategies are suggested, in which the size of the XOS is determined based on the excitation energy or the oscillator strength of the targeted transitions. The two strategies are applied to a set of medium-sized organic molecules in order to assess both the accuracy and the computational cost of the methods. The results show that CC2 excitation energies and oscillator strengths can be calculated at a reduced computational cost, provided that the targeted transitions are local compared to the size of the molecule. To illustrate the potential of LoFEx for large molecules, both strategies have been successfully applied to the lowest transition of the bivalirudin molecule (4255 basis functions) and compared with time-dependent density functional theory.

  16. SCM: A method to improve network service layout efficiency with network evolution

    PubMed Central

    Zhao, Qi; Zhang, Chuanhao

    2017-01-01

    Network services are an important component of the Internet, which are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, network service layout may become inefficient. Regarding this problem, this paper proposes a service chain migration (SCM) method with the framework of “software defined network + network function virtualization” (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM could reduce the network traffic cost and energy consumption efficiently. PMID:29267299

  17. A method of determining where to target surveillance efforts in heterogeneous epidemiological systems

    PubMed Central

    van den Bosch, Frank; Gottwald, Timothy R.; Alonso Chavez, Vasthi

    2017-01-01

    The spread of pathogens into new environments poses a considerable threat to human, animal, and plant health, and by extension, human and animal wellbeing, ecosystem function, and agricultural productivity, worldwide. Early detection through effective surveillance is a key strategy to reduce the risk of their establishment. Whilst it is well established that statistical and economic considerations are of vital importance when planning surveillance efforts, it is also important to consider epidemiological characteristics of the pathogen in question—including heterogeneities within the epidemiological system itself. One of the most pronounced realisations of this heterogeneity is seen in the case of vector-borne pathogens, which spread between ‘hosts’ and ‘vectors’—with each group possessing distinct epidemiological characteristics. As a result, an important question when planning surveillance for emerging vector-borne pathogens is where to place sampling resources in order to detect the pathogen as early as possible. We answer this question by developing a statistical function which describes the probability distributions of the prevalences of infection at first detection in both hosts and vectors. We also show how this method can be adapted in order to maximise the probability of early detection of an emerging pathogen within imposed sample size and/or cost constraints, and demonstrate its application using two simple models of vector-borne citrus pathogens. Under the assumption of a linear cost function, we find that sampling costs are generally minimised when either hosts or vectors, but not both, are sampled. PMID:28846676

  18. When enough is enough: The worth of monitoring data in aquifer remediation design

    NASA Astrophysics Data System (ADS)

    James, Bruce R.; Gorelick, Steven M.

    1994-12-01

    Given the high cost of data collection at groundwater contamination remediation sites, it is becoming increasingly important to make data collection as cost-effective as possible. A Bayesian data worth framework is developed in an attempt to carry out this task for remediation programs in which a groundwater contaminant plume must be located and then hydraulically contained. The framework is applied to a hypothetical contamination problem where uncertainty in plume location and extent are caused by uncertainty in source location, source loading time, and aquifer heterogeneity. The goal is to find the optimum number and the best locations for a sequence of observation wells that minimize the expected cost of remediation plus sampling. Simplifying assumptions include steady state heads, advective transport, simple retardation, and remediation costs as a linear function of discharge rate. In the case here, an average of six observation wells was needed. Results indicate that this optimum number was particularly sensitive to the mean hydraulic conductivity. The optimum number was also sensitive to the variance of the hydraulic conductivity, annual discount rate, operating cost, and sample unit cost. It was relatively insensitive to the correlation length of hydraulic conductivity. For the case here, points of greatest uncertainty in plume presence were on average poor candidates for sample locations, and randomly located samples were not cost-effective.

  19. Variations of cosmic large-scale structure covariance matrices across parameter space

    NASA Astrophysics Data System (ADS)

    Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte

    2017-03-01

    The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.

  20. Dielectric characterisation of LLDPE/nano-clay nanocomposites: fidelity of measurement techniques and electrical properties

    NASA Astrophysics Data System (ADS)

    Daran-Daneau, Cyril

    In order to meet the energy needs of the future, insulation, which is the central component of high-voltage equipment, has to be reinvented. Nanodielectrics seem to hold the promise of a major technological breakthrough. Based on nanocomposites with a linear low-density polyethylene matrix reinforced by nano-clays and manufactured from a commercial masterbatch, the present thesis aims to characterise the accuracy of measurement techniques applied to nanodielectrics as well as the dielectric properties of these materials. Dielectric spectroscopy accuracy in both the frequency and time domains is analysed, with specific emphasis on the impact of gold sputtering of the samples and on the transposition of measurements from the time domain to the frequency domain. In dielectric strength measurements, the significant role of the surrounding medium and sample thickness in the variation of the alpha scale factor is shown and analysed in relation to the presence of surface partial discharges. Taking these limits into account, and for different nanoparticle compositions, the complex permittivity as a function of frequency and the linearity and conductivity as a function of applied electric field are studied with respect to the role that nanometric interfaces appear to play. Similarly, the variation of dielectric strength as a function of nano-clay content is investigated with respect to the improvement in partial discharge resistance that appears to be induced by nanoparticle addition. Finally, an opening towards the nanostructuring of underground cable insulation is proposed, considering on the one hand the dielectric characterisation of polyethylene-matrix nanodielectrics reinforced with nano-clays or nano-silica, and on the other hand a succinct cost analysis. Keywords: nanodielectric, linear low-density polyethylene, nanoclays, dielectric spectroscopy, dielectric breakdown

  1. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
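    A small numerical sketch of the basis-reduction step: build exponential modes spanning an assumed kinetic range, keep a few left singular vectors, convolve them with a measured input function, and fit a noisy tissue curve with the Moore-Penrose pseudoinverse. The rate range, input function, and curves below are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 30.0, 120)                       # minutes
    dt = t[1] - t[0]

    # 1. Dense set of exponential modes over an assumed physiologic rate range.
    rates = np.logspace(-2, 0, 200)                       # 0.01 .. 1 per minute
    E = np.exp(-np.outer(t, rates))

    # 2. SVD: a handful of orthogonal functions represents every mode accurately.
    U, s, _ = np.linalg.svd(E, full_matrices=False)
    basis = U[:, :4]                                      # dimension-reduced time-activity basis

    # 3. Convolve the reduced basis with a measured input function (here a synthetic bolus).
    Cp = t * np.exp(-t / 2.0)                             # stand-in arterial input function
    conv = np.array([np.convolve(Cp, basis[:, j])[:t.size] * dt for j in range(4)]).T

    # 4. Fit a noisy tissue response with the pseudoinverse of the convolved basis.
    true_tac = 0.3 * np.convolve(Cp, np.exp(-0.15 * t))[:t.size] * dt
    meas = true_tac + 0.01 * rng.normal(size=t.size)
    coef = np.linalg.pinv(conv) @ meas
    print("recovered TAC max error:", np.max(np.abs(conv @ coef - true_tac)))
    ```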

  2. Excess costs of social anxiety disorder in Germany.

    PubMed

    Dams, Judith; König, Hans-Helmut; Bleibler, Florian; Hoyer, Jürgen; Wiltink, Jörg; Beutel, Manfred E; Salzer, Simone; Herpertz, Stephan; Willutzki, Ulrike; Strauß, Bernhard; Leibing, Eric; Leichsenring, Falk; Konnopka, Alexander

    2017-04-15

    Social anxiety disorder is one of the most frequent mental disorders. It is often associated with mental comorbidities and causes a high economic burden. The aim of our analysis was to estimate the excess costs of patients with social anxiety disorder compared to persons without anxiety disorder in Germany. Excess costs of social anxiety disorder were determined by comparing two data sets. Patient data came from the SOPHO-NET study A1 (n=495), whereas data of persons without anxiety disorder originated from a representative phone survey (n=3213) of the general German population. Missing data were handled by "Multiple Imputation by Chained Equations". Both data sets were matched using "Entropy Balancing". Excess costs were calculated from a societal perspective for the year 2014 using generalized linear regression with a gamma distribution and log-link function. Analyses considered direct costs (in- and outpatient treatment, rehabilitation, and professional and informal care) and indirect costs due to absenteeism from work. Total six-month excess costs amounted to 451€ (95% CI: 199€-703€). Excess costs were mainly caused by indirect excess costs due to absenteeism from work of 317€ (95% CI: 172€-461€), whereas direct excess costs amounted to 134€ (95% CI: 110€-159€). Costs for medication, unemployment, and disability pensions were not evaluated. Social anxiety disorder was associated with statistically significant excess costs, in particular due to indirect costs. As patients in general are often unaware of their disorder or its severity, awareness should be strengthened. Prevention and early treatment might reduce long-term indirect costs. Copyright © 2017 Elsevier B.V. All rights reserved.
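    The cost regression named above (gamma family, log link) can be reproduced on synthetic data with statsmodels; the covariates, coefficients, and sample below are invented, and the entropy-balancing weights and imputation steps are omitted for brevity:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 500
    case = rng.integers(0, 2, n)                    # 1 = social anxiety disorder, 0 = control
    age = rng.uniform(20, 65, n)

    # Synthetic six-month costs: positive, right-skewed, higher for cases (on the log scale).
    mu = np.exp(5.0 + 0.6 * case + 0.01 * age)
    costs = rng.gamma(shape=2.0, scale=mu / 2.0)

    X = sm.add_constant(np.column_stack([case, age]))
    model = sm.GLM(costs, X, family=sm.families.Gamma(link=sm.families.links.Log()))
    fit = model.fit()

    # Excess costs of cases versus controls at the average age, from the fitted log-link model.
    x_case = np.array([[1.0, 1.0, age.mean()]])
    x_ctrl = np.array([[1.0, 0.0, age.mean()]])
    print("estimated excess cost:", fit.predict(x_case) - fit.predict(x_ctrl))
    ```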

  3. Parallel spatial direct numerical simulations on the Intel iPSC/860 hypercube

    NASA Technical Reports Server (NTRS)

    Joslin, Ronald D.; Zubair, Mohammad

    1993-01-01

    The implementation and performance of a parallel spatial direct numerical simulation (PSDNS) approach on the Intel iPSC/860 hypercube are documented. The direct numerical simulation approach is used to compute spatially evolving disturbances associated with the laminar-to-turbulent transition in boundary-layer flows. The feasibility of using the PSDNS on the hypercube to perform transition studies is examined. The results indicate that the direct numerical simulation approach can effectively be parallelized on a distributed-memory parallel machine. By increasing the number of processors, nearly ideal linear speedups are achieved with non-optimized routines; slower-than-linear speedups are achieved with optimized (machine-dependent library) routines. This slower-than-linear speedup results because the Fast Fourier Transform (FFT) routine dominates the computational cost and exhibits less-than-ideal speedups. However, with the machine-dependent routines the total computational cost decreases by a factor of 4 to 5 compared with standard FORTRAN routines. The computational cost increases linearly with spanwise, wall-normal, and streamwise grid refinements. The hypercube with 32 processors was estimated to require approximately twice the amount of Cray supercomputer single-processor time to complete a comparable simulation; however, it is estimated that a subgrid-scale model, which reduces the required number of grid points and turns the computation into a large-eddy simulation (PSLES), would reduce the computational cost and memory requirements by a factor of 10 relative to the PSDNS. This PSLES implementation would enable transition simulations on the hypercube at a reasonable computational cost.

  4. Improving the performance of extreme learning machine for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong

    2015-05-01

    Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.
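    A minimal extreme learning machine on synthetic two-class data, showing where the number of hidden neurons and the sigmoidal steepness parameter enter; the data, sizes, and ridge term are illustrative, and the paper's empirical rule linking training-set size to hidden-layer size is not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, d, n_hidden, steepness = 400, 10, 50, 1.0

    # Synthetic two-class data (stand-in for hyperspectral pixels and labels).
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    # ELM: random input weights, sigmoidal hidden layer, least-squares output weights.
    W = rng.normal(size=(d, n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))      # hidden-layer activations

    lam = 1e-3                                              # ridge regularization (KELM-style)
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

    pred = (H @ beta > 0.5).astype(float)
    print("training accuracy:", (pred == y).mean())
    ```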

  5. Stabilization for sampled-data neural-network-based control systems.

    PubMed

    Zhu, Xun-Lin; Wang, Youyi

    2011-02-01

    This paper studies the problem of stabilization for sampled-data neural-network-based control systems with an optimal guaranteed cost. Unlike previous works, the resulting closed-loop system with variable uncertain sampling cannot simply be regarded as an ordinary continuous-time system with a fast-varying delay in the state. By defining a novel piecewise Lyapunov functional and using a convex combination technique, the characteristic of sampled-data systems is captured. A new delay-dependent stabilization criterion is established in terms of linear matrix inequalities such that the maximal sampling interval and the minimal guaranteed cost control performance can be obtained. It is shown that the newly proposed approach can lead to less conservative and less complex results than the existing ones. Application examples are given to illustrate the effectiveness and the benefits of the proposed method.

  6. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.

  7. Component Cost Analysis of Large Scale Systems

    NASA Technical Reports Server (NTRS)

    Skelton, R. E.; Yousuff, A.

    1982-01-01

    The idea of cost decomposition is summarized to aid in determining the relative cost (or 'price') of each component of a linear dynamic system under quadratic performance criteria. In addition to the insights into system behavior afforded by such a component cost analysis (CCA), these CCA ideas naturally lead to a theory for cost-equivalent realizations.

  8. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high-performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large-order state-space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses, and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.
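    The synthesis loop described above, minimizing an LQG-type cost over free control-law parameters subject to response and margin constraints with analytic gradients, has the shape of a generic constrained optimization; a toy instance with a quadratic cost and one invented "margin" constraint:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-ins: a quadratic "LQG-type" cost in the free gains k, plus a constraint
    # requiring a surrogate "stability margin" g(k) to stay nonnegative.
    H = np.array([[4.0, 1.0], [1.0, 2.0]])

    cost = lambda k: 0.5 * k @ H @ k - k @ np.array([1.0, 1.0])
    grad = lambda k: H @ k - np.array([1.0, 1.0])            # analytic gradient speeds convergence

    margin = lambda k: 0.05 - (k[0] - 1.0)**2 - (k[1] - 0.5)**2   # must be >= 0
    margin_grad = lambda k: np.array([-2.0 * (k[0] - 1.0), -2.0 * (k[1] - 0.5)])

    res = minimize(cost, x0=[1.0, 0.5], jac=grad, method="SLSQP",
                   constraints=[{"type": "ineq", "fun": margin, "jac": margin_grad}])
    print("optimized gains:", res.x, "cost:", res.fun)
    ```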

  9. Comparison of variational real-space representations of the kinetic energy operator

    NASA Astrophysics Data System (ADS)

    Skylaris, Chris-Kriton; Diéguez, Oswaldo; Haynes, Peter D.; Payne, Mike C.

    2002-08-01

    We present a comparison of real-space methods based on regular grids for electronic structure calculations that are designed to have basis set variational properties, using as a reference the conventional method of finite differences (a real-space method that is not variational) and the reciprocal-space plane-wave method which is fully variational. We find that a definition of the finite-difference method [P. Maragakis, J. Soler, and E. Kaxiras, Phys. Rev. B 64, 193101 (2001)] satisfies one of the two properties of variational behavior at the cost of larger errors than the conventional finite-difference method. On the other hand, a technique which represents functions in a number of plane waves which is independent of system size closely follows the plane-wave method and therefore also the criteria for variational behavior. Its application is only limited by the requirement of having functions strictly localized in regions of real space, but this is a characteristic of an increasing number of modern real-space methods, as they are designed to have a computational cost that scales linearly with system size.

  10. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

  11. Classical Testing in Functional Linear Models.

    PubMed

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications.
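
    A minimal sketch of the FPCA-then-regress idea follows, assuming simulated curves and an ordinary F test on the principal component scores; it illustrates the construction only and is not the authors' asymptotic procedure.

```python
# Minimal sketch: reduce densely observed curves to functional principal
# component scores, then apply a standard F test of "no association".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, n_grid = 200, 100
t = np.linspace(0, 1, n_grid)

# Simulate smooth random curves and a scalar response driven by the curves.
scores_true = rng.normal(size=(n, 2))
X = scores_true[:, [0]] * np.sin(np.pi * t) + scores_true[:, [1]] * np.cos(np.pi * t)
X += rng.normal(0, 0.1, X.shape)
y = 0.8 * scores_true[:, 0] + rng.normal(0, 1, n)

# Functional PCA via SVD of the centered data matrix; keep a few components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
n_pc = 3
pc_scores = U[:, :n_pc] * s[:n_pc]          # principal component scores

# Regress the scalar response on the scores; the overall F test addresses
# the null hypothesis of no association between response and curve.
fit = sm.OLS(y, sm.add_constant(pc_scores)).fit()
print("F statistic:", fit.fvalue, "p-value:", fit.f_pvalue)
```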

  12. Classical Testing in Functional Linear Models

    PubMed Central

    Kong, Dehan; Staicu, Ana-Maria; Maity, Arnab

    2016-01-01

    We extend four tests common in classical regression - Wald, score, likelihood ratio and F tests - to functional linear regression, for testing the null hypothesis that there is no association between a scalar response and a functional covariate. Using functional principal component analysis, we re-express the functional linear model as a standard linear model, where the effect of the functional covariate can be approximated by a finite linear combination of the functional principal component scores. In this setting, we consider application of the four traditional tests. The proposed testing procedures are investigated theoretically for densely observed functional covariates when the number of principal components diverges. Using the theoretical distribution of the tests under the alternative hypothesis, we develop a procedure for sample size calculation in the context of functional linear regression. The four tests are further compared numerically for both densely and sparsely observed noisy functional data in simulation experiments and using two real data applications. PMID:28955155

  13. Use of Linear Programming to Develop Cost-Minimized Nutritionally Adequate Health Promoting Food Baskets

    PubMed Central

    Tetens, Inge; Dejgård Jensen, Jørgen; Smed, Sinne; Gabrijelčič Blenkuš, Mojca; Rayner, Mike; Darmon, Nicole; Robertson, Aileen

    2016-01-01

    Background Food-Based Dietary Guidelines (FBDGs) are developed to promote healthier eating patterns, but increasing food prices may make healthy eating less affordable. The aim of this study was to design a range of cost-minimized, nutritionally adequate, health-promoting food baskets (FBs) that help prevent both micronutrient inadequacy and diet-related non-communicable diseases at the lowest cost. Methods Average prices for 312 foods were collected within the Greater Copenhagen area. The cost and nutrient content of five different cost-minimized FBs for a family of four were calculated per day using linear programming. The FBs were defined using five different constraints: cultural acceptability (CA), or dietary guidelines (DG), or nutrient recommendations (N), or cultural acceptability and nutrient recommendations (CAN), or dietary guidelines and nutrient recommendations (DGN). The variety and number of foods in each of the resulting five baskets were increased by limiting the relative share of individual foods. Results The one-day version of N contained only 12 foods at the minimum cost of DKK 27 (€ 3.6). The CA, DG, and DGN cost about twice this, and the CAN cost ~DKK 81 (€ 10.8). The baskets with the greater variety of foods contained from 70 (CAN) to 134 (DGN) foods and cost between DKK 60 (€ 8.1, N) and DKK 125 (€ 16.8, DGN). Ensuring that the food baskets cover both dietary guidelines and nutrient recommendations doubled the cost, while adding cultural acceptability (CAN) tripled it. Conclusion Use of linear programming facilitates the generation of low-cost food baskets that are nutritionally adequate, health promoting, and culturally acceptable. PMID:27760131
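
    The linear-programming formulation can be sketched as a small diet problem: minimize total price subject to nutrient lower bounds. The foods, prices, and nutrient targets below are made-up numbers, not the study's 312-food Copenhagen price data.

```python
# Minimal sketch of a cost-minimized food basket as a linear program.
import numpy as np
from scipy.optimize import linprog

price = np.array([2.0, 3.5, 1.2])          # DKK per 100 g of each food (illustrative)
# Rows: nutrient content per 100 g (e.g. protein, iron); columns: foods.
nutrients = np.array([[8.0, 20.0, 2.0],
                      [1.0,  2.5, 0.4]])
requirement = np.array([60.0, 10.0])       # daily nutrient targets (illustrative)

# linprog minimizes c @ x with A_ub @ x <= b_ub, so flip signs to express
# "nutrients >= requirement"; the bounds cap any single food (crude acceptability).
res = linprog(c=price,
              A_ub=-nutrients, b_ub=-requirement,
              bounds=[(0, 10)] * 3, method="highs")
print("amounts (100 g units):", res.x, "basket cost (DKK):", res.fun)
```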

  14. A Characterization of a Unified Notion of Mathematical Function: The Case of High School Function and Linear Transformation

    ERIC Educational Resources Information Center

    Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris

    2017-01-01

    As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…

  15. Entanglement-assisted quantum feedback control

    NASA Astrophysics Data System (ADS)

    Yamamoto, Naoki; Mikami, Tomoaki

    2017-07-01

    The main advantage of quantum metrology lies in the effective use of entanglement, which allows strictly better estimation performance than the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by a controller with a coherent probe field, and furthermore beats the controller with an optimized squeezed probe field.

  16. Design of a portable artificial heart drive system based on efficiency analysis.

    PubMed

    Kitamura, T

    1986-11-01

    This paper discusses a computer simulation of a pneumatic, portable, piston-type artificial heart drive system with a linear dc motor. The purpose of the design is to obtain an artificial heart drive system with high efficiency and small dimensions to enhance portability. The design considers two factors contributing to the total efficiency of the drive system. First, the dimensions of the pneumatic actuator were optimized under a cost function of the total efficiency. Second, the motor performance was studied in terms of efficiency. More than 50 percent of the input energy of the actuator with practical loads is consumed in the armature circuit in all linear dc motors with brushes. An optimal design has a piston cross-sectional area of 10.5 cm2 and a cylinder longitudinal length of 10 cm. The total efficiency could be raised to 25 percent by improving the gasket to reduce the frictional force.

  17. Novel On-wafer Radiation Pattern Measurement Technique for MEMS Actuator Based Reconfigurable Patch Antennas

    NASA Technical Reports Server (NTRS)

    Simons, Rainee N.

    2002-01-01

    The paper presents a novel on-wafer, antenna far-field pattern measurement technique for microelectromechanical systems (MEMS) based reconfigurable patch antennas. The measurement technique significantly reduces the time and the cost associated with the characterization of printed antennas fabricated on a semiconductor wafer or dielectric substrate. To measure the radiation patterns, the RF probe station is modified to accommodate an open-ended rectangular waveguide as the rotating linearly polarized sampling antenna. The open-ended waveguide is attached through a coaxial rotary joint to a Plexiglas™ arm and is driven along an arc by a stepper motor. Thus, the spinning open-ended waveguide can sample the relative field intensity of the patch as a function of the angle from boresight. The experimental results include the measured linearly polarized and circularly polarized radiation patterns for MEMS-based frequency-reconfigurable rectangular and polarization-reconfigurable nearly square patch antennas, respectively.

  18. Matching by linear programming and successive convexification.

    PubMed

    Jiang, Hao; Drew, Mark S; Li, Ze-Nian

    2007-06-01

    We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the search space. A successive convexification scheme solves the labeling problem in a coarse-to-fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the search result. This makes the method well suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.

  19. Introducing Linear Functions: An Alternative Statistical Approach

    ERIC Educational Resources Information Center

    Nolan, Caroline; Herbert, Sandra

    2015-01-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be "threshold concepts". There is recognition that linear functions can be taught in context through the exploration of linear…

  20. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solutions obtained by PSO alone.
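
    A minimal sketch of a PSO loop with a GA-style mutation step on a small transportation problem follows, with the supply/demand balance enforced by a quadratic penalty; the cost matrix and algorithm settings are illustrative assumptions, not the authors' PSOGA configuration.

```python
# Minimal sketch: PSO with a GA-style mutation step on a penalized 2x3
# transportation problem.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[4.0, 6.0, 8.0],          # unit shipping costs, 2 sources x 3 sinks
              [5.0, 3.0, 7.0]])
supply = np.array([50.0, 70.0])
demand = np.array([40.0, 30.0, 50.0])
penalty = 100.0

def objective(x):
    x = x.reshape(C.shape)
    cost = np.sum(C * x)
    viol = (np.sum((x.sum(axis=1) - supply) ** 2)
            + np.sum((x.sum(axis=0) - demand) ** 2))
    return cost + penalty * viol

n_particles, dim, iters = 30, C.size, 300
pos = rng.uniform(0.0, 50.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2, mut_rate = 0.7, 1.5, 1.5, 0.05
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)           # shipments stay non-negative
    # GA-style mutation: randomly perturb a small fraction of coordinates.
    mask = rng.random(pos.shape) < mut_rate
    pos = np.where(mask, pos + rng.normal(0.0, 5.0, pos.shape), pos)
    pos = np.clip(pos, 0.0, None)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best penalized cost:", pbest_val.min())
print("best shipment plan:\n", gbest.reshape(C.shape).round(1))
```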

  1. Value of botulinum toxin injections preceding a comprehensive rehabilitation period for children with spastic cerebral palsy: A cost-effectiveness study.

    PubMed

    Schasfoort, Fabienne; Dallmeijer, Annet; Pangalila, Robert; Catsman, Coriene; Stam, Henk; Becher, Jules; Steyerberg, Ewout; Polinder, Suzanne; Bussmann, Johannes

    2018-01-10

    Despite the widespread use of botulinum toxin in ambulatory children with spastic cerebral palsy, its value prior to intensive physiotherapy with adjunctive casting/orthoses remains unclear. A pragmatically designed, multi-centre trial, comparing the effectiveness of botulinum toxin + intensive physiotherapy with intensive physiotherapy alone, including economic evaluation. Children with spastic cerebral palsy, age range 4-12 years, cerebral palsy-severity Gross Motor Function Classification System levels I-III, received either botulinum toxin type A + intensive physiotherapy or intensive physiotherapy alone and, if necessary, ankle-foot orthoses and/or casting. Primary outcomes were gross motor function, physical activity levels, and health-related quality-of-life, assessed at baseline, 12 (primary end-point) and 24 weeks (follow-up). Economic outcomes included healthcare and patient costs. Intention-to-treat analyses were performed with linear mixed models. There were 65 participants (37 males), with a mean age of 7.3 years (standard deviation 2.3 years), equally distributed across Gross Motor Function Classification System levels. Forty-one children received botulinum toxin type A plus intensive physiotherapy and 24 received intensive physiotherapy treatment only. At primary end-point, one statistically significant difference was found in favour of intensive physiotherapy alone: objectively measured percentage of sedentary behaviour (-3.42, 95% confidence interval 0.20-6.64, p=0.038). Treatment costs were significantly higher for botulinum toxin type A plus intensive physiotherapy (8,963 vs 6,182 euro, p=0.001). No statistically significant differences were found between groups at follow-up. The addition of botulinum toxin type A to intensive physiotherapy did not improve the effectiveness of rehabilitation for ambulatory children with spastic cerebral palsy and was also not cost-effective. Thus botulinum toxin is not recommended for use in improving gross motor function, activity levels or health-related quality-of-life in this cerebral palsy age- and severity-subgroup.

  2. Interpolation for de-Dopplerisation

    NASA Astrophysics Data System (ADS)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
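
    The comparison can be sketched with off-the-shelf routines: piecewise-linear interpolation versus a cubic B-spline interpolant when re-sampling a tone at irregular times. The signal and sample rate below are arbitrary illustrative choices, not the paper's test cases.

```python
# Minimal sketch: linear vs. cubic B-spline interpolation of a sampled tone.
import numpy as np
from scipy.interpolate import make_interp_spline

fs = 100.0                                   # sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 7.3 * t)              # "measured" signal

t_query = np.sort(np.random.default_rng(1).uniform(0.05, 0.95, 500))  # irregular re-sampling times
truth = np.sin(2 * np.pi * 7.3 * t_query)

linear = np.interp(t_query, t, x)                  # piecewise-linear interpolation
spline = make_interp_spline(t, x, k=3)(t_query)    # cubic B-spline interpolation

print("rms error, linear  :", np.sqrt(np.mean((linear - truth) ** 2)))
print("rms error, B-spline:", np.sqrt(np.mean((spline - truth) ** 2)))
```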

  3. Smoothing-based compressed state Kalman filter for joint state-parameter estimation: Applications in reservoir characterization and CO2 storage monitoring

    NASA Astrophysics Data System (ADS)

    Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.

    2017-08-01

    The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.

  4. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide sensitivity analyses in literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and Gaussian radial basis kernel function, thus the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
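
    A minimal sketch of support vector regression with a mixed kernel follows: a convex combination of a polynomial kernel and a Gaussian RBF kernel, passed to scikit-learn as a precomputed Gram matrix. The test function and mixing weight are assumptions for illustration, not the paper's settings, and no Sobol-index post-processing is shown.

```python
# Minimal sketch: SVR with a mixed (polynomial + RBF) kernel via a
# precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, (200, 3))
y_train = np.sum(X_train ** 2, axis=1) + 0.5 * X_train[:, 0] * X_train[:, 1]
X_test = rng.uniform(-1, 1, (50, 3))

def mixed_kernel(A, B, weight=0.5, degree=2, gamma=2.0):
    poly = (1.0 + A @ B.T) ** degree                       # polynomial part (global behaviour)
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    rbf = np.exp(-gamma * d2)                              # Gaussian RBF part (local behaviour)
    return weight * poly + (1.0 - weight) * rbf

model = SVR(kernel="precomputed", C=10.0, epsilon=0.01)
model.fit(mixed_kernel(X_train, X_train), y_train)
pred = model.predict(mixed_kernel(X_test, X_train))        # kernel between test and train points
truth = np.sum(X_test ** 2, axis=1) + 0.5 * X_test[:, 0] * X_test[:, 1]
print("rms prediction error:", np.sqrt(np.mean((pred - truth) ** 2)))
```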

  5. Trajectories of eGFR decline over a four year period in an Indigenous Australian population at high risk of CKD-the eGFR follow up study.

    PubMed

    Barzi, Federica; Jones, Graham R D; Hughes, Jaquelyne T; Lawton, Paul D; Hoy, Wendy; O'Dea, Kerin; Jerums, George; MacIsaac, Richard J; Cass, Alan; Maple-Brown, Louise J

    2018-03-01

    Being able to estimate kidney decline accurately is particularly important in Indigenous Australians, a population at increased risk of developing chronic kidney disease and end stage kidney disease. The aim of this analysis was to explore the trend of decline in estimated glomerular filtration rate (eGFR) over a four year period using multiple local creatinine measures, compared with estimates derived using centrally-measured enzymatic creatinine and with estimates derived using only two local measures. The eGFR study comprised a cohort of over 600 Aboriginal Australian participants recruited from over twenty sites in urban, regional and remote Australia across five strata of health, diabetes and kidney function. Trajectories of eGFR were explored for 385 participants with at least three local creatinine records using graphical methods that compared the linear trends fitted using linear mixed models with non-linear trends fitted using fractional polynomial equations. Temporal changes of local creatinine were also characterized using group-based modelling. Analyses were stratified by eGFR (<60; 60-89; 90-119 and ≥120 ml/min/1.73 m²) and albuminuria categories (<3 mg/mmol; 3-30 mg/mmol; >30 mg/mmol). Mean age of the participants was 48 years, 64% were female and the median follow-up was 3 years. Decline of eGFR was accurately estimated using simple linear regression models, and locally measured creatinine was as good as centrally measured creatinine at predicting kidney decline in people with an eGFR <60 and an eGFR 60-90 ml/min/1.73 m² with albuminuria. Analyses showed that one baseline and one follow-up locally measured creatinine may be sufficient to estimate short term (up to four years) kidney function decline. The greatest yearly decline was estimated in those with eGFR 60-90 and macro-albuminuria: -6.21 (-8.20, -4.23) ml/min/1.73 m². Short term estimates of kidney function decline can be reliably derived using an easy to implement and simple to interpret linear mixed effect model. Locally measured creatinine did not differ from centrally measured creatinine, and is thus an accurate, cost-efficient and timely means of monitoring kidney function progression. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
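
    A minimal sketch, on simulated data, of the kind of linear mixed model used to estimate yearly eGFR decline with a random intercept and slope per participant; the data-generating values below are illustrative only, not the study cohort.

```python
# Minimal sketch: linear mixed model (random intercept + slope) for eGFR decline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_visits = 100, 4
rows = []
for sid in range(n_subj):
    base = rng.normal(75, 15)                 # baseline eGFR
    slope = rng.normal(-2.0, 1.5)             # yearly decline
    for year in range(n_visits):
        egfr = base + slope * year + rng.normal(0, 4)
        rows.append({"id": sid, "year": year, "egfr": egfr})
df = pd.DataFrame(rows)

model = smf.mixedlm("egfr ~ year", df, groups=df["id"], re_formula="~year")
fit = model.fit()
print(fit.summary())                          # fixed-effect 'year' ~ mean yearly decline
```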

  6. Cost Estimating Relationships for U.S. Navy Ships

    DTIC Science & Technology

    1983-09-01

    Institute for Defense Analyses, 1801 North Beauregard Street, Alexandria, Virginia 22311. ...linear CER also is displayed. In addition, Table S-1 displays the total observed cost, the total estimated cost, and the percent difference...report provided by program year a total end cost for each ship by hull number, including outfitting and post delivery costs. This end cost does not

  7. Cognitive behavioral therapy for insomnia in stable heart failure: Protocol for a randomized controlled trial.

    PubMed

    Redeker, Nancy S; Knies, Andrea K; Hollenbeak, Christopher; Klar Yaggi, H; Cline, John; Andrews, Laura; Jacoby, Daniel; Sullivan, Anna; O'Connell, Meghan; Iennaco, Joanne; Finoia, Lisa; Jeon, Sangchoon

    2017-04-01

    Chronic insomnia is associated with disabling symptoms and decrements in functional performance. It may contribute to the development of heart failure (HF) and incident mortality. In our previous work, cognitive-behavioral therapy for insomnia (CBT-I), compared to HF self-management education, provided as an attention control condition, was feasible, acceptable, and had large effects on insomnia and fatigue among HF patients. The purpose of this randomized controlled trial (RCT) is to evaluate the sustained effects of group CBT-I compared with HF self-management education (attention control) on insomnia severity, sleep characteristics, daytime symptoms, symptom clusters, functional performance, and health care utilization among patients with stable HF. We will estimate the cost-effectiveness of CBT-I and explore the effects of CBT-I on event-free survival (EFS). Two hundred participants will be randomized in clusters to a single center parallel group (CBT-I vs. attention control) RCT. Wrist actigraphy and self-report will elicit insomnia, sleep characteristics, symptoms, and functional performance. We will use the psychomotor vigilance test to evaluate sleep loss effects and the Six Minute Walk Test to evaluate effects on daytime function. Medical record review and interviews will elicit health care utilization and EFS. Statistical methods will include general linear mixed models and latent transition analysis. Stochastic cost-effectiveness analysis with a competing risk approach will be employed to conduct the cost-effectiveness analysis. The results will be generalizable to HF patients with chronic comorbid insomnia and pave the way for future research focused on the dissemination and translation of CBT-I into HF settings. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Casemix classification payment for sub-acute and non-acute inpatient care, Thailand.

    PubMed

    Khiaocharoen, Orathai; Pannarunothai, Supasit; Zungsontiporn, Chairoj; Riewpaiboon, Wachara

    2010-07-01

    There is a need to develop other casemix classifications, apart from DRG, for the sub-acute and non-acute inpatient care payment mechanism in Thailand. To develop a casemix classification for sub-acute and non-acute inpatient services. The study began with developing a classification system, analyzing cost, assigning payment weights, and ended with testing the validity of this new casemix system. Coefficient of variation, reduction in variance, linear regression, and split-half cross-validation were employed. The casemix for sub-acute and non-acute inpatient services contained 98 groups. Two percent of them had a coefficient of variation of cost higher than 1.5. The reduction in variance of cost after the classification was 32%. Two classification variables (physical function and the rehabilitation impairment categories) were key determinants of the cost (adjusted R2 = 0.749, p = .001). Validity results of split-half cross-validation of sub-acute and non-acute inpatient services were high. The present study indicated that the casemix for sub-acute and non-acute inpatient services closely predicted hospital resource use and should be further developed for payment of inpatient care in the sub-acute and non-acute phase.

  9. Analysis of Anterior Cervical Discectomy and Fusion Healthcare Costs via the Value-Driven Outcomes Tool.

    PubMed

    Reese, Jared C; Karsy, Michael; Twitchell, Spencer; Bisson, Erica F

    2018-04-11

    Examining the costs of single- and multilevel anterior cervical discectomy and fusion (ACDF) is important for the identification of cost drivers and potentially reducing patient costs. A novel tool at our institution provides direct costs for the identification of potential drivers. To assess perioperative healthcare costs for patients undergoing an ACDF. Patients who underwent an elective ACDF between July 2011 and January 2017 were identified retrospectively. Factors adding to total cost were placed into subcategories to identify the most significant contributors, and potential drivers of total cost were evaluated using a multivariable linear regression model. A total of 465 patients (mean age 53 ± 12 yr; 54% male) met the inclusion criteria for this study. The distribution of total cost was broken down into supplies/implants (39%), facility utilization (37%), physician fees (14%), pharmacy (7%), imaging (2%), and laboratory studies (1%). A multivariable linear regression analysis showed that total cost was significantly affected by the number of levels operated on, operating room time, and length of stay. Costs also showed a narrow distribution with few outliers and did not vary significantly over time. These results suggest that facility utilization and supplies/implants are the predominant cost contributors, accounting for 76% of the total cost of ACDF procedures. Efforts at lowering costs within these categories should make the most impact on providing more cost-effective care.

  10. Advanced control concepts. [for shuttle ascent vehicles

    NASA Technical Reports Server (NTRS)

    Sharp, J. B.; Coppey, J. M.

    1973-01-01

    The problems of excess control devices and insufficient trim control capability on shuttle ascent vehicles were investigated. The trim problem is solved at all time points of interest using Lagrangian multipliers and a Simplex based iterative algorithm developed as a result of the study. This algorithm has the capability to solve any bounded linear problem with physically realizable constraints, and to minimize any piecewise differentiable cost function. Both solution methods also automatically distribute the command torques to the control devices. It is shown that trim requirements are unrealizable if only the orbiter engines and the aerodynamic surfaces are used.

  11. Temperature measurement method using temperature coefficient timing for resistive or capacitive sensors

    DOEpatents

    Britton, C.L. Jr.; Ericson, M.N.

    1999-01-19

    A method and apparatus for temperature measurement especially suited for low-cost, low-power, moderate-accuracy implementation. It uses a sensor whose resistance varies in a known manner, either linearly or nonlinearly, with temperature, and produces a digital output which is proportional to the temperature of the sensor. The method is based on performing a zero-crossing time measurement of a step input signal that is double differentiated using two differentiators with respective first and second time constants, one temperature-stable and the other varying with the sensor temperature.

  12. A High-Sensitivity Hydraulic Load Cell for Small Kitchen Appliances

    PubMed Central

    Pačnik, Roman; Novak, Franc

    2010-01-01

    In this paper we present a hydraulic load cell made from hydroformed metallic bellows. The load cell was designed for a small kitchen appliance, with the weighing function integrated into the composite control and protection of the appliance. It is a simple, low-cost solution with small dimensions and represents an alternative to the existing hydraulic load cells in industrial use. Low non-linearity and small hysteresis were achieved. The influence of temperature leads to an error of 7.5%, which can be compensated for by software to meet the requirements of the target application. PMID:22163665

  13. A high-sensitivity hydraulic load cell for small kitchen appliances.

    PubMed

    Pačnik, Roman; Novak, Franc

    2010-01-01

    In this paper we present a hydraulic load cell made from hydroformed metallic bellows. The load cell was designed for a small kitchen appliance, with the weighing function integrated into the composite control and protection of the appliance. It is a simple, low-cost solution with small dimensions and represents an alternative to the existing hydraulic load cells in industrial use. Low non-linearity and small hysteresis were achieved. The influence of temperature leads to an error of 7.5%, which can be compensated for by software to meet the requirements of the target application.

  14. Quantum Approach to Cournot-type Competition

    NASA Astrophysics Data System (ADS)

    Frąckiewicz, Piotr

    2018-02-01

    The aim of this paper is to investigate Cournot-type competition in the quantum domain with the use of the Li-Du-Massar scheme for continuous-variable quantum games. We derive a formula which, in a simple way, determines a unique Nash equilibrium. The result covers a large class of Cournot duopoly problems, including competition in which the demand and cost functions are not necessarily linear. Further, we show that the Nash equilibrium converges to a Pareto-optimal strategy profile as the quantum correlation increases. In addition to illustrating how the formula works, we provide the readers with two examples.

  15. Optimal investments in digital communication systems in primary exchange area

    NASA Astrophysics Data System (ADS)

    Garcia, R.; Hornung, R.

    1980-11-01

    Integer linear optimization theory, following Gomory's method, was applied to model the planning of telecommunication networks in which all future investments are made in digital systems only. The integer decision variables are the numbers of digital systems that can be installed on cable or radio-relay links. The objective function is the total cost of extending the existing line capacity to meet the demand between primary and local exchanges. Traffic volume constraints and flow conservation in transit nodes complete the model. Results indicating computing time and method efficiency are illustrated by an example.

  16. Reliability analysis using an exponential power model with bathtub-shaped failure rate function: a Bayes study.

    PubMed

    Shehla, Romana; Khan, Athar Ali

    2016-01-01

    Models with a bathtub-shaped hazard function have been widely accepted in the fields of reliability and medicine and are particularly useful in reliability-related decision making and cost analysis. In this paper, the exponential power model, capable of assuming an increasing as well as a bathtub-shaped hazard rate, is studied. This article makes a Bayesian study of the same model and simultaneously shows how posterior simulations based on Markov chain Monte Carlo algorithms can be straightforward and routine in R. The study is carried out for complete as well as censored data, under the assumption of weakly informative priors for the parameters. In addition to this, inference interest focuses on the posterior distribution of non-linear functions of the parameters. Also, the model has been extended to include continuous explanatory variables, and the R codes are well illustrated. Two real data sets are considered for illustrative purposes.

  17. Maximum likelihood method for estimating airplane stability and control parameters from flight data in frequency domain

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1980-01-01

    A frequency domain maximum likelihood method is developed for the estimation of airplane stability and control parameters from measured data. The model of an airplane is represented by a discrete-type steady state Kalman filter with time variables replaced by their Fourier series expansions. The likelihood function of innovations is formulated, and by its maximization with respect to unknown parameters the estimation algorithm is obtained. This algorithm is then simplified to the output error estimation method with the data in the form of transformed time histories, frequency response curves, or spectral and cross-spectral densities. The development is followed by a discussion on the equivalence of the cost function in the time and frequency domains, and on advantages and disadvantages of the frequency domain approach. The algorithm developed is applied in four examples to the estimation of longitudinal parameters of a general aviation airplane using computer generated and measured data in turbulent and still air. The cost functions in the time and frequency domains are shown to be equivalent; therefore, both approaches are complementary and not contradictory. Despite some computational advantages of parameter estimation in the frequency domain, this approach is limited to linear equations of motion with constant coefficients.

  18. Robustness of neuroprosthetic decoding algorithms.

    PubMed

    Serruya, Mijail; Hatsopoulos, Nicholas; Fellows, Matthew; Paninski, Liam; Donoghue, John

    2003-03-01

    We assessed the ability of two algorithms to predict hand kinematics from neural activity as a function of the amount of data used to determine the algorithm parameters. Using chronically implanted intracortical arrays, single- and multineuron discharge was recorded during trained step tracking and slow continuous tracking tasks in macaque monkeys. The effect of increasing the amount of data used to build a neural decoding model on the ability of that model to predict hand kinematics accurately was examined. We evaluated how well a maximum-likelihood model classified discrete reaching directions and how well a linear filter model reconstructed continuous hand positions over time within and across days. For each of these two models we asked two questions: (1) How does classification performance change as the amount of data the model is built upon increases? (2) How does varying the time interval between the data used to build the model and the data used to test the model affect reconstruction? Less than 1 min of data for the discrete task (8 to 13 neurons) and less than 3 min (8 to 18 neurons) for the continuous task were required to build optimal models. Optimal performance was defined by a cost function we derived that reflects both the ability of the model to predict kinematics accurately and the cost of taking more time to build such models. For both the maximum-likelihood classifier and the linear filter model, increasing the duration between the time of building and testing the model within a day did not cause any significant trend of degradation or improvement in performance. Linear filters built on one day and tested on neural data on a subsequent day generated error-measure distributions that were not significantly different from those generated when the linear filters were tested on neural data from the initial day (p<0.05, Kolmogorov-Smirnov test). These data show that only a small amount of data from a limited number of cortical neurons appears to be necessary to construct robust models to predict kinematic parameters for the subsequent hours. Motor-control signals derived from neurons in motor cortex can be reliably acquired for use in neural prosthetic devices. Adequate decoding models can be built rapidly from small numbers of cells and maintained with daily calibration sessions.
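
    The linear filter model can be sketched as a lagged least-squares regression from binned spike counts to hand position, built on a short training block and tested on held-out data. The synthetic "neurons" below are illustrative, not recorded data, and no maximum-likelihood classifier is shown.

```python
# Minimal sketch: linear-filter decoding of hand position from lagged spike counts.
import numpy as np

rng = np.random.default_rng(0)
T, n_neurons, n_lags = 2000, 12, 10
true_pos = np.cumsum(rng.normal(0, 0.1, T))                 # 1-D hand position
rates = np.clip(5 + np.outer(true_pos, rng.normal(0, 1, n_neurons)), 0, None)
spikes = rng.poisson(rates * 0.05)                          # binned spike counts

def lagged_design(counts, n_lags):
    # Stack the last n_lags bins of every neuron into one regressor row.
    n_bins = counts.shape[0]
    rows = [counts[t - n_lags:t].ravel() for t in range(n_lags, n_bins)]
    return np.column_stack([np.ones(n_bins - n_lags), np.array(rows)])

X = lagged_design(spikes, n_lags)
y = true_pos[n_lags:]
split = len(y) // 2                                          # build the model on the first half only
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef
print("held-out correlation:", np.corrcoef(pred, y[split:])[0, 1])
```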

  19. Deep Neural Network Emulation of a High-Order, WENO-Limited, Space-Time Reconstruction

    NASA Astrophysics Data System (ADS)

    Norman, M. R.; Hall, D. M.

    2017-12-01

    Deep Neural Networks (DNNs) have been used to emulate a number of processes in atmospheric models, including radiation and even so-called super-parameterization of moist convection. In each scenario, the DNN provides a good representation of the process even for inputs that have not been encountered before. More notably, they provide an emulation at a fraction of the cost of the original routine, giving speed-ups of 30× and even up to 200× compared to the runtime costs of the original routines. However, to our knowledge there has not been an investigation into using DNNs to emulate the dynamics. The most likely reason for this is that dynamics operators are typically both linear and low cost, meaning they cannot be sped up by a non-linear DNN emulation. However, there exist high-cost non-linear space-time dynamics operators that significantly reduce the number of parallel data transfers necessary to complete an atmospheric simulation. The WENO-limited Finite-Volume method with ADER-DT time integration is a prime example of this - needing only two parallel communications per large, fully limited time step. However, it comes at a high cost in terms of computation, which is why many would hesitate to use it. This talk investigates DNN emulation of the WENO-limited space-time finite-volume reconstruction procedure - the most expensive portion of this method, which densely clusters a large amount of non-linear computation. Different training techniques and network architectures are tested, and the accuracy and speed-up of each is given.

  20. Spatial firm competition in two dimensions with linear transportation costs: simulations and analytical results

    NASA Astrophysics Data System (ADS)

    Roncoroni, Alan; Medo, Matus

    2016-12-01

    Models of spatial firm competition assume that customers are distributed in space and transportation costs are associated with their purchases of products from a small number of firms that are also placed at definite locations. It has been long known that the competition equilibrium is not guaranteed to exist if the most straightforward linear transportation costs are assumed. We show by simulations and also analytically that if periodic boundary conditions in a plane are assumed, the equilibrium exists for a pair of firms at any distance. When a larger number of firms is considered, we find that their total equilibrium profit is inversely proportional to the square root of the number of firms. We end with a numerical investigation of the system's behavior for a general transportation cost exponent.

  1. Direct Costs of Very Old Persons with Subsyndromal Depression: A 5-Year Prospective Study.

    PubMed

    Ludvigsson, Mikael; Bernfort, Lars; Marcusson, Jan; Wressle, Ewa; Milberg, Anna

    2018-03-15

    This study aimed to compare, over a 5-year period, the prospective direct healthcare costs and service utilization of persons with subsyndromal depression (SSD) and non-depressive persons (ND), in a population of very old persons. A second aim was to develop a model that predicts direct healthcare costs in very old persons with SSD. A prospective population-based study was undertaken on 85-year-old persons in Sweden. Depressiveness was screened with the Geriatric Depression Scale at baseline and at 1-year follow-up, and the results were classified into ND, SSD, and syndromal depression. Data on individual healthcare costs and service use from a 5-year period were derived from national database registers. Direct costs were compared between categories using Mann-Whitney U tests, and a prediction model was identified with linear regression. For persons with SSD, the direct healthcare costs per month of survival exceeded those of persons with ND by a ratio 1.45 (€634 versus €436), a difference that was significant even after controlling for somatic multimorbidity. The final regression model consisted of five independent variables predicting direct healthcare costs: male sex, activities of daily living functions, loneliness, presence of SSD, and somatic multimorbidity. SSD among very old persons is associated with increased direct healthcare costs independently of somatic multimorbidity. The associations between SSD, somatic multimorbidity, and healthcare costs in the very old need to be analyzed further in order to better guide allocation of resources in health policy. Copyright © 2018 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  2. Two Legendre-Dual-Petrov-Galerkin Algorithms for Solving the Integrated Forms of High Odd-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, Waleed M.; Doha, Eid H.; Bassuony, Mahmoud A.

    2014-01-01

    Two numerical algorithms based on dual-Petrov-Galerkin method are developed for solving the integrated forms of high odd-order boundary value problems (BVPs) governed by homogeneous and nonhomogeneous boundary conditions. Two different choices of trial functions and test functions which satisfy the underlying boundary conditions of the differential equations and the dual boundary conditions are used for this purpose. These choices lead to linear systems with specially structured matrices that can be efficiently inverted, hence greatly reducing the cost. The various matrix systems resulting from these discretizations are carefully investigated, especially their complexities and their condition numbers. Numerical results are given to illustrate the efficiency of the proposed algorithms, and some comparisons with some other methods are made. PMID:24616620

  3. Dual control and prevention of the turn-off phenomenon in a class of mimo systems

    NASA Technical Reports Server (NTRS)

    Mookerjee, P.; Bar-Shalom, Y.; Molusis, J. A.

    1985-01-01

    A recently developed methodology of adaptive dual control based upon sensitivity functions is applied here to a multivariable input-output model. The plant has constant but unknown parameters. It represents a simplified linear version of the relationship between the vibration output and the higher harmonic control input for a helicopter. The cautious and the new dual controller are examined. In many instances, the cautious controller is seen to turn off. The new dual controller modifies the cautious control design by numerator and denominator correction terms which depend upon the sensitivity functions of the expected future cost, and avoids the turn-off and burst phenomena. Monte Carlo simulations and statistical tests of significance indicate the superiority of the dual controller over the cautious and the heuristic certainty equivalence controllers.

  4. Flexible Al-doped ZnO films grown on PET substrates using linear facing target sputtering for flexible OLEDs

    NASA Astrophysics Data System (ADS)

    Jeong, Jin-A.; Shin, Hyun-Su; Choi, Kwang-Hyuk; Kim, Han-Ki

    2010-11-01

    We report the characteristics of flexible Al-doped zinc oxide (AZO) films prepared by a plasma damage-free linear facing target sputtering (LFTS) system on PET substrates for use as flexible transparent conducting electrodes in flexible organic light-emitting diodes (OLEDs). The electrical, optical and structural properties of LFTS-grown flexible AZO electrodes were investigated as a function of dc power. We obtained a flexible AZO film with a sheet resistance of 39 Ω/sq and an average transmittance of 84.86% in the visible range, even though it was sputtered at room temperature without activation of the Al dopant. Due to the effective confinement of the high-density plasma between the facing AZO targets, the AZO film was deposited on the PET substrate without plasma damage or substrate heating caused by bombardment of energetic particles. Moreover, the flexible OLED fabricated on the AZO/PET substrate showed performance similar to an OLED fabricated on an ITO/PET substrate in spite of a lower work function. This indicates that LFTS is a promising plasma damage-free and low-temperature sputtering technique for deposition of flexible and indium-free AZO electrodes for use in cost-efficient flexible OLEDs.

  5. A coarse-grid-projection acceleration method for finite-element incompressible flow computations

    NASA Astrophysics Data System (ADS)

    Kashefi, Ali; Staples, Anne; FiN Lab Team

    2015-11-01

    Coarse grid projection (CGP) methodology provides a framework for accelerating computations by performing some part of the computation on a coarsened grid. We apply the CGP to pressure projection methods for finite element-based incompressible flow simulations. Based on it, the predicted velocity field data is restricted to a coarsened grid, the pressure is determined by solving the Poisson equation on the coarse grid, and the resulting data are prolonged to the preset fine grid. The contributions of the CGP method to the pressure correction technique are twofold: first, it substantially lessens the computational cost devoted to the Poisson equation, which is the most time-consuming part of the simulation process. Second, it preserves the accuracy of the velocity field. The velocity and pressure spaces are approximated by Galerkin spectral element using piecewise linear basis functions. A restriction operator is designed so that fine data are directly injected into the coarse grid. The Laplacian and divergence matrices are driven by taking inner products of coarse grid shape functions. Linear interpolation is implemented to construct a prolongation operator. A study of the data accuracy and the CPU time for the CGP-based versus non-CGP computations is presented. Laboratory for Fluid Dynamics in Nature.

  6. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
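
    The modeling pattern can be sketched with a generalized linear model relating a binary forest-type indicator to terrain predictors, plus a Gaussian GLM for volume; the simulated predictors below stand in for the Utah inventory and satellite data and are assumptions for illustration only.

```python
# Minimal sketch: GLMs for forest-type presence and timber volume from terrain predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
elevation = rng.uniform(1500, 3500, n)
slope = rng.uniform(0, 40, n)
X = sm.add_constant(np.column_stack([elevation, slope]))

# Simulate presence/absence of a forest type with an elevation-driven logit.
p = 1 / (1 + np.exp(-(0.003 * (elevation - 2500) - 0.02 * slope)))
forested = rng.binomial(1, p)

logit_fit = sm.GLM(forested, X, family=sm.families.Binomial()).fit()
print(logit_fit.params)          # coefficients for intercept, elevation, slope

# Volume modeled only where the forest type is present, here with a Gaussian GLM.
volume = np.where(forested == 1, 20 + 0.01 * elevation + rng.normal(0, 5, n), 0.0)
vol_fit = sm.GLM(volume[forested == 1], X[forested == 1],
                 family=sm.families.Gaussian()).fit()
print(vol_fit.params)
```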

  7. Passive dendrites enable single neurons to compute linearly non-separable functions.

    PubMed

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions.

  8. Passive Dendrites Enable Single Neurons to Compute Linearly Non-separable Functions

    PubMed Central

    Cazé, Romain Daniel; Humphries, Mark; Gutkin, Boris

    2013-01-01

    Local supra-linear summation of excitatory inputs occurring in pyramidal cell dendrites, the so-called dendritic spikes, results in independent spiking dendritic sub-units, which turn pyramidal neurons into two-layer neural networks capable of computing linearly non-separable functions, such as the exclusive OR. Other neuron classes, such as interneurons, may possess only a few independent dendritic sub-units, or only passive dendrites where input summation is purely sub-linear, and where dendritic sub-units are only saturating. To determine if such neurons can also compute linearly non-separable functions, we enumerate, for a given parameter range, the Boolean functions implementable by a binary neuron model with a linear sub-unit and either a single spiking or a saturating dendritic sub-unit. We then analytically generalize these numerical results to an arbitrary number of non-linear sub-units. First, we show that a single non-linear dendritic sub-unit, in addition to the somatic non-linearity, is sufficient to compute linearly non-separable functions. Second, we analytically prove that, with a sufficient number of saturating dendritic sub-units, a neuron can compute all functions computable with purely excitatory inputs. Third, we show that these linearly non-separable functions can be implemented with at least two strategies: one where a dendritic sub-unit is sufficient to trigger a somatic spike; another where somatic spiking requires the cooperation of multiple dendritic sub-units. We formally prove that implementing the latter architecture is possible with both types of dendritic sub-units whereas the former is only possible with spiking dendrites. Finally, we show how linearly non-separable functions can be computed by a generic two-compartment biophysical model and a realistic neuron model of the cerebellar stellate cell interneuron. Taken together our results demonstrate that passive dendrites are sufficient to enable neurons to compute linearly non-separable functions. PMID:23468600

  9. Cooperative Solutions in Multi-Person Quadratic Decision Problems: Finite-Horizon and State-Feedback Cost-Cumulant Control Paradigm

    DTIC Science & Technology

    2007-01-01

    ...cooperative cost-cumulant control regime for the class of multi-person single-objective decision problems characterized by quadratic random costs and ...finite-horizon integral quadratic cost associated with a linear stochastic system. Since this problem formulation is parameterized by the number of cost

  10. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  11. Virasoro constraints and polynomial recursion for the linear Hodge integrals

    NASA Astrophysics Data System (ADS)

    Guo, Shuai; Wang, Gehao

    2017-04-01

    The Hodge tau-function is a generating function for the linear Hodge integrals. It is also a tau-function of the KP hierarchy. In this paper, we first present the Virasoro constraints for the Hodge tau-function in the explicit form of the Virasoro equations. The expression of our Virasoro constraints is simply a linear combination of the Virasoro operators, where the coefficients are restored from a power series for the Lambert W function. Then, using this result, we deduce a simple version of the Virasoro constraints for the linear Hodge partition function, where the coefficients are restored from the Gamma function. Finally, we establish the equivalence relation between the Virasoro constraints and polynomial recursion formula for the linear Hodge integrals.

  12. Poster — Thur Eve — 44: Linearization of Compartmental Models for More Robust Estimates of Regional Hemodynamic, Metabolic and Functional Parameters using DCE-CT/PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blais, AR; Dekaban, M; Lee, T-Y

    2014-08-15

    Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k₃, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast-enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k₃. Furthermore, these linearized models are solved with a non-negative least squares algorithm, and together they provide other advantages: 1) they admit only one solution and do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, and 3) computation time is significantly reduced. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k₃ estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k₃ from noisy dynamic PET data.
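
    As a minimal sketch of fitting a linearized kinetic model with non-negative least squares: the snippet below uses a Patlak-style graphical linearization (tissue activity as a non-negative linear combination of the integrated and instantaneous plasma input) as a stand-in for the authors' 2- and 3-compartment formulations; the input function, rate constants and noise level are synthetic.

        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(0.5, 60.0, 40)                    # sample times (min), hypothetical
        Cp = 10.0 * np.exp(-0.1 * t) + 1.0                # hypothetical plasma input function
        int_Cp = np.cumsum(Cp) * (t[1] - t[0])            # crude running integral of the input
        Ki_true, V0_true = 0.05, 0.4
        Ct = Ki_true * int_Cp + V0_true * Cp              # tissue curve under the linearized model
        Ct += np.random.default_rng(0).normal(0.0, 0.2, t.size)   # measurement noise

        # Linearized model: Ct(t) = Ki * integral(Cp) + V0 * Cp(t), with Ki, V0 >= 0.
        # NNLS has a unique solution and needs no starting values, as noted in the abstract.
        A = np.column_stack([int_Cp, Cp])
        params, residual = nnls(A, Ct)
        print("estimated (Ki, V0):", params)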

  13. Development of WRF-CO2 4DVAR Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Zheng, T.; French, N. H. F.

    2016-12-01

    Four-dimensional variational (4DVar) assimilation systems have been widely used for CO2 inverse modeling at the global scale. At the regional scale, however, 4DVar assimilation systems have been lacking. At present, most regional CO2 inverse models use Lagrangian particle backward-trajectory tools to compute influence functions in an analytical/synthesis framework. To provide a 4DVar-based alternative, we developed WRF-CO2 4DVar based on the Weather Research and Forecasting (WRF) model, its chemistry extension (WRF-Chem), and its data assimilation system (WRFDA/WRFPLUS). Unlike WRFDA, WRF-CO2 4DVar does not optimize the meteorological initial condition; instead, it solves for optimized CO2 surface fluxes (sources/sinks) constrained by atmospheric CO2 observations. Based on WRFPLUS, we developed tangent linear and adjoint code for CO2 emission, advection, vertical mixing in the boundary layer, and convective transport. Furthermore, we implemented an incremental algorithm to solve for optimized CO2 emission scaling factors by iteratively minimizing the cost function in a Bayesian framework. The model sensitivity (of atmospheric CO2 with respect to the emission scaling factors) calculated by the tangent linear and adjoint models agrees well with that calculated by finite differences, indicating the validity of the newly developed code. The effectiveness of WRF-CO2 4DVar for inverse modeling is tested using forward-model-generated pseudo-observations in two experiments: the first-guess CO2 fluxes have a 50% overestimation in the first case and a 50% underestimation in the second. In both cases, WRF-CO2 4DVar reduces the cost function to less than 10⁻⁴ of its initial value in fewer than 20 iterations and successfully recovers the true values of the emission scaling factors. We expect future applications of WRF-CO2 4DVar with satellite observations to provide insights for regional CO2 inverse modeling, including the impacts of model transport error in vertical mixing.
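
    As a toy illustration of the incremental, Bayesian cost-function minimization described above: a small linear operator H stands in for the WRF-Chem transport model, and the analytic gradient plays the role supplied by the adjoint code; the dimensions, error covariances and "true" scaling factors are invented for this sketch.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_flux, n_obs = 5, 40
        H = rng.uniform(0.0, 1.0, (n_obs, n_flux))      # toy linear "transport" operator
        s_true = np.array([1.0, 1.2, 0.8, 1.5, 0.9])    # true emission scaling factors
        y = H @ s_true + rng.normal(0.0, 0.05, n_obs)   # pseudo-observations
        s_b = np.ones(n_flux)                           # first guess: all scaling factors = 1
        B_inv = np.eye(n_flux) / 0.5**2                 # inverse background-error covariance
        R_inv = np.eye(n_obs) / 0.05**2                 # inverse observation-error covariance

        def cost_and_grad(s):
            """Bayesian 4DVar-style cost; the gradient is what the adjoint model provides."""
            db, dy = s - s_b, H @ s - y
            J = 0.5 * db @ B_inv @ db + 0.5 * dy @ R_inv @ dy
            return J, B_inv @ db + H.T @ (R_inv @ dy)

        res = minimize(cost_and_grad, s_b, jac=True, method="L-BFGS-B")
        print("recovered scaling factors:", np.round(res.x, 3))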

  14. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached full maturity for production codes, especially in parallel computing environments.
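
    The complex-variable approach of the title refers to the complex-step derivative: evaluating the function at x + ih with a tiny imaginary step gives f'(x) ≈ Im[f(x + ih)]/h with no subtractive cancellation, so the step size can be made extremely small. A minimal sketch on a scalar test function (a stand-in for a CFD cost function, not the paper's solver):

        import numpy as np

        def f(x):
            # contrived analytic test function standing in for a flow-derived cost function
            return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

        x0 = 1.5
        central_diff = (f(x0 + 1e-7) - f(x0 - 1e-7)) / 2e-7      # limited by cancellation error
        complex_step = np.imag(f(x0 + 1j * 1e-30)) / 1e-30       # step size can be arbitrarily small
        print(central_diff, complex_step)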

  15. Streptavidin-functionalized capillary immune microreactor for highly efficient chemiluminescent immunoassay.

    PubMed

    Yang, Zhanjun; Zong, Chen; Ju, Huangxian; Yan, Feng

    2011-11-07

    A streptavidin-functionalized capillary immune microreactor was designed for highly efficient flow-through chemiluminescent (CL) immunoassay. The functionalized capillary could be used both as a support for highly efficient immobilization of antibody and as a flow cell for flow-through immunoassay. The functionalized inner wall and the capture process were characterized using scanning electron microscopy. Compared to a conventional packed tube or thin-layer cell immunoreactor, the proposed microreactor showed remarkable properties such as lower cost, simpler fabrication, better practicality and a wider dynamic range for fast CL immunoassay with good reproducibility and stability. Using α-fetoprotein as a model analyte, the highly efficient CL flow-through immunoassay system showed a linear range of 3 orders of magnitude, from 0.5 to 200 ng mL⁻¹, and a low detection limit of 0.1 ng mL⁻¹. The capillary immune microreactor could make up for the shortcomings of conventional CL immunoreactors and provides a promising alternative for highly efficient flow-injection immunoassay. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Three-dimensional habitat structure and landscape genetics: a step forward in estimating functional connectivity.

    PubMed

    Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F

    2017-02-01

    Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.

  17. Sick of our loans: Student borrowing and the mental health of young adults in the United States.

    PubMed

    Walsemann, Katrina M; Gee, Gilbert C; Gentile, Danielle

    2015-01-01

    Student loans are increasingly important and commonplace, especially among recent cohorts of young adults in the United States. These loans facilitate the acquisition of human capital in the form of education, but may also lead to stress and worries related to repayment. This study investigated two questions: 1) what is the association between the cumulative amount of student loans borrowed over the course of schooling and psychological functioning when individuals are 25-31 years old; and 2) what is the association between annual student loan borrowing and psychological functioning among currently enrolled college students? We also examined whether these relationships varied by parental wealth, college enrollment history (e.g. 2-year versus 4-year college), and educational attainment (for cumulative student loans only). We analyzed data from the National Longitudinal Survey of Youth 1997 (NLSY97), a nationally representative sample of young adults in the United States. Analyses employed multivariate linear regression and within-person fixed-effects models. Student loans were associated with poorer psychological functioning, adjusting for covariates, in both the multivariate linear regression and the within-person fixed effects models. This association varied by level of parental wealth in the multivariate linear regression models only, and did not vary by college enrollment history or educational attainment. The present findings raise novel questions for further research regarding student loan debt and the possible spillover effects on other life circumstances, such as occupational trajectories and health inequities. The study of student loans is even more timely and significant given the ongoing rise in the costs of higher education. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Discriminative analysis of non-linear brain connectivity for leukoaraiosis with resting-state fMRI

    NASA Astrophysics Data System (ADS)

    Lai, Youzhi; Xu, Lele; Yao, Li; Wu, Xia

    2015-03-01

    Leukoaraiosis (LA) describes diffuse white matter abnormalities on CT or MR brain scans, often seen in the normal elderly and in association with vascular risk factors such as hypertension, or in the context of cognitive impairment. The mechanism of the associated cognitive dysfunction is still unclear. Recent clinical studies have revealed that the severity of LA does not correspond to the level of cognitive impairment, and functional connectivity analysis is an appropriate method to detect the relation between LA and cognitive decline. However, existing functional connectivity analyses of LA have been mostly limited to linear associations. In this investigation, a novel measure utilizing the extended maximal information coefficient (eMIC) was applied to construct non-linear functional connectivity in 44 LA subjects (9 dementia, 25 mild cognitive impairment (MCI) and 10 cognitively normal (CN)). The strength of the non-linear functional connections for the first 1% of discriminative power increased in MCI compared with CN and dementia, which was the opposite of its linear counterpart. Further functional network analysis revealed that the changes in non-linear and linear connectivity have similar but not identical spatial distributions in the human brain. In multivariate pattern analysis with multiple classifiers, the non-linear functional connectivity mostly identified dementia, MCI and CN from LA with a higher accuracy rate than the linear measure. Our findings reveal that non-linear functional connectivity provides useful discriminative power in the classification of LA, and that the spatially distributed differences between the non-linear and linear measures may indicate the underlying mechanism of cognitive dysfunction in LA.

  19. Development of a superconducting claw-pole linear test-rig

    NASA Astrophysics Data System (ADS)

    Radyjowski, Patryk; Keysan, Ozan; Burchell, Joseph; Mueller, Markus

    2016-04-01

    Superconducting generators can help to reduce the cost of energy for large offshore wind turbines, where the size and mass of the generator have a direct effect on the installation cost. However, existing superconducting generators are not as reliable as the alternative technologies. In this paper, a linear test prototype for a novel superconducting claw-pole topology is presented; it has a stationary superconducting coil that eliminates the cryocooler coupler. Issues related to the mechanical, electromagnetic and thermal aspects of the prototype are discussed.

  20. Implementing the water framework directive: contract design and the cost of measures to reduce nitrogen pollution from agriculture.

    PubMed

    Bartolini, Fabio; Gallerani, Vittorio; Raggi, Meri; Viaggi, Davide

    2007-10-01

    The performance of different policy design strategies is a key issue in evaluating programmes for water quality improvement under the Water Framework Directive (60/2000). This issue is emphasised by information asymmetries between regulator and agents. Using an economic model under asymmetric information, the aim of this paper is to compare the cost-effectiveness of selected methods of designing payments to farmers in order to reduce nitrogen pollution in agriculture. A principal-agent model is used, based on profit functions generated through farm-level linear programming. This allows a comparison of flat rate payments and a menu of contracts developed through mechanism design. The model is tested in an area of Emilia Romagna (Italy) in two policy contexts: Agenda 2000 and the 2003 Common Agricultural Policy (CAP) reform. The results show that different policy design options lead to differences in policy costs as great as 200-400%, with clear advantages for the menu of contracts. However, different policy scenarios may strongly affect such differences. Hence, the paper calls for greater attention to the interplay between CAP scenarios and water quality measures.

  1. Implementing the Water Framework Directive: Contract Design and the Cost of Measures to Reduce Nitrogen Pollution from Agriculture

    NASA Astrophysics Data System (ADS)

    Bartolini, Fabio; Gallerani, Vittorio; Raggi, Meri; Viaggi, Davide

    2007-10-01

    The performance of different policy design strategies is a key issue in evaluating programmes for water quality improvement under the Water Framework Directive (60/2000). This issue is emphasised by information asymmetries between regulator and agents. Using an economic model under asymmetric information, the aim of this paper is to compare the cost-effectiveness of selected methods of designing payments to farmers in order to reduce nitrogen pollution in agriculture. A principal-agent model is used, based on profit functions generated through farm-level linear programming. This allows a comparison of flat rate payments and a menu of contracts developed through mechanism design. The model is tested in an area of Emilia Romagna (Italy) in two policy contexts: Agenda 2000 and the 2003 Common Agricultural Policy (CAP) reform. The results show that different policy design options lead to differences in policy costs as great as 200-400%, with clear advantages for the menu of contracts. However, different policy scenarios may strongly affect such differences. Hence, the paper calls for greater attention to the interplay between CAP scenarios and water quality measures.

  2. A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium

    NASA Astrophysics Data System (ADS)

    Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand

    2014-05-01

    The field of atmospheric radiation has seen the development of ever more accurate and faster methods to take into account absorption in participating media. Radiative fog appears under clear-sky conditions due to significant cooling during the night, so scattering is left out. Modelling fog formation requires a sufficiently accurate method to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to solving the Radiative Transfer Equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses the analytical absorption functions fitted by Sasamori (1968) on Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit relation between the emissivity functions and the linear absorption coefficient. In the cooling-to-space approximation, this analytical expression gives very accurate results compared to the correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm. One-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method. Then, the three-dimensional RTE is solved under the grey-medium assumption with the discrete ordinates method (DOM). Comparisons with measurements of radiative quantities during the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.

  3. Battle for Climate and Scarcity Rents: Beyond the Linear-Quadratic Case.

    PubMed

    Kagan, Mark; van der Ploeg, Frederick; Withagen, Cees

    Industria imports oil, produces final goods and wishes to mitigate global warming. Oilrabia exports oil and buys final goods from the other country. Industria uses the carbon tax to impose an import tariff on oil and steal some of Oilrabia's scarcity rent. Conversely, Oilrabia has monopoly power and sets the oil price to steal some of Industria's climate rent. We analyze the relative speeds of oil extraction and carbon accumulation under these strategic interactions for various production function specifications and compare these with the efficient and competitive outcomes. We prove that for the class of HARA production functions, the oil price is initially higher and subsequently lower in the open-loop Nash equilibrium than in the efficient outcome. The oil extraction rate is thus initially too low and in later stages too high. The HARA class includes linear, loglinear and semi-loglinear demand functions as special cases. For non-HARA production functions, Oilrabia may in the open-loop Nash equilibrium initially price oil lower than the efficient level, thus resulting in more oil extraction and climate damages. We also contrast the open-loop Nash and efficient outcomes numerically with the feedback Nash outcomes. We find that the optimal carbon tax path in the feedback Nash equilibrium is flatter than in the open-loop Nash equilibrium. It turns out that for certain demand functions using the carbon tax as an import tariff may hurt consumers' welfare as the resulting user cost of oil is so high that the fall in welfare wipes out the gain from higher tariff revenues.

  4. Cost Considerations in Nonlinear Finite-Element Computing

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R. J.; Islam, M.; Salama, M.

    1985-01-01

    Conference paper discusses computational requirements for finite-element analysis using a quasi-linear approach to nonlinear problems. The paper evaluates the computational efficiency of different computer architectures in terms of relative cost and computing time.

  5. Task-switching cost and repetition priming: two overlooked confounds in the first-set procedure of the Sternberg paradigm and how they affect memory set-size effects.

    PubMed

    Jou, Jerwen

    2014-10-01

    Subjects performed Sternberg-type memory recognition tasks (Sternberg paradigm) in four experiments. Category-instance names were used as learning and testing materials. Sternberg's original experiments demonstrated a linear relation between reaction time (RT) and memory-set size (MSS). A few later studies found no relation, and other studies found a nonlinear relation (logarithmic) between the two variables. These deviations were used as evidence undermining Sternberg's serial scan theory. This study identified two confounding variables in the fixed-set procedure of the paradigm (where multiple probes are presented at test for a learned memory set) that could generate a MSS RT function that was either flat or logarithmic rather than linearly increasing. These two confounding variables were task-switching cost and repetition priming. The former factor worked against smaller memory sets and in favour of larger sets whereas the latter factor worked in the opposite way. Results demonstrated that a null or a logarithmic RT-to-MSS relation could be the artefact of the combined effects of these two variables. The Sternberg paradigm has been used widely in memory research, and a thorough understanding of the subtle methodological pitfalls is crucial. It is suggested that a varied-set procedure (where only one probe is presented at test for a learned memory set) is a more contamination-free procedure for measuring the MSS effects, and that if a fixed-set procedure is used, it is worthwhile examining the RT function of the very first trials across the MSSs, which are presumably relatively free of contamination by the subsequent trials.

  6. Bottom friction optimization for a better barotropic tide modelling

    NASA Astrophysics Data System (ADS)

    Boutet, Martial; Lathuilière, Cyril; Son Hoang, Hong; Baraille, Rémy

    2015-04-01

    At a regional scale, barotropic tides are the dominant source of variability in currents and water heights. A precise representation of these processes is essential because of their great impact on human activities (submersion risks, marine renewable energies, ...). The identified sources of error for tide modelling at a regional scale are the following: bathymetry, boundary forcing and dissipation due to bottom friction. Nevertheless, bathymetric databases are nowadays known with good accuracy, especially over shelves, and global tide models perform better than ever. The most promising avenue for improvement is thus the representation of bottom friction. The method used to estimate bottom friction is simultaneous perturbation stochastic approximation (SPSA), which approximates the gradient from a fixed number of cost function measurements, regardless of the dimension of the vector to be estimated. Each cost function measurement is obtained by randomly perturbing every component of the parameter vector. An important feature of SPSA is its relative ease of implementation. In particular, the method does not require the development of tangent linear and adjoint versions of the circulation model. Experiments are carried out to estimate bottom friction with the HYbrid Coordinate Ocean Model (HYCOM) in barotropic mode (one isopycnal layer). The study area is the Northeastern Atlantic margin, which is characterized by strong currents and intense dissipation. Bottom friction is parameterized with a quadratic term, and the friction coefficient is computed from the water height and the bottom roughness. The latter parameter is the one to be estimated. The assimilated data are the available tide gauge observations. First, the bottom roughness is estimated taking into account bottom sediment types and bathymetric ranges. Then, it is estimated with geographical degrees of freedom. Finally, the impact of estimating a mixed quadratic/linear friction is evaluated.
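
    A minimal SPSA sketch in the spirit of the description above: each iteration perturbs every component simultaneously with a random ±1 vector and needs only two cost-function evaluations, whatever the dimension of the parameter vector. The quadratic toy cost, gain sequences and "true" roughness values are illustrative, not those of the HYCOM experiments.

        import numpy as np

        def spsa_minimize(cost, theta0, n_iter=500, a=0.1, c=0.01, alpha=0.602, gamma=0.101, seed=0):
            """Bare-bones SPSA: two cost evaluations per iteration, no adjoint model required."""
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            for k in range(1, n_iter + 1):
                ak, ck = a / k**alpha, c / k**gamma                 # standard decaying gains
                delta = rng.choice([-1.0, 1.0], size=theta.size)    # simultaneous +/-1 perturbation
                g_hat = (cost(theta + ck * delta) - cost(theta - ck * delta)) / (2.0 * ck * delta)
                theta -= ak * g_hat
            return theta

        # toy stand-in for the misfit between modelled and observed tide-gauge heights
        roughness_true = np.array([0.003, 0.012, 0.007])            # hypothetical zonal roughness
        misfit = lambda z: float(np.sum((z - roughness_true) ** 2))
        print(spsa_minimize(misfit, np.full(3, 0.01)))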

  7. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and has shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy this drawback. KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently using the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with state-of-the-art methods.

  8. Economic analysis and assessment of syngas production using a modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hakkwan; Parajuli, Prem B.; Yu, Fei

    Economic analysis and modeling are essential for the development of current feedstock and process technology for bio-gasification. The objective of this study was to develop an economic model and apply it to predict the unit cost of syngas production from a micro-scale bio-gasification facility. The economic model was programmed in the C++ programming language and developed using a parametric cost approach, which included processes to calculate the total capital costs and the total operating costs. The model used measured economic data from the bio-gasification facility at Mississippi State University. The modeling results showed that the unit cost of syngas production was $1.217 for a 60 Nm³ h⁻¹ capacity bio-gasifier. The operating cost was the major part of the total production cost. The equipment purchase cost and the labor cost were the largest parts of the total capital cost and the total operating cost, respectively. Sensitivity analysis indicated that labor cost ranked highest, followed by equipment cost, loan life, feedstock cost, interest rate, utility cost, and waste treatment cost. The unit cost of syngas production increased with an increase in all parameters except loan life. The annual costs of equipment, labor, feedstock, waste treatment, and utilities showed a linear relationship with percent changes, while loan life and annual interest rate showed a non-linear relationship. This study provides useful information for the economic analysis and assessment of syngas production using a modeling approach.
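
    As a toy version of the parametric unit-cost calculation described above (the study's model is implemented in C++ and driven by measured data; every number below is invented for illustration): annualized capital cost plus annual operating cost, divided by annual syngas output.

        def unit_syngas_cost(capital_cost, loan_life_yr, interest_rate,
                             annual_operating_cost, capacity_nm3_per_h, uptime_h_per_yr=8000):
            """Unit cost = (annualized capital + operating costs) / annual syngas output."""
            # capital recovery factor converts the up-front investment into an equivalent annual cost
            crf = (interest_rate * (1 + interest_rate) ** loan_life_yr
                   / ((1 + interest_rate) ** loan_life_yr - 1))
            annual_capital_cost = capital_cost * crf
            annual_output_nm3 = capacity_nm3_per_h * uptime_h_per_yr
            return (annual_capital_cost + annual_operating_cost) / annual_output_nm3

        # illustrative inputs only -- not the Mississippi State facility's figures
        print(unit_syngas_cost(capital_cost=250_000, loan_life_yr=10, interest_rate=0.06,
                               annual_operating_cost=450_000, capacity_nm3_per_h=60))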

  9. Antiretroviral drug costs and prescription patterns in British Columbia, Canada: 1996-2011.

    PubMed

    Nosyk, Bohdan; Montaner, Julio S G; Yip, Benita; Lima, Viviane D; Hogg, Robert S

    2014-04-01

    Treatment options and therapeutic guidelines have evolved substantially since highly active antiretroviral treatment (HAART) became the standard of HIV care in 1996. We conducted the present population-based analysis to characterize the determinants of the direct costs of HAART over time in British Columbia, Canada. We considered individuals ever receiving HAART in British Columbia from 1996 to 2011. Linear mixed-effects regression models were constructed to determine the effects of demographic indicators, clinical stage, and treatment characteristics on quarterly costs of HAART (in 2010 CDN$) among individuals initiating in different temporal periods. Least-square mean values were estimated by CD4 category and over time for each temporal cohort. Longitudinal data on HAART recipients (N = 9601, 17.6% female, mean age at initiation = 40.5) were analyzed. Multiple regression analyses identified demographics, treatment adherence, and pharmacological class to be independently associated with quarterly HAART costs. Higher CD4 cell counts were associated with modestly lower costs among pre-HAART initiators [least-square means (95% confidence interval), CD4 >500: 4674 (4632-4716); CD4 350-499: 4765 (4721-4809); CD4 200-349: 4826 (4780-4871); CD4 <200: 4809 (4759-4859)]; however, these differences were not significant among post-2003 HAART initiators. Population-level mean costs increased through 2006 and stabilized thereafter; post-2003 HAART initiators incurred quarterly costs up to 23% lower than pre-2000 HAART initiators in 2010. Our results highlight the magnitude of the temporal changes in HAART costs and the disparities between recent and pre-HAART initiators. This methodology can improve the precision of economic modeling efforts by using detailed cost functions for annual, population-level medication costs according to the distribution of clients by clinical stage and era of treatment initiation.

  10. The One-Year Attributable Cost of Post-Stroke Dysphagia

    PubMed Central

    Bonilha, Heather Shaw; Simpson, Annie N.; Ellis, Charles; Mauldin, Patrick; Martin-Harris, Bonnie; Simpson, Kit

    2014-01-01

    With the recent emphasis on evidence-based practice and healthcare reform, understanding the cost of dysphagia management has never been more important. It is helpful for clinicians to understand and objectively report the costs associated with dysphagia when they advocate for their services in this economy. Having carefully estimated cost of illness, inputs are needed for cost-effectiveness analyses that help support the value of treatments. This study sought to address this issue by examining the 1-year cost associated with a diagnosis of dysphagia post-stroke in South Carolina. Furthermore, this study investigated whether ethnicity and residence differences exist in the cost of dysphagia post-stroke. Data on 3,200 patients in the South Carolina Medicare database from 2004 who had ICD-9 codes for ischemic stroke, 434 and 436, were retrospectively included in this study. Differences between persons with and without dysphagia post-stroke were compared with respect to age, gender, ethnicity, mortality, length of stay, comorbidity, rurality, discharge disposition, and cost to Medicare. Univariate analyses and a gamma-distributed generalized linear multivariable model with a log link function were completed. We found that the 1-year cost to Medicare for persons with dysphagia post ischemic stroke was $4,510 higher than that for persons without dysphagia post ischemic stroke when controlling for age, comorbidities, ethnicity, and proportion of time alive. Univariate analysis revealed that rurality, ethnicity, and gender were not statistically significantly different in comparisons of individuals with or without dysphagia post-stroke. Post-stroke dysphagia significantly increases post-stroke medical expenses. Understanding the expenditures associated with post-stroke dysphagia is helpful for optimal allocation and use of resources. Such information is needed to conduct cost-effectiveness studies. PMID:24948438
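
    For reference, a gamma-distributed generalized linear model with a log link of the kind used here can be fit in a few lines with statsmodels on simulated data. The covariates, coefficients and simulated cost distribution are assumptions of this sketch (not the study's data), and the exact name of the link class may differ across statsmodels versions.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(42)
        n = 500
        df = pd.DataFrame({"dysphagia": rng.integers(0, 2, n),
                           "age": rng.normal(75.0, 8.0, n)})
        # right-skewed costs with a multiplicative (log-link) structure, purely synthetic
        mu = np.exp(9.0 + 0.35 * df["dysphagia"] + 0.01 * (df["age"] - 75.0))
        df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

        model = smf.glm("cost ~ dysphagia + age", data=df,
                        family=sm.families.Gamma(link=sm.families.links.Log()))
        print(model.fit().summary())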

  11. Introduction of DRG-based reimbursement in inpatient psychosomatics--an examination of cost homogeneity and cost predictors in the treatment of patients with eating disorders.

    PubMed

    Haas, Laura; Stargardt, Tom; Schreyoegg, Jonas; Schlösser, Rico; Hofmann, Tobias; Danzer, Gerhard; Klapp, Burghard F

    2012-11-01

    Various western countries are focusing on the introduction of reimbursement based on diagnosis-related groups (DRG) in inpatient mental health. The aim of this study was to analyze if psychosomatic inpatients treated for eating disorders could be reimbursed by a common per diem rate. Inclusion criteria for patient selection (n=256) were (1) a main diagnosis of anorexia nervosa (AN), bulimia nervosa (BN) or eating disorder-related obesity (OB), (2) minimum length of hospital stay of 2 days, (3) and treatment at Charité Universitaetsmedizin Berlin, Germany during the years 2006-2009. Cost calculation was executed from the hospital's perspective, mainly using micro-costing. Generalized linear models with Gamma error distribution and log link function were estimated with per diem costs as dependent variable, clinical and patient variables as well as treatment year as independent variables. Mean costs/case for AN amounted to 5,251€, 95% CI [4407-6095], for BN to 3,265€, 95% CI [2921-3610] and for OB to 3,722€, 95% CI [4407-6095]. Mean costs/day over all patients amounted to 208€, 95% CI [198-218]. The diagnosis AN predicted higher costs in comparison to OB (p=.0009). A co-morbid personality disorder (p=.0442), every one-unit increase in BMI in OB patients (p=.0256), every one-unit decrease in BMI in AN patients (p=.0002) and every additional life year in BN patients (p=.0455) predicted increased costs. We see a need for refinements to take into account considerable variations in treatment costs between patients with eating disorders due to diagnosis, BMI, co-morbid personality disorder and age. Copyright © 2012 Elsevier Inc. All rights reserved.

  12. The one-year attributable cost of post-stroke dysphagia.

    PubMed

    Bonilha, Heather Shaw; Simpson, Annie N; Ellis, Charles; Mauldin, Patrick; Martin-Harris, Bonnie; Simpson, Kit

    2014-10-01

    With the recent emphasis on evidence-based practice and healthcare reform, understanding the cost of dysphagia management has never been more important. It is helpful for clinicians to understand and objectively report the costs associated with dysphagia when they advocate for their services in this economy. Having carefully estimated cost of illness, inputs are needed for cost-effectiveness analyses that help support the value of treatments. This study sought to address this issue by examining the 1-year cost associated with a diagnosis of dysphagia post-stroke in South Carolina. Furthermore, this study investigated whether ethnicity and residence differences exist in the cost of dysphagia post-stroke. Data on 3,200 patients in the South Carolina Medicare database from 2004 who had ICD-9 codes for ischemic stroke, 434 and 436, were retrospectively included in this study. Differences between persons with and without dysphagia post-stroke were compared with respect to age, gender, ethnicity, mortality, length of stay, comorbidity, rurality, discharge disposition, and cost to Medicare. Univariate analyses and a gamma-distributed generalized linear multivariable model with a log link function were completed. We found that the 1-year cost to Medicare for persons with dysphagia post ischemic stroke was $4,510 higher than that for persons without dysphagia post ischemic stroke when controlling for age, comorbidities, ethnicity, and proportion of time alive. Univariate analysis revealed that rurality, ethnicity, and gender were not statistically significantly different in comparisons of individuals with or without dysphagia post-stroke. Post-stroke dysphagia significantly increases post-stroke medical expenses. Understanding the expenditures associated with post-stroke dysphagia is helpful for optimal allocation and use of resources. Such information is needed to conduct cost-effectiveness studies.

  13. Costs Attributable to Overweight and Obesity in Working Asthma Patients in the United States

    PubMed Central

    Chang, Chongwon; Lee, Seung-Mi; Choi, Byoung-Whui; Song, Jong-hwa; Song, Hee; Jung, Sujin; Bai, Yoon Kyeong; Park, Haedong; Jeung, Seungwon

    2017-01-01

    Purpose: To estimate annual health care and productivity loss costs attributable to overweight or obesity in working asthmatic patients. Materials and Methods: This study was conducted using the 2003–2013 Medical Expenditure Panel Survey (MEPS) in the United States. Patients aged 18 to 64 years with asthma were identified via self-reported diagnosis, a Clinical Classification Code of 128, or an ICD-9-CM code of 493.xx. All-cause health care costs were estimated using a generalized linear model with a log function and a gamma distribution. Productivity loss costs were estimated in relation to hourly wages and missed work days, and a two-part model was used to adjust for patients with zero costs. To estimate the costs attributable to overweight or obesity in asthma patients, costs were estimated by the recycled prediction method. Results: Among 11670 working patients with a diagnosis of asthma, 4428 (35.2%) were obese and 3761 (33.0%) were overweight. The health care costs attributable to obesity and overweight in working asthma patients were estimated to be $878 [95% confidence interval (CI): $861–$895] and $257 (95% CI: $251–$262) per person per year, respectively, from 2003 to 2013. The productivity loss costs attributable to obesity and overweight among working asthma patients were $256 (95% CI: $253–$260) and $26 (95% CI: $26–$27) per person per year, respectively. Conclusion: Health care and productivity loss costs attributable to overweight and obesity in asthma patients are substantial. This study's results highlight the importance of effective public health and educational initiatives targeted at reducing overweight and obesity among patients with asthma, which may help lower the economic burden of asthma. PMID:27873513

  14. Costs Attributable to Overweight and Obesity in Working Asthma Patients in the United States.

    PubMed

    Chang, Chongwon; Lee, Seung Mi; Choi, Byoung Whui; Song, Jong Hwa; Song, Hee; Jung, Sujin; Bai, Yoon Kyeong; Park, Haedong; Jeung, Seungwon; Suh, Dong Churl

    2017-01-01

    To estimate annual health care and productivity loss costs attributable to overweight or obesity in working asthmatic patients. This study was conducted using the 2003-2013 Medical Expenditure Panel Survey (MEPS) in the United States. Patients aged 18 to 64 years with asthma were identified via self-reported diagnosis, a Clinical Classification Code of 128, or a ICD-9-CM code of 493.xx. All-cause health care costs were estimated using a generalized linear model with a log function and a gamma distribution. Productivity loss costs were estimated in relation to hourly wages and missed work days, and a two-part model was used to adjust for patients with zero costs. To estimate the costs attributable to overweight or obesity in asthma patients, costs were estimated by the recycled prediction method. Among 11670 working patients with a diagnosis of asthma, 4428 (35.2%) were obese and 3761 (33.0%) were overweight. The health care costs attributable to obesity and overweight in working asthma patients were estimated to be $878 [95% confidence interval (CI): $861-$895] and $257 (95% CI: $251-$262) per person per year, respectively, from 2003 to 2013. The productivity loss costs attributable to obesity and overweight among working asthma patients were $256 (95% CI: $253-$260) and $26 (95% CI: $26-$27) per person per year, respectively. Health care and productivity loss costs attributable to overweight and obesity in asthma patients are substantial. This study's results highlight the importance of effective public health and educational initiatives targeted at reducing overweight and obesity among patients with asthma, which may help lower the economic burden of asthma.

  15. VENVAL : a plywood mill cost accounting program

    Treesearch

    Henry Spelter

    1991-01-01

    This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...
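
    As a minimal example of the product-mix LP that VENVAL prepares data for (the report notes that the LP solver itself must be obtained separately), the sketch below maximizes revenue for two hypothetical panel types subject to veneer and press-time limits; all prices and coefficients are invented.

        from scipy.optimize import linprog

        prices = [320.0, 410.0]            # revenue per unit of each panel type (illustrative)
        c = [-p for p in prices]           # linprog minimizes, so negate to maximize revenue
        A_ub = [[1.2, 1.6],                # veneer consumed per unit of each product
                [0.8, 1.1]]                # press hours per unit of each product
        b_ub = [500.0, 350.0]              # available veneer and press hours
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print("optimal product mix:", res.x, "revenue:", -res.fun)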

  16. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.

  17. Assessment of disk MHD generators for a base load powerplant

    NASA Technical Reports Server (NTRS)

    Chubb, D. L.; Retallick, F. D.; Lu, C. L.; Stella, M.; Teare, J. D.; Loubsky, W. J.; Louis, J. F.; Misra, B.

    1981-01-01

    Results from a study of the disk MHD generator are presented. Both open- and closed-cycle disk systems were investigated. Costing of the open-cycle disk components (nozzle, channel, diffuser, radiant boiler, magnet and power management) was done; however, no detailed costing was done for the closed-cycle systems. Preliminary plant design for the open-cycle systems was also completed. Based on the system study results, an economic assessment of the open-cycle systems is presented. Costs of the open-cycle disk components are less than those of comparable linear generator components. Also, the costs of electricity for the open-cycle disk systems are competitive with those of comparable linear systems. Advantages of the simplicity of the disk design are considered. Improvements in channel availability or a reduction in the channel lifetime requirement are possible as a result of the disk design.

  18. Predictive models reduce talent development costs in female gymnastics.

    PubMed

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated 5 years after talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7% correctness. Application of the multilayer perceptron even classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based upon the different statistical analyses results in a 33.3% decrease in cost, because the pool of selected athletes can be reduced to 92 instead of the 138 gymnasts selected by the coaches. Reduction of the costs allows the limited resources to be fully invested in the high-potential athletes.

  19. New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2008-01-01

    The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…

  20. Fixed-point image orthorectification algorithms for reduced computational cost

    NASA Astrophysics Data System (ADS)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating-point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. The inverse must otherwise be computed iteratively; therefore, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating-point division. This method increases the throughput of the orthorectification operation by 38% when compared to floating-point processing. Additionally, this method improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating-point algorithm.
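
    A small sketch of the division-free idea described above: fit 1/den with a straight line over a normalized denominator range, convert the fit to Q16 fixed point, and replace the division by integer multiplies and shifts. The fixed-point format, fit range and resulting accuracy are illustrative; the dissertation's quadratic variant tightens the error further.

        import numpy as np

        FRAC_BITS = 16                                   # Q16 fixed-point format
        SCALE = 1 << FRAC_BITS

        def to_fixed(x):
            return int(round(x * SCALE))

        # linear least-squares fit of 1/d over [1, 2); denominators are assumed pre-scaled
        d = np.linspace(1.0, 2.0, 200)
        slope, intercept = np.polyfit(d, 1.0 / d, 1)
        slope_q, intercept_q = to_fixed(slope), to_fixed(intercept)

        def fixed_point_divide(num, den):
            """Approximate num/den using only integer multiplies, adds and shifts."""
            num_q, den_q = to_fixed(num), to_fixed(den)
            inv_q = ((slope_q * den_q) >> FRAC_BITS) + intercept_q   # linear estimate of 1/den
            return (num_q * inv_q >> FRAC_BITS) / SCALE              # back to float for display

        print(fixed_point_divide(3.0, 1.3), 3.0 / 1.3)   # approximation vs. exact quotient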

  1. Optimizing basin-scale coupled water quantity and water quality management with stochastic dynamic programming

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Liu, Suxia; Mo, Xingguo; Engelund Holm, Peter; Trapp, Stefan; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2015-04-01

    Few studies address water quality in hydro-economic models, which often focus primarily on optimal allocation of water quantities. Water quality and water quantity are closely coupled, and optimal management with a focus solely on either quantity or quality may cause large costs in terms of the other component. In this study, we couple water quality and water quantity in a joint hydro-economic catchment-scale optimization problem. Stochastic dynamic programming (SDP) is used to minimize the basin-wide total costs arising from water allocation, water curtailment and water treatment. The simple water quality module can handle conservative pollutants, first-order depletion and non-linear reactions. For demonstration purposes, we model pollutant releases as biochemical oxygen demand (BOD) and use the Streeter-Phelps equation for the oxygen deficit to compute the resulting minimum dissolved oxygen concentrations. Inelastic water demands, fixed water allocation curtailment costs and fixed wastewater treatment costs (before and after use) are estimated for the water users (agriculture, industry and domestic). If the BOD concentration exceeds a given user pollution threshold, the user will need to pay for pre-treatment of the water before use. Similarly, treatment of the return flow can reduce the BOD load to the river. A traditional SDP approach is used to solve one-step-ahead sub-problems for all combinations of discrete reservoir storage, Markov chain inflow classes and monthly time steps. Pollution concentration nodes are introduced for each user group, and untreated return flow from the users contributes to increased BOD concentrations in the river. The pollutant concentrations in each node depend on multiple decision variables (allocation and wastewater treatment), rendering the objective function non-linear. Therefore, the pollution concentration decisions are outsourced to a genetic algorithm, which calls a linear program to determine the remainder of the decision variables. This hybrid formulation keeps the optimization problem computationally feasible and represents a flexible and customizable method. The method has been applied to the Ziya River basin, an economic hotspot located on the North China Plain in Northern China. The basin is subject to severe water scarcity, and the rivers are heavily polluted with wastewater and nutrients from diffuse sources. The coupled hydro-economic optimization model can be used to assess the costs of meeting additional constraints such as minimum water quality, or to prioritize investments in wastewater treatment facilities based on economic criteria.
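
    For reference, the Streeter-Phelps oxygen-sag relation used for the dissolved-oxygen computation can be evaluated directly; the deoxygenation and reaeration rates, initial deficit and BOD load below are illustrative values, not the basin's.

        import numpy as np

        def streeter_phelps_deficit(t, L0, D0, kd, ka):
            """Oxygen deficit D(t) downstream of a BOD load (kd = deoxygenation, ka = reaeration)."""
            return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) + D0 * np.exp(-ka * t)

        t = np.linspace(0.0, 10.0, 500)                                     # travel time downstream (days)
        D = streeter_phelps_deficit(t, L0=20.0, D0=1.0, kd=0.35, ka=0.7)    # mg/L, rates in 1/day
        DO_saturation = 9.0                                                 # mg/L, illustrative
        print("minimum dissolved oxygen: %.2f mg/L at t = %.2f d" % (DO_saturation - D.max(), t[D.argmax()]))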

  2. Is the level of institutionalisation found in psychiatric housing services associated with the severity of illness and the functional impairment of the patients? A patient record analysis.

    PubMed

    Valdes-Stauber, Juan; Kilian, Reinhold

    2015-09-14

    In this cross-sectional study, we investigated whether clinical, social, financial, and care variables were associated with different accommodation settings for individuals suffering from severe and persistent mental disorders. Electronic record data of 250 patients who fulfilled the criteria for persistent and severe mental illness were used. Multiple linear regression models were applied to analyse associations between the types and costs of housing services and the patients' severity of illness, their functional impairment, and their socio-demographic characteristics. We identified 50 patients living at home without need for additional housing support who were receiving outpatient treatment, 41 patients living in the community with outpatient housing support, 23 patients living with foster families for adults, 45 patients living in group homes with 12-h staff cover, 10 patients living in group homes with 24-h staff, and 81 patients living in psychiatric nursing homes. While these housing settings differed largely in the level of institutionalisation and also in the costs of accommodation, these differences were not related to a patient's severity of illness or functional impairment. In particular, patients living in nursing homes had a slightly higher level of functioning compared to those living in the community without welfare housing services. Only where patients were subject to guardianship was there a significant association with an increased level of institutionalisation. Our study suggests that the level of institutionalisation and the associated costs of welfare housing services do not accurately reflect the severity of illness or the level of functional impairment of the patients they are designed to support. The limitations of the study design and the data do not allow for conclusions about causal relationships or generalisation of the findings to other regions. Therefore, further prospective studies are needed to assess the adequacy of assigning patients with persistent severe mental illness to different types of housing settings with appropriate (also welfare) services.

  3. A mechanical energy harvested magnetorheological damper with linear-rotary motion converter

    NASA Astrophysics Data System (ADS)

    Chu, Ki Sum; Zou, Li; Liao, Wei-Hsin

    2016-04-01

    Magnetorheological (MR) dampers are promising substitutes for traditional oil dampers because of the adaptive properties of MR fluids. During vibration, significant energy is wasted due to energy dissipation in the damper. Meanwhile, conventional MR damping systems need an extra power supply. In this paper, a new energy harvester is designed into an MR damper, integrating controllable damping and energy harvesting functions in one device. The energy harvesting part of this MR damper has a unique mechanism converting linear motion to rotary motion that is more stable and cost-effective when compared to other mechanical transmissions. A Maxon motor is used as a power generator to convert the mechanical energy into electrical energy to power the MR damping system. Compared to conventional approaches, such an integrated device offers several advantages, including weight reduction and ease of installation with less maintenance. A mechanical-energy-harvesting MR damper with a linear-rotary motion converter and motion rectifier is designed, fabricated, and tested. Experimental studies on controllable damping force and harvested energy are performed with different transmissions. This energy-harvesting MR damper would be suitable for vehicle suspensions, civil structures, and smart prostheses.

  4. BPF-type region-of-interest reconstruction for parallel translational computed tomography.

    PubMed

    Wu, Weiwen; Yu, Hengyong; Wang, Shaoyu; Liu, Fenglin

    2017-01-01

    The objective of this study is to present and test a new ultra-low-cost linear-scan-based tomography architecture. Similar to linear tomosynthesis, the source and detector are translated in opposite directions and the data acquisition system targets a region of interest (ROI) to acquire data for image reconstruction. This kind of tomographic architecture is named parallel translational computed tomography (PTCT). In previous studies, filtered backprojection (FBP)-type algorithms were developed to reconstruct images from PTCT. However, the ROI images reconstructed from truncated projections have severe truncation artefacts. In order to overcome this limitation, in this study we propose two backprojection filtering (BPF)-type algorithms, named MP-BPF and MZ-BPF, to reconstruct ROI images from truncated PTCT data. A weight function is constructed to deal with data redundancy for multi-linear translation modes. Extensive numerical simulations are performed to evaluate the proposed MP-BPF and MZ-BPF algorithms for PTCT in fan-beam geometry. Qualitative and quantitative results demonstrate that the proposed BPF-type algorithms can not only more accurately reconstruct ROI images from truncated projections but also generate high-quality images for the entire image support in some circumstances.

  5. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  6. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper we propose a novel non-rigid volume registration method based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels that are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighboring control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function by selecting a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from those of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is used to recover the optimal registration parameters. Therefore, the method is gradient free, can encode various similarity metrics (through simple changes to the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the strong potential of our approach.

  7. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    PubMed

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of a biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the prices of the resources, the yield of grain, and the market demands were regarded as interval numbers instead of constants. An interval linear programming model was developed, and a method for solving it was presented. An illustrative case was studied using the proposed model, and the results showed that the model is feasible for designing biofuel supply chains under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
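
    A minimal Python sketch of the interval-programming idea, on an assumed toy two-variable supply problem with interval prices and demand (not the paper's actual model), brackets the optimal cost by solving the same LP at the optimistic and pessimistic interval endpoints:

```python
import numpy as np
from scipy.optimize import linprog

# Interval coefficients: unit transport/production costs and market demand
# are given as [low, high] ranges rather than point values (placeholder numbers).
price_lo, price_hi = np.array([2.0, 5.0]), np.array([3.0, 6.0])
demand_lo, demand_hi = 40.0, 55.0

def solve(cost, demand):
    # minimize cost @ x  subject to  x1 + x2 >= demand,  0 <= x_i <= 60
    res = linprog(cost, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                  bounds=[(0, 60), (0, 60)], method="highs")
    return res.fun

best = solve(price_lo, demand_lo)    # optimistic bound of the objective
worst = solve(price_hi, demand_hi)   # pessimistic bound
print(f"life-cycle cost lies in [{best:.1f}, {worst:.1f}]")
```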

  8. An architecture for efficient gravitational wave parameter estimation with multimodal linear surrogate models

    NASA Astrophysics Data System (ADS)

    O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.

    2017-07-01

    The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.

  9. Construction of siRNA/miRNA expression vectors based on a one-step PCR process

    PubMed Central

    Xu, Jun; Zeng, Jie Qiong; Wan, Gang; Hu, Gui Bin; Yan, Hong; Ma, Li Xin

    2009-01-01

    Background RNA interference (RNAi) has become a powerful means for silencing target gene expression in mammalian cells and is envisioned to be useful in therapeutic approaches to human disease. In recent years, high-throughput, genome-wide screening of siRNA/miRNA libraries has emerged as a desirable approach. Current methods for constructing siRNA/miRNA expression vectors require the synthesis of long oligonucleotides, which is costly and suffers from mutation problems. Results Here we report an ingenious method to solve traditional problems associated with construction of siRNA/miRNA expression vectors. We synthesized shorter primers (< 50 nucleotides) to generate a linear expression structure by PCR. The PCR products were directly transformed into chemically competent E. coli and converted to functional vectors in vivo via homologous recombination. The positive clones could be easily screened under UV light. Using this method we successfully constructed over 500 functional siRNA/miRNA expression vectors. Sequencing of the vectors confirmed a high accuracy rate. Conclusion This novel, convenient, low-cost and highly efficient approach may be useful for high-throughput assays of RNAi libraries. PMID:19490634

  10. CUDAICA: GPU Optimization of Infomax-ICA EEG Analysis

    PubMed Central

    Raimondo, Federico; Kamienkowski, Juan E.; Sigman, Mariano; Fernandez Slezak, Diego

    2012-01-01

    In recent years, Independent Component Analysis (ICA) has become a standard method for identifying relevant dimensions of data in neuroscience. ICA is a very reliable method for analyzing data, but it is computationally very costly, which makes its use for online analysis, as required in brain-computer interfaces, almost prohibitive. We show a roughly 25-fold speed-up of ICA at almost no cost (a fast video card). EEG data, which consist of many repeated independent signals across multiple channels, are well suited to processing on the vector processors included in graphical units. We profiled the implementation of this algorithm and identified two types of operations responsible for the processing bottleneck, together taking almost 80% of the computing time: vector-matrix and matrix-matrix multiplications. Simply replacing calls to basic linear algebra functions with the standard CUBLAS routines provided by GPU manufacturers does not increase performance, owing to CUDA kernel launch overhead. Instead, we developed a GPU-based solution that, compared with the original BLAS and CUBLAS versions, achieves a 25x performance increase for the ICA calculation. PMID:22811699

  11. Quasi-dynamic Earthquake Cycle Simulation in a Viscoelastic Medium with Memory Variables

    NASA Astrophysics Data System (ADS)

    Hirahara, K.; Ohtani, M.; Shikakura, Y.

    2011-12-01

    Earthquake cycle simulations based on rate and state friction laws have successfully reproduced the observed complex earthquake cycles at subduction zones. Most simulations have assumed elastic media. The lower crust and the upper mantle have, however, viscoelastic properties, which cause postseismic stress relaxation. Hence the slip evolution on the plate interfaces or the faults over long earthquake cycles differs from that in elastic media. In particular, viscoelasticity plays an important role in the interactive occurrence of inland and great interplate earthquakes. In viscoelastic media, the stress is usually calculated by the temporal convolution of the slip response function matrix and the slip deficit rate vector, which requires the past history of slip rates at all cells. Even with the convolution properly truncated, this requires huge computations, which is why few simulation studies have considered viscoelastic media so far. In this study, we examine the method of memory variables, or anelastic functions, which was developed for the time-domain finite-difference calculation of seismic waves in a dissipative medium (e.g., Emmerich and Korn, 1987; Moczo and Kristek, 2005). The procedure for stress calculation with memory variables is as follows. First, we approximate the time-domain slip response function calculated in a viscoelastic medium by a series of relaxation functions whose coefficients and relaxation times are derived from a generalized Maxwell body model. Then we can define a time-domain, material-independent memory variable or anelastic function for each relaxation mechanism. Each time-domain memory variable satisfies a first-order differential equation. As a result, we can calculate the stress simply as the product of the unrelaxed modulus and the difference between the slip deficit and the sum of the memory variables, without temporal convolution. With respect to computational cost, the situation can be summarized as follows. Dividing the plate interface into N cells, in elastic media the stress at all cells is calculated by the product of the slip response function matrix and the slip deficit vector, at a computational cost of O(N**2). With the H-matrices method, this can be reduced to O(N)-O(NlogN) (Ohtani et al. 2011), and the memory size is also reduced from O(N**2) to O(N). In viscoelastic media, the product of the unrelaxed modulus matrix and the vector of the slip deficit minus the sum of memory variables costs O(N) with the H-matrices method, the same as in the elastic case. If we use m relaxation functions, m x N differential equations are additionally solved at each time step. The increase in memory size is (4m+1) x N**2. To approximate the slip response function, we need to estimate the coefficients and relaxation times of the m relaxation functions non-linearly with constraints. Because it is difficult to carry out the non-linear least-squares estimation with constraints, we consider only m=2 while satisfying the constraints. Test calculations in a layered or 3-D heterogeneous viscoelastic structure show that this gives a satisfactory approximation. As an example, we report a 2-D earthquake cycle simulation for the 2011 giant Tohoku earthquake in a layered viscoelastic medium.
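
    As a rough illustration of the memory-variable idea (replacing the temporal convolution with per-mechanism first-order ODE updates), a minimal Python sketch with assumed relaxation coefficients, relaxation times, and sign convention might look like this:

```python
import numpy as np

# Illustrative sketch (not the authors' code): relaxation approximated by
# m = 2 exponential mechanisms with hypothetical coefficients c_k and times tau_k.
mu_u = 30e9                      # unrelaxed modulus (Pa), illustrative value
c = np.array([0.3, 0.2])         # relaxation coefficients (assumed)
tau = np.array([1.0e7, 1.0e8])   # relaxation times in seconds (assumed)

def step(zeta, slip_deficit, dt):
    """Advance the memory variables one time step and return the stress.

    Each memory variable zeta_k obeys the first-order ODE
        d zeta_k / dt = (c_k * slip_deficit - zeta_k) / tau_k,
    so no convolution over the past slip history is needed.
    """
    zeta = zeta + dt * (c * slip_deficit - zeta) / tau    # explicit Euler update
    stress = mu_u * (slip_deficit - zeta.sum())           # sign convention assumed
    return zeta, stress

# Usage: drive the model with a simple, constant slip-deficit rate.
zeta = np.zeros_like(c)
slip_deficit, rate, dt = 0.0, 1e-9, 3600.0
for _ in range(24 * 365):                                 # one year of hourly steps
    slip_deficit += rate * dt
    zeta, stress = step(zeta, slip_deficit, dt)
print(f"stress after one year: {stress:.3e} Pa")
```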

  12. Performance of statistical models to predict mental health and substance abuse cost.

    PubMed

    Montez-Rath, Maria; Christiansen, Cindy L; Ettner, Susan L; Loveland, Susan; Rosen, Amy K

    2006-10-26

    Providers use risk-adjustment systems to help manage healthcare costs. Typically, ordinary least squares (OLS) models on either untransformed or log-transformed cost are used. We examine the predictive ability of several statistical models, demonstrate how model choice depends on the goal for the predictive model, and examine whether building models on samples of the data affects model choice. Our sample consisted of 525,620 Veterans Health Administration patients with mental health (MH) or substance abuse (SA) diagnoses who incurred costs during fiscal year 1999. We tested two models on a transformation of cost: a Log Normal model and a Square-root Normal model, and three generalized linear models on untransformed cost, defined by distributional assumption and link function: Normal with identity link (OLS); Gamma with log link; and Gamma with square-root link. Risk-adjusters included age, sex, and 12 MH/SA categories. To determine the best model on the entire dataset, predictive ability was evaluated using root mean square error (RMSE), mean absolute prediction error (MAPE), and predictive ratios of predicted to observed cost (PR) among deciles of predicted cost, by comparing point estimates and 95% bias-corrected bootstrap confidence intervals. To study the effect of analyzing a random sample of the population on model choice, we re-computed these statistics using random samples beginning with 5,000 patients and ending with the entire sample. The Square-root Normal model had the lowest estimates of the RMSE and MAPE, with bootstrap confidence intervals that were always lower than those for the other models. The Gamma with square-root link was best as measured by the PRs. The choice of best model could vary if smaller samples were used, and the Gamma with square-root link model had convergence problems with small samples. Models with square-root transformation or link fit the data best. This function (whether used as transformation or as a link) seems to help deal with the high comorbidity of this population by introducing a form of interaction. The Gamma distribution helps with the long tail of the distribution. However, the Normal distribution is suitable if the correct transformation of the outcome is used.
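
    As an illustration of the best-performing "Square-root Normal" model, a minimal Python sketch on synthetic cost data (OLS on sqrt(cost), with predictions squared back to the cost scale; a smearing-type retransformation correction is often added in practice) could be:

```python
import numpy as np

# Synthetic placeholder data: a skewed, non-negative cost driven by age and one diagnosis flag.
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.integers(18, 90, n), rng.integers(0, 2, n)])
true_beta = np.array([10.0, 0.5, 8.0])
cost = (X @ true_beta + rng.normal(0, 5, n)) ** 2

# Square-root Normal model: ordinary least squares on sqrt(cost).
beta_hat, *_ = np.linalg.lstsq(X, np.sqrt(cost), rcond=None)
pred = (X @ beta_hat) ** 2                    # back-transform to the cost scale

rmse = np.sqrt(np.mean((cost - pred) ** 2))   # root mean square error
mape = np.mean(np.abs(cost - pred))           # mean absolute prediction error
print(f"RMSE = {rmse:.1f}, MAPE = {mape:.1f}")
```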

  13. The cost of colorectal cancer according to the TNM stage.

    PubMed

    Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu

    2017-02-01

    The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use of a sample of 529 patients. For stages I to III the initial and follow-up costs were measured. The calculation of the cost for stage IV combined generalized linear models relating cost to the duration of follow-up with parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stage I, 171 with stage II, 158 with stage III and 90 with stage IV colorectal cancer. The initial total cost per patient was 8,644€ for stage I, 12,675€ for stage II and 13,034€ for stage III. The main component was hospitalization cost. Mean survival for stage IV, calculated by extrapolation, was 1.27 years; its average annual cost was 22,403€, and 24,509€ to death. The total annual cost for colorectal cancer extrapolated to the whole Spanish health system was 623.9 million €. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of the metastatic stage. Copyright © 2017 AEC. Published by Elsevier España, S.L.U. All rights reserved.

  14. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    PubMed

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects at the application level would allow designers to apply focused mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both the Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement the Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of faces and handwritten digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
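
    A minimal digital Python sketch of an LMS-trained adaptive linear combiner of the kind used here for feature extraction (synthetic data; the analog circuit effects themselves are not modeled) could be:

```python
import numpy as np

# Adaptive linear combiner trained with the LMS rule on synthetic data.
rng = np.random.default_rng(1)
n_samples, n_inputs = 2000, 8
X = rng.normal(size=(n_samples, n_inputs))
w_true = rng.normal(size=n_inputs)
d = X @ w_true + 0.01 * rng.normal(size=n_samples)   # desired output

w = np.zeros(n_inputs)       # combiner weights
mu = 0.01                    # LMS step size
for x, target in zip(X, d):
    y = w @ x                # combiner output
    e = target - y           # error signal
    w += mu * e * x          # LMS weight update: w <- w + mu * e * x

print("weight error:", np.linalg.norm(w - w_true))
```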

  15. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas in Heping irrigation area, Qing'an County, northeast China shows the potential and applicability of the developed model. Results show that the crop yield increase target especially in tillering and elongation stages is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  16. LINEARIZATION OF EMPIRICAL RHEOLOGICAL DATA FOR USE IN COMPOSITION CONTROL OF MULTICOMPONENT FOODSTUFFS.

    PubMed

    Drake, Birger; Nádai, Béla

    1970-03-01

    An empirical measure of viscosity, which is often far from being a linear function of composition, was used together with refractive index to build up a function which bears a linear relationship to the composition of tomato paste-water-sucrose mixtures. The new function can be used directly for rapid composition control by linear vector-vector transformation.

  17. Closed-loop control of boundary layer streaks induced by free-stream turbulence

    NASA Astrophysics Data System (ADS)

    Papadakis, George; Lu, Liang; Ricco, Pierre

    2016-08-01

    The central aim of the paper is to carry out a theoretical and numerical study of active wall transpiration control of streaks generated within an incompressible boundary layer by free-stream turbulence. The disturbance flow model is based on the linearized unsteady boundary-region (LUBR) equations, studied by Leib, Wundrow, and Goldstein [J. Fluid Mech. 380, 169 (1999), 10.1017/S0022112098003504], which are the rigorous asymptotic limit of the Navier-Stokes equations for low-frequency and long-streamwise wavelength. The mathematical formulation of the problem directly incorporates the random forcing into the equations in a consistent way. Due to linearity, this forcing is factored out and appears as a multiplicative factor. It is shown that the cost function (integral of kinetic energy in the domain) is properly defined as the expectation of a random quadratic function only after integration in wave number space. This operation naturally introduces the free-stream turbulence spectral tensor into the cost function. The controller gains for each wave number are independent of the spectral tensor and, in that sense, universal. Asymptotic matching of the LUBR equations with the free-stream conditions results in an additional forcing term in the state-space system whose presence necessitates the reformulation of the control problem and the rederivation of its solution. It is proved that the solution can be obtained analytically using an extension of the sweep method used in control theory to obtain the standard Riccati equation. The control signal consists of two components, a feedback part and a feed-forward part (that depends explicitly on the forcing term). Explicit recursive equations that provide these two components are derived. It is shown that the feed-forward part makes a negligible contribution to the control signal. We also derive an explicit expression that a priori (i.e., before solving the control problem) leads to the minimum of the objective cost function (i.e., the fundamental performance limit), based only on the system matrices and the initial and free-stream boundary conditions. The adjoint equations admit a self-similar solution for large spanwise wave numbers with a scaling which is different from that of the LUBR equations. The controlled flow field also has a self-similar solution if the weighting matrices of the objective function are chosen appropriately. The code developed to implement this algorithm is efficient and has modest memory requirements. Computations show the significant reduction of energy for each wave number. The control of the full spectrum streaks, for conditions corresponding to a realistic experimental case, shows that the root-mean-square of the streamwise velocity is strongly suppressed in the whole domain and for all the frequency ranges examined.

  18. A life cycle cost economics model for automation projects with uniformly varying operating costs. [applied to Deep Space Network and Air Force Systems Command

    NASA Technical Reports Server (NTRS)

    Remer, D. S.

    1977-01-01

    The described mathematical model calculates life-cycle costs for projects with operating costs increasing or decreasing linearly with time. The cost factors involved in the life-cycle cost are considered, and the errors resulting from the assumption of constant rather than uniformly varying operating costs are examined. Parameters in the study range from 2 to 30 years, for project life; 0 to 15% per year, for interest rate; and 5 to 90% of the initial operating cost, for the operating cost gradient. A numerical example is presented.
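
    A minimal Python sketch of such a life-cycle cost with a uniformly varying (arithmetic-gradient) operating cost, using placeholder figures rather than the paper's parameters, could be:

```python
# Life-cycle cost with operating costs varying linearly with time:
# C_t = C_1 + G*(t - 1) in year t, discounted at interest rate i.
def life_cycle_cost(initial_cost, first_year_opex, gradient, rate, years):
    present_worth = initial_cost
    for t in range(1, years + 1):
        opex_t = first_year_opex + gradient * (t - 1)   # linearly varying operating cost
        present_worth += opex_t / (1.0 + rate) ** t     # discount to present value
    return present_worth

# Compare against the constant-operating-cost assumption the paper examines.
varying = life_cycle_cost(100_000, 10_000, gradient=500.0, rate=0.08, years=20)
constant = life_cycle_cost(100_000, 10_000, gradient=0.0, rate=0.08, years=20)
print(f"PW with gradient: {varying:,.0f}  PW assuming constant opex: {constant:,.0f}")
```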

  19. 36 CFR 14.22 - Reimbursement of costs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... paragraph (a)(3)(i) of this section (e.g., for communication sites, reservoir sites, plant sites, and other non-linear facilities)—$250 for each 40 acres or fraction thereof. (iii) If a project has the features... applicant an estimate, based on the best available cost information, of the costs which would be incurred by...

  20. 36 CFR 14.22 - Reimbursement of costs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... paragraph (a)(3)(i) of this section (e.g., for communication sites, reservoir sites, plant sites, and other non-linear facilities)—$250 for each 40 acres or fraction thereof. (iii) If a project has the features... applicant an estimate, based on the best available cost information, of the costs which would be incurred by...

  1. 36 CFR 14.22 - Reimbursement of costs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... paragraph (a)(3)(i) of this section (e.g., for communication sites, reservoir sites, plant sites, and other non-linear facilities)—$250 for each 40 acres or fraction thereof. (iii) If a project has the features... applicant an estimate, based on the best available cost information, of the costs which would be incurred by...

  2. 36 CFR 14.22 - Reimbursement of costs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... paragraph (a)(3)(i) of this section (e.g., for communication sites, reservoir sites, plant sites, and other non-linear facilities)—$250 for each 40 acres or fraction thereof. (iii) If a project has the features... applicant an estimate, based on the best available cost information, of the costs which would be incurred by...

  3. A Linear City Model with Asymmetric Consumer Distribution

    PubMed Central

    Azar, Ofer H.

    2015-01-01

    The article analyzes a linear-city model where the consumer distribution can be asymmetric, which is important because in real markets this distribution is often asymmetric. The model yields equilibrium price differences, even though the firms’ costs are equal and their locations are symmetric (at the two endpoints of the city). The equilibrium price difference is proportional to the transportation cost parameter and does not depend on the good's cost. The firms' markups are also proportional to the transportation cost. The two firms’ prices will be equal in equilibrium if and only if half of the consumers are located to the left of the city’s midpoint, even if other characteristics of the consumer distribution are highly asymmetric. An extension analyzes what happens when the firms have different costs and how the two sources of asymmetry – the consumer distribution and the cost per unit – interact together. The model can be useful as a tool for further development by other researchers interested in applying this simple yet flexible framework for the analysis of various topics. PMID:26034984

  4. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and a regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry-preserving matrix balancing. This results in desirable spectral properties for the normalized Laplacian, such as being symmetric, positive semidefinite, and returning the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
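
    A minimal Python sketch of the kernel-similarity and normalized-Laplacian construction on a toy 1-D signal (using a simple symmetric normalization in place of the paper's full matrix-balancing step) could be:

```python
import numpy as np

# Toy 1-D "image": five pixel intensities.
z = np.array([0.1, 0.12, 0.5, 0.52, 0.9], dtype=float)

# Gaussian kernel similarity between every pair of pixels (kernel width assumed).
h = 0.1
W = np.exp(-((z[:, None] - z[None, :]) ** 2) / (2 * h ** 2))

# Symmetric normalization: L = I - C^{-1/2} W C^{-1/2}, with C = diag(row sums of W).
C_inv_sqrt = np.diag(1.0 / np.sqrt(W.sum(axis=1)))
L = np.eye(len(z)) - C_inv_sqrt @ W @ C_inv_sqrt

# With full Sinkhorn-type balancing, W becomes doubly stochastic and L annihilates
# a constant image exactly; with the simple normalization above it is only approximate.
print(np.round(L @ np.ones(len(z)), 3))
```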

  5. Energy aspects of the synchronization of model neurons

    NASA Astrophysics Data System (ADS)

    Torrealdea, F. J.; D'Anjou, A.; Graña, M.; Sarasola, C.

    2006-07-01

    We have deduced an energy function for a Hindmarsh-Rose model neuron and we have used it to evaluate the energy consumption of the neuron during its signaling activity. We investigate the balance of energy in the synchronization of two bidirectionally, linearly coupled neurons at different values of the coupling strength. We show that when two neurons are coupled there is a specific cost associated with the cooperative behavior. We find that the energy consumption of the neurons is incoherent until very near the threshold of identical synchronization, which suggests that cooperative behaviors without complete synchrony could be energetically more advantageous than those with complete synchrony.
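
    A minimal Python sketch of two Hindmarsh-Rose neurons with bidirectional linear coupling (standard textbook parameters and an assumed coupling strength; the energy function itself is not reproduced here) could be:

```python
import numpy as np

# Commonly used Hindmarsh-Rose parameters; k is an illustrative coupling strength.
a, b, c_p, d, r, s, x_r, I = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.0

def hr_rhs(state, k):
    x1, y1, z1, x2, y2, z2 = state
    dx1 = y1 - a * x1**3 + b * x1**2 - z1 + I + k * (x2 - x1)   # linear coupling term
    dy1 = c_p - d * x1**2 - y1
    dz1 = r * (s * (x1 - x_r) - z1)
    dx2 = y2 - a * x2**3 + b * x2**2 - z2 + I + k * (x1 - x2)
    dy2 = c_p - d * x2**2 - y2
    dz2 = r * (s * (x2 - x_r) - z2)
    return np.array([dx1, dy1, dz1, dx2, dy2, dz2])

state = np.array([0.1, 0.0, 0.0, -0.3, 0.0, 0.0])
dt, k = 0.01, 0.5
for _ in range(100_000):                    # forward Euler integration
    state = state + dt * hr_rhs(state, k)
print("final membrane-variable mismatch |x1 - x2|:", abs(state[0] - state[3]))
```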

  6. Laser applications and system considerations in ocular imaging

    PubMed Central

    Elsner, Ann E.; Muller, Matthew S.

    2009-01-01

    We review laser applications for primarily in vivo ocular imaging techniques, describing their constraints based on biological tissue properties, safety, and the performance of the imaging system. We discuss the need for cost effective sources with practical wavelength tuning capabilities for spectral studies. Techniques to probe the pathological changes of layers beneath the highly scattering retina and diagnose the onset of various eye diseases are described. The recent development of several optical coherence tomography based systems for functional ocular imaging is reviewed, as well as linear and nonlinear ocular imaging techniques performed with ultrafast lasers, emphasizing recent source developments and methods to enhance imaging contrast. PMID:21052482

  7. Estimating the price elasticity of expenditure for prescription drugs in the presence of non-linear price schedules: an illustration from Quebec, Canada.

    PubMed

    Contoyannis, Paul; Hurley, Jeremiah; Grootendorst, Paul; Jeon, Sung-Hee; Tamblyn, Robyn

    2005-09-01

    The price elasticity of demand for prescription drugs is a crucial parameter of interest in designing pharmaceutical benefit plans. Estimating the elasticity using micro-data, however, is challenging because insurance coverage that includes deductibles, co-insurance provisions and maximum expenditure limits create a non-linear price schedule, making price endogenous (a function of drug consumption). In this paper we exploit an exogenous change in cost-sharing within the Quebec (Canada) public Pharmacare program to estimate the price elasticity of expenditure for drugs using IV methods. This approach corrects for the endogeneity of price and incorporates the concept of a 'rational' consumer who factors into consumption decisions the price they expect to face at the margin given their expected needs. The IV method is adapted from an approach developed in the public finance literature used to estimate income responses to changes in tax schedules. The instrument is based on the price an individual would face under the new cost-sharing policy if their consumption remained at the pre-policy level. Our preferred specification leads to expenditure elasticities that are in the low range of previous estimates (between -0.12 and -0.16). Naïve OLS estimates are between 1 and 4 times these magnitudes. (c) 2005 John Wiley & Sons, Ltd.

  8. High Reliability Prototype Quadrupole for the Next Linear Collider

    NASA Astrophysics Data System (ADS)

    Spencer, C. M.

    2001-01-01

    The Next Linear Collider (NLC) will require over 5600 magnets, each of which must be highly reliable and/or quickly repairable in order that the NLC reach its 85% overall availability goal. A multidiscipline engineering team was assembled at SLAC to develop a more reliable electromagnet design than historically had been achieved at SLAC. This team carried out a Failure Mode and Effects Analysis (FMEA) on a standard SLAC quadrupole magnet system. They overcame a number of longstanding design prejudices, producing 10 major design changes. This paper describes how a prototype magnet was constructed and the extensive testing carried out on it to prove full functionality with an improvement in reliability. The magnet's fabrication cost will be compared to the cost of a magnet with the same requirements made in the historic SLAC way. The NLC will use over 1600 of these 12.7 mm bore quadrupoles with a range of integrated strengths from 0.6 to 132 Tesla, a maximum gradient of 135 Tesla per meter, an adjustment range of 0 to -20%, and core lengths from 324 mm to 972 mm. The magnetic center must remain stable to within 1 micron during the 20% adjustment. A magnetic measurement set-up has been developed that can measure sub-micron shifts of a magnetic center. The prototype satisfied the center shift requirement over the full range of integrated strengths.

  9. Hardware Neural Network for a Visual Inspection System

    NASA Astrophysics Data System (ADS)

    Chun, Seungwoo; Hayakawa, Yoshihiro; Nakajima, Koji

    The visual inspection of defects in products is heavily dependent on human experience and instinct. In this situation, it is difficult to reduce the production costs and to shorten the inspection time and hence the total process time. Consequently people involved in this area desire an automatic inspection system. In this paper, we propose a hardware neural network, which is expected to provide high-speed operation for automatic inspection of products. Since neural networks can learn, this is a suitable method for self-adjustment of criteria for classification. To achieve high-speed operation, we use parallel and pipelining techniques. Furthermore, we use a piecewise linear function instead of a conventional activation function in order to save hardware resources. Consequently, our proposed hardware neural network achieved 6GCPS and 2GCUPS, which in our test sample proved to be sufficiently fast.
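
    A minimal Python sketch of the piecewise linear activation idea, comparing a "hard" piecewise linear approximation against the smooth sigmoid it replaces (breakpoints are illustrative, not those of the paper), could be:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def piecewise_linear_sigmoid(x):
    # 0 for x <= -2.5, 1 for x >= 2.5, linear with slope 0.2 in between:
    # only shifts, multiplies, and clipping, which is cheap in hardware.
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

x = np.linspace(-6, 6, 121)
worst_case_error = np.max(np.abs(sigmoid(x) - piecewise_linear_sigmoid(x)))
print(f"worst-case approximation error: {worst_case_error:.3f}")
```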

  10. An efficient formulation and implementation of the analytic energy gradient method to the single and double excitation coupled-cluster wave function - Application to Cl2O2

    NASA Technical Reports Server (NTRS)

    Rendell, Alistair P.; Lee, Timothy J.

    1991-01-01

    The analytic energy gradient for the single and double excitation coupled-cluster (CCSD) wave function has been reformulated and implemented in a new set of programs. The reformulated set of gradient equations have a smaller computational cost than any previously published. The iterative solution of the linear equations and the construction of the effective density matrices are fully vectorized, being based on matrix multiplications. The new method has been used to investigate the Cl2O2 molecule, which has recently been postulated as an important intermediate in the destruction of ozone in the stratosphere. In addition to reporting computational timings, the CCSD equilibrium geometries, harmonic vibrational frequencies, infrared intensities, and relative energetics of three isomers of Cl2O2 are presented.

  11. Complexity study on the Cournot-Bertrand mixed duopoly game model with market share preference

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Sun, Lijian; Hou, Shunqi; Zhan, Xueli

    2018-02-01

    In this paper, a Cournot-Bertrand duopoly model with market share preference is established. It is assumed that there is a degree of product differentiation between the two firms' products, with one firm taking price as its decision variable and the other taking quantity. Both firms are boundedly rational, with linear cost functions and demand functions. The stability of the equilibrium points is analyzed, and the effects of some parameters (α, β, d and v1) on the model stability are studied. Basins of attraction are investigated and the evolution process is shown as the output adjustment speed increases. The simulation results show that instability leads to an increase in the average utility of the firm that sets quantity and a reduction in the average utility of the firm that sets price.
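
    An illustrative Python sketch of a boundedly rational Cournot-Bertrand adjustment map with linear demand and cost (assumed functional forms and placeholder parameters, not the authors' exact model) could be:

```python
# Firm 1 adjusts its quantity q1, firm 2 its price p2, each by gradient (bounded-
# rational) adjustment. Inverse demands p_i = a - q_i - d*q_j are assumed; firm 2's
# quantity then follows from its price as q2 = a - p2 - d*q1.
a, d, c1, c2 = 10.0, 0.5, 1.0, 1.0     # demand intercept, differentiation, unit costs
alpha, beta = 0.02, 0.02               # output/price adjustment speeds

def gradients(q1, p2):
    # From p1 = a*(1-d) - (1-d**2)*q1 + d*p2 and q2 = a - p2 - d*q1:
    dpi1_dq1 = a * (1 - d) + d * p2 - c1 - 2 * (1 - d**2) * q1   # marginal profit, firm 1
    dpi2_dp2 = a + c2 - 2 * p2 - d * q1                          # marginal profit, firm 2
    return dpi1_dq1, dpi2_dp2

q1, p2 = 1.0, 3.0
for _ in range(2000):
    g1, g2 = gradients(q1, p2)
    q1 += alpha * q1 * g1           # bounded-rational quantity adjustment
    p2 += beta * p2 * g2            # bounded-rational price adjustment
print(f"long-run (q1, p2) ~ ({q1:.3f}, {p2:.3f})")
```

    With larger adjustment speeds the same map can lose stability, which is the kind of behavior the paper explores through basins of attraction.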

  12. On reliable control system designs. Ph.D. Thesis; [actuators

    NASA Technical Reports Server (NTRS)

    Birdwell, J. D.

    1978-01-01

    A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowik, Piotr, E-mail: pborow@poczta.onet.pl; Thobel, Jean-Luc, E-mail: jean-luc.thobel@iemn.univ-lille1.fr; Adamowicz, Leszek, E-mail: adamo@if.pw.edu.pl

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron–electron (e–e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e–e interactions. This required adapting the treatment of e–e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.

  14. Study of inkjet printing as an ultra-low-cost antenna prototyping method and its application to conformal wraparound antennas for sounding rocket sub-payload

    NASA Astrophysics Data System (ADS)

    Maimaiti, Maimaitirebike

    Inkjet printing is an attractive patterning technology that has received tremendous interest as a mass fabrication method for a variety of electronic devices due to its manufacturing flexibility and low cost. However, the printing facilities that are typically used, especially the inkjet printer, are very expensive. This thesis introduces an extremely cost-friendly inkjet printing method using a printer that costs less than $100. In order to verify its reliability, linearly and circularly polarized planar and conformal microstrip antennas were fabricated using this printing method, and their measurement results were compared with copper microstrip antennas. The results show that the printed microstrip antennas have similar performance to the copper antennas except for lower efficiency. The effects of the conductivity and thickness of the ink layer on the antenna properties were studied, and it was found that the conductivity is the main factor affecting the radiation efficiency, though thicker ink yields more effective antennas. This thesis also presents the detailed antenna design for a sub-payload. The sub-payload is a cylindrical structure with a diameter of six inches and a height of four inches. It has four booms coming out from the surface, which are used to measure the variations of the energy flow into the upper atmosphere in and around the aurora. The sub-payload has two types of antennas: linearly polarized (LP) S-band antennas and right-hand circularly polarized (RHCP) GPS antennas. Each type of antenna has various requirements to be fully functional for specific research tasks. The thesis includes the design methods for each type of antenna, the challenges that were confronted, and the solutions that were proposed. As a practical application, the inkjet printing method was conveniently applied in validating some of the antenna designs.

  15. Tradeoffs between immune function and childhood growth among Amazonian forager-horticulturalists.

    PubMed

    Urlacher, Samuel S; Ellison, Peter T; Sugiyama, Lawrence S; Pontzer, Herman; Eick, Geeta; Liebert, Melissa A; Cepon-Robins, Tara J; Gildner, Theresa E; Snodgrass, J Josh

    2018-04-24

    Immune function is an energetically costly physiological activity that potentially diverts calories away from less immediately essential life tasks. Among developing organisms, the allocation of energy toward immune function may lead to tradeoffs with physical growth, particularly in high-pathogen, low-resource environments. The present study tests this hypothesis across diverse timeframes, branches of immunity, and conditions of energy availability among humans. Using a prospective mixed-longitudinal design, we collected anthropometric and blood immune biomarker data from 261 Amazonian forager-horticulturalist Shuar children (ages 4-11 y). This strategy provided baseline measures of participant stature, s.c. body fat, and humoral and cell-mediated immune activity as well as subsample longitudinal measures of linear growth (1 wk, 3 mo, 20 mo) and acute inflammation. Multilevel analyses demonstrate consistent negative effects of immune function on growth, with children experiencing up to 49% growth reduction during periods of mildly elevated immune activity. The direct energetic nature of these relationships is indicated by (i) the manifestation of biomarker-specific negative immune effects only when examining growth over timeframes capturing active competition for energetic resources, (ii) the exaggerated impact of particularly costly inflammation on growth, and (iii) the ability of children with greater levels of body fat (i.e., energy reserves) to completely avoid the growth-inhibiting effects of acute inflammation. These findings provide evidence for immunologically and temporally diverse body fat-dependent tradeoffs between immune function and growth during childhood. We discuss the implications of this work for understanding human developmental energetics and the biological mechanisms regulating variation in human ontogeny, life history, and health.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Jong-Won; Hirao, Kimihiko, E-mail: hirao@riken.jp

    Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of the hybrid functional has been further amplified due to the resulting increased performance on orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error function operator, reduces computational time dramatically (e.g., about 14 times acceleration in a C diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of the long-range corrected density functional theory.

  17. Associations of renal function at 1-year after kidney transplantation with subsequent return to dialysis, mortality, and healthcare costs.

    PubMed

    Schnitzler, Mark A; Johnston, Karissa; Axelrod, David; Gheorghian, Adrian; Lentine, Krista L

    2011-06-27

    Improved early kidney transplant outcomes limit the contemporary utility of standard clinical endpoints. Quantifying the relationship of renal function at 1 year after transplant with subsequent clinical outcomes and healthcare costs may facilitate cost-benefit evaluations among transplant recipients. Data for Medicare-insured kidney-only transplant recipients (1995-2003) were drawn from the United States Renal Data System. Associations of estimated glomerular filtration rate (eGFR) level at the first transplant anniversary with subsequent death-censored graft failure and patient death in posttransplant years 1 to 3 and 4 to 7 were examined by parametric survival analysis. Associations of eGFR with total health care costs defined by Medicare payments were assessed with multivariate linear regression. Among 38,015 participants, first anniversary eGFR level demonstrated graded associations with subsequent outcomes. Compared with patients with 12-month eGFR more than or equal to 60 mL/min/1.73 m², the adjusted relative risk of death-censored graft failure in years 1 to 3 was 31% greater for eGFR 45 to 59 mL/min/1.73 m² (P<0.0001) and 622% greater for eGFR 15 to 30 mL/min/1.73 m² (P<0.0001). Associations of first anniversary eGFR level with graft failure and mortality remained significant in years 4 to 7. The proportions of recipients expected to return to dialysis or die attributable to eGFR less than 60 mL/min/1.73 m² over 10 years were 23.1% and 9.4%, respectively, and were significantly higher than proportions attributable to delayed graft function or acute rejection. Reduced eGFR was associated with graded and significant increases in health care spending during years 2 and 3 after transplant (P<0.0001). eGFR is strongly associated with clinical and economic outcomes after kidney transplantation.

  18. Multigrid approaches to non-linear diffusion problems on unstructured meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The efficiency of three multigrid methods for solving highly non-linear diffusion problems on two-dimensional unstructured meshes is examined. The three multigrid methods differ mainly in the manner in which the nonlinearities of the governing equations are handled. These comprise a non-linear full approximation storage (FAS) multigrid method which is used to solve the non-linear equations directly, a linear multigrid method which is used to solve the linear system arising from a Newton linearization of the non-linear system, and a hybrid scheme which is based on a non-linear FAS multigrid scheme, but employs a linear solver on each level as a smoother. Results indicate that all methods are equally effective at converging the non-linear residual in a given number of grid sweeps, but that the linear solver is more efficient in cpu time due to the lower cost of linear versus non-linear grid sweeps.

  19. Costs of services for homeless people with mental illness in 5 Canadian cities: a large prospective follow-up study

    PubMed Central

    Latimer, Eric A.; Rabouin, Daniel; Cao, Zhirong; Ly, Angela; Powell, Guido; Aubry, Tim; Distasio, Jino; Hwang, Stephen W.; Somers, Julian M.; Stergiopoulos, Vicky; Veldhuizen, Scott; Moodie, Erica E.M.; Lesage, Alain; Goering, Paula N.

    2017-01-01

    Background: Limited evidence on the costs of homelessness in Canada is available. We estimated the average annual costs, in total and by cost category, that homeless people with mental illness engender from the perspective of society. We also identified individual characteristics associated with higher costs. Methods: As part of the At Home/Chez Soi trial of Housing First for homeless people with mental illness, 990 participants were assigned to the usual-treatment (control) group in 5 Canadian cities (Vancouver, Winnipeg, Toronto, Montréal and Moncton) between October 2009 and June 2011. They were followed for up to 2 years. Questionnaires ascertained service use and income, and city-specific unit costs were estimated. We adjusted costs for site differences in sample characteristics. We used generalized linear models to identify individual-level characteristics associated with higher costs. Results: Usable data were available for 937 participants (94.6%). Average annual costs (excluding medications) per person in Vancouver, Winnipeg, Toronto, Montréal and Moncton were $53 144 (95% confidence interval [CI] $46 297-$60 095), $45 565 (95% CI $41 039-$50 412), $58 972 (95% CI $52 237-$66 085), $56 406 (95% CI $50 654-$62 456) and $29 610 (95% CI $24 995-$34 480), respectively. Net costs ranged from $15 530 to $341 535. Distributions of costs across categories varied significantly across cities. Lower functioning and a history of psychiatric hospital stays were the most important predictors of higher costs. Interpretation: Homeless people with mental illness generate very high costs for society. Programs are needed to reorient this spending toward more effectively preventing homelessness and toward meeting the health, housing and social service needs of homeless people. PMID:28724726

  20. Drivers of Hospital Costs in the Self-Pay Facelift (Rhytidectomy) Patient: Analysis of Hospital Resource Utilization in 1890 Patients.

    PubMed

    Chattha, Anmol; Bucknor, Alexandra; Chi, David; Ultee, Klaas; Chen, Austin D; Lin, Samuel J

    2018-04-01

    Rhytidectomy is one of the cosmetic procedures most commonly performed by plastic surgeons. Increasing attention to the development of a high-value, low-cost healthcare system is a priority in the USA. This study aims to analyze specific patient and hospital factors affecting the cost of this procedure. We conducted a retrospective cohort study of self-pay patients over the age of 18 who underwent rhytidectomy, using the Healthcare Utilization Cost Project National Inpatient Sample database between 2013 and 2014. Mean marginal cost increases, patient characteristics, and outcomes were studied. Generalized linear modeling with gamma regression and a log-link function was performed, along with estimated marginal means, to provide cost estimates. A total of 1890 self-pay patients underwent rhytidectomy. Median cost was $11,767 with an interquartile range of $8907 [$6976-$15,883]. The largest marginal cost increases were associated with postoperative hematoma ($12,651; CI $8181-$17,120), West coast region ($7539; 95% CI $6412-$8666), and combined rhinoplasty ($7824; 95% CI $3808-$11,840). The two risk factors associated with the highest marginal inpatient costs were smoking ($4147; 95% CI $2804-$5490) and diabetes mellitus ($5622; 95% CI $3233-$8011). High-volume hospitals had a decreased cost of -$1331 (95% CI -$2032 to -$631). Cost variation for inpatient rhytidectomy procedures is dependent on preoperative risk factors (diabetes and smoking), postoperative complications (hematoma), and regional trends (West region). Rhytidectomy surgery is highly centralized, and increasing hospital volume significantly decreases costs. Clinicians and hospitals can use this information to discuss the drivers of cost in patients undergoing rhytidectomy. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  1. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
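
    A minimal Python sketch of the moment-constrained linear-programming step, on an assumed particle-size grid and with a plain linear cost standing in for the paper's entropy-inspired cost, could be:

```python
import numpy as np
from scipy.optimize import linprog

# Candidate abscissas (particle diameters, micrometres) and target moments computed
# from an assumed lognormal-like weight distribution (placeholders, not real data).
diam = np.logspace(-2, 1, 40)
orders = np.array([0, 1, 2, 3])                # which moments to match
true_weights = np.exp(-0.5 * ((np.log(diam) - np.log(0.1)) / 0.8) ** 2)
moments = np.array([(true_weights * diam**k).sum() for k in orders])

# LP: find non-negative weights on the grid reproducing the moments; a basic
# optimal solution has at most len(orders) non-zero weights, i.e. a sparse quadrature.
A_eq = np.vstack([diam**k for k in orders])    # each row enforces one moment constraint
cost = np.ones_like(diam)                      # simple linear surrogate cost
res = linprog(cost, A_eq=A_eq, b_eq=moments, bounds=(0, None), method="highs")

support = np.flatnonzero(res.x > 1e-12)
print("quadrature nodes:", np.round(diam[support], 4))
print("weights:", np.round(res.x[support], 4))
```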

  2. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierce, Laura; McGraw, Robert L.

    Here, sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.

  3. Distributed ultrafast fibre laser

    PubMed Central

    Liu, Xueming; Cui, Yudong; Han, Dongdong; Yao, Xiankun; Sun, Zhipei

    2015-01-01

    A traditional ultrafast fibre laser has a constant cavity length that is independent of the pulse wavelength. The investigation of distributed ultrafast (DUF) lasers is conceptually and technically challenging and of great interest because the laser cavity length and fundamental cavity frequency are changeable based on the wavelength. Here, we propose and demonstrate a DUF fibre laser based on a linearly chirped fibre Bragg grating, where the total cavity length is linearly changeable as a function of the pulse wavelength. The spectral sidebands in DUF lasers are enhanced greatly, including the continuous-wave (CW) and pulse components. We observe that all sidebands of the pulse experience the same round-trip time although they have different round-trip distances and refractive indices. The pulse-shaping of the DUF laser is dominated by the dissipative processes in addition to the phase modulations, which makes our ultrafast laser simple and stable. This laser provides a simple, stable, low-cost, ultrafast-pulsed source with controllable and changeable cavity frequency. PMID:25765454

  4. Extending the accuracy of the SNAP interatomic potential form

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Mitchell A.; Thompson, Aidan P.

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.

  5. Extending the accuracy of the SNAP interatomic potential form

    DOE PAGES

    Wood, Mitchell A.; Thompson, Aidan P.

    2018-03-28

    The Spectral Neighbor Analysis Potential (SNAP) is a classical interatomic potential that expresses the energy of each atom as a linear function of selected bispectrum components of the neighbor atoms. An extension of the SNAP form is proposed that includes quadratic terms in the bispectrum components. The extension is shown to provide a large increase in accuracy relative to the linear form, while incurring only a modest increase in computational cost. The mathematical structure of the quadratic SNAP form is similar to the embedded atom method (EAM), with the SNAP bispectrum components serving as counterparts to the two-body density functions in EAM. It is also argued that the quadratic SNAP form is a special case of an artificial neural network (ANN). The effectiveness of the new form is demonstrated using an extensive set of training data for tantalum structures. Similarly to ANN potentials, the quadratic SNAP form requires substantially more training data in order to prevent overfitting, as measured by cross-validation analysis.
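
    As a schematic illustration of the energy form described above (not the SNAP implementation itself; the array sizes and coefficients below are random placeholders), the per-atom energy can be written as a linear term plus a quadratic term in the bispectrum components:

      import numpy as np

      rng = np.random.default_rng(0)
      n_atoms, n_bispec = 5, 14                    # hypothetical sizes
      B = rng.normal(size=(n_atoms, n_bispec))     # bispectrum components per atom

      beta = rng.normal(size=n_bispec)             # linear SNAP coefficients
      alpha = rng.normal(size=(n_bispec, n_bispec))
      alpha = 0.5 * (alpha + alpha.T)              # symmetric quadratic coefficients

      def snap_energy(B, beta, alpha, e0=0.0):
          """Linear plus quadratic contribution, summed over atoms (EAM-like structure)."""
          linear = B @ beta
          quadratic = 0.5 * np.einsum("ij,jk,ik->i", B, alpha, B)
          return e0 * len(B) + np.sum(linear + quadratic)

      print(snap_energy(B, beta, alpha))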

  6. Linear model for fast background subtraction in oligonucleotide microarrays.

    PubMed

    Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico

    2009-11-16

    One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
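
    Because the cost function is quadratic in the fitting parameters, the minimization reduces to an ordinary linear least-squares problem. A minimal sketch of that reduction, with a purely synthetic design matrix standing in for the neighbour-correlation and sequence-affinity features:

      import numpy as np

      rng = np.random.default_rng(1)
      n_probes, n_params = 500, 20
      X = rng.normal(size=(n_probes, n_params))              # synthetic feature matrix
      true_theta = rng.normal(size=n_params)
      y = X @ true_theta + 0.1 * rng.normal(size=n_probes)   # synthetic log-intensities

      # Minimising ||y - X theta||^2 is solved directly by linear algebra
      theta, *_ = np.linalg.lstsq(X, y, rcond=None)
      background = X @ theta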

  7. Advantages and pitfalls in the application of mixed-model association methods.

    PubMed

    Yang, Jian; Zaitlen, Noah A; Goddard, Michael E; Visscher, Peter M; Price, Alkes L

    2014-02-01

    Mixed linear models are emerging as a method of choice for conducting genetic association studies in humans and other organisms. The advantages of the mixed-linear-model association (MLMA) method include the prevention of false positive associations due to population or relatedness structure and an increase in power obtained through the application of a correction that is specific to this structure. An underappreciated point is that MLMA can also increase power in studies without sample structure by implicitly conditioning on associated loci other than the candidate locus. Numerous variations on the standard MLMA approach have recently been published, with a focus on reducing computational cost. These advances provide researchers applying MLMA methods with many options to choose from, but we caution that MLMA methods are still subject to potential pitfalls. Here we describe and quantify the advantages and pitfalls of MLMA methods as a function of study design and provide recommendations for the application of these methods in practical settings.

  8. Carbon monoxide poisoning is prevented by the energy costs of conformational changes in gas-binding haemproteins.

    PubMed

    Antonyuk, Svetlana V; Rustage, Neil; Petersen, Christine A; Arnst, Jamie L; Heyes, Derren J; Sharma, Raman; Berry, Neil G; Scrutton, Nigel S; Eady, Robert R; Andrew, Colin R; Hasnain, S Samar

    2011-09-20

    Carbon monoxide (CO) is a product of haem metabolism and organisms must evolve strategies to prevent endogenous CO poisoning of haemoproteins. We show that energy costs associated with conformational changes play a key role in preventing irreversible CO binding. AxCYTcp is a member of a family of haem proteins that form stable 5c-NO and 6c-CO complexes but do not form O(2) complexes. Structure of the AxCYTcp-CO complex at 1.25 Å resolution shows that CO binds in two conformations moderated by the extent of displacement of the distal residue Leu16 toward the haem 7-propionate. The presence of two CO conformations is confirmed by cryogenic resonance Raman data. The preferred linear Fe-C-O arrangement (170 ± 8°) is accompanied by a flip of the propionate from the distal to proximal face of the haem. In the second conformation, the Fe-C-O unit is bent (158 ± 8°) with no flip of propionate. The energetic cost of the CO-induced Leu-propionate movements is reflected in a 600 mV (57.9 kJ mol(-1)) decrease in haem potential, a value in good agreement with density functional theory calculations. Substitution of Leu by Ala or Gly (structures determined at 1.03 and 1.04 Å resolutions) resulted in a haem site that binds CO in the linear mode only and where no significant change in redox potential is observed. Remarkably, these variants were isolated as ferrous 6c-CO complexes, attributable to the observed eight orders of magnitude increase in affinity for CO, including an approximately 10,000-fold decrease in the rate of dissociation. These new findings have wide implications for preventing CO poisoning of gas-binding haem proteins.

  9. A cost constraint alone has adverse effects on food selection and nutrient density: an analysis of human diets by linear programming.

    PubMed

    Darmon, Nicole; Ferguson, Elaine L; Briend, André

    2002-12-01

    Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of modeled diets, notably the lowest cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet (i.e., <25% and 10% of the mean density, respectively). These results indicate that a simple cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
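
    A minimal sketch of the linear-programming formulation described above, with made-up foods, prices and energy densities: food quantities are chosen to stay as close as possible to an observed diet (via auxiliary deviation variables) under a fixed energy intake and a tightened cost ceiling:

      import numpy as np
      from scipy.optimize import linprog

      foods = ["fruit", "vegetables", "meat", "cereals", "added_fat"]
      x_obs = np.array([200., 250., 120., 220., 30.])    # g/day, hypothetical observed diet
      energy = np.array([0.5, 0.3, 2.0, 3.5, 9.0])       # kcal/g, approximate
      price = np.array([0.30, 0.25, 0.90, 0.15, 0.20]) / 100.0   # euros per g, hypothetical

      n = len(foods)
      # Variables: [x (n), d (n)] with d_i >= |x_i - x_obs_i|
      c = np.concatenate([np.zeros(n), np.ones(n)])      # minimise total departure

      # d >= x - x_obs  and  d >= x_obs - x
      A_ub = np.block([[ np.eye(n), -np.eye(n)],
                       [-np.eye(n), -np.eye(n)]])
      b_ub = np.concatenate([x_obs, -x_obs])

      # Cost ceiling (euros/day) appended as one more inequality row
      budget = 2.0
      A_ub = np.vstack([A_ub, np.concatenate([price, np.zeros(n)])])
      b_ub = np.append(b_ub, budget)

      # Fixed energy intake (isoenergetic diets)
      A_eq = np.concatenate([energy, np.zeros(n)])[None, :]
      b_eq = [float(energy @ x_obs)]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=(0, None), method="highs")
      if res.success:
          print(dict(zip(foods, np.round(res.x[:n], 1))))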

  10. Evaluation of alternative snow plow cutting edges.

    DOT National Transportation Integrated Search

    2009-05-01

    With approximately 450 snow plow trucks, the Maine Department of Transportation (MaineDOT) uses in excess of 10,000 linear feet of plow cutting edges each winter season. Using the 2008-2009 cost per linear foot of $48.32, the Department's total co...

  11. An Expert System for the Evaluation of Cost Models

    DTIC Science & Technology

    1990-09-01

    ... contrast to the condition of equal error variance, called homoscedasticity (Reference: Applied Linear Regression Models by John Neter, page 423) ... normal (Reference: Applied Linear Regression Models by John Neter, page 125) ... Error terms correlated over time are said to be autocorrelated or serially correlated (Reference: Applied Linear Regression Models by John Neter).

  12. Improved Linear-Ion-Trap Frequency Standard

    NASA Technical Reports Server (NTRS)

    Prestage, John D.

    1995-01-01

    Improved design concept for linear-ion-trap (LIT) frequency-standard apparatus proposed. Apparatus contains lengthened linear ion trap, and ions processed alternately in two regions: ions prepared in upper region of trap, then transported to lower region for exposure to microwave radiation, then returned to upper region for optical interrogation. Improved design intended to increase long-term frequency stability of apparatus while reducing size, mass, and cost.

  13. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    PubMed

    Dung, Van Than; Tjahjowidodo, Tegoeh

    2017-01-01

    B-spline functions are widely used in many industrial applications such as computer graphic representation, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there has been demand, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points from the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest possible computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data are split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is then obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and by deterministic parametric functions. This paper also benchmarks the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied to fitting any type of curve, ranging from smooth to discontinuous. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
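
    A much-simplified sketch of the two-step idea (the tolerance and test curve are invented, and the paper's second, non-linear knot-optimization stage is omitted): coarse knots are placed by recursive bisection wherever a segment is poorly fit by a straight line, and a least-squares B-spline is then fitted at those interior knots:

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      def coarse_knots(x, y, tol, lo=None, hi=None, knots=None):
          """Recursively split [lo, hi) wherever a straight-line fit exceeds tol."""
          if knots is None:
              lo, hi, knots = 0, len(x), []
          if hi - lo <= 4:
              return sorted(knots)
          seg_x, seg_y = x[lo:hi], y[lo:hi]
          p = np.polyfit(seg_x, seg_y, 1)
          if np.max(np.abs(np.polyval(p, seg_x) - seg_y)) > tol:
              mid = (lo + hi) // 2
              knots.append(x[mid])
              coarse_knots(x, y, tol, lo, mid, knots)
              coarse_knots(x, y, tol, mid, hi, knots)
          return sorted(knots)

      x = np.linspace(0, 4 * np.pi, 400)
      y = np.sin(x) + np.where(x > 2 * np.pi, 1.0, 0.0)    # smooth curve with a jump
      knots = coarse_knots(x, y, tol=0.05)
      spline = LSQUnivariateSpline(x, y, t=knots, k=3)     # ordinary least-squares fit
      print(len(knots), "interior knots, residual:", float(spline.get_residual()))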

  14. Integrated Logistics Support Analysis of the International Space Station Alpha, Background and Summary of Mathematical Modeling and Failure Density Distributions Pertaining to Maintenance Time Dependent Parameters

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The process of predicting the values of maintenance time dependent variable parameters such as mean time between failures (MTBF) over time must be one that will not in turn introduce uncontrolled deviation in the results of the ILS analysis such as life cycle costs, spares calculation, etc. A minor deviation in the values of the maintenance time dependent variable parameters such as MTBF over time will have a significant impact on the logistics resources demands, International Space Station availability and maintenance support costs. There are two types of parameters in the logistics and maintenance world: (a) fixed and (b) variable. Fixed parameters, such as cost per man hour, are relatively easy to predict and forecast. These parameters normally follow a linear path and they do not change randomly. However, the variable parameters subject to the study in this report such as MTBF do not follow a linear path and they normally fall within the distribution curves which are discussed in this publication. The very challenging task then becomes the utilization of statistical techniques to accurately forecast the future non-linear time dependent variable arisings and events with a high confidence level. This, in turn, shall translate into tremendous cost savings and improved availability all around.

  15. Relationships and the social brain: integrating psychological and evolutionary perspectives.

    PubMed

    Sutcliffe, Alistair; Dunbar, Robin; Binder, Jens; Arrow, Holly

    2012-05-01

    Psychological studies of relationships tend to focus on specific types of close personal relationships (romantic, parent-offspring, friendship) and examine characteristics of both the individuals and the dyad. This paper looks more broadly at the wider range of relationships that constitute an individual's personal social world. Recent work on the composition of personal social networks suggests that they consist of a series of layers that differ in the quality and quantity of relationships involved. Each layer increases relationship numbers by an approximate multiple of 3 (5-15-50-150) but decreasing levels of intimacy (strong, medium, and weak ties) and frequency of interaction. To account for these regularities, we draw on both social and evolutionary psychology to argue that relationships at different layers serve different functions and have different cost-benefit profiles. At each layer, the benefits are asymptotic but the costs of maintaining a relationship at that level (most obviously, the time that has to be invested in servicing it) are roughly linear with the number of relationships. The trade-off between costs and benefits at a given level, and across the different types of demands and resources typical of different levels, gives rise to a distribution of social effort that generates and maintains a hierarchy of layered sets of relationships within social networks. We suggest that, psychologically, these trade-offs are related to the level of trust in a relationship, and that this is itself a function of the time invested in the relationship. ©2011 The British Psychological Society.

  16. Linear accelerator: a reproducible, efficacious and cost effective alternative for blood irradiation.

    PubMed

    Shastry, Shamee; Ramya, B; Ninan, Jefy; Srinidhi, G C; Bhat, Sudha S; Fernandes, Donald J

    2013-12-01

    Dedicated devices for blood irradiation are available only at a few centers in developing countries; thus, irradiation remains a service with limited availability due to prohibitive cost. The aim was to implement a blood irradiation program at our center using a linear accelerator. The study details the specific operational and quality assurance measures employed in providing a blood component-irradiation service at a tertiary care hospital. X-rays generated from a linear accelerator were used to irradiate the blood components. To facilitate and standardize the blood component irradiation, a blood irradiator box was designed and fabricated in acrylic. Using an Elekta Precise Linear Accelerator, a dose of 25 Gy was delivered at the centre of the irradiation box. Standardization was done using five units of blood obtained from healthy voluntary blood donors. Each unit was divided into two parts. One aliquot was subjected to irradiation. Biochemical and hematological parameters were analyzed on various days of storage. The cost incurred was analyzed. Progressive increases in plasma hemoglobin, potassium and lactate dehydrogenase were noted in the irradiated units, but all the parameters were within the acceptable range, indicating the suitability of the product for transfusion. The irradiation process was completed in less than 30 min. Validation of the radiation dose using TLD showed less than ± 3% variation. This study shows that blood component irradiation is within the scope of most hospitals in developing countries, even in the absence of dedicated blood irradiators, at affordable cost. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Nanostructured high-energy cathode materials for advanced lithium batteries

    NASA Astrophysics Data System (ADS)

    Sun, Yang-Kook; Chen, Zonghai; Noh, Hyung-Joo; Lee, Dong-Ju; Jung, Hun-Gi; Ren, Yang; Wang, Steve; Yoon, Chong Seung; Myung, Seung-Taek; Amine, Khalil

    2012-11-01

    Nickel-rich layered lithium transition-metal oxides, LiNi(1-x)M(x)O(2) (M = transition metal), have been under intense investigation as high-energy cathode materials for rechargeable lithium batteries because of their high specific capacity and relatively low cost. However, the commercial deployment of nickel-rich oxides has been severely hindered by their intrinsic poor thermal stability at the fully charged state and insufficient cycle life, especially at elevated temperatures. Here, we report a nickel-rich lithium transition-metal oxide with a very high capacity (215 mA h g(-1)), where the nickel concentration decreases linearly whereas the manganese concentration increases linearly from the centre to the outer layer of each particle. Using this nano-functional full-gradient approach, we are able to harness the high energy density of the nickel-rich core and the high thermal stability and long life of the manganese-rich outer layers. Moreover, the micrometre-size secondary particles of this cathode material are composed of aligned needle-like nanosize primary particles, resulting in a high rate capability. The experimental results suggest that this nano-functional full-gradient cathode material is promising for applications that require high energy, long calendar life and excellent abuse tolerance such as electric vehicles.

  18. Nanostructured high-energy cathode materials for advanced lithium batteries.

    PubMed

    Sun, Yang-Kook; Chen, Zonghai; Noh, Hyung-Joo; Lee, Dong-Ju; Jung, Hun-Gi; Ren, Yang; Wang, Steve; Yoon, Chong Seung; Myung, Seung-Taek; Amine, Khalil

    2012-11-01

    Nickel-rich layered lithium transition-metal oxides, LiNi(1-x)M(x)O(2) (M = transition metal), have been under intense investigation as high-energy cathode materials for rechargeable lithium batteries because of their high specific capacity and relatively low cost. However, the commercial deployment of nickel-rich oxides has been severely hindered by their intrinsic poor thermal stability at the fully charged state and insufficient cycle life, especially at elevated temperatures. Here, we report a nickel-rich lithium transition-metal oxide with a very high capacity (215 mA h g(-1)), where the nickel concentration decreases linearly whereas the manganese concentration increases linearly from the centre to the outer layer of each particle. Using this nano-functional full-gradient approach, we are able to harness the high energy density of the nickel-rich core and the high thermal stability and long life of the manganese-rich outer layers. Moreover, the micrometre-size secondary particles of this cathode material are composed of aligned needle-like nanosize primary particles, resulting in a high rate capability. The experimental results suggest that this nano-functional full-gradient cathode material is promising for applications that require high energy, long calendar life and excellent abuse tolerance such as electric vehicles.

  19. Introducing linear functions: an alternative statistical approach

    NASA Astrophysics Data System (ADS)

    Nolan, Caroline; Herbert, Sandra

    2015-12-01

    The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be `threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data is easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line for real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to concept by concept analysis of the means of test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.

  20. Cost Effective Persistent Regional Surveillance with Reconfigurable Satellite Constellations

    DTIC Science & Technology

    2015-04-24

    region where both models show the most agreement and therefore the blended curves (in the bottom plot) are fairly smooth. Additionally, a learning ... payload cost Cpay: Cpay = 38000·D^1.6 + 60615·D^2.67 ($k, FY2010) (Eq. 11). Satellite cost is modeled by blending the output from the Small Satellite Cost Model... SSCM was used for Md ≤ 400 kg and the USCM8 cost model was used for Md ≥ 200 kg, and linear blending was used to smooth out the transition between models
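
    A small worked sketch of the quoted relations (the SSCM and USCM8 bus-cost values below are placeholders; only the payload-cost power law of Eq. 11 and the idea of linear blending in the 200-400 kg overlap region follow the text):

      def payload_cost(D):
          """Cpay in $k (FY2010) as a function of aperture diameter D in metres, Eq. (11)."""
          return 38000 * D**1.6 + 60615 * D**2.67

      def blended_bus_cost(Md, sscm_cost, uscm8_cost):
          """Linearly blend SSCM (valid Md <= 400 kg) and USCM8 (valid Md >= 200 kg)."""
          if Md <= 200:
              return sscm_cost
          if Md >= 400:
              return uscm8_cost
          w = (Md - 200) / 200.0            # 0 at 200 kg, 1 at 400 kg
          return (1 - w) * sscm_cost + w * uscm8_cost

      print(round(payload_cost(0.5), 1), "$k payload cost at D = 0.5 m")
      print(blended_bus_cost(300, sscm_cost=12_000, uscm8_cost=15_000), "$k blended bus cost")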

  1. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
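
    The expensive kernel being targeted is the product of a matrix function with a vector. As a generic illustration (standard Arnoldi/Krylov projection, not the paper's KSS method; the test operator is an invented 1-D diffusion matrix), exp(A)v can be approximated from a small Krylov subspace:

      import numpy as np
      from scipy.linalg import expm

      def arnoldi_expv(A, v, m=20):
          """Approximate exp(A) v from an m-dimensional Krylov subspace."""
          n = len(v)
          beta = np.linalg.norm(v)
          V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
          V[:, 0] = v / beta
          for j in range(m):
              w = A @ V[:, j]
              for i in range(j + 1):                 # modified Gram-Schmidt
                  H[i, j] = V[:, i] @ w
                  w -= H[i, j] * V[:, i]
              H[j + 1, j] = np.linalg.norm(w)
              if H[j + 1, j] < 1e-12:                # lucky breakdown
                  m = j + 1
                  break
              V[:, j + 1] = w / H[j + 1, j]
          e1 = np.zeros(m); e1[0] = 1.0
          return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

      n = 200
      A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D diffusion stencil
      v = np.random.default_rng(2).normal(size=n)
      approx = arnoldi_expv(0.1 * A, v)
      exact = expm(0.1 * A) @ v
      print("relative error:", np.linalg.norm(approx - exact) / np.linalg.norm(exact))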

  2. Multicomponent Time-Dependent Density Functional Theory: Proton and Electron Excitation Energies.

    PubMed

    Yang, Yang; Culpitt, Tanner; Hammes-Schiffer, Sharon

    2018-04-05

    The quantum mechanical treatment of both electrons and protons in the calculation of excited state properties is critical for describing nonadiabatic processes such as photoinduced proton-coupled electron transfer. Multicomponent density functional theory enables the consistent quantum mechanical treatment of more than one type of particle and has been implemented previously for studying ground state molecular properties within the nuclear-electronic orbital (NEO) framework, where all electrons and specified protons are treated quantum mechanically. To enable the study of excited state molecular properties, herein the linear response multicomponent time-dependent density functional theory (TDDFT) is derived and implemented within the NEO framework. Initial applications to FHF(-) and HCN illustrate that NEO-TDDFT provides accurate proton and electron excitation energies within a single calculation. As its computational cost is similar to that of conventional electronic TDDFT, the NEO-TDDFT approach is promising for diverse applications, particularly nonadiabatic proton transfer reactions, which may exhibit mixed electron-proton vibronic excitations.

  3. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications including the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and stochastic setup and verify that for mildly-ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, where on the contrary, in the severely-ill-posed case and in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.

  4. Long-term effectiveness and cost-effectiveness of high versus low-to-moderate intensity resistance and endurance exercise interventions among cancer survivors.

    PubMed

    Kampshoff, C S; van Dongen, J M; van Mechelen, W; Schep, G; Vreugdenhil, A; Twisk, J W R; Bosmans, J E; Brug, J; Chinapaw, M J M; Buffart, Laurien M

    2018-06-01

    This study aimed to evaluate the long-term effectiveness and cost-effectiveness of high intensity (HI) versus low-to-moderate intensity (LMI) exercise on physical fitness, fatigue, and health-related quality of life (HRQoL) in cancer survivors. Two hundred seventy-seven cancer survivors participated in the Resistance and Endurance exercise After ChemoTherapy (REACT) study and were randomized to 12 weeks of HI (n = 139) or LMI exercise (n = 138) that had similar exercise types, durations, and frequencies, but different intensities. Measurements were performed at baseline (4-6 weeks after primary treatment), and 12 (i.e., short term) and 64 (i.e., longer term) weeks later. Outcomes included cardiorespiratory fitness, muscle strength, self-reported fatigue, HRQoL, quality-adjusted life years (QALYs) and societal costs. Linear mixed models were conducted to study (a) differences in effects between HI and LMI exercise at longer term, (b) within-group changes from short term to longer term, and (c) the cost-effectiveness from a societal perspective. At longer term, intervention effects on role (β = 5.9, 95% CI = 0.5; 11.3) and social functioning (β = 5.7, 95% CI = 1.7; 9.6) were larger for HI compared to those for LMI exercise. No significant between-group differences were found for physical fitness and fatigue. Intervention-induced improvements in cardiorespiratory fitness and HRQoL were maintained between weeks 12 and 64, but not for fatigue. From a societal perspective, the probability that HI was cost-effective compared to LMI exercise was 0.91 at 20,000€/QALY and 0.95 at 52,000€/QALY gained, mostly due to significantly lower healthcare costs in HI exercise. At longer term, we found larger intervention effects on role and social functioning for HI than for LMI exercise. Furthermore, HI exercise was cost-effective with regard to QALYs compared to LMI exercise. This study is registered at the Netherlands Trial Register (NTR2153; http://www.trialregister.nl/trialreg/admin/rctview.asp?TC=2153).

  5. Tight-binding calculation of single-band and generalized Wannier functions of graphene

    NASA Astrophysics Data System (ADS)

    Ribeiro, Allan Victor; Bruno-Alfonso, Alexys

    Recent work has shown that a tight-binding approach associated with Wannier functions (WFs) provides an intuitive physical image of the electronic structure of graphene. For graphene, Marzari et al. displayed the calculated WFs and compared the Wannier-interpolated bands with the bands generated by the density-functional code. Jung and MacDonald provided a tight-binding model for the π-bands of graphene that involves maximally localized Wannier functions (MLWFs). The mixing of the bands yields better localized WFs. In the present work, the MLWFs of graphene are calculated by combining the Quantum-ESPRESSO code with a tight-binding approach; the MLWFs are obtained from Bloch functions computed with interaction and overlap parameters fitted in part to the DFT bands. The phase of the Bloch functions of each band is appropriately chosen to produce MLWFs, and the same applies to the coefficients of their linear combination in the generalized case. The method allows for an intuitive understanding of the maximally localized WFs of graphene and shows excellent agreement with the literature. Moreover, it provides accurate results at reduced computational cost.

  6. Minimization of transmission cost in decentralized control systems

    NASA Technical Reports Server (NTRS)

    Wang, S.-H.; Davison, E. J.

    1978-01-01

    This paper considers the problem of stabilizing a linear time-invariant multivariable system by using local feedback controllers and some limited information exchange among local stations. The problem of achieving a given degree of stability with minimum transmission cost is solved.

  7. Cost-effectiveness of exercise therapy versus general practitioner care for osteoarthritis of the hip: design of a randomised clinical trial.

    PubMed

    van Es, Pauline P; Luijsterburg, Pim A J; Dekker, Joost; Koopmanschap, Marc A; Bohnen, Arthur M; Verhaar, Jan A N; Koes, Bart W; Bierma-Zeinstra, Sita M A

    2011-10-12

    Osteoarthritis (OA) is the most common joint disease, causing pain and functional impairments. According to international guidelines, exercise therapy has a short-term effect in reducing pain/functional impairments in knee OA and is therefore also generally recommended for hip OA. Because of its high prevalence and clinical implications, OA is associated with considerable (healthcare) costs. However, studies evaluating cost-effectiveness of common exercise therapy in hip OA are lacking. Therefore, this randomised controlled trial is designed to investigate the cost-effectiveness of exercise therapy in conjunction with the general practitioner's (GP) care, compared to GP care alone, for patients with hip OA. Patients aged ≥ 45 years with OA of the hip, who consulted the GP during the past year for hip complaints and who comply with the American College of Rheumatology criteria, are included. Patients are randomly assigned to either exercise therapy in addition to GP care, or to GP care alone. Exercise therapy consists of (maximally) 12 treatment sessions with a physiotherapist, and home exercises. These are followed by three additional treatment sessions in the 5th, 7th and 9th month after the first treatment session. GP care consists of usual care for hip OA, such as general advice or prescribing pain medication. Primary outcomes are hip pain and hip-related activity limitations (measured with the Hip disability Osteoarthritis Outcome Score [HOOS]), direct costs, and productivity costs (measured with the PROductivity and DISease Questionnaire). These parameters are measured at baseline, at 6 weeks, and at 3, 6, 9 and 12 months follow-up. To detect a 25% clinical difference in the HOOS pain score, with a power of 80% and an alpha 5%, 210 patients are required. Data are analysed according to the intention-to-treat principle. Effectiveness is evaluated using linear regression models with repeated measurements. An incremental cost-effectiveness analysis and an incremental cost-utility analysis will also be performed. The results of this trial will provide insight into the cost-effectiveness of adding exercise therapy to GPs' care in the treatment of OA of the hip. This trial is registered in the Dutch trial registry http://www.trialregister.nl: trial number NTR1462.

  8. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement.

    PubMed

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-30

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of -20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while being fully synthesizable for future Very Large Scale Integration (VLSI) systems.

  9. All-Digital Time-Domain CMOS Smart Temperature Sensor with On-Chip Linearity Enhancement

    PubMed Central

    Chen, Chun-Chi; Chen, Chao-Lieh; Lin, Yi

    2016-01-01

    This paper proposes the first all-digital on-chip linearity enhancement technique for improving the accuracy of the time-domain complementary metal-oxide semiconductor (CMOS) smart temperature sensor. To facilitate on-chip application and intellectual property reuse, an all-digital time-domain smart temperature sensor was implemented using 90 nm Field Programmable Gate Arrays (FPGAs). Although the inverter-based temperature sensor has a smaller circuit area and lower complexity, two-point calibration must be used to achieve an acceptable inaccuracy. With the help of a calibration circuit, the influence of process variations was reduced greatly for one-point calibration support, reducing the test costs and time. However, the sensor response still exhibited a large curvature, which substantially affected the accuracy of the sensor. Thus, an on-chip linearity-enhanced circuit is proposed to linearize the curve and achieve a new linearity-enhanced output. The sensor was implemented on eight different Xilinx FPGAs, using 118 slices per sensor in each FPGA, to demonstrate the benefits of the linearization. Compared with the unlinearized version, the maximal inaccuracy of the linearized version decreased from 5 °C to 2.5 °C after one-point calibration in a range of −20 °C to 100 °C. The sensor consumed 95 μW at 1 kSa/s. The proposed linearity enhancement technique significantly improves temperature sensing accuracy, avoiding costly curvature compensation while being fully synthesizable for future Very Large Scale Integration (VLSI) systems. PMID:26840316

  10. Understanding Linear Functions and Their Representations

    ERIC Educational Resources Information Center

    Wells, Pamela J.

    2015-01-01

    Linear functions are an important part of the middle school mathematics curriculum. Students in the middle grades gain fluency by working with linear functions in a variety of representations (NCTM 2001). Presented in this article is an activity that was used with five eighth-grade classes at three different schools. The activity contains 15 cards…

  11. Localized overlap algorithm for unexpanded dispersion energies

    NASA Astrophysics Data System (ADS)

    Rob, Fazle; Misquitta, Alston J.; Podeszwa, Rafał; Szalewicz, Krzysztof

    2014-03-01

    A first-principles-based, linearly scaling algorithm has been developed for calculations of dispersion energies from frequency-dependent density susceptibility (FDDS) functions, taking charge-overlap effects into account. The transition densities in FDDSs are fitted by a set of auxiliary atom-centered functions. The terms in the dispersion energy expression involving products of such functions are computed using either the unexpanded (exact) formula or inexpensive asymptotic expansions, depending on the location of these functions relative to the dimer configuration. This approach leads to significant savings of computational resources. In particular, for a dimer consisting of two elongated monomers with 81 atoms each in a head-to-head configuration, the most favorable case for our algorithm, a 43-fold speedup has been achieved while the approximate dispersion energy differs by less than 1% from that computed using the standard unexpanded approach. In contrast, the dispersion energy computed from the distributed asymptotic expansion differs by dozens of percent in the van der Waals minimum region. A further increase in the size of each monomer would result in only a small increase in cost since all the additional terms would be computed from the asymptotic expansion.

  12. Monitoring Moving Queries inside a Safe Region

    PubMed Central

    Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan

    2014-01-01

    With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
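
    A minimal sketch of the prediction step described above (the geometry and sampling interval are invented): the client fits a linear motion model from two consecutive position samples and solves for the time at which the extrapolated position would cross the boundary of a circular safe region, deferring any server contact until then:

      import numpy as np

      def predict_exit_time(p_prev, p_now, dt, centre, radius):
          """Return the time from now until the query exits the safe region (or inf)."""
          v = (np.asarray(p_now, float) - np.asarray(p_prev, float)) / dt   # estimated velocity
          d = np.asarray(p_now, float) - np.asarray(centre, float)
          # Solve |d + v t|^2 = radius^2 for the smallest positive t
          a, b, c = v @ v, 2 * (d @ v), d @ d - radius**2
          if a == 0:
              return np.inf
          disc = b * b - 4 * a * c
          if disc < 0:
              return np.inf
          roots = [(-b + s * np.sqrt(disc)) / (2 * a) for s in (+1, -1)]
          future = [t for t in roots if t > 0]
          return min(future) if future else np.inf

      # Query sampled every 2 s, moving roughly east inside a 100 m safe region
      print(predict_exit_time(p_prev=(0, 0), p_now=(6, 1), dt=2.0,
                              centre=(0, 0), radius=100.0))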

  13. Economic Analysis and Optimal Sizing for behind-the-meter Battery Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Kintner-Meyer, Michael CW; Yang, Tao

    This paper proposes methods to estimate the potential benefits and determine the optimal energy and power capacity for behind-the-meter BSS. In the proposed method, a linear program is first formulated using only typical load profiles, energy/demand charge rates, and a set of battery parameters to determine the maximum saving in electric energy cost. The optimization formulation is then adapted to include battery cost as a function of its power and energy capacity in order to capture the trade-off between benefits and cost, and therefore to determine the most economic battery size. Using the proposed methods, economic analysis and optimal sizing have been performed for a few commercial buildings and utility rate structures that are representative of those found in the various regions of the Continental United States. The key factors that affect the economic benefits and optimal size have been identified. The proposed methods and case study results can not only help commercial and industrial customers or battery vendors to evaluate and size the storage system for behind-the-meter application, but can also assist utilities and policy makers to design electricity rates or subsidies to promote the development of energy storage.
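
    A compact sketch of such a sizing formulation (all rates, load data and amortized battery costs below are invented, and cvxpy is used purely for brevity, not because it is the authors' tool): the dispatch schedule, the billed peak and the energy/power capacities are decided jointly in one linear program:

      import numpy as np
      import cvxpy as cp

      T = 24
      load = 50 + 30 * np.sin(np.linspace(0, 2 * np.pi, T))        # kW, hypothetical day
      price = np.where((np.arange(T) >= 12) & (np.arange(T) < 20), 0.25, 0.10)  # $/kWh
      demand_rate = 15.0            # $/kW demand charge
      cost_E, cost_P = 0.03, 2.0    # hypothetical amortized $/kWh-day and $/kW-day

      E_cap = cp.Variable(nonneg=True)       # energy capacity (kWh)
      P_cap = cp.Variable(nonneg=True)       # power capacity (kW)
      ch = cp.Variable(T, nonneg=True)       # charging power (kW)
      dis = cp.Variable(T, nonneg=True)      # discharging power (kW)
      peak = cp.Variable(nonneg=True)        # billed peak demand (kW)

      soc = cp.cumsum(ch - dis)              # state of charge, 1-hour steps (kWh)
      net = load + ch - dis                  # grid draw (kW)

      constraints = [soc >= 0, soc <= E_cap, ch <= P_cap, dis <= P_cap,
                     net >= 0, net <= peak]
      objective = cp.Minimize(cp.sum(cp.multiply(price, net)) + demand_rate * peak
                              + cost_E * E_cap + cost_P * P_cap)
      cp.Problem(objective, constraints).solve()
      print("optimal E, P:", round(float(E_cap.value), 1), round(float(P_cap.value), 1))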

  14. Calculating distance by wireless ethernet signal strength for global positioning method

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Yong; Kim, Jeehong; Lee, Chang-goo

    2005-12-01

    This paper investigated mobile robot localization using wireless Ethernet for global localization and INS for relative localization. For relative localization, a low-cost, self-contained INS was adopted; low-cost MEMS-based INS has a short-period response and acceptable performance. A variety of sensors is generally used for mobile robot localization, but even with precise modeling of the sensors, errors inevitably accumulate. The IEEE 802.11b wireless Ethernet standard has been deployed in office buildings, museums, hospitals, shopping centers and other indoor environments, and many mobile robots already make use of wireless networking for communication, so location sensing with wireless Ethernet can be very useful for a low-cost robot. This research used a wireless Ethernet card to compensate for the accumulation of errors, so that the mobile robot can use the many IEEE 802.11b wireless Ethernet access points installed in indoor environments for global localization. The chief difficulty in localization with wireless Ethernet is predicting signal strength: as a sensor, the RF signal strength measured indoors is non-linear with distance. Therefore, profiles of signal strength were built for reference points, and a function relating the signal-strength profile to the distance from the wireless Ethernet access point was derived and used.

  15. Working With Socially and Medically Complex Patients: When Care Transitions Are Circular, Overlapping, and Continual Rather Than Linear and Finite.

    PubMed

    Roberts, Shauna R; Crigler, Jane; Ramirez, Cristina; Sisco, Deborah; Early, Gerald L

    2015-01-01

    The care coordination program described here evolved from 5 years of trial and learning related to how to best serve our high-cost, high-utilizing, chronically ill, urban core patient population. In addition to medical complexity, they have daily challenges characteristic of persons served by Safety-Net health systems. Many have unstable health insurance status. Others have insecure housing. A number of patients have a history of substance use and mental illness. Many have fractured social supports. Although some of the best-known care transition models have been successful in reducing rehospitalizations and cost among patients studied, these models were developed for a relatively high functioning patient population with social support. We describe a successful approach targeted at working with patients who require a more intense and lengthy care coordination intervention to self-manage and reduce the cost of caring for their medical conditions. Using a diverse team and a set of replicable processes, we have demonstrated statistically significant reduction in the use of hospital and emergency services. Our intervention leverages the strengths and resilience of patients, focuses on trust and self-management, and targets heterogeneous "high-utilizer" patients with medical and social complexity.

  16. Lower operational costs in heat treatment and process engineering through improved temperature measurement

    NASA Astrophysics Data System (ADS)

    Furniss, C. P.

    New metal-sheathed thermocouple systems are described which have lowered operational costs in heat treatment and process engineering. The improvements which these thermocouples represent over conventional ones with regard to chemical composition, thermomechanical properties, oxidation resistance, weldability, and coefficient of linear expansion are pointed out. Experimentally determined cost savings for a variety of applications are reported.

  17. Worker productivity and herbicide usage for pine release with manual application methods

    Treesearch

    James H. Miller; G.R. Glover

    1993-01-01

    Abstract. Productivity, herbicide usage, and costs of manually-applied pine release treatments were examined with linear regression analysis and compared. Data came from a replicated study in a 3-year-old loblolly pine plantation in Alabama’s Piedmont. Brush sawing had the highest labor costs but lowest total treatment costs. While of the...

  18. Control design based on a linear state function observer

    NASA Technical Reports Server (NTRS)

    Su, Tzu-Jeng; Craig, Roy R., Jr.

    1992-01-01

    An approach to the design of low-order controllers for large scale systems is proposed. The method is derived from the theory of linear state function observers. First, the realization of a state feedback control law is interpreted as the observation of a linear function of the state vector. The linear state function to be reconstructed is the given control law. Then, based on the derivation for linear state function observers, the observer design is formulated as a parameter optimization problem. The optimization objective is to generate a matrix that is close to the given feedback gain matrix. Based on that matrix, the form of the observer and a new control law can be determined. A four-disk system and a lightly damped beam are presented as examples to demonstrate the applicability and efficacy of the proposed method.

  19. A Bayesian Framework for Coupled Estimation of Key Unknown Parameters of Land Water and Energy Balance Equations

    NASA Astrophysics Data System (ADS)

    Farhadi, L.; Abdolghafoorian, A.

    2015-12-01

    The land surface is a key component of the climate system. It controls the partitioning of available energy at the surface between sensible and latent heat, and the partitioning of available water between evaporation and runoff. The water and energy cycles are intrinsically coupled through evaporation, which represents a heat exchange as latent heat flux. Accurate estimation of the fluxes of heat and moisture is of significant importance in many fields such as hydrology, climatology and meteorology. In this study we develop and apply a Bayesian framework for estimating the key unknown parameters of the terrestrial water and energy balance equations (i.e., moisture and heat diffusion) and their uncertainty in land surface models. These equations are coupled through the flux of evaporation. The estimation system is based on the adjoint method for solving a least-squares optimization problem. The cost function consists of aggregated errors on the states (i.e., moisture and temperature) with respect to observations and on parameter estimates with respect to prior values over the entire assimilation period. This cost function is minimized with respect to the parameters to identify models of sensible heat, latent heat/evaporation, and drainage and runoff. The inverse of the Hessian of the cost function approximates the posterior uncertainty of the parameter estimates. The uncertainty of the estimated fluxes is obtained by propagating the parameter uncertainty through linear and nonlinear functions of the key parameters using the First Order Second Moment (FOSM) method. Uncertainty analysis is used in this method to guide the formulation of a well-posed estimation problem. The accuracy of the method is assessed at point scale using surface energy and water fluxes generated by the Simultaneous Heat and Water (SHAW) model at selected AmeriFlux stations. This method can be applied to diverse climates and land surface conditions with different spatial scales, using remotely sensed measurements of surface moisture and temperature states.

  20. A generic method for projecting and valuing domestic water uses, application to the Mediterranean basin at the 2050 horizon.

    NASA Astrophysics Data System (ADS)

    Neverre, Noémie; Dumas, Patrice

    2014-05-01

    The aim is to assess future domestic water demands in a region with heterogeneous levels of economic development. This work offers an original combination of a quantitative projection of demands (similar to the WaterGAP methodology) and an estimation of the marginal benefit of water. This method is applicable to different levels of economic development and usable for large-scale hydroeconomic modelling. The global method consists in building demand functions that take into account the impact on domestic water demand of both the price of water and the level of equipment, proxied by economic development. Our basis is a three-block inverse demand function: the first block consists of essential water requirements for food and hygiene; the second block matches intermediate needs; and the last block corresponds to additional water consumption, such as outdoor uses, which are the least valued. The volume of the first block is fixed to match recommended basic water requirements from the literature, but we assume that the volume limits of blocks 2 and 3 depend on the level of household equipment and therefore evolve with the level of GDP per capita (structural change), with a saturation. For blocks 1 and 2 we determine the value of water from elasticity, price and quantity data from the literature, using the point-extension method. For block 3, we use a hypothetical zero-cost demand and the maximal demand at actual water costs to linearly interpolate the inverse demand function. These functions are calibrated for the 24 countries of the Mediterranean basin using data from SIMEDD, and are used for the projection and valuation of domestic water demands at the 2050 horizon. They make it possible to project total water demand, as well as the respective shares of the different categories of demand (basic demand, intermediate demand and additional uses). These projections are performed under different combined scenarios of population, GDP and water costs.
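
    A toy sketch of the three-block construction (all prices, volumes and the saturation form below are invented for illustration): block 1 covers essential needs at a high, flat marginal value, while the upper limits of blocks 2 and 3 grow with GDP per capita up to a saturation level:

      import numpy as np

      def block_limits(gdp_pc, basic=50.0):
          """Daily volumes (L/person) closing blocks 1-3 as a function of GDP per capita."""
          sat = 1 - np.exp(-gdp_pc / 20000.0)          # saturation factor between 0 and 1
          intermediate = basic + 80.0 * sat            # upper limit of block 2
          additional = intermediate + 120.0 * sat      # upper limit of block 3
          return basic, intermediate, additional

      def marginal_benefit(q, gdp_pc):
          """Inverse demand (hypothetical monetary units) at consumption level q."""
          b1, b2, b3 = block_limits(gdp_pc)
          if q <= b1:
              return 5.0                                # essential uses, high flat value
          if q <= b2:
              return 5.0 - 3.0 * (q - b1) / (b2 - b1)   # intermediate uses
          if q <= b3:
              return 2.0 - 2.0 * (q - b2) / (b3 - b2)   # additional/outdoor uses
          return 0.0

      for q in (40, 90, 160, 300):
          print(q, "L/day ->", round(marginal_benefit(q, gdp_pc=15000), 2))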

  1. Impact of a cost constraint on nutritionally adequate food choices for French women: an analysis by linear programming.

    PubMed

    Darmon, Nicole; Ferguson, Elaine L; Briend, André

    2006-01-01

    To predict, for French women, the impact of a cost constraint on the food choices required to provide a nutritionally adequate diet. Isocaloric daily diets fulfilling both palatability and nutritional constraints were modeled in linear programming, using different cost constraint levels. For each modeled diet, total departure from an observed French population's average food group pattern ("mean observed diet") was minimized. To achieve the nutritional recommendations without a cost constraint, the modeled diet provided more energy from fish, fresh fruits and green vegetables and less energy from animal fats and cheese than the "mean observed diet." Introducing and strengthening a cost constraint decreased the energy provided by meat, fresh vegetables, fresh fruits, vegetable fat, and yogurts and increased the energy from processed meat, eggs, offal, and milk. For the lowest cost diet (ie, 3.18 euros/d), marked changes from the "mean observed diet" were required, including a marked reduction in the amount of energy from fresh fruits (-85%) and green vegetables (-70%), and an increase in the amount of energy from nuts, dried fruits, roots, legumes, and fruit juices. Nutrition education for low-income French women must emphasize these affordable food choices.

  2. Cost-effectiveness analysis of the diarrhea alleviation through zinc and oral rehydration therapy (DAZT) program in rural Gujarat India: an application of the net-benefit regression framework.

    PubMed

    Shillcutt, Samuel D; LeFevre, Amnesty E; Fischer-Walker, Christa L; Taneja, Sunita; Black, Robert E; Mazumder, Sarmila

    2017-01-01

    This study evaluates the cost-effectiveness of the DAZT program for scaling up treatment of acute child diarrhea in Gujarat, India, using a net-benefit regression framework. Costs were calculated from societal and caregivers' perspectives, and effectiveness was assessed in terms of coverage of zinc and of both zinc and Oral Rehydration Salt. Regression models were tested in simple linear regression, with a specified set of covariates, and with a specified set of covariates and interaction terms; linear regression with endogenous treatment effects was used as the reference case. The DAZT program was cost-effective with over 95% certainty above $5.50 and $7.50 per appropriately treated child in the unadjusted and adjusted models respectively, with specifications including interaction terms being cost-effective with 85-97% certainty. Findings from this study should be combined with other evidence when considering decisions to scale up programs such as the DAZT program to promote the use of ORS and zinc to treat child diarrhea.
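
    A minimal sketch of the net-benefit regression idea with simulated data (effect sizes, costs and willingness-to-pay values are invented): for each willingness-to-pay value lambda, the individual net benefit lambda·E_i − C_i is regressed on a treatment indicator, and the treatment coefficient estimates the incremental net benefit:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 400
      treated = rng.integers(0, 2, n)
      effect = 0.10 * treated + rng.normal(0, 0.3, n)      # coverage gain (effect), synthetic
      cost = 2.0 + 1.5 * treated + rng.normal(0, 1.0, n)   # cost per child ($), synthetic

      X = np.column_stack([np.ones(n), treated])
      for lam in (5.0, 7.5, 10.0):                         # $ per appropriately treated child
          nb = lam * effect - cost                         # individual net benefit
          beta, *_ = np.linalg.lstsq(X, nb, rcond=None)
          print(f"lambda = {lam:5.1f}: incremental net benefit = {beta[1]:+.2f}")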

  3. Finite element analyses of continuous filament ties for masonry applications : final report for the Arquin Corporation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinones, Armando, Sr.; Bibeau, Tiffany A.; Ho, Clifford Kuofei

    2008-08-01

    Finite-element analyses were performed to simulate the response of a hypothetical vertical masonry wall subject to different lateral loads with and without continuous horizontal filament ties laid between rows of concrete blocks. A static loading analysis and cost comparison were also performed to evaluate optimal materials and designs for the spacers affixed to the filaments. Results showed that polypropylene, ABS, and polyethylene (high density) were suitable materials for the spacers based on performance and cost, and the short T-spacer design was optimal based on its performance and functionality. Simulations of vertical walls subject to static loads representing 100 mph winds (0.2 psi) and a seismic event (0.66 psi) showed that the simulated walls performed similarly and adequately when subject to these loads with and without the ties. Additional simulations and tests are required to assess the performance of actual walls with and without the ties under greater loads and more realistic conditions (e.g., cracks, non-linear response).

  4. A high-throughput screening approach for the optoelectronic properties of conjugated polymers.

    PubMed

    Wilbraham, Liam; Berardo, Enrico; Turcani, Lukas; Jelfs, Kim E; Zwijnenburg, Martijn A

    2018-06-25

    We propose a general high-throughput virtual screening approach for the optical and electronic properties of conjugated polymers. This approach makes use of the recently developed xTB family of low-computational-cost density functional tight-binding methods from Grimme and co-workers, calibrated here to (TD-)DFT data computed for a representative diverse set of (co-)polymers. Parameters drawn from the resulting calibration using a linear model can then be applied to the xTB derived results for new polymers, thus generating near DFT-quality data with orders of magnitude reduction in computational cost. As a result, after an initial computational investment for calibration, this approach can be used to quickly and accurately screen on the order of thousands of polymers for target applications. We also demonstrate that the (opto)electronic properties of the conjugated polymers show only a very minor variation when considering different conformers and that the results of high-throughput screening are therefore expected to be relatively insensitive with respect to the conformer search methodology applied.
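
    The calibration step amounts to fitting a straight line from xTB-level predictions to (TD-)DFT reference values and then applying it to new polymers. A sketch with synthetic numbers standing in for both levels of theory (none of the values below are from the paper):

      import numpy as np

      rng = np.random.default_rng(4)
      xtb_gap = rng.uniform(1.0, 4.0, 30)                      # eV, synthetic calibration set
      dft_gap = 0.85 * xtb_gap + 0.40 + rng.normal(0, 0.05, 30)

      slope, intercept = np.polyfit(xtb_gap, dft_gap, 1)       # least-squares linear model

      def calibrated_gap(xtb_value):
          """Near-DFT-quality estimate obtained from a cheap xTB calculation."""
          return slope * xtb_value + intercept

      print(round(calibrated_gap(2.6), 2), "eV (calibrated estimate)")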

  5. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids

    PubMed Central

    Hesford, Andrew J.; Waag, Robert C.

    2010-01-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased. PMID:20835366

  6. The fast multipole method and Fourier convolution for the solution of acoustic scattering on regular volumetric grids

    NASA Astrophysics Data System (ADS)

    Hesford, Andrew J.; Waag, Robert C.

    2010-10-01

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  7. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids.

    PubMed

    Hesford, Andrew J; Waag, Robert C

    2010-10-20

    The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.

  8. Fast marching methods for the continuous traveling salesman problem.

    PubMed

    Andrews, June; Sethian, J A

    2007-01-23

    We consider a problem in which we are given a domain, a cost function which depends on position at each point in the domain, and a subset of points ("cities") in the domain. The goal is to determine the cheapest closed path that visits each city in the domain once. This can be thought of as a version of the traveling salesman problem, in which an underlying known metric determines the cost of moving through each point of the domain, but in which the actual shortest path between cities is unknown at the outset. We describe algorithms for both a heuristic and an optimal solution to this problem. The worst-case complexity of the heuristic algorithm is O(M N log N), where M is the number of cities and N is the size of the computational mesh used to approximate the solutions to the shortest-path problems. The average runtime of the heuristic algorithm is linear in the number of cities and O(N log N) in the size N of the mesh.
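
    The structure of the heuristic can be sketched with ordinary grid Dijkstra standing in for fast marching: one shortest-path solve per city gives the city-to-city cost matrix, and a greedy tour is then built on top of it. Everything below (cost field, cities, nearest-neighbour tour) is an invented illustration, not the authors' algorithm.

      # Grid Dijkstra as a stand-in for fast marching, plus a nearest-neighbour tour.
      import heapq
      import numpy as np

      def grid_distances(cost, start):
          """Cheapest travel cost from `start` to every cell of a 2-D cost grid (4-neighbour moves)."""
          n, m = cost.shape
          dist = np.full((n, m), np.inf)
          dist[start] = 0.0
          heap = [(0.0, start)]
          while heap:
              d, (i, j) = heapq.heappop(heap)
              if d > dist[i, j]:
                  continue
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < n and 0 <= nj < m:
                      nd = d + 0.5 * (cost[i, j] + cost[ni, nj])   # step cost ~ average of endpoints
                      if nd < dist[ni, nj]:
                          dist[ni, nj] = nd
                          heapq.heappush(heap, (nd, (ni, nj)))
          return dist

      rng = np.random.default_rng(0)
      cost = 1.0 + rng.random((50, 50))                 # position-dependent cost per step
      cities = [(5, 5), (40, 10), (25, 45), (10, 35)]

      # M shortest-path solves, each roughly O(N log N) on a mesh of N cells
      pair = np.array([[grid_distances(cost, a)[b] for b in cities] for a in cities])

      # Greedy nearest-neighbour tour (heuristic, not the optimal closed path)
      tour, left = [0], set(range(1, len(cities)))
      while left:
          nxt = min(left, key=lambda c: pair[tour[-1], c])
          tour.append(nxt)
          left.remove(nxt)
      tour.append(0)
      print(tour, sum(pair[a, b] for a, b in zip(tour, tour[1:])))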

  9. Clinical laboratory: bigger is not always better.

    PubMed

    Plebani, Mario

    2018-06-27

    Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations of improving efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not span the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.

  10. A virtual reality system for arm and hand rehabilitation

    NASA Astrophysics Data System (ADS)

    Luo, Zhiqiang; Lim, Chee Kian; Chen, I.-Ming; Yeo, Song Huat

    2011-03-01

    This paper presents a virtual reality (VR) system for upper limb rehabilitation. The system incorporates two motion-tracking components, the Arm Suit and the Smart Glove, which are composed of a range of optical linear encoders (OLE) and inertial measurement units (IMU), together with two interactive practice applications designed to drive users to perform the required functional and non-functional motor recovery tasks. We describe the technical details of the two motion-tracking components and the rationale for designing the two practice applications. The experimental results show that, compared with a marker-based tracking system, the Arm Suit can accurately track the elbow and wrist positions, and that the repeatability of the Smart Glove in measuring the movement of the five fingers is satisfactory. Given its low cost, high accuracy and easy installation, the system promises to be a valuable complement to conventional therapeutic programs offered in rehabilitation clinics and at home.

  11. How much spare capacity is necessary for the security of resource networks?

    NASA Astrophysics Data System (ADS)

    Zhao, Qian-Chuan; Jia, Qing-Shan; Cao, Yang

    2007-01-01

    The balance between the supply and demand of some kind of resource is critical for the functionality and security of many complex networks. Local contingencies that break this balance can cause a global collapse. These contingencies are usually dealt with by spare capacity, which is costly especially when the network capacity (the total amount of the resource generated/consumed in the network) grows. This paper studies the relationship between the spare capacity and the collapse probability under separation contingencies when the network capacity grows. Our results are obtained based on the analysis of the existence probability of balanced partitions, which is a measure of network security when network splitting is unavoidable. We find that a network with growing capacity will inevitably collapse after a separation contingency if the spare capacity in each island increases slower than a linear function of the network capacity and there is no suitable global coordinator.

  12. Incentive schemes in development of socio-economic systems

    NASA Astrophysics Data System (ADS)

    Grachev, V. V.; Ivushkin, K. A.; Myshlyaev, L. P.

    2018-05-01

    The paper is devoted to the study of incentive schemes in the development of socio-economic systems. The article analyzes the existing incentive schemes. It is established that traditional incentive mechanisms do not fully take into account the specifics of creating each socio-economic system and, as a rule, are difficult to implement. Incentive schemes based on a full-scale simulation approach are proposed, which allow the most complete information to be extracted from existing projects for the creation of socio-economic systems. The problem statement is given, and a method and algorithm for the full-scale simulation study of the efficiency of incentive functions are developed. The results of the study are presented. It is shown that the use of quadratic and piecewise-linear incentive functions allows the time and cost of creating socio-economic systems to be reduced by 10-15%.

  13. Manifold Learning by Preserving Distance Orders.

    PubMed

    Ataer-Cansizoglu, Esra; Akcakaya, Murat; Orhan, Umut; Erdogmus, Deniz

    2014-03-01

    Nonlinear dimensionality reduction is essential for the analysis and the interpretation of high dimensional data sets. In this manuscript, we propose a distance order preserving manifold learning algorithm that extends the basic mean-squared error cost function used mainly in multidimensional scaling (MDS)-based methods. We develop a constrained optimization problem by assuming explicit constraints on the order of distances in the low-dimensional space. In this optimization problem, as a generalization of MDS, instead of forcing a linear relationship between the distances in the high-dimensional original and low-dimensional projection space, we learn a non-decreasing relation approximated by radial basis functions. We compare the proposed method with existing manifold learning algorithms using synthetic datasets based on the commonly used residual variance and proposed percentage of violated distance orders metrics. We also perform experiments on a retinal image dataset used in Retinopathy of Prematurity (ROP) diagnosis.

  14. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    NASA Astrophysics Data System (ADS)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to account for the Pauli exclusion principle in Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that includes the Pauli exclusion principle for electron-electron (e-e) scattering in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study the transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation; hence, this part of the simulation algorithm is described in detail.
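
    The generic way the Pauli exclusion principle enters an MC scattering step is through rejection against the occupation of the final state: a transition to state k' is accepted with probability 1 - f(k'). The sketch below shows only this standard blocking idea, with an equilibrium Fermi-Dirac occupation as a stand-in for the simulated distribution; the paper's modified scheme for e-e scattering is more involved.

      # Schematic Pauli-blocking rejection step in a Monte Carlo scattering routine.
      import numpy as np

      rng = np.random.default_rng(1)

      def occupation(E, EF=0.10, kT=0.025):
          """Stand-in for the simulated electron distribution function (Fermi-Dirac, eV units)."""
          return 1.0 / (1.0 + np.exp((E - EF) / kT))

      def attempt_scattering(E_final):
          """Accept a transition to a final state of energy E_final with probability 1 - f."""
          return rng.random() > occupation(E_final)     # False means the event is Pauli blocked

      accepted = sum(attempt_scattering(0.05) for _ in range(10000))
      print(f"acceptance ratio below the Fermi level: {accepted / 10000:.2f}")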

  15. Some single-machine scheduling problems with learning effects and two competing agents.

    PubMed

    Li, Hongjie; Li, Zeyuan; Yin, Yunqiang

    2014-01-01

    This study considers a scheduling environment in which there are two agents and a set of jobs, each of which belongs to one of the two agents and its actual processing time is defined as a decreasing linear function of its starting time. Each of the two agents competes to process its respective jobs on a single machine and has its own scheduling objective to optimize. The objective is to assign the jobs so that the resulting schedule performs well with respect to the objectives of both agents. The objective functions addressed in this study include the maximum cost, the total weighted completion time, and the discounted total weighted completion time. We investigate three problems arising from different combinations of the objectives of the two agents. The computational complexity of the problems is discussed and solution algorithms where possible are presented.

  16. Bidirectional Elastic Image Registration Using B-Spline Affine Transformation

    PubMed Central

    Gu, Suicheng; Meng, Xin; Sciurba, Frank C.; Wang, Chen; Kaminski, Naftali; Pu, Jiantao

    2014-01-01

    A registration scheme termed as B-spline affine transformation (BSAT) is presented in this study to elastically align two images. We define an affine transformation instead of the traditional translation at each control point. Mathematically, BSAT is a generalized form of the affine transformation and the traditional B-Spline transformation (BST). In order to improve the performance of the iterative closest point (ICP) method in registering two homologous shapes but with large deformation, a bi-directional instead of the traditional unidirectional objective / cost function is proposed. In implementation, the objective function is formulated as a sparse linear equation problem, and a sub-division strategy is used to achieve a reasonable efficiency in registration. The performance of the developed scheme was assessed using both two-dimensional (2D) synthesized dataset and three-dimensional (3D) volumetric computed tomography (CT) data. Our experiments showed that the proposed B-spline affine model could obtain reasonable registration accuracy. PMID:24530210

  17. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
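
    The complex-variable formulation referred to above is commonly realized through the complex-step derivative approximation, in which evaluating a real-valued function at x + ih yields its derivative from the imaginary part with no subtractive cancellation. The sketch below is a generic illustration of that idea, not the NASA solver's implementation.

      # Complex-step differentiation of an arbitrary real-valued function.
      import numpy as np

      def f(x):
          # Any real-valued function written so that it also accepts complex arguments
          return x * np.sin(x) + np.exp(-x**2)

      def complex_step_derivative(func, x, h=1e-30):
          # Im[f(x + i h)] / h approximates f'(x) essentially to machine precision for tiny h
          return np.imag(func(x + 1j * h)) / h

      x0 = 0.7
      exact = np.sin(x0) + x0 * np.cos(x0) - 2.0 * x0 * np.exp(-x0**2)
      print(complex_step_derivative(f, x0), exact)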

  18. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  19. Low-memory iterative density fitting.

    PubMed

    Grajciar, Lukáš

    2015-07-30

    A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix, which decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction in memory requirements is achieved for the most efficient preconditioner, at the cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
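
    The preconditioning idea can be illustrated generically: solve a symmetric positive definite system by conjugate gradients, using the inverses of diagonal blocks of the matrix as a block-Jacobi preconditioner. The matrix below is a random SPD stand-in, not an actual Coulomb metric, and the block size is arbitrary.

      # Conjugate-gradient solve with a block-Jacobi preconditioner built from diagonal blocks.
      import numpy as np
      from scipy.sparse.linalg import cg, LinearOperator

      rng = np.random.default_rng(2)
      n, nb = 120, 30                                   # system size and block size
      A = rng.standard_normal((n, n))
      A = A @ A.T + n * np.eye(n)                       # symmetric positive definite test matrix
      b = rng.standard_normal(n)

      # Invert each diagonal block once and apply them as an approximation to A^-1
      starts = range(0, n, nb)
      blocks = [np.linalg.inv(A[i:i + nb, i:i + nb]) for i in starts]

      def apply_preconditioner(r):
          return np.concatenate([blk @ r[i:i + nb] for blk, i in zip(blocks, starts)])

      M = LinearOperator((n, n), matvec=apply_preconditioner)
      x, info = cg(A, b, M=M)                           # preconditioned CG iteration
      print(info, np.linalg.norm(A @ x - b))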

  20. Long-range corrected density functional theory with accelerated Hartree-Fock exchange integration using a two-Gaussian operator [LC-ωPBE(2Gau)].

    PubMed

    Song, Jong-Won; Hirao, Kimihiko

    2015-10-14

    Since the advent of the hybrid functional in 1993, it has become a main quantum chemical tool for the calculation of energies and properties of molecular systems. Following the introduction of the long-range corrected hybrid scheme for density functional theory a decade later, the applicability of hybrid functionals has been further amplified owing to the resulting improved performance on orbital energies, excitation energies, non-linear optical properties, barrier heights, and so on. Nevertheless, the high cost associated with the evaluation of Hartree-Fock (HF) exchange integrals remains a bottleneck for the broader and more active application of hybrid functionals to large molecular and periodic systems. Here, we propose a very simple yet efficient method for the computation of the long-range corrected hybrid scheme. It uses a modified two-Gaussian attenuating operator instead of the error function for the long-range HF exchange integral. As a result, the two-Gaussian HF operator, which mimics the shape of the error-function operator, reduces computational time dramatically (e.g., about 14-fold acceleration in a carbon-diamond calculation using periodic boundary conditions) and enables lower scaling with system size, while maintaining the improved features of long-range corrected density functional theory.
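
    The underlying idea can be seen by fitting a two-Gaussian attenuating function so that it mimics the error-function switching used in conventional long-range correction. The sketch below is only a curve-fitting illustration with an arbitrary range-separation parameter; the published LC-ωPBE(2Gau) operator has its own specific functional form and parameters.

      # Least-squares fit of a two-Gaussian long-range attenuating function to erf(mu * r).
      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      mu = 0.47                                         # range-separation parameter (illustrative)
      r = np.linspace(1e-3, 15.0, 500)

      def two_gauss(r, c1, a1, c2, a2):
          return 1.0 - c1 * np.exp(-a1 * r**2) - c2 * np.exp(-a2 * r**2)

      popt, _ = curve_fit(two_gauss, r, erf(mu * r), p0=[0.5, 0.1, 0.5, 0.01], maxfev=20000)
      max_err = np.max(np.abs(two_gauss(r, *popt) - erf(mu * r)))
      print(popt, max_err)                              # how closely two Gaussians mimic erf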

  1. Messaging with Cost-Optimized Interstellar Beacons

    NASA Technical Reports Server (NTRS)

    Benford, James; Benford, Gregory; Benford, Dominic

    2010-01-01

    On Earth, how would we build galactic-scale beacons to attract the attention of extraterrestrials, as some have suggested we should do? From the point of view of expense to a builder on Earth, experience shows an optimum trade-off. This emerges by minimizing the cost of producing a desired power density at long range, which determines the maximum range of detectability of a transmitted signal. We derive general relations for cost-optimal aperture and power. For linear dependence of capital cost on transmitter power and antenna area, minimum capital cost occurs when the cost is equally divided between antenna gain and radiated power. For nonlinear power-law dependence, a similar simple division occurs. This is validated in cost data for many systems; industry uses this cost optimum as a rule of thumb. Costs of pulsed cost-efficient transmitters are estimated from these relations by using current cost parameters ($/W, $/sq m) as a basis. We show the scaling and give examples of such beacons. Galactic-scale beacons can be built for a few billion dollars with our present technology. Such beacons have narrow "searchlight" beams and short "dwell times" when the beacon would be seen by an alien observer in their sky. More-powerful beacons are more efficient and have economies of scale: cost scales only linearly with range R, not as R^2, so number of stars radiated to increases as the square of cost. On a cost basis, they will likely transmit at higher microwave frequencies, ~10 GHz. The natural corridor to broadcast is along the galactic radius or along the local spiral galactic arm we are in. A companion paper asks "If someone like us were to produce a beacon, how should we look for it?"
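
    The equal-split rule quoted above is easy to check numerically: with a capital cost C = c_P P + c_A A and the required power density at range fixing the product P*A (the EIRP), the cheapest design spends the same amount on radiated power as on antenna area. The cost coefficients below are arbitrary placeholders, not real prices.

      # Numerical check of the equal-split optimum for a linear capital-cost model.
      from scipy.optimize import minimize_scalar

      cP, cA = 3.0, 20.0            # $/W and $/m^2 (placeholder values)
      EIRP = 1.0e6                  # required P * A product (arbitrary units)

      def capital_cost(P):
          A = EIRP / P              # antenna area forced by the EIRP requirement
          return cP * P + cA * A

      res = minimize_scalar(capital_cost, bounds=(1.0, 1.0e5), method="bounded")
      P_opt = res.x
      A_opt = EIRP / P_opt
      print(cP * P_opt, cA * A_opt)  # the two cost terms come out equal at the optimum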

  2. On the Validity of the Streaming Model for the Redshift-Space Correlation Function in the Linear Regime

    NASA Astrophysics Data System (ADS)

    Fisher, Karl B.

    1995-08-01

    The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of O(Ω^0.6/b) while variations in the anisotropic velocity dispersion yield terms of O(Ω^1.2/b^2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.

  3. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach was examined, using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
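
    For reference, the translog cost function has the standard econometric form (the report's exact specification and variables may differ):

      \ln C = \alpha_0 + \sum_i \alpha_i \ln p_i
              + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
              + \beta_y \ln y + \tfrac{1}{2}\beta_{yy} (\ln y)^2
              + \sum_i \delta_{iy} \ln p_i \ln y ,

    and, by Shephard's lemma, the conditional factor demand (cost-share) equations follow as s_i = \partial \ln C / \partial \ln p_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \delta_{iy} \ln y. Imposing restrictions such as \gamma_{ij} = 0 collapses the translog to simpler alternatives (e.g., Cobb-Douglas), which is the sense in which alternative models are obtained by parameter restrictions.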

  4. Ultra-Low-Dropout Linear Regulator

    NASA Technical Reports Server (NTRS)

    Thornton, Trevor; Lepkowski, William; Wilk, Seth

    2011-01-01

    A radiation-tolerant, ultra-low-dropout linear regulator can operate between -150 and 150 C. Prototype components were demonstrated to be performing well after a total ionizing dose of 1 Mrad (Si). Unlike existing components, the linear regulator developed during this activity is unconditionally stable over all operating regimes without the need for an external compensation capacitor. The absence of an external capacitor reduces overall system mass/volume, increases reliability, and lowers cost. Linear regulators generate a precisely controlled voltage for electronic circuits regardless of fluctuations in the load current that the circuit draws from the regulator.

  5. A variational data assimilation system for the range dependent acoustic model using the representer method: Theoretical derivations.

    PubMed

    Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent

    2017-07-01

    This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely, the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the principle of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here, and, for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation of simulated acoustic pressure observations.
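
    In generic variational data assimilation notation (not necessarily the exact functional used for RAM), the weighted least-squares cost function and its gradient are

      J(x) = \tfrac{1}{2}(x - x_b)^{\top} B^{-1}(x - x_b)
             + \tfrac{1}{2}\bigl(H(x) - y\bigr)^{\top} R^{-1}\bigl(H(x) - y\bigr),
      \qquad
      \nabla J(x) = B^{-1}(x - x_b) + \mathbf{H}^{\top} R^{-1}\bigl(H(x) - y\bigr),

    where x_b is the background control vector, y the observed acoustic pressures, B and R the background and observation error covariances, H the model-to-observation mapping, and \mathbf{H} its tangent linear operator. Evaluating the gradient requires the adjoint \mathbf{H}^{\top}, which is why the tangent linear and adjoint models of the RAM must be derived.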

  6. Fixed gain and adaptive techniques for rotorcraft vibration control

    NASA Technical Reports Server (NTRS)

    Roy, R. H.; Saberi, H. A.; Walker, R. A.

    1985-01-01

    The results of an analysis effort performed to demonstrate the feasibility of employing approximate dynamical models and frequency-shaped cost functional control law design techniques for helicopter vibration suppression are presented. Both fixed gain and adaptive control designs based on linear second order dynamical models were implemented in a detailed Rotor Systems Research Aircraft (RSRA) simulation to validate these active vibration suppression control laws. Approximate models of fuselage flexibility were included in the RSRA simulation in order to more accurately characterize the structural dynamics. The results for both the fixed gain and adaptive approaches are promising and provide a foundation for pursuing further validation in more extensive simulation studies and in wind tunnel and/or flight tests.

  7. A semi-implicit level set method for multiphase flows and fluid-structure interaction problems

    NASA Astrophysics Data System (ADS)

    Cottet, Georges-Henri; Maitre, Emmanuel

    2016-06-01

    In this paper we present a novel semi-implicit time-discretization of the level set method introduced in [8] for fluid-structure interaction problems. The idea stems from a linear stability analysis derived on a simplified one-dimensional problem. The semi-implicit scheme relies on a simple filter operating as a pre-processing on the level set function. It applies to multiphase flows driven by surface tension as well as to fluid-structure interaction problems. The semi-implicit scheme avoids the stability constraints that explicit scheme need to satisfy and reduces significantly the computational cost. It is validated through comparisons with the original explicit scheme and refinement studies on two-dimensional benchmarks.

  8. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.

  9. Bayesian integration and non-linear feedback control in a full-body motor task.

    PubMed

    Stevenson, Ian H; Fernandes, Hugo L; Vilares, Iris; Wei, Kunlin; Körding, Konrad P

    2009-12-01

    A large number of experiments have asked to what degree human reaching movements can be understood as being close to optimal in a statistical sense. However, little is known about whether these principles are relevant for other classes of movements. Here we analyzed movement in a task that is similar to surfing or snowboarding. Human subjects stand on a force plate that measures their center of pressure. This center of pressure affects the acceleration of a cursor that is displayed in a noisy fashion (as a cloud of dots) on a projection screen while the subject is incentivized to keep the cursor close to a fixed position. We find that salient aspects of observed behavior are well-described by optimal control models where a Bayesian estimation model (Kalman filter) is combined with an optimal controller (either a Linear-Quadratic-Regulator or Bang-bang controller). We find evidence that subjects integrate information over time taking into account uncertainty. However, behavior in this continuous steering task appears to be a highly non-linear function of the visual feedback. While the nervous system appears to implement Bayes-like mechanisms for a full-body, dynamic task, it may additionally take into account the specific costs and constraints of the task.
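
    The class of models referred to above can be sketched as a steady-state LQG loop: a Kalman filter estimates the cursor state from noisy position observations, and an LQR gain acts on that estimate. The one-dimensional dynamics, noise levels and cost weights below are invented for illustration and are not fitted to the experiment.

      # Schematic steady-state LQG loop: Kalman filter estimate fed into an LQR feedback law.
      import numpy as np
      from scipy.linalg import solve_discrete_are

      dt = 0.01
      A = np.array([[1.0, dt], [0.0, 1.0]])        # cursor position/velocity dynamics
      B = np.array([[0.0], [dt]])                  # centre-of-pressure input as acceleration
      C = np.array([[1.0, 0.0]])                   # only a noisy position (dot cloud) is observed
      W = np.diag([1e-6, 1e-4])                    # process noise covariance
      V = np.array([[1e-2]])                       # observation noise covariance (cloud spread)
      Q = np.diag([1.0, 0.1])                      # state cost: stay near the target
      R = np.array([[1e-3]])                       # effort cost

      P = solve_discrete_are(A, B, Q, R)                           # LQR Riccati solution
      L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)            # feedback gain
      S = solve_discrete_are(A.T, C.T, W, V)                       # estimator Riccati solution
      K = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)                 # steady-state Kalman gain

      rng = np.random.default_rng(3)
      x, x_hat = np.array([0.2, 0.0]), np.zeros(2)
      for _ in range(500):
          u = -L @ x_hat                                           # act on the estimate, not the true state
          y = C @ x + rng.normal(0.0, np.sqrt(V[0, 0]))            # noisy observation
          pred = A @ x_hat + B @ u
          x_hat = pred + K @ (y - C @ pred)                        # predict-correct update
          x = A @ x + B @ u + rng.multivariate_normal([0.0, 0.0], W)
      print(x, x_hat)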

  10. Linear programming: an alternative approach for developing formulations for emergency food products.

    PubMed

    Sheibani, Ershad; Dabbagh Moghaddam, Arasb; Sharifan, Anousheh; Afshari, Zahra

    2018-03-01

    To minimize the mortality rates of individuals affected by disasters, providing high-quality food relief during the initial stages of an emergency is crucial. The goal of this study was to develop a formulation for a high-energy, nutrient-dense prototype using a linear programming (LP) model as a novel method for developing formulations for food products. The model consisted of the objective function and the decision variables, which were the formulation costs and weights of the selected commodities, respectively. The LP constraints were the Institute of Medicine and the World Health Organization specifications of the content of nutrients in the product. Other constraints related to the product's sensory properties were also introduced to the model. Nonlinear constraints for energy ratios of nutrients were linearized to allow their use in the LP. Three focus group studies were conducted to evaluate the palatability and other aspects of the optimized formulation. New constraints were introduced to the LP model based on the focus group evaluations to improve the formulation. LP is an appropriate tool for designing formulations of food products to meet a set of nutritional requirements. This method is an excellent alternative to the traditional 'trial and error' method in designing formulations. © 2017 Society of Chemical Industry.
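
    A toy version of such an LP shows the structure: choose commodity weights that minimise cost subject to nutrient lower bounds and a total-weight constraint. The three commodities, prices, compositions and targets below are invented for illustration and are not the study's actual formulation.

      # Toy least-cost blend: minimise ingredient cost subject to nutrient lower bounds.
      import numpy as np
      from scipy.optimize import linprog

      commodities = ["wheat flour", "skim milk powder", "vegetable oil"]
      cost    = np.array([0.05, 0.30, 0.15])       # $ per 100 g of each commodity
      energy  = np.array([364.0, 362.0, 884.0])    # kcal per 100 g
      protein = np.array([10.0, 36.0, 0.0])        # g per 100 g

      # linprog uses A_ub @ x <= b_ub, so nutrient lower bounds get a sign flip
      A_ub = -np.vstack([energy, protein])
      b_ub = -np.array([450.0, 12.0])              # at least 450 kcal and 12 g protein per 100 g
      A_eq = np.array([[1.0, 1.0, 1.0]])           # commodity fractions sum to one
      b_eq = np.array([1.0])

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
      print(dict(zip(commodities, np.round(res.x, 3))), round(res.fun, 3))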

  11. The challenges of transitioning from linear to high-order overlay control in advanced lithography

    NASA Astrophysics Data System (ADS)

    Adel, M.; Izikson, P.; Tien, D.; Huang, C. K.; Robinson, J. C.; Eichelberger, B.

    2008-03-01

    In the lithography section of the ITRS 2006 update, at the top of the list of difficult challenges appears the text "overlay of multiple exposures including mask image placement". This is a reflection of the fact that today overlay is becoming a major yield risk factor in semiconductor manufacturing. Historically, lithographers have achieved sufficient alignment accuracy and hence layer to layer overlay control by relying on models which define overlay as a linear function of the field and wafer coordinates. These linear terms were easily translated to correctibles in the available exposure tool degrees of freedom on the wafer and reticle stages. However, as the 45 nm half pitch node reaches production, exposure tool vendors have begun to make available, and lithographers have begun to utilize so called high order wafer and field control, in which either look up table or high order polynomial models are modified on a product by product basis. In this paper, the major challenges of this transition will be described. It will include characterization of the sources of variation which need to be controlled by these new models and the overlay and alignment sampling optimization problem which needs to be addressed, while maintaining the ever tightening demands on productivity and cost of ownership.

  12. Techno-economic analysis of a transient plant-based platform for monoclonal antibody production

    PubMed Central

    Nandi, Somen; Kwong, Aaron T.; Holtz, Barry R.; Erwin, Robert L.; Marcel, Sylvain; McDonald, Karen A.

    2016-01-01

    ABSTRACT Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new “greenfield” biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization. PMID:27559626

  13. Techno-economic analysis of a transient plant-based platform for monoclonal antibody production.

    PubMed

    Nandi, Somen; Kwong, Aaron T; Holtz, Barry R; Erwin, Robert L; Marcel, Sylvain; McDonald, Karen A

    Plant-based biomanufacturing of therapeutic proteins is a relatively new platform with a small number of commercial-scale facilities, but offers advantages of linear scalability, reduced upstream complexity, reduced time to market, and potentially lower capital and operating costs. In this study we present a detailed process simulation model for a large-scale new "greenfield" biomanufacturing facility that uses transient agroinfiltration of Nicotiana benthamiana plants grown hydroponically indoors under light-emitting diode lighting for the production of a monoclonal antibody. The model was used to evaluate the total capital investment, annual operating cost, and cost of goods sold as a function of mAb expression level in the plant (g mAb/kg fresh weight of the plant) and production capacity (kg mAb/year). For the Base Case design scenario (300 kg mAb/year, 1 g mAb/kg fresh weight, and 65% recovery in downstream processing), the model predicts a total capital investment of $122 million and cost of goods sold of $121/g including depreciation. Compared with traditional biomanufacturing platforms that use mammalian cells grown in bioreactors, the model predicts significant reductions in capital investment and >50% reduction in cost of goods compared with published values at similar production scales. The simulation model can be modified or adapted by others to assess the profitability of alternative designs, implement different process assumptions, and help guide process development and optimization.

  14. Comparison of roll-to-roll replication approaches for microfluidic and optical functions in lab-on-a-chip diagnostic devices

    NASA Astrophysics Data System (ADS)

    Brecher, Christian; Baum, Christoph; Bastuck, Thomas

    2015-03-01

    Economically advantageous microfabrication technologies for lab-on-a-chip diagnostic devices substituting commonly used glass etching or injection molding processes are one of the key enablers for the emerging market of microfluidic devices. On-site detection in fields of life sciences, point of care diagnostics and environmental analysis requires compact, disposable and highly functionalized systems. Roll-to-roll production as a high volume process has become the emerging fabrication technology for integrated, complex high technology products within recent years (e.g. fuel cells). Differently functionalized polymer films enable researchers to create a new generation of lab-on-a-chip devices by combining electronic, microfluidic and optical functions in multilayer architecture. For replication of microfluidic and optical functions via roll-to-roll production process competitive approaches are available. One of them is to imprint fluidic channels and optical structures of micro- or nanometer scale from embossing rollers into ultraviolet (UV) curable lacquers on polymer substrates. Depending on dimension, shape and quantity of those structures there are alternative manufacturing technologies for the embossing roller. Ultra-precise diamond turning, electroforming or casting polymer materials are used either for direct structuring or manufacturing of roller sleeves. Mastering methods are selected for application considering replication quality required and structure complexity. Criteria for the replication quality are surface roughness and contour accuracy. Structure complexity is evaluated by shapes producible (e.g. linear, circular) and aspect ratio. Costs for the mastering process and structure lifetime are major cost factors. The alternative replication approaches are introduced and analyzed corresponding to the criteria presented. Advantages and drawbacks of each technology are discussed and exemplary applications are presented.

  15. Piece-wise quadratic approximations of arbitrary error functions for fast and robust machine learning.

    PubMed

    Gorban, A N; Mirkes, E M; Zinovyev, A

    2016-12-01

    Most machine learning approaches have stemmed from the application of the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1).

  16. A class of stochastic optimization problems with one quadratic & several linear objective functions and extended portfolio selection model

    NASA Astrophysics Data System (ADS)

    Xu, Jiuping; Li, Jun

    2002-09-01

    In this paper a class of stochastic multiple-objective programming problems with one quadratic, several linear objective functions and linear constraints has been introduced. The former model is transformed into a deterministic multiple-objective nonlinear programming model by means of the introduction of random variables' expectation. The reference direction approach is used to deal with linear objectives and results in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using the weighted sums. The quadratic problem is transformed into a linear (parametric) complementary problem, the basic formula for the proposed approach. The sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on reference direction and weighted sums. Varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier with the model. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized besides expectation and risk. The interactive approach is illustrated with a practical example.

  17. A Two-Dimensional Variational Analysis Method for NSCAT Ambiguity Removal: Methodology, Sensitivity, and Tuning

    NASA Technical Reports Server (NTRS)

    Hoffman, R. N.; Leidner, S. M.; Henderson, J. M.; Atlas, R.; Ardizzone, J. V.; Bloom, S. C.; Atlas, Robert (Technical Monitor)

    2001-01-01

    In this study, we apply a two-dimensional variational analysis method (2d-VAR) to select a wind solution from NASA Scatterometer (NSCAT) ambiguous winds. 2d-VAR determines a "best" gridded surface wind analysis by minimizing a cost function. The cost function measures the misfit to the observations, the background, and the filtering and dynamical constraints. The ambiguity closest in direction to the minimizing analysis is selected. 2d-VAR method, sensitivity and numerical behavior are described. 2d-VAR is compared to statistical interpolation (OI) by examining the response of both systems to a single ship observation and to a swath of unique scatterometer winds. 2d-VAR is used with both NSCAT ambiguities and NSCAT backscatter values. Results are roughly comparable. When the background field is poor, 2d-VAR ambiguity removal often selects low probability ambiguities. To avoid this behavior, an initial 2d-VAR analysis, using only the two most likely ambiguities, provides the first guess for an analysis using all the ambiguities or the backscatter data. 2d-VAR and median filter selected ambiguities usually agree. Both methods require horizontal consistency, so disagreements occur in clumps, or as linear features. In these cases, 2d-VAR ambiguities are often more meteorologically reasonable and more consistent with satellite imagery.

  18. An IoT Reader for Wireless Passive Electromagnetic Sensors.

    PubMed

    Galindo-Romera, Gabriel; Carnerero-Cano, Javier; Martínez-Martínez, José Juan; Herraiz-Martínez, Francisco Javier

    2017-03-28

    In recent years, many passive electromagnetic sensors have been reported. Some of these sensors are used for measuring harmful substances. Moreover, the response of these sensors is usually obtained with laboratory equipment. This approach highly increases the total cost and complexity of the sensing system. In this work, a novel low-cost and portable Internet-of-Things (IoT) reader for passive wireless electromagnetic sensors is proposed. The reader is used to interrogate the sensors within a short-range wireless link, avoiding direct contact with the substances under test. The IoT functionalities of the reader allow remote sensing from computers and handheld devices. For that purpose, the proposed design is based on four functional layers: the radiating layer, the RF interface, the IoT mini-computer and the power unit. In this paper, a demonstrator of the proposed reader is designed and manufactured. The demonstrator shows, through the remote measurement of different substances, that the proposed system can estimate the dielectric permittivity. It has been demonstrated that a linear approximation with a small error can be extracted from the reader measurements. It is remarkable that the proposed reader can be used with other types of electromagnetic sensors, which transduce the magnitude variations in the frequency domain.

  19. An IoT Reader for Wireless Passive Electromagnetic Sensors

    PubMed Central

    Galindo-Romera, Gabriel; Carnerero-Cano, Javier; Martínez-Martínez, José Juan; Herraiz-Martínez, Francisco Javier

    2017-01-01

    In recent years, many passive electromagnetic sensors have been reported. Some of these sensors are used for measuring harmful substances. Moreover, the response of these sensors is usually obtained with laboratory equipment. This approach highly increases the total cost and complexity of the sensing system. In this work, a novel low-cost and portable Internet-of-Things (IoT) reader for passive wireless electromagnetic sensors is proposed. The reader is used to interrogate the sensors within a short-range wireless link, avoiding direct contact with the substances under test. The IoT functionalities of the reader allow remote sensing from computers and handheld devices. For that purpose, the proposed design is based on four functional layers: the radiating layer, the RF interface, the IoT mini-computer and the power unit. In this paper, a demonstrator of the proposed reader is designed and manufactured. The demonstrator shows, through the remote measurement of different substances, that the proposed system can estimate the dielectric permittivity. It has been demonstrated that a linear approximation with a small error can be extracted from the reader measurements. It is remarkable that the proposed reader can be used with other types of electromagnetic sensors, which transduce the magnitude variations in the frequency domain. PMID:28350356

  20. A Low-Cost Point-of-Care Testing System for Psychomotor Symptoms of Depression Affecting Standing Balance: A Preliminary Study in India.

    PubMed

    Dutta, Arindam; Kumar, Robins; Malhotra, Suruchi; Chugh, Sanjay; Banerjee, Alakananda; Dutta, Anirban

    2013-01-01

    The World Health Organization estimated that major depression is the fourth most significant cause of disability worldwide for people aged 65 and older, where depressed older adults reported decreased independence, poor health, poor quality of life, functional decline, disability, and increased chronic medical problems. Therefore, the objectives of this study were (1) to develop a low-cost point-of-care testing system for psychomotor symptoms of depression and (2) to evaluate the system in community dwelling elderly in India. The preliminary results from the cross-sectional study showed a significant negative linear correlation between balance and depression. Here, monitoring quantitative electroencephalography along with the center of pressure for cued response time during functional reach tasks may provide insights into the psychomotor symptoms of depression where average slope of the Theta-Alpha power ratio versus average slope of baseline-normalized response time may be a candidate biomarker, which remains to be evaluated in our future clinical studies. Once validated, the biomarker can be used for monitoring the outcome of a comprehensive therapy program in conjunction with pharmacological interventions. Furthermore, the frequency of falls can be monitored with a mobile phone-based application where the propensity of falls during the periods of psychomotor symptoms of depression can be investigated further.
