Analysis of Different Cost Functions in the Geosect Airspace Partitioning Tool
NASA Technical Reports Server (NTRS)
Wong, Gregory L.
2010-01-01
A new cost function representing air traffic controller workload is implemented in the Geosect airspace partitioning tool. Geosect currently uses a combination of aircraft count and dwell time to select optimal airspace partitions that balance controller workload; this is referred to as the aircraft count/dwell time hybrid cost function. The new cost function is based on Simplified Dynamic Density, a measure of different aspects of air traffic controller workload. Three sectorizations are compared: the current sectorization, Geosect's sectorization based on the aircraft count/dwell time hybrid cost function, and Geosect's sectorization based on the Simplified Dynamic Density cost function. Each sectorization is evaluated for maximum and average workload, along with workload balance, using Simplified Dynamic Density as the workload measure. In addition, the Airspace Concept Evaluation System, a nationwide air traffic simulator, is used to determine the capacity and delay incurred by each sectorization. The sectorization resulting from the Simplified Dynamic Density cost function had a lower maximum workload measure than the other sectorizations, and the sectorization based on the combination of aircraft count and dwell time did a better job of balancing workload and capacity. However, the current sectorization had the lowest average workload, the highest sector capacity, and the least system delay.
A non-stationary cost-benefit based bivariate extreme flood estimation approach
NASA Astrophysics Data System (ADS)
Qi, Wei; Liu, Junguo
2018-02-01
Cost-benefit analysis and flood frequency analysis have been integrated into a comprehensive framework to estimate cost-effective design values. However, previous cost-benefit based extreme flood estimation rests on stationary assumptions and analyzes dependent flood variables separately. A Non-Stationary Cost-Benefit based bivariate design flood estimation (NSCOBE) approach is developed in this study to investigate the influence of non-stationarities, in both the dependence of flood variables and the marginal distributions, on extreme flood estimation. The dependence is modeled utilizing copula functions. Previous design flood selection criteria are not suitable for NSCOBE since they ignore the time-varying dependence of flood variables. Therefore, a risk calculation approach is proposed based on non-stationarities in both marginal probability distributions and copula functions. A case study with 54 years of observed data is utilized to illustrate the application of NSCOBE. Results show that NSCOBE can effectively integrate non-stationarities in both copula functions and marginal distributions into cost-benefit based design flood estimation. It is also found that there is a trade-off between the maximum probability of exceedance calculated from copula functions and that calculated from marginal distributions. This study for the first time provides a new approach towards a better understanding of the influence of non-stationarities in both copula functions and marginal distributions on extreme flood estimation, and could be beneficial to cost-benefit based non-stationary bivariate design flood estimation across the world.
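The joint-exceedance calculation at the heart of copula-based bivariate flood estimation can be sketched in a few lines. This is an illustrative example using a Gumbel copula with a hypothetical time-varying dependence parameter; it is not the authors' NSCOBE implementation, and the parameter values are made up.

```python
import math

def gumbel_copula(u, v, theta):
    """Gumbel copula C(u, v); theta >= 1 controls dependence strength
    (theta = 1 is independence)."""
    return math.exp(-(((-math.log(u)) ** theta
                       + (-math.log(v)) ** theta) ** (1.0 / theta)))

def joint_exceedance(u, v, theta):
    """P(U > u, V > v) for dependent flood peak/volume margins,
    via the inclusion-exclusion identity 1 - u - v + C(u, v)."""
    return 1.0 - u - v + gumbel_copula(u, v, theta)

def theta_t(theta0, trend, t):
    """Non-stationarity can be introduced by letting the dependence
    parameter drift with time t (hypothetical linear trend)."""
    return max(1.0, theta0 + trend * t)

# Joint exceedance of the 100-year marginal levels, 10 years into the record.
p = joint_exceedance(0.99, 0.99, theta_t(2.0, 0.02, 10))
```

Note the design point: a joint 1% exceedance probability is not the product of the marginal probabilities unless the variables are independent, which is exactly why the marginal and copula-based risk measures trade off against each other.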
A cost-performance model for ground-based optical communications receiving telescopes
NASA Technical Reports Server (NTRS)
Lesh, J. R.; Robinson, D. L.
1986-01-01
An analytical cost-performance model for a ground-based optical communications receiving telescope is presented. The model considers costs of existing telescopes as a function of diameter and field of view. This, coupled with communication performance as a function of receiver diameter and field of view, yields the appropriate telescope cost versus communication performance curve.
2010-01-01
Objectives: To determine the cost-effectiveness of open reduction internal fixation (ORIF) of displaced, midshaft clavicle fractures in adults. Design: Formal cost-effectiveness analysis based on a prospective, randomized controlled trial. Setting: Eight hospitals in Canada (seven university-affiliated and one community hospital). Patients/Participants: 132 adults with acute, completely displaced, midshaft clavicle fractures. Intervention: Clavicle ORIF versus nonoperative treatment. Main Outcome Measurements: Utilities derived from the SF-6D. Results: The base-case cost per quality-adjusted life year (QALY) gained for ORIF was $65,000. Cost-effectiveness improved to $28,150/QALY gained when the functional benefit from ORIF was assumed to be permanent, with cost per QALY gained falling below $50,000 when the functional advantage persisted for 9.3 years or more. In other sensitivity analyses, the cost per QALY gained for ORIF fell below $50,000 when ORIF cost less than $10,465 (base-case cost $13,668) or the long-term utility difference between nonoperative treatment and ORIF was greater than 0.034 (base-case difference 0.014). Short-term disutility associated with fracture healing also affected cost-effectiveness, with the cost per QALY gained for ORIF falling below $50,000 when the utility of a fracture treated nonoperatively prior to union was less than 0.617 (base-case utility 0.706) or when nonoperative treatment increased the time to union by 20 weeks (base-case difference 12 weeks). Conclusions: The cost-effectiveness of ORIF after acute clavicle fracture depended on the durability of the functional advantage of ORIF compared with nonoperative treatment. When functional benefits persisted for more than 9 years, ORIF had favorable value compared with many accepted health interventions. PMID:20577073
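The headline figures in this abstract are incremental cost-effectiveness ratios (ICERs). The calculation itself is simple; the sketch below uses hypothetical cost and utility inputs, since the study's full inputs are not reproduced here.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(cost_new, cost_old, qaly_new, qaly_old, threshold=50_000):
    """Decision rule used in the abstract: favorable if the ICER falls
    below a willingness-to-pay threshold (here $50,000/QALY)."""
    return icer(cost_new, cost_old, qaly_new, qaly_old) < threshold

# Hypothetical example: surgery costs $15,000 more and gains 0.3 QALYs.
ratio = icer(75_000, 60_000, 1.3, 1.0)
```

The sensitivity analyses in the abstract amount to asking how far each input (procedure cost, utility difference, duration of benefit) can move before the ICER crosses the threshold.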
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2010 CFR
2010-04-01
... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER... functionalization under its Direct Analysis assigns costs, revenues, debits or credits based upon the actual and/or...) Functionalization methods. (1) Direct analysis, if allowed or required by Table 1, assigns costs, revenues, debits...
Cost characteristics of hospitals.
Smet, Mike
2002-09-01
Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put the results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital (capacity and equipment)) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method to explain differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs are also briefly discussed.
Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes
NASA Astrophysics Data System (ADS)
Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.
As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab-scale chemical processing. The generalized cost function shows several non-obvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low-activity prelinear region, the moderate-activity linear region, and the high-activity power-law region. The overall form of the cost analysis function appears to robustly fit the power-law form.
Simulation analysis of a microcomputer-based, low-cost Omega navigation system
NASA Technical Reports Server (NTRS)
Lilley, R. W.; Salter, R. J., Jr.
1976-01-01
The current status of research on a proposed micro-computer-based, low-cost Omega Navigation System (ONS) is described. The design approach emphasizes minimum hardware, maximum software, and the use of a low-cost, commercially-available microcomputer. Currently under investigation is the implementation of a low-cost navigation processor and its interface with an omega sensor to complete the hardware-based ONS. Sensor processor functions are simulated to determine how many of the sensor processor functions can be handled by innovative software. An input data base of live Omega ground and flight test data was created. The Omega sensor and microcomputer interface modules used to collect the data are functionally described. Automatic synchronization to the Omega transmission pattern is described as an example of the algorithms developed using this data base.
Are Education Cost Functions Ready for Prime Time? An Examination of Their Validity and Reliability
ERIC Educational Resources Information Center
Duncombe, William; Yinger, John
2011-01-01
This article makes the case that cost functions are the best available methodology for ensuring consistency between a state's educational accountability system and its education finance system. Because they are based on historical data and well-known statistical methods, cost functions are a particularly flexible and low-cost way to forecast what…
ERIC Educational Resources Information Center
Lloyd, Blair P.; Wehby, Joseph H.; Weaver, Emily S.; Goldman, Samantha E.; Harvey, Michelle N.; Sherlock, Daniel R.
2015-01-01
Although functional analysis (FA) remains the standard for identifying the function of problem behavior for students with developmental disabilities, traditional FA procedures are typically costly in terms of time, resources, and perceived risks. Preliminary research suggests that trial-based FA may be a less costly alternative. The purpose of…
Papadopoulos, Anthony
2009-01-01
The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
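Treating the exponent of the power-law polynomial as a parameter rather than a constant, as argued above, can be illustrated with a grid search over candidate degrees combined with ordinary least squares in the linear coefficients. This is an illustrative sketch on synthetic data, not the author's estimation procedure; the symbols follow the form P = a + b·U^c for metabolic rate P at swimming speed U.

```python
def fit_power_law(speeds, rates, degrees):
    """Fit P = a + b * U**c by ordinary least squares in (a, b) for each
    candidate exponent c in `degrees`; return the best (a, b, c, sse).
    The exponent is treated as a free parameter, not fixed at c = 1."""
    best = None
    for c in degrees:
        x = [u ** c for u in speeds]
        n = len(x)
        sx, sy = sum(x), sum(rates)
        sxx = sum(xi * xi for xi in x)
        sxy = sum(xi * yi for xi, yi in zip(x, rates))
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        a = (sy - b * sx) / n
        sse = sum((a + b * xi - yi) ** 2 for xi, yi in zip(x, rates))
        if best is None or sse < best[3]:
            best = (a, b, c, sse)
    return best

# Synthetic data generated with a known degree c = 2.4.
speeds = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
rates = [1.0 + 0.5 * u ** 2.4 for u in speeds]
a, b, c, sse = fit_power_law(speeds, rates, degrees=[1.0, 1.5, 2.0, 2.4, 3.0])
```

Fitting with c forced to 1 would give biased estimates of the standard metabolic rate (the intercept a) whenever the true degree exceeds one, which is the abstract's central point.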
Practice expenses in the MFS (Medicare fee schedule): the service-class approach.
Latimer, E A; Kane, N M
1995-01-01
The practice expense component of the Medicare fee schedule (MFS), which is currently based on historical charges and rewards physician procedures at the expense of cognitive services, is due to be changed by January 1, 1998. The Physician Payment Review Commission (PPRC) and others have proposed microcosting direct costs and allocating all indirect costs on a common basis, such as physician time or work plus direct costs. Without altering the treatment of direct costs, the service-class approach disaggregates indirect costs into six practice function costs. The practice function costs are then allocated to classes of services using cost-accounting and statistical methods. This approach would make the practice expense component more resource-based than other proposed alternatives.
1994-09-01
costs are the costs associated with a particular piece of equipment that do not change despite changes in variable operating cost (Horngren and Foster...The operating and maintenance costs account for direct and indirect costs associated with their respective functions and vary with the utilization of...each vehicle. The operating direct cost includes all on-base and off-base fuel cost. Indirect operations costs account for bench stock items
Home health care cost-function analysis
Hay, Joel W.; Mandes, George
1984-01-01
An exploratory home health care (HHC) cost-function model is estimated using State rate-setting data for the 74 traditional (nonprofit) Connecticut agencies. The analysis demonstrates U-shaped average cost curves for agencies' provision of skilled nursing visits, with substantial diseconomies of scale in the observable range. It is determined from the estimated cost function that the sample representative agency is providing fewer visits than optimal, and its marginal cost is significantly below average cost. The finding that an agency's costs are predominantly related to output levels, with little systematic variation due to other agency or patient characteristics, suggests that the economic inefficiency in a cost-based HHC reimbursement policy may be substantial. PMID:10310596
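The relationship described here, with marginal cost below average cost on the declining arm of a U-shaped average cost curve, can be illustrated numerically. The quadratic cost function below is a hypothetical stand-in, not the estimated HHC model.

```python
def avg_cost(cost, q):
    """Average cost AC(q) = C(q) / q."""
    return cost(q) / q

def marginal_cost(cost, q, dq=1e-6):
    """Marginal cost MC(q) ~ dC/dq, by central difference."""
    return (cost(q + dq) - cost(q - dq)) / (2 * dq)

# Hypothetical cost function: fixed cost 100, C(q) = 100 + 2q + 0.01q^2.
# AC is U-shaped, with its minimum where MC crosses AC (here at q = 100).
def c(q):
    return 100 + 2 * q + 0.01 * q ** 2
```

At output below the AC minimum, MC < AC, which is the abstract's finding for the representative agency: it operates on the declining arm, so extra visits would lower average cost.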
2013-01-01
Background: Day-hospital-based treatment programmes have been recommended for poorly functioning patients with personality disorders (PD). However, more research is needed to confirm the cost-effectiveness of such extensive programmes over other, presumably simpler, treatment formats. Methods: This study compared health service costs and psychosocial functioning for PD patients randomly allocated to either a day-hospital-based treatment programme combining individual and group psychotherapy in a step-down format, or outpatient individual psychotherapy at a specialist practice. It included 107 PD patients, 46% of whom had borderline PD, and 40% of whom had avoidant PD. Costs included the two treatment conditions and additional primary and secondary in- and outpatient services. Psychosocial functioning was assessed using measures of global (observer-rated GAF) and occupational (self-report) functioning. Repeated assessments over three years were analysed using mixed models. Results: The costs of step-down treatment were higher than those of outpatient treatment, but these high costs were compensated by considerably lower costs of other health services. However, costs and clinical gains depended on the type of PD. For borderline PD patients, cost-effectiveness did not differ by treatment condition. Health service costs declined during the trial, and functioning improved to mild impairment levels (GAF > 60). For avoidant PD patients, considerable adjuvant health services supplemented the outpatient format. Clinical improvements were nevertheless superior to those in the step-down condition. Conclusion: Our results indicate that decisions on treatment format should differentiate between PD types. For borderline PD patients, the costs and gains of the step-down and outpatient treatment conditions did not differ. For avoidant PD patients, the outpatient format was a better alternative, relying, however, on costly additional health services in the early phase of treatment.
Trial registration Clinical Trials NCT00378248 PMID:24268099
Lindholm, C; Gustavsson, A; Jönsson, L; Wimo, A
2013-05-01
Because the prevalence of many brain disorders rises with age, and brain disorders are costly, the economic burden of brain disorders will increase markedly during the next decades. The purpose of this study is to analyze how the costs to society vary with different levels of functioning and with the presence of a brain disorder. Resource utilization and costs from a societal viewpoint were analyzed versus cognition, activities of daily living (ADL), instrumental activities of daily living (IADL), brain disorder diagnosis and age in a population-based cohort of people aged 65 years and older in Nordanstig in Northern Sweden. Descriptive statistics, non-parametric bootstrapping and a generalized linear model (GLM) were used for the statistical analyses. Most people were zero users of care. Societal costs of dementia were by far the highest, ranging from SEK 262,000 (mild) to SEK 519,000 per year (severe dementia). In univariate analysis, all measures of functioning were significantly related to costs. When controlling for ADL and IADL in the multivariate GLM, cognition did not have a statistically significant effect on total cost. The presence of a brain disorder did not impact total cost when controlling for function. The greatest shift in costs was seen when comparing no dependency in ADL and dependency in one basic ADL function. It is the level of functioning, rather than the presence of a brain disorder diagnosis, that predicts costs. ADLs are better explanatory variables of costs than the Mini-Mental State Examination. Most people in a population-based cohort are zero users of care. Copyright © 2012 John Wiley & Sons, Ltd.
Functional design specification: NASA form 1510
NASA Technical Reports Server (NTRS)
1979-01-01
The 1510 worksheet used to calculate approved facility project cost estimates is explained. Topics covered include data base considerations, program structure, relationship of the 1510 form to the 1509 form, and functions which the application must perform: WHATIF, TENENTER, TENTYPE, and data base utilities. A sample NASA form 1510 printout and a 1510 data dictionary are presented in the appendices along with the cost adjustment table, the floppy disk index, and methods for generating the calculated values (TENCALC) and for calculating cost adjustment (CONSTADJ). Storage requirements are given.
NASA Astrophysics Data System (ADS)
Das, Mangal; Kumar, Amitesh; Singh, Rohit; Than Htay, Myo; Mukherjee, Shaibal
2018-02-01
A single synaptic device with inherent learning and memory functions is demonstrated, based on a forming-free amorphous Y2O3 (yttria) memristor fabricated by a dual ion beam sputtering system. Synaptic functions such as nonlinear transmission characteristics, long-term plasticity, short-term plasticity and ‘learning behavior (LB)’ are achieved using a single synaptic device based on a cost-effective metal-insulator-semiconductor (MIS) structure. An ‘LB’ function is demonstrated, for the first time in the literature, for a yttria-based memristor, which bears a resemblance to certain memory functions of biological systems. The realization of key synaptic functions in a cost-effective MIS structure would promote much cheaper synapses for artificial neural networks.
Zatsiorsky, Vladimir M.
2011-01-01
One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907
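The forward problem underlying such analyses, e.g. force sharing in grasping, has a closed form when the additive cost is quadratic and the constraint is linear: minimizing the weighted sum of squared finger forces subject to a total-force constraint gives shares proportional to the inverse weights (by the Lagrange conditions f_i = λ/(2w_i)). The weights below are illustrative, not taken from the article.

```python
def share_force(total, weights):
    """Minimize sum_i w_i * f_i**2 subject to sum_i f_i = total.
    Lagrange conditions give f_i proportional to 1 / w_i."""
    inv = [1.0 / w for w in weights]
    s = sum(inv)
    return [total * iv / s for iv in inv]

# Hypothetical two-finger example: the 'cheaper' finger takes the larger share.
forces = share_force(9.0, [1.0, 2.0])
```

Inverse optimization runs this in reverse: given observed shares across many total forces, recover the weights (more generally, the additive cost terms), which is where the Uniqueness Theorem matters.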
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
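A translog cost function with input share equations of the kind described can be sketched as follows. The coefficients below are hypothetical, chosen only to satisfy linear homogeneity in input prices (so the cost shares sum to one); this is not the estimated water-delivery model, and output-price interaction terms are omitted for brevity.

```python
import math

def translog_cost(p, y, a0, a, g, by):
    """ln C = a0 + sum_i a_i ln p_i
            + 0.5 * sum_ij g_ij ln p_i ln p_j + by * ln y.
    Cost shares follow from Shephard's lemma: s_i = d ln C / d ln p_i."""
    lp = [math.log(pi) for pi in p]
    n = len(p)
    ln_c = a0 + sum(ai * li for ai, li in zip(a, lp)) + by * math.log(y)
    ln_c += 0.5 * sum(g[i][j] * lp[i] * lp[j]
                      for i in range(n) for j in range(n))
    shares = [a[i] + sum(g[i][j] * lp[j] for j in range(n)) for i in range(n)]
    return math.exp(ln_c), shares

# Hypothetical two-input example: sum(a) = 1 and zero row/column sums in g
# impose homogeneity of degree one in prices.
cost, shares = translog_cost(p=[2.0, 3.0], y=5.0, a0=1.0,
                             a=[0.6, 0.4],
                             g=[[0.1, -0.1], [-0.1, 0.1]], by=0.8)
```

Jointly estimating the share equations with the cost function, as the abstract describes, adds degrees of freedom without adding parameters, which is why it improves efficiency of the estimates.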
Bertoldi, Eduardo G; Stella, Steffen F; Rohde, Luis Eduardo P; Polanczyk, Carisi A
2017-05-04
The aim of this research is to evaluate the relative cost-effectiveness of functional and anatomical strategies for diagnosing stable coronary artery disease (CAD), using exercise (Ex)-ECG, stress echocardiogram (ECHO), single-photon emission CT (SPECT), coronary CT angiography (CTA) or stress cardiac magnetic resonance (C-MRI). Decision-analytical model, comparing strategies of sequential tests for evaluating patients with possible stable angina at low, intermediate and high pretest probability of CAD, from the perspective of a developing nation's public healthcare system. Hypothetical cohort of patients with pretest probability of CAD between 20% and 70%. The primary outcome is cost per correct diagnosis of CAD. The proportion of false-positive or false-negative tests and the number of unnecessary tests performed were also evaluated. Strategies using Ex-ECG as the initial test were the least costly alternatives but generated more frequent false-positive initial tests and false-negative final diagnoses. Strategies based on CTA or ECHO as the initial test were the most attractive and resulted in similar cost-effectiveness ratios (I$ 286 and I$ 305 per correct diagnosis, respectively). A strategy based on C-MRI was highly effective for diagnosing stable CAD, but its high cost resulted in an unfavourable incremental cost-effectiveness ratio (ICER) in moderate-risk and high-risk scenarios. Non-invasive strategies based on SPECT were dominated. An anatomical diagnostic strategy based on CTA is a cost-effective option for CAD diagnosis. Functional strategies performed equally well when based on ECHO. C-MRI yielded an acceptable ICER only at low pretest probability, and SPECT was not cost-effective in our analysis. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
ERIC Educational Resources Information Center
Arnold, Robert
Problems in educational cost accounting and a new cost accounting approach are described in this paper. The limitations of the individualized cost (student units) approach and the comparative cost approach (in the form of fund-function-object) are illustrated. A new strategy, an activity-based system of accounting, is advocated. Borrowed from…
Carbon-Based Functional Materials Derived from Waste for Water Remediation and Energy Storage.
Ma, Qinglang; Yu, Yifu; Sindoro, Melinda; Fane, Anthony G; Wang, Rong; Zhang, Hua
2017-04-01
Carbon-based functional materials hold the key for solving global challenges in the areas of water scarcity and the energy crisis. Although carbon nanotubes (CNTs) and graphene have shown promising results in various fields of application, their high preparation cost and low production yield still dramatically hinder their wide practical applications. Therefore, there is an urgent call for preparing carbon-based functional materials from low-cost, abundant, and sustainable sources. Recent innovative strategies have been developed to convert various waste materials into valuable carbon-based functional materials. These waste-derived carbon-based functional materials have shown great potential in many applications, especially as sorbents for water remediation and electrodes for energy storage. Here, the research progress in the preparation of waste-derived carbon-based functional materials is summarized, along with their applications in water remediation and energy storage; challenges and future research directions in this emerging research field are also discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
1993-12-31
A new cost function is postulated and an algorithm that employs this cost function is proposed for the learning of...updates the controller parameters from time to time [53]. The learning control algorithm consists of updating the parameter estimates as used in the...proposed cost function with the other learning-type algorithms, such as those based upon learning of iterative tasks [Kawamura-85], variable structure
NASA Astrophysics Data System (ADS)
Sukhikh, E.; Sheino, I.; Vertinsky, A.
2017-09-01
Modern modalities of radiation treatment therapy allow irradiation of the tumor to high dose values and irradiation of organs at risk (OARs) to low dose values at the same time. In this paper we study optimal radiation treatment plans made in the Monaco system. The first aim of this study was to evaluate the dosimetric features of the Monaco treatment planning system using biological versus dose-based cost functions for the OARs and irradiation targets (namely tumors) when the full potential of the built-in biological cost functions is utilized. The second aim was to develop criteria for the evaluation of radiation dosimetry plans for patients based on the macroscopic radiobiological criteria TCP/NTCP (tumor control probability/normal tissue complication probability). In the framework of the study, four dosimetric plans were created utilizing the full extent of biological and physical cost functions, using dose-calculation-based treatment planning for IMRT step-and-shoot delivery of stereotactic body radiation therapy (SBRT) in a prostate case (5 fractions of 7 Gy).
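A common macroscopic form of the TCP criterion mentioned above is the Poisson model combined with linear-quadratic cell survival. The sketch below uses hypothetical radiobiological parameters, not values from the study, and is only one of several TCP formulations in use.

```python
import math

def tcp_poisson(dose_per_fx, n_fx, alpha, beta, n_clonogens):
    """Poisson TCP with linear-quadratic cell kill:
    per-fraction surviving fraction S = exp(-(alpha*d + beta*d**2)),
    TCP = exp(-N0 * S**n) for N0 initial clonogenic cells."""
    s = math.exp(-(alpha * dose_per_fx + beta * dose_per_fx ** 2))
    return math.exp(-n_clonogens * s ** n_fx)

# The SBRT schedule from the abstract is 5 fractions of 7 Gy; alpha, beta
# and the clonogen number here are illustrative assumptions.
p_control = tcp_poisson(7.0, 5, alpha=0.15, beta=0.05, n_clonogens=1e7)
```

Biological cost functions in a planning system optimize quantities of this kind directly, rather than purely geometric dose-volume objectives, which is the comparison the study sets up.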
Liu, Derong; Wang, Ding; Wang, Fei-Yue; Li, Hongliang; Yang, Xiong
2014-12-01
In this paper, the infinite horizon optimal robust guaranteed cost control of continuous-time uncertain nonlinear systems is investigated using neural-network-based online solution of Hamilton-Jacobi-Bellman (HJB) equation. By establishing an appropriate bounded function and defining a modified cost function, the optimal robust guaranteed cost control problem is transformed into an optimal control problem. It can be observed that the optimal cost function of the nominal system is nothing but the optimal guaranteed cost of the original uncertain system. A critic neural network is constructed to facilitate the solution of the modified HJB equation corresponding to the nominal system. More importantly, an additional stabilizing term is introduced for helping to verify the stability, which reinforces the updating process of the weight vector and reduces the requirement of an initial stabilizing control. The uniform ultimate boundedness of the closed-loop system is analyzed by using the Lyapunov approach as well. Two simulation examples are provided to verify the effectiveness of the present control approach.
Relationship between functional disability and costs one and two years post stroke
Lekander, Ingrid; Willers, Carl; von Euler, Mia; Lilja, Mikael; Sunnerhagen, Katharina S.; Pessah-Rasmussen, Hélène; Borgström, Fredrik
2017-01-01
Background and purpose: Stroke affects mortality, functional ability and quality of life, and incurs costs. The primary objective of this study was to estimate the costs of stroke care in Sweden by level of disability and stroke type (ischemic (IS) or hemorrhagic stroke (ICH)). Method: Resource use during the first and second years following a stroke was estimated based on a research database containing linked data from several registries. Costs were estimated for the acute and post-acute management of stroke, including direct (health care consumption and municipal services) and indirect (productivity losses) costs. Resources and costs were estimated per stroke type and functional disability, categorised by the Modified Rankin Scale (mRS). Results: The results indicated that the average costs per patient following a stroke were 350,000 SEK/€37,000 to 480,000 SEK/€50,000, depending on stroke type and whether it was the first or second year post stroke. Large variations were identified between subgroups of functional disability and stroke type, ranging from annual costs of 100,000 SEK/€10,000 to 1,100,000 SEK/€120,000 per patient, with higher costs for patients with ICH compared with IS and increasing costs with more severe functional disability. Conclusion: Functional outcome is a major determinant of the costs of stroke care. The stroke type associated with worse outcome (ICH) was also consistently associated with higher costs. Measures to improve function are not only important to individual patients and their families but may also decrease the societal burden of stroke. PMID:28384164
Mangla, Sundeep; O'Connell, Keara; Kumari, Divya; Shahrzad, Maryam
2016-01-20
Ischemic strokes result in significant healthcare expenditures (direct costs) and loss of quality-adjusted life years (QALYs) (indirect costs). Interventional therapy has demonstrated improved functional outcomes in patients with large vessel occlusions (LVOs), which are likely to reduce the economic burden of strokes. To develop a novel real-world dollar model to assess the direct and indirect cost-benefit of mechanical embolectomy compared with medical treatment with intravenous tissue plasminogen activator (IV tPA) based on shifts in modified Rankin scores (mRS). A cost model was developed including multiple parameters to account for both direct and indirect stroke costs. These were adjusted based upon functional outcome (mRS). The model compared IV tPA with mechanical embolectomy to assess the costs and benefits of both therapies. Direct stroke-related costs included hospitalization, inpatient and outpatient rehabilitation, home care, skilled nursing facilities, and long-term care facility costs. Indirect costs included years of life expectancy lost and lost QALYs. Values for the model cost parameters were derived from numerous resources and functional outcomes were derived from the MR CLEAN study as a reflective sample of LVOs. Direct and indirect costs and benefits for the two treatments were assessed using Microsoft Excel 2013. This cost-benefit model found a cost-benefit of mechanical embolectomy over IV tPA of $163 624.27 per patient and the cost benefit for 50 000 patients on an annual basis is $8 181 213 653.77. If applied widely within the USA, mechanical embolectomy will significantly reduce the direct and indirect financial burden of stroke ($8 billion/50 000 patients). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
NASA Astrophysics Data System (ADS)
Naguib, Hussein; Bol, Igor I.; Lora, J.; Chowdhry, R.
1994-09-01
This paper presents a case study on the implementation of activity-based costing (ABC) to calculate the cost per wafer and to drive cost reduction efforts for a new IC product line. The cost reduction activities were conducted through the efforts of 11 cross-functional teams, which included members of the finance, purchasing, technology development, process engineering, equipment engineering, production control, and facility groups. The activities of these cross-functional teams were coordinated by a cost council. It will be shown that these activities resulted in a 57% reduction in the wafer manufacturing cost of the new product line. Factors that contributed to the successful implementation of an ABC management system are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, G.D.
The purpose of the dissertation is to examine the impact of rate-of-return regulation on the cost of transporting natural gas in interstate commerce. Of particular interest is the effect of the regulation on the input choice of a firm. Does regulation induce a regulated firm to produce its selected level of output at greater than minimum cost? The theoretical model is based on the work of Rolf Faere and James Logan, who investigate the duality relationship between the cost and production functions of a rate-of-return regulated firm. Faere and Logan derive the cost function for a regulated firm as the minimum cost of producing the firm's selected level of output, subject to the regulatory constraint. The regulated cost function is used to recover the unregulated cost function. A firm's unregulated cost function is the minimum cost of producing its selected level of output. Characteristics of the production technology are obtained from duality between the production and unregulated cost functions. Using data on 20 pipeline companies from 1977 to 1987, the author estimates a random effects model that consists of a regulated cost function and its associated input share equations. The model is estimated as a set of seemingly unrelated regressions. The empirical results are used to test the Faere and Logan theory and the traditional Averch-Johnson hypothesis of overcapitalization. Parameter estimates are used to recover the unregulated cost function and to calculate the amount by which transportation costs are increased by the regulation of the industry. Empirical results show that a firm's transportation cost decreases as the allowed rate of return increases and the regulatory constraint becomes less tight. Elimination of the regulatory constraint would lead to a reduction in costs on average of 5.278%. There is evidence that firms overcapitalize on pipeline capital. There is inconclusive evidence on whether firms overcapitalized on compressor station capital.
Moghimi-Dehkordi, Bijan; Vahedi, Mohsen; Pourhoseingholi, Mohammad Amin; Khoshkrood Mansoori, Babak; Safaee, Azadeh; Habibi, Manijeh; Pourhoseingholi, Asma; Zali, Mohammad Reza
2011-10-01
Few population-based studies on the economic burden of functional bowel disorders (FBD) have been published from developing countries such as Iran. This study aimed to estimate their direct and indirect costs for five groups of patients: irritable bowel syndrome (IBS), functional constipation (FC), unspecified-FBD (U-FBD), functional abdominal bloating (FAB) and functional diarrhea (FD). A total of 18,180 adults randomly sampled from Tehran, Iran (2006-2007) were interviewed using two questionnaires based on the Rome III criteria to detect FBD patients and to estimate their medical expenses (such as visiting the doctor, drugs, hospitalization and laboratory tests) and productivity loss in the previous 6 months. All costs were converted to dollar purchasing power parity (PPP$) to facilitate cross-country comparisons. The mean total 6-month costs were approximately 160, 147, 103, 96 and 42 PPP$ for IBS, FC, U-FBD, FAB and FD, respectively. The highest proportion of drug consumption was found in IBS patients. The longest mean duration of absence from work was seen in IBS patients (2.26 days). Overall, doctor visit costs accounted for approximately one third of the total costs for FBD, followed by hospitalization. A higher indirect cost of illness was found in IBS (54 PPP$), whereas it was zero in FD. The economic burden of FBD appears to be moderately high in Iran, and it imposes a relatively heavy financial burden on the Iranian national health system because of its high prevalence and its impact on quality of life, productivity and waste of resources. © 2011 The Authors. Journal of Digestive Diseases © 2011 Chinese Medical Association Shanghai Branch, Chinese Society of Gastroenterology, Renji Hospital Affiliated to Shanghai Jiaotong University School of Medicine and Blackwell Publishing Asia Pty Ltd.
NASA Astrophysics Data System (ADS)
Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric
2017-01-01
Recent methods of reinforcement learning have made it possible to solve difficult, high-dimensional robotic tasks under unknown dynamics using iterative Linear Quadratic Gaussian control theory. These algorithms are based on building a local time-varying linear model of the dynamics from data gathered through interaction with the environment. In such tasks, the cost function is often expressed directly in terms of the state and control variables so that it can be locally quadratized to run the algorithm. If the cost is expressed in terms of other variables, a model is required to compute the cost function from the variables manipulated. We propose a method to learn the cost function directly from the data, in the same way as for the dynamics. This way, the cost function can be defined in terms of any measurable quantity and thus can be chosen more appropriately for the task to be carried out. With our method, any sensor information can be used to design the cost function. We demonstrate the efficiency of this method by simulating, with the V-REP software, the learning of a Cartesian positioning task on several industrial robots with different characteristics. The robots are controlled in joint space and no model is provided a priori. Our results are compared with another model-free technique, which consists of writing the cost function as a state variable.
Vehicle operating costs, fuel consumption, and pavement type condition factors
DOT National Transportation Integrated Search
1982-06-01
This report presents updated vehicle operating cost tables which may be used by a highway agency for estimation of vehicle operating costs as a function of operational and roadway variables. These results, partially based on fuel consumption tests on...
Bhat; Bergstrom; Teasley; Bowker; Cordell
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such as motor boating and waterskiing, developed and primitive camping, coldwater fishing, sightseeing and pleasure driving, and big game hunting for each ecoregion. While our ecoregional approach differs conceptually from previous work, our results appear consistent with previous travel cost method valuation studies. KEY WORDS: Recreation; Ecoregion; Travel cost method; Truncated Poisson model
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method achieves a significant area improvement compared with previous designs.
Chan, B
2015-01-01
Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). 
Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
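The 2-state Markov structure described in this record can be sketched in a few lines: an alive/dead model accumulating discounted costs and QALYs over a lifetime horizon with a 5% annual discount rate. All transition probabilities, costs, and utilities below are illustrative placeholders, not the inputs of the Ontario analysis.

```python
# Minimal 2-state (alive/dead) Markov cost-effectiveness sketch.
# Numeric inputs are illustrative placeholders only.

def markov_lifetime(p_die, cost_year1, cost_later, utility,
                    discount=0.05, horizon=40):
    """Accumulate discounted costs and QALYs over a lifetime horizon."""
    alive, costs, qalys = 1.0, 0.0, 0.0
    for year in range(horizon):
        df = 1.0 / (1.0 + discount) ** year
        cost = cost_year1 if year == 0 else cost_later
        costs += alive * cost * df
        qalys += alive * utility * df
        alive *= (1.0 - p_die)  # transition alive -> dead

    return costs, qalys

# Usual care vs. higher-intensity physiotherapy (placeholder inputs):
c0, q0 = markov_lifetime(0.06, 30000, 5000, 0.60)
c1, q1 = markov_lifetime(0.06, 33000, 4000, 0.65)  # higher upfront cost, better function
icer = (c1 - c0) / (q1 - q0)
```

With these placeholder inputs the higher-intensity strategy accumulates lower lifetime costs and more QALYs (a dominant strategy, hence a negative ICER), mirroring the direction of the result reported above; the real conclusion depends entirely on the actual model inputs, which the record notes are highly uncertain.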
The cost of a small membrane bioreactor.
Lo, C H; McAdam, E; Judd, S
2015-01-01
The individual cost contributions to the mechanical components of a small membrane bioreactor (MBR) (100-2,500 m3/d flow capacity) are itemised and collated to generate overall capital and operating costs (CAPEX and OPEX) as a function of size. The outcomes are compared to those from previously published detailed cost studies provided for both very small containerised plants (<40 m3/day capacity) and larger municipal plants (2,200-19,000 m3/d). Cost curves, as a function of flow capacity, determined for OPEX, CAPEX and net present value (NPV) based on the heuristic data used indicate a logarithmic function for OPEX and a power-based one for the CAPEX. OPEX correlations were in good quantitative agreement with those reported in the literature. Disparities in the calculated CAPEX trend compared with reported data were attributed to differences in assumptions concerning cost contributions. More reasonable agreement was obtained with the reported membrane separation component CAPEX data from published studies. The heuristic approach taken appears appropriate for small-scale MBRs with minimal costs associated with installation. An overall relationship of NPV = (a t^b) Q^(-c ln t + d) was determined for the net present value, where a = 1.265, b = 0.44, c = 0.00385 and d = 0.868 according to the dataset employed for the analysis.
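The fitted relationship can be evaluated directly. A minimal sketch, using the coefficients quoted above and interpreting the flattened expression as NPV = (a t^b) · Q^(−c ln t + d); the monetary units of the result are not stated in this record.

```python
# Evaluate the reported NPV relationship NPV = (a t^b) * Q^(-c ln t + d).
# Coefficients are those quoted in the abstract; units are unspecified there.
import math

def mbr_npv(flow_m3_per_day, years, a=1.265, b=0.44, c=0.00385, d=0.868):
    exponent = -c * math.log(years) + d
    return (a * years ** b) * flow_m3_per_day ** exponent

small = mbr_npv(100, 20)    # smallest plant in the studied range, 20-year life
large = mbr_npv(2500, 20)   # largest plant in the studied range
```

Because the exponent on Q stays below 1 for realistic lifetimes, NPV grows sub-linearly with flow capacity, i.e. the relationship encodes economies of scale.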
Automotive Maintenance Data Base for Model Years 1976-1979. Part I
DOT National Transportation Integrated Search
1980-12-01
An update of the existing data base was developed to include life cycle maintenance costs of representative vehicles for the model years 1976-1979. Repair costs as a function of time are also developed for a passenger car in each of the compact, subc...
Noun-phrase anaphors and focus: the informational load hypothesis.
Almor, A
1999-10-01
The processing of noun-phrase (NP) anaphors in discourse is argued to reflect constraints on the activation and processing of semantic information in working memory. The proposed theory views NP anaphor processing as an optimization process that is based on the principle that processing cost, defined in terms of activating semantic information, should serve some discourse function--identifying the antecedent, adding new information, or both. In a series of 5 self-paced reading experiments, anaphors' functionality was manipulated by changing the discourse focus, and their cost was manipulated by changing the semantic relation between the anaphors and their antecedents. The results show that reading times of NP anaphors reflect their functional justification: Anaphors were read faster when their cost had a better functional justification. These results are incompatible with any theory that treats NP anaphors as one homogeneous class regardless of discourse function and processing cost.
NASA Technical Reports Server (NTRS)
Lee, Taesik; Jeziorek, Peter
2004-01-01
Large complex projects cost large sums of money throughout their life cycle for a variety of reasons and causes. For such large programs, the credible estimation of the project cost, a quick assessment of the cost of making changes, and the management of the project budget with effective cost reduction determine the viability of the project. Cost engineering that deals with these issues requires a rigorous method and systematic processes. This paper introduces a logical framework to achieve effective cost engineering. The framework is built upon the Axiomatic Design process. The structure in the Axiomatic Design process provides a good foundation to closely tie engineering design and cost information together. The cost framework presented in this paper is a systematic link between the functional domain (FRs), physical domain (DPs), cost domain (CUs), and a task/process-based model. The FR-DP map relates a system's functional requirements to design solutions across all levels and branches of the decomposition hierarchy. DPs are mapped into CUs, which provides a means to estimate the cost of design solutions - DPs - from the cost of the physical entities in the system - CUs. The task/process model describes the iterative process of developing each of the CUs, and is used to estimate the cost of CUs. By linking the four domains, this framework provides superior traceability from requirements to cost information.
Valuation of opportunity costs by rats working for rewarding electrical brain stimulation.
Solomon, Rebecca Brana; Conover, Kent; Shizgal, Peter
2017-01-01
Pursuit of one goal typically precludes simultaneous pursuit of another. Thus, each exclusive activity entails an "opportunity cost:" the forgone benefits from the next-best activity eschewed. The present experiment estimates, in laboratory rats, the function that maps objective opportunity costs into subjective ones. In an operant chamber, rewarding electrical brain stimulation was delivered when the cumulative time a lever had been depressed reached a criterion duration. The value of the activities forgone during this duration is the opportunity cost of the electrical reward. We determined which of four functions best describes how objective opportunity costs, expressed as the required duration of lever depression, are translated into their subjective equivalents. The simplest account is the identity function, which equates subjective and objective opportunity costs. A variant of this function called the "sigmoidal-slope function," converges on the identity function at longer durations but deviates from it at shorter durations. The sigmoidal-slope function has the form of a hockey stick. The flat "blade" denotes a range over which opportunity costs are subjectively equivalent; these durations are too short to allow substitution of more beneficial activities. The blade extends into an upward-curving portion over which costs become discriminable and finally into the straight "handle," over which objective and subjective costs match. The two remaining functions are based on hyperbolic and exponential temporal discounting, respectively. The results are best described by the sigmoidal-slope function. That this is so suggests that different principles of intertemporal choice are involved in the evaluation of time spent working for a reward or waiting for its delivery. The subjective opportunity-cost function plays a key role in the evaluation and selection of goals. 
An accurate description of its form and parameters is essential to successful modeling and prediction of instrumental performance and reward-related decision making.
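The hockey-stick shape described above can be illustrated with a simple saturating form: subjective cost is roughly constant over very short required durations (the flat "blade," where no better activity could be substituted anyway) and matches the objective duration for long ones (the straight "handle"). This particular parameterization is an assumption for illustration only, not the authors' fitted sigmoidal-slope function.

```python
# Illustrative "hockey-stick" mapping from objective to subjective
# opportunity cost. The formula is an assumption for illustration,
# not the fitted sigmoidal-slope function from the experiment.
def subjective_cost(duration, floor=1.0, sharpness=4.0):
    # ~floor for duration << floor (the flat blade);
    # ~duration for duration >> floor (the identity handle)
    return (duration ** sharpness + floor ** sharpness) ** (1.0 / sharpness)
```

Very short durations map to nearly the same subjective cost (they are not discriminable), while long durations are passed through essentially unchanged.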
Automotive Maintenance Data Base for Model Years 1976-1979. Part II : Appendix E and F
DOT National Transportation Integrated Search
1980-12-01
An update of the existing data base was developed to include life cycle maintenance costs of representative vehicles for the model years 1976-1979. Repair costs as a function of time are also developed for a passenger car in each of the compact, subc...
NASA Astrophysics Data System (ADS)
Ibraheem, Omveer, Hasan, N.
2010-10-01
A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. The technique combines a genetic algorithm (GA) with simulated annealing (SA) to tune a PI-based regulator. GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time: the feasible solution decreases the cost function rather than fully minimizing it.
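The GA + SA hybrid can be sketched as a population search whose children are accepted with a simulated-annealing rule. The quadratic surrogate cost standing in for the AGC performance index, and every parameter below, are illustrative assumptions, not the paper's power-system model.

```python
# Sketch of a GA + simulated-annealing (GASA) hybrid tuning PI gains
# (Kp, Ki) against a surrogate quadratic cost. All values illustrative.
import math
import random

random.seed(0)

def cost(kp, ki):
    # surrogate for the AGC performance index, minimized at (2.0, 0.5)
    return (kp - 2.0) ** 2 + 4.0 * (ki - 0.5) ** 2

def gasa(pop_size=10, generations=60, temp=1.0, cooling=0.95):
    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(pop_size)]
    best = min(pop, key=lambda g: cost(*g))
    for _ in range(generations):
        # GA step: keep the fitter half, generate children by mutation
        pop.sort(key=lambda g: cost(*g))
        survivors = pop[: pop_size // 2]
        children = [(kp + random.gauss(0, 0.3), ki + random.gauss(0, 0.1))
                    for kp, ki in survivors]
        # SA step: accept a worse child with probability exp(-dcost / T)
        new_pop = list(survivors)
        for parent, child in zip(survivors, children):
            d = cost(*child) - cost(*parent)
            if d < 0 or random.random() < math.exp(-d / temp):
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop, temp = new_pop, temp * cooling
        best = min([best] + pop, key=lambda g: cost(*g))
    return best

kp, ki = gasa()
```

The annealed acceptance lets the search escape poor local choices early (high temperature) while becoming greedy as the temperature cools.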
An Extension of RSS-based Model Comparison Tests for Weighted Least Squares
2012-08-22
use the model comparison test statistic to analyze the null hypothesis. Under the null hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS^H) = 10.3040×10^6. Under the alternative hypothesis, the weighted least squares cost functional is J_WLS(q̂_WLS) = 8.8394×10^6. Thus the model
Application of target costing in machining
NASA Astrophysics Data System (ADS)
Gopalakrishnan, Bhaskaran; Kokatnur, Ameet; Gupta, Deepak P.
2004-11-01
In today's intensely competitive and highly volatile business environment, the consistent development of low-cost, high-quality products that meet functionality requirements is key to a company's survival. Companies continuously strive to reduce costs while still producing quality products to stay ahead of the competition. Many companies have turned to target costing to achieve this objective. Target costing is a structured approach to determine the cost at which a proposed product, meeting the quality and functionality requirements, must be produced in order to generate the desired profits. It subtracts the desired profit margin from the company's selling price to establish the manufacturing cost of the product. An extensive literature review revealed that companies in the automotive, electronics, and process industries have reaped the benefits of target costing. However, the target costing approach has not been applied in the machining industry; instead, other techniques based on Geometric Programming, Goal Programming, and Lagrange multipliers have been proposed for this industry. These models follow a forward approach: first selecting a set of machining parameters and then determining the machining cost. Hence, in this study we developed an algorithm to apply the concepts of target costing, a backward approach that selects the machining parameters based on the required machining cost, and is therefore more suitable for practical applications in process improvement and cost reduction. A target costing model was developed for the turning operation and was successfully validated using practical data.
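The backward selection just described can be sketched numerically: fix the allowable (target) cost from the selling price and desired margin first, then search for cutting parameters whose predicted cost meets it. The simplified turning cost model and every number below are illustrative assumptions, not the paper's validated model.

```python
# Target-costing sketch: derive the allowable cost from price and margin,
# then search backward for cutting speeds that meet it. The cost model
# below is an illustrative assumption, not the paper's validated model.
def machining_cost_per_part(cutting_speed_m_min):
    machining = 120.0 / cutting_speed_m_min      # time-based cost falls with speed
    tooling = 0.0004 * cutting_speed_m_min ** 2  # tool-wear cost rises with speed
    return machining + tooling

selling_price = 10.0
desired_margin = 0.40
target_cost = selling_price * (1.0 - desired_margin)  # allowable cost per part

# backward approach: keep only the parameter values that meet the target
feasible = [v for v in range(20, 301, 5)
            if machining_cost_per_part(v) <= target_cost]
```

Contrast with the forward approach in the record: there, a speed is chosen first and the resulting cost is simply computed; here the cost constraint drives the parameter choice.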
Population dynamics and mutualism: Functional responses of benefits and costs
Holland, J. Nathaniel; DeAngelis, Donald L.; Bronstein, Judith L.
2002-01-01
We develop an approach for studying population dynamics resulting from mutualism by employing functional responses based on density‐dependent benefits and costs. These functional responses express how the population growth rate of a mutualist is modified by the density of its partner. We present several possible dependencies of gross benefits and costs, and hence net effects, to a mutualist as functions of the density of its partner. Net effects to mutualists are likely a monotonically saturating or unimodal function of the density of their partner. We show that fundamental differences in the growth, limitation, and dynamics of a population can occur when net effects to that population change linearly, unimodally, or in a saturating fashion. We use the mutualism between senita cactus and its pollinating seed‐eating moth as an example to show the influence of different benefit and cost functional responses on population dynamics and stability of mutualisms. We investigated two mechanisms that may alter this mutualism's functional responses: distribution of eggs among flowers and fruit abortion. Differences in how benefits and costs vary with density can alter the stability of this mutualism. In particular, fruit abortion may allow for a stable equilibrium where none could otherwise exist.
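The kind of functional response discussed above can be sketched directly: a saturating gross benefit minus a linear gross cost yields a net effect that is a unimodal function of partner density. The specific parameter values are illustrative assumptions.

```python
# Net effect of partner density on a mutualist: saturating benefit
# minus linear cost gives a unimodal response. Values illustrative.
def net_effect(partner_density, b_max=1.0, half_sat=1.0, cost_rate=0.1):
    benefit = b_max * partner_density / (half_sat + partner_density)
    cost = cost_rate * partner_density
    return benefit - cost

at_low = net_effect(0.1)    # few partners: little benefit, little cost
at_mid = net_effect(1.0)    # intermediate density: benefit outweighs cost
at_high = net_effect(20.0)  # many partners: cost dominates, net effect negative
```

The sign change at high density is the feature that can stabilize the interaction: past some partner density, more partners (e.g., more seed-eating moth larvae) harm rather than help.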
Integrating More Solar with Smart Inverters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoke, Anderson F; Giraldez Miner, Julieta I; Symko-Davies, Martha
In Hawai'i, the relatively high cost of electricity coupled with various incentives has made it cost-effective to install solar photovoltaics (PV) on residential homes and in larger central-station PV plants. On some of the islands, PV has reached over 50% of the installed generation capacity base. To make sure these inverter-based PV plants can maintain stable and safe operations, new smart inverter functionality is being evaluated and demonstrated at significant scale across the islands. This paper describes research conducted to validate high PV penetration scenarios with smart inverters and recent progress on the use of these advanced inverter grid support functions in actual power grids in Hawai'i.
Cuperus, Nienke; van den Hout, Wilbert B; Hoogeboom, Thomas J; van den Hoogen, Frank H J; Vliet Vlieland, Thea P M; van den Ende, Cornelia H M
2016-04-01
To evaluate, from a societal perspective, the cost utility and cost effectiveness of a nonpharmacologic face-to-face treatment program compared with a telephone-based treatment program for patients with generalized osteoarthritis (GOA). An economic evaluation was carried out alongside a randomized clinical trial involving 147 patients with GOA. Program costs were estimated from time registrations. One-year medical and nonmedical costs were estimated using cost questionnaires. Quality-adjusted life years (QALYs) were estimated using the EuroQol (EQ) classification system, EQ rating scale, and the Short Form 6D (SF-6D). Daily function was measured using the Health Assessment Questionnaire (HAQ) disability index (DI). Cost and QALY/effect differences were analyzed using multilevel regression analysis and cost-effectiveness acceptability curves. Medical costs of the face-to-face treatment and telephone-based treatment were estimated at €387 and €252, respectively. The difference in total societal costs was nonsignificantly in favor of the face-to-face program (difference -€708; 95% confidence interval [95% CI] -€5,058, €3,642). QALYs were similar for both groups according to the EQ, but were significantly in favor of the face-to-face group, according to the SF-6D (difference 0.022 [95% CI 0.000, 0.045]). Daily function was similar according to the HAQ DI. Since both societal costs and QALYs/effects were in favor of the face-to-face program, the economic assessment favored this program, regardless of society's willingness to pay. There was a 65-90% chance that the face-to-face program had better cost utility and a 60-70% chance of being cost effective. This economic evaluation from a societal perspective showed that a nonpharmacologic, face-to-face treatment program for patients with GOA was likely to be cost effective, relative to a telephone-based program. © 2016, American College of Rheumatology.
Simple, Defensible Sample Sizes Based on Cost Efficiency
Bacchetti, Peter; McCulloch, Charles E.; Segal, Mark R.
2009-01-01
Summary The conventional approach of choosing sample size to provide 80% or greater power ignores the cost implications of different sample size choices. Costs, however, are often impossible for investigators and funders to ignore in actual practice. Here, we propose and justify a new approach for choosing sample size based on cost efficiency, the ratio of a study’s projected scientific and/or practical value to its total cost. By showing that a study’s projected value exhibits diminishing marginal returns as a function of increasing sample size for a wide variety of definitions of study value, we are able to develop two simple choices that can be defended as more cost efficient than any larger sample size. The first is to choose the sample size that minimizes the average cost per subject. The second is to choose sample size to minimize total cost divided by the square root of sample size. This latter method is theoretically more justifiable for innovative studies, but also performs reasonably well and has some justification in other cases. For example, if projected study value is assumed to be proportional to power at a specific alternative and total cost is a linear function of sample size, then this approach is guaranteed either to produce more than 90% power or to be more cost efficient than any sample size that does. These methods are easy to implement, based on reliable inputs, and well justified, so they should be regarded as acceptable alternatives to current conventional approaches. PMID:18482055
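The second rule proposed above (choose n to minimize total cost divided by the square root of the sample size) is easy to check numerically. For a linear total cost F + c·n, setting the derivative of (F + c·n)/√n to zero gives the optimum n = F/c. The fixed and per-subject costs below are illustrative.

```python
# Cost-efficiency sample size: minimize total cost / sqrt(n).
# For linear cost F + c*n the optimum is n = F/c. Values illustrative.
import math

fixed_cost = 100_000.0   # study overhead, F
per_subject = 400.0      # marginal cost per subject, c

def efficiency_score(n):
    return (fixed_cost + per_subject * n) / math.sqrt(n)

best_n = min(range(1, 2001), key=efficiency_score)
```

With these inputs the rule selects n = F/c = 250, independent of any power calculation; the paper's point is that such a choice can be defended as more cost efficient than any larger sample size.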
NASA Technical Reports Server (NTRS)
Bao, Han P.; Samareh, J. A.
2000-01-01
The primary objective of this paper is to demonstrate the use of process-based manufacturing and assembly cost models in a traditional performance-focused multidisciplinary design and optimization process. The use of automated cost-performance analysis is an enabling technology that could bring realistic process-based manufacturing and assembly costs into multidisciplinary design and optimization. In this paper, we present a new methodology for incorporating process costing into a standard multidisciplinary design optimization process. Material, manufacturing process, and assembly process costs could then be used as the objective function for the optimization method. A case study involving forty-six different configurations of a simple wing is presented, indicating that a design based on performance criteria alone may not necessarily be the most affordable as far as manufacturing and assembly cost is concerned.
NASA Astrophysics Data System (ADS)
Li, Xuxu; Li, Xinyang; Wang, Caixia
2018-03-01
This paper proposes an efficient approach to decrease the computational costs of correlation-based centroiding methods used for point source Shack-Hartmann wavefront sensors. Four typical similarity functions have been compared, i.e. the absolute difference function (ADF), ADF square (ADF2), square difference function (SDF), and cross-correlation function (CCF) using the Gaussian spot model. By combining them with fast search algorithms, such as three-step search (TSS), two-dimensional logarithmic search (TDL), cross search (CS), and orthogonal search (OS), computational costs can be reduced drastically without affecting the accuracy of centroid detection. Specifically, OS reduces calculation consumption by 90%. A comprehensive simulation indicates that CCF exhibits a better performance than other functions under various light-level conditions. Besides, the effectiveness of fast search algorithms has been verified.
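The saving reported above comes from replacing an exhaustive scan of candidate displacements with a fast search over the similarity surface. A pure-Python sketch, pairing the square difference function (SDF) with a three-step search (TSS) on an illustrative Gaussian spot model; sizes and parameters are assumptions for illustration.

```python
# SDF-based displacement search on a Gaussian spot model: exhaustive scan
# vs. three-step search (TSS). Spot model and sizes are illustrative.
import math

def gauss_spot(cx, cy, size=32, sigma=3.0):
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def sdf(img, ref, dx, dy):
    # square-difference dissimilarity over the overlapping region
    total, size = 0.0, len(img)
    for y in range(size):
        for x in range(size):
            xs, ys = x + dx, y + dy
            if 0 <= xs < size and 0 <= ys < size:
                total += (img[ys][xs] - ref[y][x]) ** 2
    return total

def exhaustive_search(img, ref, radius=8):
    # (2*radius + 1)^2 = 289 SDF evaluations
    return min(((sdf(img, ref, dx, dy), (dx, dy))
                for dx in range(-radius, radius + 1)
                for dy in range(-radius, radius + 1)))[1]

def three_step_search(img, ref, radius=8):
    # halve the step around the current best point: 3 rounds of 9 evaluations
    bx = by = 0
    step = radius // 2
    while step >= 1:
        _, (bx, by) = min((sdf(img, ref, bx + sx, by + sy), (bx + sx, by + sy))
                          for sx in (-step, 0, step) for sy in (-step, 0, step))
        step //= 2
    return bx, by

ref = gauss_spot(16.0, 16.0)
img = gauss_spot(19.0, 18.0)   # true displacement (3, 2)
```

On this smooth, unimodal similarity surface the three-step search reaches the same displacement as the exhaustive scan in 27 evaluations instead of 289, the same order of saving the record reports for its fast search variants.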
DOT National Transportation Integrated Search
2011-07-01
Based upon 50 large and medium hub airports over a 13 year period, this research estimates one and two : output translog models of airport short run operating costs. Output is passengers transported on non-stop : segments and pounds of cargo shipped....
DOT National Transportation Integrated Search
2011-12-01
Based upon 50 large and medium hub airports over a 13 year period, this research estimates one and two : output translog models of airport short run operating costs. Output is passengers transported on non-stop : segments and pounds of cargo shipped....
School Finance Adequacy: What Is It and How Do We Measure It?
ERIC Educational Resources Information Center
Picus, Lawrence O.
2001-01-01
Discusses legal definition of school-finance "adequacy" and four methods for determining the cost of an adequate system: Cost function, observational methods, professional judgment, and costs of a comprehensive school design. Draws implications for school districts' resource-allocation decisions based on adequacy. (Contains 21 references.) (PKP)
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal distribution setting or empirically in a free-distribution setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference of the threshold estimates is based on approximate analytically standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence interval precision and sample size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
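In the normal-distribution setting described above, the optimum threshold can be found by minimizing the expected decision cost over candidate thresholds. The means, standard deviations, prevalence, and misclassification costs below are illustrative assumptions.

```python
# Diagnostic threshold minimizing expected decision cost under two
# normal marker distributions. All distribution parameters and costs
# are illustrative assumptions.
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_cost(t, mu0=0.0, mu1=2.0, sigma=1.0, prevalence=0.5,
                  cost_fp=1.0, cost_fn=1.0):
    # non-diseased classified positive + diseased classified negative
    false_pos = (1.0 - prevalence) * (1.0 - norm_cdf(t, mu0, sigma))
    false_neg = prevalence * norm_cdf(t, mu1, sigma)
    return cost_fp * false_pos + cost_fn * false_neg

grid = [i / 1000.0 for i in range(-2000, 4001)]
best_t = min(grid, key=expected_cost)
```

With equal costs, equal variances, and 50% prevalence the optimum sits midway between the two means; unequal decision costs or prevalence shift it toward the cheaper error, which is the core point of the cost-weighted approach.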
A LiDAR data-based camera self-calibration method
NASA Astrophysics Data System (ADS)
Xu, Lijun; Feng, Jing; Li, Xiaolu; Chen, Jianjun
2018-07-01
To find the intrinsic parameters of a camera, a LiDAR data-based camera self-calibration method is presented here. Parameters have been estimated using particle swarm optimization (PSO), enhancing the optimal solution of a multivariate cost function. The main procedure of camera intrinsic parameter estimation has three parts, which include extraction and fine matching of interest points in the images, establishment of cost function, based on Kruppa equations and optimization of PSO using LiDAR data as the initialization input. To improve the precision of matching pairs, a new method of maximal information coefficient (MIC) and maximum asymmetry score (MAS) was used to remove false matching pairs based on the RANSAC algorithm. Highly precise matching pairs were used to calculate the fundamental matrix so that the new cost function (deduced from Kruppa equations in terms of the fundamental matrix) was more accurate. The cost function involving four intrinsic parameters was minimized by PSO for the optimal solution. To overcome the issue of optimization pushed to a local optimum, LiDAR data was used to determine the scope of initialization, based on the solution to the P4P problem for camera focal length. To verify the accuracy and robustness of the proposed method, simulations and experiments were implemented and compared with two typical methods. Simulation results indicated that the intrinsic parameters estimated by the proposed method had absolute errors less than 1.0 pixel and relative errors smaller than 0.01%. Based on ground truth obtained from a meter ruler, the distance inversion accuracy in the experiments was smaller than 1.0 cm. Experimental and simulated results demonstrated that the proposed method was highly accurate and robust.
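The optimization step above uses particle swarm optimization (PSO) to minimize a multivariate cost function, with LiDAR data bounding the initialization. A minimal PSO loop is sketched below on a 2-D sphere cost standing in for the Kruppa-equation cost; the swarm parameters are common textbook defaults, not the paper's settings.

```python
# Minimal particle swarm optimization (PSO) loop. The sphere cost is an
# illustrative stand-in for the Kruppa-equation calibration cost.
import random

random.seed(1)

def sphere(p):
    return p[0] ** 2 + p[1] ** 2

def pso(cost, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    # fixed bounds here; the paper derives the initialization scope from LiDAR
    pos = [[random.uniform(-5, 5), random.uniform(-5, 5)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=cost)[:]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso(sphere)
```

Constraining the initialization region, as the record does with LiDAR-derived focal-length bounds, is precisely what reduces the risk of the swarm settling in a local optimum.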
A Cradle-to-Grave Integrated Approach to Using UNIFORMAT II
ERIC Educational Resources Information Center
Schneider, Richard C.; Cain, David A.
2009-01-01
The ASTM E1557/UNIFORMAT II standard is a three-level, function-oriented classification which links the schematic phase Preliminary Project Descriptions (PPD), based on Construction Standard Institute (CSI) Practice FF/180, to elemental cost estimates based on R.S. Means Cost Data. With the UNIFORMAT II Standard Classification for Building…
Printed organo-functionalized graphene for biosensing applications.
Wisitsoraat, A; Mensing, J Ph; Karuwan, C; Sriprachuabwong, C; Jaruwongrungsee, K; Phokharatkul, D; Daniels, T M; Liewhiran, C; Tuantranont, A
2017-01-15
Graphene is a highly promising material for biosensors due to its excellent physical and chemical properties, which facilitate electron transfer between the active locales of enzymes or other biomaterials and a transducer surface. Printing technology has recently emerged as a low-cost and practical method for the fabrication of flexible and disposable electronic devices. The combination of these technologies is promising for the production and commercialization of low-cost sensors. In this review, recent developments in organo-functionalized graphene and printed biosensor technologies are comprehensively covered. Firstly, various methods for printing graphene-based fluids on different substrates are discussed. Secondly, different graphene-based ink materials and preparation methods are described. Lastly, the biosensing performances of printed or printable graphene-based electrochemical and field effect transistor sensors for some important analytes are elaborated. The reported printed graphene-based sensors exhibit promising properties with good reliability suitable for commercial applications. To date, only a few printed graphene-based biosensors, including a screen-printed oxidase-functionalized graphene biosensor, have been demonstrated. The technology is still at an early stage but is growing rapidly and will attract great attention in the near future due to the increasing demand for low-cost and disposable biosensors. Copyright © 2016 Elsevier B.V. All rights reserved.
Forecasting overhaul or replacement intervals based on estimated system failure intensity
NASA Astrophysics Data System (ADS)
Gannon, James M.
1994-12-01
System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
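For a power-law (Weibull) intensity u(t) = (β/η)(t/η)^(β−1), the integral of the ROCOF has a closed form, so expected annual failure counts are cheap to forecast. A minimal sketch; the parameter values and cost per failure are assumed for illustration, not taken from the paper:

```python
def rocof(t, beta, eta):
    """Weibull (power-law) ROCOF for an NHPP: u(t) = (beta/eta)*(t/eta)**(beta-1)."""
    return (beta / eta) * (t / eta) ** (beta - 1.0)

def expected_failures(t1, t2, beta, eta):
    """Integral of the ROCOF over [t1, t2], which has the closed form
    (t2/eta)**beta - (t1/eta)**beta."""
    return (t2 / eta) ** beta - (t1 / eta) ** beta

# Assumed illustrative parameters: beta > 1 means the system is
# deteriorating; 2000 operating hours per year.
beta, eta = 1.8, 5000.0
cost_per_failure = 12000.0
annual = [expected_failures(y * 2000.0, (y + 1) * 2000.0, beta, eta)
          for y in range(3)]
budget = [cost_per_failure * n for n in annual]
```

Because β > 1, each successive year's expected failure count (and hence budgeted maintenance cost) grows, which is exactly the deterioration signal the abstract describes.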
Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered
2011-01-01
Background Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. 
Conclusions The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.
Mathiassen, Svend Erik; Bolin, Kristian
2011-05-21
Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied to 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set.
The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.
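A brute-force version of the allocation search is easy to write for small designs. The sketch below (variance components, unit costs, and the cost exponents are illustrative assumptions, not the paper's scenarios) enumerates budget-feasible three-stage allocations and keeps the one with the smallest variance of the exposure mean:

```python
from itertools import product

def var_of_mean(ns, no, nm, vb, vo, vw):
    """Variance of the exposure mean in a three-stage nested model:
    between-subjects, between-occasions-within-subject, within-occasion."""
    return vb / ns + vo / (ns * no) + vw / (ns * no * nm)

def total_cost(ns, no, nm, c_s, c_o, c_m, e_s=1.0, e_o=1.0, e_m=1.0):
    """Three-stage cost model; each stage may scale non-linearly with the
    number of units via the power-function exponents e_*."""
    return c_s * ns ** e_s + c_o * (ns * no) ** e_o + c_m * (ns * no * nm) ** e_m

def best_allocation(budget, vb, vo, vw, c_s, c_o, c_m, **exps):
    """Exhaustive search over small allocations for the budget-feasible
    design with the smallest variance of the mean."""
    best = None
    for ns, no, nm in product(range(1, 201), range(1, 11), range(1, 11)):
        if total_cost(ns, no, nm, c_s, c_o, c_m, **exps) <= budget:
            v = var_of_mean(ns, no, nm, vb, vo, vw)
            if best is None or v < best[0]:
                best = (v, ns, no, nm)
    return best

# Assumed variance components and linear unit costs for illustration.
v, ns, no, nm = best_allocation(1000.0, vb=1.0, vo=0.5, vw=0.5,
                                c_s=10.0, c_o=5.0, c_m=1.0)
```

In this particular scenario the optimum uses a single occasion per subject, in line with the paper's qualitative finding; non-linear cost scenarios can be explored by passing e_s, e_o, e_m keywords.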
Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.
Kirby, Kris N; Santiesteban, Mariana
2003-01-01
Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
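The qualitative difference between the two discounting forms is easy to demonstrate. In the sketch below the amounts, delays, and discount rate are illustrative, not fitted values from the experiments: a hyperbolic discounter reverses preference between a smaller-sooner and a larger-later reward when a common front-end delay is added, which a single-rate exponential discounter never does:

```python
import math

def exponential_value(amount, delay, k):
    """Exponential discounting: V = A * exp(-k * D)."""
    return amount * math.exp(-k * delay)

def hyperbolic_value(amount, delay, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

# Assumed illustrative choice pair: 50 soon vs 100 five days later.
k = 1.0
near_ss = hyperbolic_value(50.0, 0.0, k)     # smaller-sooner, now
near_ll = hyperbolic_value(100.0, 5.0, k)    # larger-later, in 5 days
far_ss = hyperbolic_value(50.0, 10.0, k)     # same pair, pushed 10 days out
far_ll = hyperbolic_value(100.0, 15.0, k)
```

Hyperbolically, the sooner reward wins when the pair is near but loses once both are delayed; exponentially, adding a common delay multiplies both values by the same factor, so the ranking can never flip.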
De Sanctis, A; Russo, S; Craciun, M F; Alexeev, A; Barnes, M D; Nagareddy, V K; Wright, C D
2018-06-06
Graphene-based materials are being widely explored for a range of biomedical applications, from targeted drug delivery to biosensing, bioimaging and use for antibacterial treatments, to name but a few. In many such applications, it is not graphene itself that is used as the active agent, but one of its chemically functionalized forms. The type of chemical species used for functionalization will play a key role in determining the utility of any graphene-based device in any particular biomedical application, because this determines to a large part its physical, chemical, electrical and optical interactions. However, other factors will also be important in determining the eventual uptake of graphene-based biomedical technologies, in particular the ease and cost of manufacture of proposed device and system designs. In this work, we describe three novel routes for the chemical functionalization of graphene using oxygen, iron chloride and fluorine. We also introduce novel in situ methods for controlling and patterning such functionalization on the micro- and nanoscales. Our approaches are readily transferable to large-scale manufacturing, potentially paving the way for the eventual cost-effective production of functionalized graphene-based materials, devices and systems for a range of important biomedical applications.
Knowledge-based assistance in costing the space station DMS
NASA Technical Reports Server (NTRS)
Henson, Troy; Rone, Kyle
1988-01-01
The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting the cultural change that integrates distributed science, technology, and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottoms-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be re-accessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.
Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O
2009-04-01
This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.
NASA Technical Reports Server (NTRS)
1975-01-01
Tables covering the selling price of hydrogen as a function of each process temperature studied are presented. Estimated selling price, based on capital costs and operating and maintenance costs, is included. In all cases, no credit was given for the methane component of hydrogen.
A cost-effective methodology for the design of massively-parallel VLSI functional units
NASA Technical Reports Server (NTRS)
Venkateswaran, N.; Sriram, G.; Desouza, J.
1993-01-01
In this paper we propose a generalized methodology for the design of cost-effective massively-parallel VLSI Functional Units. This methodology is based on a technique of generating and reducing a massive bit-array on the mask-programmable PAcube VLSI array. This methodology unifies (maintains identical data flow and control) the execution of complex arithmetic functions on PAcube arrays. It is highly regular, expandable and uniform with respect to problem-size and wordlength, thereby reducing the communication complexity. The memory-functional unit interface is regular and expandable. Using this technique, functional units of dedicated processors can be mask-programmed on the naked PAcube arrays, reducing the turn-around time. The production cost of such dedicated processors can be drastically reduced since the naked PAcube arrays can be mass-produced. Analysis of the performance of functional units designed by our method yields promising results.
NASA Technical Reports Server (NTRS)
Freeman, William T.; Ilcewicz, L. B.; Swanson, G. D.; Gutowski, T.
1992-01-01
A conceptual and preliminary designers' cost prediction model has been initiated. The model will provide a technically sound method for evaluating the relative cost of different composite structural designs, fabrication processes, and assembly methods that can be compared to equivalent metallic parts or assemblies. The feasibility of developing cost prediction software in a modular form for interfacing with state-of-the-art preliminary design tools and computer aided design programs is being evaluated. The goal of this task is to establish theoretical cost functions that relate geometric design features to summed material cost and labor content in terms of process mechanics and physics. The output of the designers' present analytical tools will be input for the designers' cost prediction model to provide the designer with a database and deterministic cost methodology that allows one to trade and synthesize designs with both cost and weight as objective functions for optimization. The approach, goals, plans, and progress are presented for the development of COSTADE (Cost Optimization Software for Transport Aircraft Design Evaluation).
Application of a territorial-based filtering algorithm in turbomachinery blade design optimization
NASA Astrophysics Data System (ADS)
Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François
2017-02-01
A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.
Computational approaches for drug discovery.
Hung, Che-Lun; Chen, Chi-Chun
2014-09-01
Cellular proteins are the mediators of multiple organism functions, being involved in physiological mechanisms and disease. By discovering lead compounds that affect the function of target proteins, the target diseases or physiological mechanisms can be modulated. Based on knowledge of the ligand-receptor interaction, the chemical structures of leads can be modified to improve efficacy, selectivity and reduce side effects. One rational drug design technology, which enables drug discovery based on knowledge of target structures, functional properties and mechanisms, is computer-aided drug design (CADD). The application of CADD can be cost-effective, using experiments to compare predicted and actual drug activity, the results from which can be used iteratively to improve compound properties. The two major CADD-based approaches are structure-based drug design, where protein structures are required, and ligand-based drug design, where ligands and ligand activities can be used to design compounds interacting with the protein structure. Approaches in structure-based drug design include docking, de novo design, fragment-based drug discovery and structure-based pharmacophore modeling. Approaches in ligand-based drug design include quantitative structure-affinity relationship and pharmacophore modeling based on ligand properties. Based on whether the structure of the receptor and its interaction with the ligand are known, different design strategies can be used. After lead compounds are generated, the rule of five can be used to assess whether these have drug-like properties. Several quality validation methods, such as cost function analysis, Fisher's cross-validation analysis and goodness of hit test, can be used to estimate the metrics of different drug design strategies. To further improve CADD performance, multi-computers and graphics processing units may be applied to reduce costs. © 2014 Wiley Periodicals, Inc.
2D/3D registration using a rotation-invariant cost function based on Zernike moments
NASA Astrophysics Data System (ADS)
Birkfellner, Wolfgang; Yang, Xinhui; Burgstaller, Wolfgang; Baumann, Bernard; Jacob, Augustinus L.; Niederer, Peter F.; Regazzoni, Pietro; Messmer, Peter
2004-05-01
We present a novel in-plane rotation-invariant cost function for 2D/3D registration utilizing projection-invariant transformation properties and the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments. As a result, only five degrees of freedom have to be optimized, and the number of iterations necessary for registration can be significantly reduced. Results in a phantom study show that an accuracy of approximately 0.7° and 2 mm can be achieved using this method. We conclude that the reduction of coupled degrees of freedom and the usage of linearly independent coefficients for cost function evaluation provide interesting new perspectives for the field of 2D/3D registration.
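The rotation invariance that motivates the cost function comes from the fact that an in-plane rotation of the image only changes the phase of each complex Zernike moment, leaving its magnitude untouched. A small, self-contained sketch of that property (the image content and moment order are arbitrary; this is not the authors' registration pipeline):

```python
import math

def radial_poly(n, m, r):
    """Zernike radial polynomial R_nm(r), m >= 0 and n - m even."""
    total = 0.0
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        total += c * r ** (n - 2 * s)
    return total

def zernike_moment(img, n, m):
    """Complex Zernike moment Z_nm of a square grayscale image sampled on
    the unit disk; |Z_nm| is invariant to in-plane rotation."""
    size = len(img)
    zr = zi = 0.0
    for i in range(size):
        for j in range(size):
            x = (2 * j + 1) / size - 1.0   # pixel centre in [-1, 1]
            y = (2 * i + 1) / size - 1.0
            r = math.hypot(x, y)
            if r > 1.0:
                continue                   # outside the unit disk
            theta = math.atan2(y, x)
            w = img[i][j] * radial_poly(n, m, r)
            zr += w * math.cos(m * theta)
            zi -= w * math.sin(m * theta)
    norm = (n + 1) / math.pi * (2.0 / size) ** 2
    return complex(norm * zr, norm * zi)

def rot90(img):
    """Rotate a square image by 90 degrees."""
    size = len(img)
    return [[img[j][size - 1 - i] for j in range(size)] for i in range(size)]

# Demo: an asymmetric 8x8 test image; rotating it changes the phase of
# Z_22 but not its magnitude.
img = [[i * i + 3.0 * j for j in range(8)] for i in range(8)]
z_a = zernike_moment(img, 2, 2)
z_b = zernike_moment(rot90(img), 2, 2)
```

Comparing magnitudes of such moments is what lets the registration drop the in-plane rotation from the parameters to be optimized.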
Hurley, M V; Walsh, N E; Mitchell, H; Nicholas, J; Patel, A
2012-02-01
Chronic joint pain is a major cause of pain and disability. Exercise and self-management have short-term benefits, but few studies follow participants for more than 6 months. We investigated the long-term (up to 30 months) clinical and cost effectiveness of a rehabilitation program combining self-management and exercise: Enabling Self-Management and Coping of Arthritic Knee Pain Through Exercise (ESCAPE-knee pain). In this pragmatic, cluster randomized, controlled trial, 418 people with chronic knee pain (recruited from 54 primary care surgeries) were randomized to usual care (pragmatic control) or the ESCAPE-knee pain program. The primary outcome was physical function (Western Ontario and McMaster Universities Osteoarthritis Index [WOMAC] function), with a clinically meaningful improvement in physical function defined as a ≥15% change from baseline. Secondary outcomes included pain, psychosocial and physiologic variables, costs, and cost effectiveness. Compared to usual care, ESCAPE-knee pain participants had large initial improvements in function (mean difference in WOMAC function -5.5; 95% confidence interval [95% CI] -7.8, -3.2). These improvements declined over time, but 30 months after completing the program, ESCAPE-knee pain participants still had better physical function (difference in WOMAC function -2.8; 95% CI -5.3, -0.2); lower community-based health care costs (£-47; 95% CI £-94, £-7), medication costs (£-16; 95% CI £-29, £-3), and total health and social care costs (£-1,118; 95% CI £-2,566, £-221); and a high probability (80-100%) of being cost effective. Clinical and cost benefits of ESCAPE-knee pain were still evident 30 months after completing the program. ESCAPE-knee pain is a more effective and efficient model of care that could substantially improve the health, well-being, and independence of many people, while reducing health care costs. Copyright © 2012 by the American College of Rheumatology.
Airport and Airway Costs: Allocation and Recovery in the 1980’s.
1987-02-01
…1997 [8]. Volume 4, FAA Cost Recovery Options [9]. Volume 5, Econometric Cost Functions for FAA Cost Allocation Model [10]. Volume 6, Users… and relative price elasticities (Ramsey pricing technique). User fees based on Ramsey pricing tend to be less burdensome on users and minimize… A full discussion of the Ramsey pricing techniques is provided in Allocation of Federal Airport and Airway Costs for FY 1985 [6]. In step 5…
A reliability-based cost effective fail-safe design procedure
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1976-01-01
The authors have developed a methodology for cost-effective fatigue design of structures subject to random fatigue loading. A stochastic model for fatigue crack propagation under random loading has been discussed. Fracture mechanics is then used to estimate the parameters of the model and the residual strength of structures with cracks. The stochastic model and residual strength variations have been used to develop procedures for estimating the probability of failure and its changes with inspection frequency. This information on reliability is then used to construct an objective function in terms of either a total weight function or a cost function. A procedure for selecting the design variables, subject to constraints, by optimizing the objective function has been illustrated by examples. In particular, the optimum design of a stiffened panel has been discussed.
Younis, Mustafa Z; Jabr, Samer; Smith, Pamela C; Al-Hajeri, Maha; Hartmann, Michael
2011-01-01
Academic research investigating health care costs in the Palestinian region is limited. Therefore, this study examines the costs of the cardiac catheterization unit of one of the largest hospitals in Palestine. We focus on costs of a cardiac catheterization unit and the increasing number of deaths over the past decade in the region due to cardiovascular diseases (CVDs). We employ cost-volume-profit (CVP) analysis to determine the unit's break-even point (BEP), and investigate expected benefits (EBs) of Palestinian government subsidies to the unit. Findings indicate variable costs represent 56 percent of the hospital's total costs. Based on the three functions of the cardiac catheterization unit, results also indicate that the number of patients receiving services exceed the break-even point in each function, despite the unit receiving a government subsidy. Our findings, although based on one hospital, will permit hospital management to realize the importance of unit costs in order to make informed financial decisions. The use of break-even analysis will allow area managers to plan minimum production capacity for the organization. The economic benefits for patients and the government from the unit may encourage government officials to focus efforts on increasing future subsidies to the hospital.
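The break-even computation at the heart of CVP analysis is a one-liner. The figures below are hypothetical, not the hospital's actual costs; they only illustrate how a subsidy lowers the volume a unit must reach to break even:

```python
def break_even_point(fixed_costs, price_per_case, variable_cost_per_case):
    """Classic CVP break-even: cases needed so that the contribution
    margin (price minus variable cost) covers fixed costs."""
    margin = price_per_case - variable_cost_per_case
    if margin <= 0:
        raise ValueError("price must exceed variable cost per case")
    return fixed_costs / margin

# Hypothetical figures for one catheterization service line; a government
# subsidy reduces the fixed costs the unit must recover itself.
bep = break_even_point(fixed_costs=200000.0 - 50000.0,   # after subsidy
                       price_per_case=1500.0,
                       variable_cost_per_case=900.0)
```

With a contribution margin of 600 per case, the subsidised unit breaks even at 250 cases; without the 50,000 subsidy it would need about 333.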
NASA Astrophysics Data System (ADS)
Qi, Wei
2017-11-01
Cost-benefit analysis is commonly used for engineering planning and design problems in practice. However, previous cost-benefit based design flood estimation is based on the stationary assumption. This study develops a non-stationary cost-benefit based design flood estimation approach. This approach integrates a non-stationary probability distribution function into cost-benefit analysis, so that the influence of non-stationarity on expected total cost (including flood damage and construction costs) and on design flood estimation can be quantified. To facilitate design flood selection, a 'Risk-Cost' analysis approach is developed, which reveals the nexus of extreme flood risk, expected total cost and design life periods. Two basins, with 54-year and 104-year flood data respectively, are utilized to illustrate the application. It is found that the developed approach can effectively reveal changes of expected total cost and extreme floods in different design life periods. In addition, trade-offs are found between extreme flood risk and expected total cost, which reflect increases in cost to mitigate risk. Compared with stationary approaches, which generate only one expected total cost curve and therefore only one design flood estimate, the proposed approach generates design flood estimation intervals, and the 'Risk-Cost' approach selects a design flood value from these intervals based on the trade-offs between extreme flood risk and expected total cost. This study provides a new approach towards a better understanding of the influence of non-stationarity on expected total cost and design floods, and could be beneficial to cost-benefit based non-stationary design flood estimation across the world.
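The structure of the expected-total-cost trade-off can be sketched with a deliberately simple non-stationary model: a Gumbel-type annual exceedance probability whose location parameter drifts linearly with time. The distribution, trend, and cost parameters below are illustrative assumptions, not the paper's fitted model:

```python
import math

def exceedance_prob(q, year, mu0, sigma, trend):
    """Annual probability that the flood peak exceeds design value q under
    a Gumbel distribution whose location drifts linearly with time."""
    mu = mu0 + trend * year
    return 1.0 - math.exp(-math.exp(-(q - mu) / sigma))

def expected_total_cost(q, design_life, mu0, sigma, trend,
                        build_cost_rate, damage):
    """Construction cost grows with the design value; expected damage sums
    the yearly exceedance probabilities over the design life."""
    construction = build_cost_rate * q
    expected_damage = damage * sum(exceedance_prob(q, t, mu0, sigma, trend)
                                   for t in range(design_life))
    return construction + expected_damage

# Assumed illustrative parameters; minimise over a grid of design values.
candidates = [q * 10.0 for q in range(30, 101)]
best_q = min(candidates, key=lambda q: expected_total_cost(
    q, design_life=50, mu0=300.0, sigma=40.0, trend=1.0,
    build_cost_rate=50.0, damage=1_000_000.0))
```

Under an upward trend the cost-minimising design value comes out larger than under stationarity, mirroring the paper's point that ignoring non-stationarity biases the design flood.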
ERIC Educational Resources Information Center
Reuben, David B.; Seeman, Teresa E.; Keeler, Emmett; Hayes, Risa P.; Bowman, Lee; Sewall, Ase; Hirsch, Susan H.; Wallace, Robert B.; Guralnik, Jack M.
2004-01-01
Purpose: We determined the prognostic value of self-reported and performance-based measurement of function, including functional transitions and combining different measurement approaches, on utilization. Design and Methods: Our cohort study used the 6th, 7th, and 10th waves of three sites of the Established Populations for Epidemiologic Studies…
A Genetic Algorithm for the Generation of Packetization Masks for Robust Image Communication
Zapata-Quiñones, Katherine; Duran-Faundez, Cristian; Gutiérrez, Gilberto; Lecuire, Vincent; Arredondo-Flores, Christopher; Jara-Lipán, Hugo
2017-01-01
Image interleaving has proven to be an effective solution for improving the robustness of image communication systems when resource limitations make reliable protocols unsuitable (e.g., in wireless camera sensor networks); however, the search for optimal interleaving patterns is scarcely tackled in the literature. In 2008, Rombaut et al. presented an interesting approach introducing a packetization mask generator based on Simulated Annealing (SA), including a cost function which allows assessing the suitability of a packetization pattern while avoiding extensive simulations. In this work, we present a complementary study of the non-trivial problem of generating optimal packetization patterns. We propose a genetic algorithm, as an alternative to the cited work, adopting the mentioned cost function, and compare it to the SA approach and a torus automorphism interleaver. In addition, we validate the cost function and provide results assessing its implications for the quality of reconstructed images. Several scenarios based on visual sensor network applications were tested in a computer application. Results in terms of the selected cost function and the image quality metric PSNR show that our algorithm presents similar results to the other approaches. Finally, we discuss the obtained results and comment on open research challenges. PMID:28452934
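A minimal genetic algorithm over pixel permutations illustrates the search framework. Rombaut et al.'s actual cost function is not reproduced here; the surrogate below simply rewards keeping same-packet pixels spatially far apart, which is the property a packetization mask is meant to have:

```python
import random

def spread_cost(perm, width, packets):
    """Surrogate packetization cost (NOT the cost function of Rombaut et
    al.): the negative of the minimum Chebyshev distance between any two
    pixels placed in the same packet.  Lower is better: losing one packet
    then wipes out only well-scattered pixels."""
    n = len(perm)
    per_packet = n // packets
    groups = {}
    for idx, pixel in enumerate(perm):
        groups.setdefault(idx // per_packet, []).append(pixel)
    min_d = float("inf")
    for pixels in groups.values():
        for a in range(len(pixels)):
            for b in range(a + 1, len(pixels)):
                ya, xa = divmod(pixels[a], width)
                yb, xb = divmod(pixels[b], width)
                min_d = min(min_d, max(abs(ya - yb), abs(xa - xb)))
    return -min_d

def evolve(width, height, packets, pop_size=30, generations=200, seed=1):
    """Minimal elitist GA over pixel permutations with swap mutation."""
    rng = random.Random(seed)
    n = width * height
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: spread_cost(p, width, packets))
        survivors = pop[: pop_size // 2]             # keep the better half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: spread_cost(p, width, packets))

best = evolve(4, 4, packets=4, generations=80)
```

A production version would add crossover suited to permutations (e.g., order crossover) and plug in the adopted cost function; the selection/mutation skeleton stays the same.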
The specification of a hospital cost function. A comment on the recent literature.
Breyer, F
1987-06-01
In the empirical estimation of hospital cost functions, two radically different types of specifications have been chosen to date: ad-hoc forms and flexible functional forms based on neoclassical production theory. This paper discusses the respective strengths and weaknesses of both approaches and emphasizes the apparently irreconcilable conflict between the goals of maintaining functional flexibility and keeping the number of variables manageable if, at the same time, patient heterogeneity is to be adequately reflected in the case mix variables. A new specification is proposed which strikes a compromise between these goals, and the underlying assumptions are discussed critically.
Adaptive pattern recognition by mini-max neural networks as a part of an intelligent processor
NASA Technical Reports Server (NTRS)
Szu, Harold H.
1990-01-01
In this decade and progressing into the 21st century, NASA will have missions including the Space Station and the Earth-related planetary sciences. To support these missions, a high degree of sophistication in machine automation and an increasing data processing throughput rate are necessary. Meeting these challenges requires intelligent machines, designed to support the necessary automation in a remote and hazardous space environment. There are two approaches to designing these intelligent machines. One is the knowledge-based expert system approach, namely AI. The other is a non-rule approach based on parallel and distributed computing for adaptive fault tolerance, namely Neural or Natural Intelligence (NI). The union of AI and NI is the solution to the problem stated above. The NI segment of this unit extracts features automatically by applying Cauchy simulated annealing to a mini-max cost energy function. The features discovered by NI can then be passed to the AI system for further processing, and vice versa. This passing increases reliability, for AI can follow the NI-formulated algorithm exactly, and can provide the context knowledge base as the constraints of neurocomputing. The mini-max cost function that solves for the unknown features can furthermore give us a top-down architectural design of neural networks by means of a Taylor series expansion of the cost function. A typical mini-max cost function consists of the sample variance of each class in the numerator, and the separation of the centers of the classes in the denominator. Thus, when the total cost energy is minimized, the conflicting goals of intraclass clustering and interclass segregation are achieved simultaneously.
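The final sentences describe the cost function concretely enough to sketch: within-class sample variance in the numerator, squared separation of class centres in the denominator, so minimising it clusters each class tightly while pushing classes apart. A minimal one-dimensional illustration (the data values are made up):

```python
def minimax_cost(classes):
    """Mini-max style clustering cost: total within-class sample variance
    (to be minimised) over the summed squared separation of class centres
    (to be maximised).  Minimising the ratio pursues both goals at once."""
    centers = [sum(c) / len(c) for c in classes]
    within = sum(sum((x - m) ** 2 for x in c) / len(c)
                 for c, m in zip(classes, centers))
    between = sum((centers[i] - centers[j]) ** 2
                  for i in range(len(centers))
                  for j in range(i + 1, len(centers)))
    return within / between

# Tight, well-separated classes score much lower than loose,
# overlapping ones.
good = minimax_cost([[0.9, 1.0, 1.1], [4.9, 5.0, 5.1]])
bad = minimax_cost([[0.0, 2.0], [2.5, 4.5]])
```

Gradient or annealing steps that reduce this ratio shrink each cluster and widen the gaps simultaneously, which is the behaviour the abstract attributes to the mini-max energy.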
Uncertainty in sample estimates and the implicit loss function for soil information.
NASA Astrophysics Data System (ADS)
Lark, Murray
2015-04-01
One significant challenge in the communication of uncertain information is how to enable the sponsors of sampling exercises to make a rational choice of sample size. One way to do this is to compute the value of additional information given the loss function for errors. The loss function expresses the costs that result from decisions made using erroneous information. In certain circumstances, such as remediation of contaminated land prior to development, loss functions can be computed and used to guide rational decision making on the amount of resource to spend on sampling to collect soil information. In many circumstances the loss function cannot be obtained prior to decision making. This may be the case when multiple decisions may be based on the soil information and the costs of errors are hard to predict. The implicit loss function is proposed as a tool to aid decision making in these circumstances. Conditional on a logistical model which expresses the cost of soil sampling as a function of effort, and statistical information from which the error of estimates can be modelled as a function of effort, the implicit loss function is the loss function which makes a particular decision on effort rational. In this presentation the implicit loss function is defined and computed for a number of arbitrary decisions on sampling effort for a hypothetical soil-monitoring problem. This is based on a logistical model of sampling cost parameterized from a recent geochemical survey of soil in Donegal, Ireland, and on statistical parameters estimated with the aid of a process model for change in soil organic carbon. It is shown how the implicit loss function might provide a basis for reflection on a particular choice of sample size by comparing it with the values attributed to soil properties and functions. Scope for further research to develop and apply the implicit loss function to help decision making by policy makers and regulators is then discussed.
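A minimal sketch of how an implicit loss can be backed out of a sample-size choice, assuming (hypothetically) a linear sampling cost c1*n and an expected loss proportional to the standard error sigma/sqrt(n). All numbers are invented, not the Donegal survey parameters:

```python
# Hypothetical sketch of the implicit-loss idea. Assume a linear sampling
# cost c1*n and an expected loss lam * sigma/sqrt(n), proportional to the
# standard error of the estimate. The total T(n) = c1*n + lam*sigma/sqrt(n)
# is minimized where c1 - lam*sigma/(2*n**1.5) = 0; inverting this condition
# gives the implicit loss that makes a chosen sample size rational.

def implicit_loss(n_star, c1, sigma):
    """Loss per unit standard error that rationalizes sample size n_star."""
    return 2.0 * c1 * n_star ** 1.5 / sigma

def optimal_n(lam, c1, sigma):
    """Sample size minimizing c1*n + lam*sigma/sqrt(n)."""
    return (lam * sigma / (2.0 * c1)) ** (2.0 / 3.0)

lam = implicit_loss(n_star=100, c1=50.0, sigma=8.0)
n_back = optimal_n(lam, c1=50.0, sigma=8.0)  # recovers the chosen n_star
```

Comparing the backed-out loss against the values attributed to the soil properties is then the reflection step the abstract describes.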
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of the correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate the unknown functions, and a modification of the Akaike information criterion is proposed for selecting the knots in the spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Closa, Conxita; Mas, Miquel À; Santaeugènia, Sebastià J; Inzitari, Marco; Ribera, Aida; Gallofré, Miquel
2017-09-01
To compare outcomes and costs for patients with orthogeriatric conditions in a home-based integrated care program versus conventional hospital-based care. Quasi-experimental longitudinal study. An acute care hospital, an intermediate care hospital, and the community of an urban area in the north of Barcelona, in Southern Europe. Over a 2-year period, we recruited 367 older patients treated at an orthopedic/traumatology unit in an acute hospital for fractures and/or arthroplasty. Patients were referred to a hospital-at-home integrated care unit or to a standard hospital-based postacute orthogeriatric unit, based on their social support and the availability of the resource. We compared home-based care versus hospital-based care for Relative Functional Gain (gain/loss of function measured by the Barthel Index), mean direct costs, and potential savings in terms of reduced stay in the acute care hospital. No differences were found in Relative Functional Gain, median (Q25-Q75) = 0.92 (0.64-1.09) in the home-based group versus 0.93 (0.59-1) in the hospital-based group, P =.333. Total health service direct cost [mean (standard deviation)] was significantly lower for patients receiving home-based care: €7120 (3381) versus €12,149 (6322), P < .001. Length of acute hospital stay was significantly shorter in patients discharged to home-based care [10.1 (7) days] than in patients discharged to the postacute orthogeriatric hospital-based unit [15.3 (12) days, P < .001]. The hospital-at-home integrated care program was suitable for managing older patients with orthopedic conditions who have good social support for home care. It provided clinical care comparable to the hospital-based model, and it seems to enable earlier acute hospital discharge and lower direct costs. Copyright © 2017 AMDA – The Society for Post-Acute and Long-Term Care Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Lin, N. J.; Quinn, R. D.
1991-01-01
A locally optimal trajectory management (LOTM) approach is analyzed, and it is found that care should be taken in choosing the Ritz expansion and cost function. A modified cost function for the LOTM approach is proposed which includes the kinetic energy along with the base reactions in a weighted and scaled sum. The effects of the modified cost function are demonstrated with numerical examples for robots operating in two- and three-dimensional space. It is pointed out that this modified LOTM approach shows good performance: the reactions do not fluctuate greatly, joint velocities reach their objectives at the end of the manipulation, and the CPU time is slightly more than twice the manipulation time.
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generation and load demand pose considerable challenges for the energy exchange optimization of microgrids (MGs). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load-consumption and renewable-power-generation uncertainties. In doing so, three different risk-based strategies are distinguished using the conditional value at risk (CVaR) approach. The proposed model has two distinct objective functions. The first minimizes the operation and maintenance costs, the cost of power transactions between the upstream network and the MGs, and the power loss cost, whereas the second minimizes the energy not supplied (ENS). Furthermore, a stochastic scenario-based approach is incorporated to handle the uncertainty, and the Kantorovich distance scenario-reduction method is implemented to reduce the computational burden. Finally, the non-dominated sorting genetic algorithm (NSGA-II) is applied to minimize the objective functions simultaneously, and the best solution is extracted by the fuzzy satisfying method with respect to the risk-based strategies. The proposed model is tested on the modified IEEE 33-bus distribution system, and the results show that the presented approach is an efficient tool for the optimal energy exchange of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
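The CVaR measure behind such risk-based strategies can be illustrated with a simple tail average. This is a sketch with invented scenario costs, not the paper's model:

```python
# Sketch of the conditional value at risk (CVaR) used to build risk-based
# strategies: at confidence level alpha, CVaR is the mean of the worst
# (1 - alpha) fraction of scenario costs. Scenario costs are invented.

def cvar(costs, alpha):
    """Average of the worst (1 - alpha) share of scenario costs."""
    worst = sorted(costs, reverse=True)
    k = max(1, int(round(len(costs) * (1.0 - alpha))))
    return sum(worst[:k]) / k

scenario_costs = [90, 100, 110, 95, 180, 105, 250, 98, 102, 97]
expected_cost = sum(scenario_costs) / len(scenario_costs)  # risk-neutral view
tail_cost = cvar(scenario_costs, alpha=0.8)                # risk-averse view
```

A risk-neutral strategy optimizes the expected cost, while a risk-averse strategy penalizes the CVaR tail, which is why the two can rank the same set of scenarios differently.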
Kray, Jutta
2006-08-11
Adult age differences in task switching and advance preparation were examined by comparing cue-based and memory-based switching conditions. Task switching was assessed by determining two types of costs that occur at the general (mixing costs) and specific (switching costs) level of switching. Advance preparation was investigated by varying the time interval until the next task (short, middle, very long). Results indicated that the implementation of task sets was different for cue-based switching with random task sequences and memory-based switching with predictable task sequences. Switching costs were strongly reduced under cue-based switching conditions, indicating that task-set cues facilitate the retrieval of the next task. Age differences were found for mixing costs and for switching costs only under cue-based conditions in which older adults showed smaller switching costs than younger adults. It is suggested that older adults adopt a less extreme bias between two tasks than younger adults in situations associated with uncertainty. For cue-based switching with random task sequences, older adults are less engaged in a complete reconfiguration of task sets because of the probability of a further task change. Furthermore, the reduction of switching costs was more pronounced for cue- than memory-based switching for short preparation intervals, whereas the reduction of switch costs was more pronounced for memory- than cue-based switching for longer preparation intervals at least for older adults. Together these findings suggest that the implementation of task sets is functionally different for the two types of task-switching conditions.
Technoeconomic aspects of alternative municipal solid wastes treatment methods.
Economopoulos, Alexander P
2010-04-01
This paper considers selected treatment technologies for commingled domestic and similar wastes and provides technoeconomic data and information useful for the development of strategic management plans. For this purpose, treatment technologies of interest are reviewed; representative flow diagrams, along with material and energy balances, are presented for the typical composition of wastes in Greece; possible difficulties in the use of treatment products, along with their management implications, are discussed; and cost functions are developed, allowing assessment of the initial capital investment and annual operating costs. Based on the latter, cost functions are developed for predicting the normalized treatment costs of the alternative methods (in euro/t of MSW treated) as a function of the quantity of MSW processed by plants built and operated (a) by municipality associations and (b) by private enterprises. Finally, the alternative technologies considered are evaluated on the basis of their cost aspects, product utilization, and compatibility with the EU Waste Framework Directive 2008/98. Copyright 2009 Elsevier Ltd. All rights reserved.
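Normalized treatment-cost functions of this kind typically encode economies of scale. A hedged sketch with a hypothetical power-law capital term (the coefficients are invented, not the paper's fitted values):

```python
# Hypothetical power-law cost function of the kind developed above: normalized
# treatment cost (euro/t) as a function of annual throughput Q (t/yr). An
# exponent below 1 on the capital term encodes economies of scale, so the
# unit cost falls as plant size grows. All coefficients are invented.

def unit_treatment_cost(q_tonnes, k=5000.0, m=0.7, opex_per_t=25.0):
    """euro/t: annualized capital cost k*Q^m spread over Q tonnes, plus opex."""
    return k * q_tonnes ** m / q_tonnes + opex_per_t

small_plant = unit_treatment_cost(50_000)    # municipality-scale throughput
large_plant = unit_treatment_cost(500_000)   # enterprise-scale throughput
```

This scale effect is the quantitative reason the abstract distinguishes plants run by municipality associations from larger private-enterprise plants.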
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper concerns the minimization of the total cost of greenhouse gas (GHG) efficiency in an automated storage and retrieval system (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of the GHG emissions of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
Optical Metrology for CIGS Solar Cell Manufacturing and its Cost Implications
NASA Astrophysics Data System (ADS)
Sunkoju, Sravan Kumar
Solar energy is a promising source of renewable energy which can meet the demand for clean energy in the near future, given advances in photovoltaics research and cost reduction through commercialization. The availability of non-contact, in-line, real-time, robust process-control strategies can greatly aid in reducing the gap between cell and module efficiencies, thereby leading to cost-effective large-scale manufacturing of high-efficiency CIGS solar cells. For proper process monitoring and control during the deposition of the functional layers of CuIn1-xGaxSe2 (CIGS) based thin-film solar cells, optical techniques such as spectroscopic reflectometry and polarimetry are advantageous because they can be set up unobtrusively in the manufacturing line and collect data in-line and in-situ. The use of these techniques requires accurate optical models that correctly represent the properties of the layers being deposited. In this study, spectroscopic ellipsometry (SE) has been applied to characterize each individual stage of CIGS layers deposited using the 3-stage co-evaporation process, along with the other functional layers. Dielectric functions have been determined for the energy range from 0.7 eV to 5.1 eV. Critical-point line-shape analysis was used to determine the critical-point energies of the CIGS-based layers. To control the compositional and thickness uniformity of all the functional layers during the fabrication of CIGS solar cells over large areas, multilayer photovoltaic (PV) stack optical models were developed from the extracted dielectric functions. The mapping capability of an RC2 spectroscopic ellipsometer was used to map all the functional-layer thicknesses of a CIGS solar cell in order to probe the spatial non-uniformities that can affect cell performance.
The optical functions derived for each stage of the CIGS 3-stage deposition process, along with those for the buffer layer and the transparent conducting oxide (TCO) bi-layer, were used in a fiber-optic-based spectroscopic reflectometry monitoring system installed in the pilot line at PVMC's Halfmoon facility. Results from this study show that regular fiber optics, instead of polarization-maintaining fiber optics, are sufficient for process monitoring. The technique also need not be used in-situ: measurements can be taken in-line and are applicable to a variety of deposition techniques for different functional layers on rigid or flexible substrates. In addition, the effect of Cu concentration on the CIGS optical properties has been studied. A mixed CIGS/Cu2-xSe phase was observed at the surface at the end of the second stage of the 3-stage deposition process under Cu-rich conditions. The significant change in the optical behavior of CIGS due to surface Cu2-xSe under Cu-rich conditions can be used as an end-point detection method for moving from the 2nd to the 3rd stage of the deposition process. The developed optical functions were applied to in-line reflectance measurements not only to identify the Cu2-xSe phase at the surface but also to measure the thickness of the mixed CIGS/Cu2-xSe layer. This spectroscopic-reflectometry-based in-line process-control technique can be used for end-point detection as well as thickness control during the preparation of large-area CIGS films. These results can assist in the development of optical process-control tools for the manufacturing of high-quality CIGS-based photovoltaic cells, increasing the uptime and yield of the production line. Finally, to understand the cost implications, the low-cost potential of two different deposition technologies has been studied on both rigid and flexible substrates with the help of cost analysis.
The cost advantages of employing a contactless, optics-based process-control technique have been investigated with the aim of achieving a low cost of < 0.5 $/W for CIGS module production. Based on the cost analysis, one of the best strategies for achieving the low-cost targets would be to increase manufacturing throughput using roll-to-roll thin-film module manufacturing, with co-evaporation and chemical bath deposition for the absorber and buffer layers, respectively, while applying a low-cost process-control technique such as spectroscopic reflectometry to improve module efficiencies and maintain high yield.
NASA Astrophysics Data System (ADS)
Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Riswanto; Budiman, Rachmat
2017-07-01
The green building concept has become important in the building life cycle as a way to mitigate environmental issues. The purpose of this paper is to optimize building construction performance with respect to the green building premium cost, achieving the green building rating target while optimizing life cycle cost. This study therefore helps building stakeholders determine the building fixtures needed to achieve a green building certification target. Empirically, the paper collects data on green buildings in the Indonesian construction industry, such as green building fixtures, initial cost, operational and maintenance cost, and certification score achievement. Green building fixtures were then optimized using the value engineering method, based on building function and cost aspects. The findings indicate that construction performance optimization affected green building achievement by increasing energy and water efficiency factors and improving life cycle cost effectiveness, especially for the chosen green building fixtures.
Ku, Li-Jung Elizabeth; Pai, Ming-Chyi; Shih, Pei-Yu
2016-01-01
Given the shortage of cost-of-illness studies in dementia outside of Western populations, the current study estimated the annual cost of dementia in Taiwan and assessed whether different categories of care costs vary by severity using multiple disease-severity measures. This study included 231 dementia patient-caregiver dyads in a dementia clinic at a national university hospital in southern Taiwan. Three disease measures, covering cognitive, functional, and behavioral disturbances, were obtained from patients based on medical history. A societal perspective was used to estimate the total costs of dementia according to three cost sub-categories. The association between dementia severity and cost of care was examined through bivariate and multivariate analyses. Total costs of care for moderate dementia patients were 1.4 times the costs for mild dementia and doubled from mild to severe dementia among our community-dwelling dementia sample. Multivariate analysis indicated that functional decline had a greater impact on all cost outcomes than behavioral disturbance, which showed no impact on any costs. Informal care costs accounted for the greatest share of the total cost of care for both mild (42%) and severe (43%) dementia patients. Since the total costs of dementia increase with severity, providing care to delay disease progression, with a focus on maintaining patients' physical function, may reduce the overall cost of dementia. The greater contribution of informal care to total costs as opposed to social care also suggests a need for more publicly funded long-term care services to assist family caregivers of dementia patients in Taiwan.
Liu, Lei; Wang, Zhanshan; Zhang, Huaguang
2018-04-01
This paper is concerned with a robust optimal tracking control strategy for a class of nonlinear multi-input multi-output discrete-time systems with unknown uncertainty via an adaptive critic design (ACD) scheme. The main purpose is to establish an adaptive actor-critic control method, so that the cost function in the procedure of dealing with uncertainty is minimized and the closed-loop system is stable. Based on the neural network approximator, an action network is applied to generate the optimal control signal and a critic network is used to approximate the cost function, respectively. In contrast to previous methods, the main features of this paper are: 1) the ACD scheme is integrated into the controllers to cope with the uncertainty and 2) a novel cost function, which is not in quadratic form, is proposed so that the total cost in the design procedure is reduced. It is proved that the optimal control signals and the tracking errors are uniformly ultimately bounded even when the uncertainty exists. Finally, a numerical simulation is developed to show the effectiveness of the present approach.
Christensen, Michael Cronquist; Munro, Vicki
2018-04-01
To determine the cost-effectiveness of vortioxetine vs duloxetine in adults with moderate-to-severe major depressive disorder (MDD) in Norway using a definition of a successfully treated patient (STP) that incorporates improvement in both mood symptoms and functional capacity. Using the population of patients who completed the 8-week CONNECT study, the cost-effectiveness of vortioxetine (n = 168) (10-20 mg/day) vs duloxetine (n = 176) (60 mg/day) was investigated for the treatment of adults in Norway with moderate-to-severe MDD and self-reported cognitive dysfunction over an 8-week treatment period. Cost-effectiveness was assessed in terms of cost per STP, defined as improvement in mood symptoms (≥50% decrease from baseline in Montgomery-Åsberg Depression Rating Scale total score) and change in UCSD [University of California San Diego] performance-based skills assessment [UPSA] score of ≥7. The base case analysis utilized pharmacy retail price (apotek utsalgspris (AUP)) for branded vortioxetine (Brintellix) and branded duloxetine (Cymbalta). After 8 weeks of antidepressant therapy, there were more STPs with vortioxetine than with duloxetine (27.4% vs 22.5%, respectively). The mean number needed to treat for each STP was 3.6 for vortioxetine and 4.4 for duloxetine, resulting in a lower mean cost per STP for vortioxetine (NOK [Norwegian Kroner] 3264) than for duloxetine (NOK 3310) and an incremental cost per STP of NOK 3051. The use of a more challenging change in the UPSA score from baseline (≥9) resulted in a mean cost per STP of NOK 3822 for vortioxetine compared with NOK 3983 for duloxetine and an incremental cost per STP of NOK 3181. Vortioxetine may be a cost-effective alternative to duloxetine, owing to its superior ability to improve functional capacity. The dual-response STP concept introduced here represents a more comprehensive analysis of the cost-effectiveness of antidepressants.
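The reported numbers needed to treat follow directly from the STP rates (NNT = 1/rate), and the cost-per-STP logic can be sketched as below. Only the 27.4% and 22.5% STP rates come from the abstract; the 8-week course costs are hypothetical placeholders, not the Norwegian AUP prices:

```python
# Back-of-the-envelope reconstruction of the cost-per-STP logic above. Only
# the STP rates (27.4% and 22.5%) come from the abstract; the 8-week course
# costs below are hypothetical placeholders, not the actual AUP prices.

def number_needed_to_treat(stp_rate):
    """Patients treated per successfully treated patient (STP)."""
    return 1.0 / stp_rate

def cost_per_stp(course_cost, stp_rate):
    """Mean treatment cost incurred per STP."""
    return course_cost * number_needed_to_treat(stp_rate)

nnt_vortioxetine = number_needed_to_treat(0.274)   # ~3.6, as reported
nnt_duloxetine = number_needed_to_treat(0.225)     # ~4.4, as reported

# Hypothetical 8-week drug-acquisition costs (NOK) for illustration only
cps_vortioxetine = cost_per_stp(900.0, 0.274)
cps_duloxetine = cost_per_stp(750.0, 0.225)
```

A higher STP rate can thus yield a lower cost per STP even for a more expensive drug, which is the mechanism behind the abstract's conclusion.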
Improved Evolutionary Programming with Various Crossover Techniques for Optimal Power Flow Problem
NASA Astrophysics Data System (ADS)
Tangpatiphan, Kritsana; Yokoyama, Akihiko
This paper presents an Improved Evolutionary Programming (IEP) method for solving the Optimal Power Flow (OPF) problem, which is a non-linear, non-smooth, and multimodal optimization problem in power system operation. The total generator fuel cost is the objective function to be minimized. The proposed method is an Evolutionary Programming (EP)-based algorithm that makes use of various crossover techniques normally applied in Real Coded Genetic Algorithms (RCGA). The effectiveness of the proposed approach is investigated on the IEEE 30-bus system with three different types of fuel cost functions: the quadratic cost curve, the piecewise quadratic cost curve, and the quadratic cost curve with a superimposed sine component. These three curves represent the generator fuel cost, respectively, with a simplified model and with more accurate models of a combined-cycle generating unit and of a thermal unit with the valve-point loading effect. The OPF solutions obtained by the proposed method and by Pure Evolutionary Programming (PEP) are compared. The simulation results indicate that IEP requires less computing time than PEP, with better solutions in some cases. Moreover, the influence of important IEP parameters on the OPF solution is described in detail.
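The third fuel-cost model, a quadratic curve with a superimposed sine term, is the standard way of modeling the valve-point loading effect. A sketch with illustrative coefficients (not the IEEE 30-bus data):

```python
import math

# Sketch of two of the fuel-cost models named above for a single generating
# unit: a smooth quadratic curve, and the same quadratic with a superimposed
# sine term modeling the valve-point loading effect of a thermal unit.
# All coefficients are illustrative, not the IEEE 30-bus data.

def quadratic_cost(p, a=0.01, b=2.0, c=10.0):
    """F(P) = a*P^2 + b*P + c, in $/h for output P in MW."""
    return a * p * p + b * p + c

def valve_point_cost(p, p_min=20.0, a=0.01, b=2.0, c=10.0, e=30.0, f=0.1):
    """Quadratic cost plus the |e*sin(f*(Pmin - P))| valve-point ripple."""
    return quadratic_cost(p, a, b, c) + abs(e * math.sin(f * (p_min - p)))

# The ripple makes the curve non-smooth and multimodal, which is why
# derivative-free methods such as evolutionary programming are attractive.
smooth = quadratic_cost(100.0)
rippled = valve_point_cost(100.0)
```

The absolute-value sine term introduces non-differentiable kinks and many local minima, which is exactly the non-smooth, multimodal character the abstract attributes to the OPF problem.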
Kloek, Corelien J J; Bossen, Daniël; Veenhof, Cindy; van Dongen, Johanna M; Dekker, Joost; de Bakker, Dinny H
2014-08-08
Exercise therapy in patients with hip and/or knee osteoarthritis is effective in reducing pain and increasing physical activity and physical functioning, but it is costly and a burden on the health care budget. A web-based intervention is cheap in comparison with face-to-face exercise therapy and has the advantage of supporting home exercises because of its 24/7 accessibility. However, the lack of face-to-face contact with a professional is a disadvantage of web-based interventions and is probably one of the reasons for low adherence rates. In order to combine the best of both worlds, we have developed the intervention e-Exercise. In this blended intervention, face-to-face contacts with a physical therapist are partially replaced by a web-based exercise intervention. The aim of this study is to investigate the short-term (3 months) and long-term (12 months) (cost-)effectiveness of e-Exercise compared with usual care physical therapy. Our hypothesis is that e-Exercise is more effective and more cost-effective in increasing physical functioning and physical activity than usual care. This paper presents the protocol of a prospective, single-blinded, multicenter cluster randomized controlled trial. In total, 200 patients with OA of the hip and/or knee will be randomly allocated to either e-Exercise or usual care (physical therapy). E-Exercise is a 12-week intervention consisting of a maximum of five face-to-face physical therapy contacts supplemented with a web-based program. The web-based program contains assignments to gradually increase patients' physical activity, strength and stability exercises, and information about OA-related topics. Primary outcomes are physical activity and physical functioning. Secondary outcomes are health-related quality of life, self-perceived effect, pain, tiredness and self-efficacy. All measurements will be performed at baseline and at 3 and 12 months after inclusion.
Retrospective cost questionnaires will be sent at 3, 6, 9 and 12 months and used for the cost-effectiveness and cost-utility analyses. This study is the first randomized controlled trial investigating the (cost-)effectiveness of a blended exercise intervention for patients with osteoarthritis of the hip and/or knee. The findings will help to improve the treatment of patients with osteoarthritis. Trial registration: NTR4224.
Gajana Bhat; John Bergsrom; R. Jeff Teasley
1998-01-01
This paper describes a framework for estimating the economic value of outdoor recreation across different ecoregions. Ten ecoregions in the continental United States were defined based on similarly functioning ecosystem characteristics. The individual travel cost method was employed to estimate recreation demand functions for activities such...
Kobelt, G; Lindgren, P; Singh, A; Klareskog, L
2005-08-01
To estimate the cost effectiveness of combination treatment with etanercept plus methotrexate in comparison with monotherapies in patients with active rheumatoid arthritis (RA) using a new model that incorporates both functional status and disease activity. Effectiveness data were based on a 2 year trial in 682 patients with active RA (TEMPO). Data on resource consumption and utility related to function and disease activity were obtained from a survey of 616 patients in Sweden. A Markov model was constructed with five states according to functional status (Health Assessment Questionnaire (HAQ)) subdivided into high and low disease activity. The cost for each quality adjusted life year (QALY) gained was estimated by Monte Carlo simulation. Disease activity had a highly significant effect on utilities, independently of HAQ. For resource consumption, only HAQ was a significant predictor, with the exception of sick leave. Compared with methotrexate alone, etanercept plus methotrexate over 2 years increased total costs by 14,221 euros and led to a QALY gain of 0.38. When treatment was continued for 10 years, incremental costs were 42,148 euros for a QALY gain of 0.91. The cost per QALY gained was 37,331 euros and 46,494 euros, respectively. The probability that the cost effectiveness ratio is below a threshold of 50,000 euros/QALY is 88%. Incorporating the influence of disease activity into this new model allows better assessment of the effects of anti-tumour necrosis factor treatment on patients' general wellbeing. In this analysis, the cost per QALY gained with combination treatment with etanercept plus methotrexate compared with methotrexate alone falls within the acceptable range.
Elements of Designing for Cost
NASA Technical Reports Server (NTRS)
Dean, Edwin B.; Unal, Resit
1992-01-01
During recent history in the United States, government systems development has been performance driven. As a result, systems within a class have experienced exponentially increasing cost over time in fixed-year dollars. Moreover, little emphasis has been placed on reducing cost. This paper defines designing for cost and presents several tools which, if used in the engineering process, offer the promise of reducing cost. Although other potential tools exist for designing for cost, this paper focuses on rules of thumb, quality function deployment, Taguchi methods, concurrent engineering, and activity-based costing. Each of these tools has been demonstrated to reduce cost if used within the engineering process.
Mixed H(sub 2)/H-infinity control with output-feedback compensators using parameter optimization
NASA Technical Reports Server (NTRS)
Schoemig, Ewald; Ly, Uy-Loi
1992-01-01
Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H(sub 2)/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H(sub 2)/H-infinity control problem. The approach developed in this research is based on a finite-time cost functional that depicts an H-infinity bound control problem in an H(sub 2)-optimization setting. The goal is to define a time-domain cost function that optimizes the H(sub 2)-norm of a system with an H-infinity constraint function.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
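The response-surface strategy can be illustrated in miniature: fit a cheap surrogate through a few expensive objective evaluations, then search the surrogate instead of the simulation. The quadratic interpolant below is an assumed stand-in for the paper's neural network, and the objective is an invented stand-in for a costly flow solver:

```python
# Miniature illustration (assumed, not the paper's code) of the response-
# surface strategy: fit a cheap surrogate through a few expensive objective
# evaluations, then search the surrogate for the next design point. A
# quadratic interpolant stands in for the neural network used in the paper.

def quadratic_through(p1, p2, p3):
    """Coefficients (a, b, c) of the parabola through three (x, y) samples."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3 ** 2 * (y1 - y2) + x2 ** 2 * (y3 - y1) + x1 ** 2 * (y2 - y3)) / denom
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / denom
    return a, b, c

def expensive_objective(x):
    """Invented stand-in for a costly flow simulation; minimum at x = 1.5."""
    return (x - 1.5) ** 2 + 2.0

samples = [(x, expensive_objective(x)) for x in (0.0, 1.0, 3.0)]
a, b, c = quadratic_through(*samples)
next_design = -b / (2.0 * a)  # vertex of the surrogate: candidate optimum
```

In practice the surrogate is refit around each candidate and the expensive code is queried only at the most promising points, which is the source of the cost reductions the abstract reports.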
Market Mechanism Design for Renewable Energy based on Risk Theory
NASA Astrophysics Data System (ADS)
Yang, Wu; Bo, Wang; Jichun, Liu; Wenjiao, Zai; Pingliang, Zeng; Haobo, Shi
2018-02-01
Generation trading between renewable energy and thermal power is an efficient market means of transforming the electric power supply structure toward a sustainable development pattern. At present, however, such trading is hampered by the output fluctuations of renewable energy and the cost differences between renewable energy and thermal power. In this paper, the external environmental cost (EEC) is defined and introduced into the generation cost. At the same time, incentive functions for renewable energy and low-emission thermal power are designed as decreasing functions of EEC. On this basis, to address the market risks caused by the random variability of EEC, a decision-making model of generation trading between renewable energy and thermal power is constructed according to risk theory. The feasibility and effectiveness of the proposed model are verified by simulation results.
Lundqvist, Christofer; Beiske, Antonie Giæver; Reiertsen, Ola; Kristiansen, Ivar Sønbø
2014-12-01
Advanced-stage Parkinson's disease (PD) strongly affects quality of life (QoL). Continuous intraduodenal administration of levodopa (IDL) is efficacious, but entails high costs. This study aims to estimate these costs in routine care. Ten patients with advanced PD who switched from oral medication to IDL were assessed at baseline, and subsequently at 3, 6, 9 and 12 months follow-up. We used the Unified PD Rating Scale (UPDRS) for function and the 15D for QoL. Costs were assessed using quarterly structured patient questionnaires and hospital registries. Costs per quality-adjusted life year (QALY) were estimated for conventional treatment prior to the switch and for 1-year treatment with IDL. Probabilistic sensitivity analysis was based on bootstrapping. IDL significantly improved functional scores and was safe to use. One-year conventional oral treatment yielded 0.63 QALY while IDL yielded 0.68 (p > 0.05). The estimated total 1-year treatment cost was NOK419,160 on conventional treatment and NOK890,920 on IDL, representing a cost of NOK9.2 million (€1.18 mill) per additional QALY. The incremental cost per unit UPDRS improvement was NOK25,000 (€3,250). Medication was the dominant cost during IDL (45% of total costs), whereas it represented only 6.4% of the total for conventional treatment. IDL improves function but is not cost effective using recommended thresholds for cost/QALY in Norway.
["Activity based costing" in radiology].
Klose, K J; Böttcher, J
2002-05-01
The introduction of diagnosis related groups for reimbursement of hospital services in Germany (G-DRG) demands a reconsideration of the utilization of radiological products and the costs related to them. Traditional cost accounting, as an approach to internal, department-related budgets, is compared with the accounting method of activity based costing (ABC). The steps necessary to implement ABC in radiology are developed. The introduction of a process-oriented cost analysis is feasible for radiology departments. ABC plays a central role in the set-up of decentralized controlling functions within these institutions. The implementation seems to be a strategic challenge for department managers to obtain more appropriate data for sound enterprise decisions. The necessary steps of process analysis can be used for other purposes (certification, digital migration) as well.
Opportunities in SMR Emergency Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moe, Wayne L.
2014-10-01
Using year 2014 cost information gathered from twenty different locations within the current commercial nuclear power station fleet, an assessment was performed concerning compliance costs associated with the offsite emergency Planning Standards contained in 10 CFR 50.47(b). The study was conducted to quantitatively determine the potential cost benefits realized if an emergency planning zone (EPZ) were reduced in size according to the lowered risks expected to accompany small modular reactors (SMR). Licensees are required to provide a technical basis when proposing to reduce the surrounding EPZ size to less than the 10 mile plume exposure and 50 mile ingestion pathway distances currently being used. To assist licensees in assessing the savings that might be associated with such an action, this study established offsite emergency planning costs in connection with four discrete EPZ boundary distances, i.e., site boundary, 2 miles, 5 miles and 10 miles. The boundary selected by the licensee would be based on where EPA Protective Action Guidelines are no longer likely to be exceeded. Additional consideration was directed towards costs associated with reducing the 50 mile ingestion pathway EPZ. The assessment methodology consisted of gathering actual capital costs and annual operating and maintenance costs for offsite emergency planning programs at the surveyed sites, partitioning them according to key predictive factors, and allocating those portions to individual emergency Planning Standards as a function of EPZ size. Two techniques, an offsite population-based approach and an area-based approach, were then employed to calculate the scaling factors which enabled cost projections as a function of EPZ size. Site-specific factors that influenced source data costs, such as the effects of supplemental funding to external state and local agencies for offsite response organization activities, were incorporated into the analysis to the extent those factors could be representatively apportioned.
Terminal spacecraft rendezvous and capture with LASSO model predictive control
NASA Astrophysics Data System (ADS)
Hartley, Edward N.; Gallieri, Marco; Maciejowski, Jan M.
2013-11-01
The recently investigated ℓasso model predictive control (MPC) is applied to the terminal phase of a spacecraft rendezvous and capture mission. The interaction between the cost function and the treatment of minimum impulse bit is also investigated. The propellant consumption with ℓasso MPC for the considered scenario is noticeably less than with a conventional quadratic cost and control actions are sparser in time. Propellant consumption and sparsity are competitive with those achieved using a zone-based ℓ1 cost function, whilst requiring fewer decision variables in the optimisation problem than the latter. The ℓasso MPC is demonstrated to meet tighter specifications on control precision and also avoids the risk of undesirable behaviours often associated with pure ℓ1 stage costs.
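The sparsity-promoting ℓ1 ("lasso") input penalty described above can be sketched with a toy linear system and a proximal-gradient (ISTA) solver. The system matrices, horizon, and penalty weight below are illustrative assumptions, not the paper's rendezvous model:

```python
import numpy as np

# Sketch of an l1-penalized ("lasso") finite-horizon input-planning
# problem, solved with ISTA (proximal gradient). All numbers are
# illustrative assumptions, not the paper's spacecraft model.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # toy double integrator
B = np.array([[0.5], [1.0]])
N = 20                                    # horizon length
lam = 0.5                                 # l1 weight on the inputs
x0 = np.array([5.0, 0.0])                 # initial state

# Stack the dynamics: x_{k+1} = A^{k+1} x0 + sum_j A^{k-j} B u_j,
# so the predicted state trajectory is X = F x0 + G u.
n, m = A.shape[0], B.shape[1]
F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
G = np.zeros((N * n, N * m))
for k in range(N):
    for j in range(k + 1):
        G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B

def cost(v):
    """J(u) = ||G u + F x0||^2 + lam * ||u||_1"""
    e = G @ v + F @ x0
    return e @ e + lam * np.abs(v).sum()

def soft(v, t):
    """Proximal operator of t * |.|_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

L = 2.0 * np.linalg.norm(G.T @ G, 2)      # Lipschitz constant of the gradient
u = np.zeros(N * m)
for _ in range(2000):
    grad = 2.0 * G.T @ (G @ u + F @ x0)
    u = soft(u - grad / L, lam / L)

sparsity = np.mean(np.abs(u) < 1e-9)      # fraction of exactly-zero inputs
```

The soft-thresholding step is what produces control actions that are sparse in time, mirroring the minimum-impulse-bit behaviour the abstract discusses.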
NASA Astrophysics Data System (ADS)
Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian
2017-08-01
In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variation, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
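The idea of treating switching times as the only decision variables can be sketched on a deterministic two-mode scalar system; the dynamics, stage-cost weights, and the finite-difference gradient below are illustrative assumptions standing in for the paper's variational gradient:

```python
import numpy as np

# Toy switching-time optimization: dx/dt = a_i * x in mode i, with
# mode-dependent running cost q_i * x(t)^2, mode 1 on [0, tau] and
# mode 2 on [tau, T]. The switching instant tau is the only decision
# variable. All constants are illustrative assumptions.
a1, a2 = 1.0, -2.0        # mode 1 unstable, mode 2 stable
q1, q2 = 1.0, 1.0         # stage-cost weights per mode
x0, T = 1.0, 2.0

def J(tau):
    """Closed-form cost: integrate q_i * x(t)^2 through each mode."""
    xt2 = x0**2 * np.exp(2*a1*tau)                        # x(tau)^2
    c1 = q1 * x0**2 * (np.exp(2*a1*tau) - 1) / (2*a1)
    c2 = q2 * xt2 * (np.exp(2*a2*(T - tau)) - 1) / (2*a2)
    return c1 + c2

# Gradient descent on tau (finite differences stand in for the
# analytic gradient derived in the paper), projected onto [0, T].
tau, step, h = 0.5 * T, 0.05, 1e-6
for _ in range(200):
    g = (J(tau + h) - J(tau - h)) / (2*h)
    tau = min(max(tau - step * g, 0.0), T)
```

With these numbers the descent drives the switch toward the stable mode as early as possible, i.e. tau moves toward 0.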
Cost-effectiveness of a classification-based system for sub-acute and chronic low back pain.
Apeldoorn, Adri T; Bosmans, Judith E; Ostelo, Raymond W; de Vet, Henrica C W; van Tulder, Maurits W
2012-07-01
Identifying relevant subgroups in patients with low back pain (LBP) is considered important to guide physical therapy practice and to improve outcomes. The aim of the present study was to assess the cost-effectiveness of a modified version of Delitto's classification-based treatment approach compared with usual physical therapy care in patients with sub-acute and chronic LBP with 1 year follow-up. All patients were classified using the modified version of Delitto's classification-based system and then randomly assigned to receive either classification-based treatment or usual physical therapy care. The main clinical outcomes measured were; global perceived effect, intensity of pain, functional disability and quality of life. Costs were measured from a societal perspective. Multiple imputations were used for missing data. Uncertainty surrounding cost differences and incremental cost-effectiveness ratios was estimated using bootstrapping. Cost-effectiveness planes and cost-effectiveness acceptability curves were estimated. In total, 156 patients were included. The outcome analyses showed a significantly better outcome on global perceived effect favoring the classification-based approach, and no differences between the groups on pain, disability and quality-adjusted life-years. Mean total societal costs for the classification-based group were
NASA Astrophysics Data System (ADS)
Turko, Nir A.; Isbach, Michael; Ketelhut, Steffi; Greve, Burkhard; Schnekenburger, Jürgen; Shaked, Natan T.; Kemper, Björn
2017-02-01
We explored photothermal quantitative phase imaging (PTQPI) of living cells with functionalized nanoparticles (NPs) utilizing a cost-efficient setup based on a cell culture microscope. The excitation light was modulated by a mechanical chopper wheel with low frequencies. Quantitative phase imaging (QPI) was performed with Michelson interferometer-based off-axis digital holographic microscopy and a standard industrial camera. We present results from PTQPI observations on breast cancer cells that were incubated with functionalized gold NPs binding to the epidermal growth factor receptor. Moreover, QPI was used to quantify the impact of the NPs and the low frequency light excitation on cell morphology and viability.
Wu, Ling; Liu, Xiang-Nan; Zhou, Bo-Tian; Liu, Chuan-Hao; Li, Lu-Feng
2012-12-01
This study analyzed the sensitivities of three vegetation biochemical parameters [chlorophyll content (Cab), leaf water content (Cw), and leaf area index (LAI)] to the changes of canopy reflectance, with the effects of each parameter on the wavelength regions of canopy reflectance considered, and selected three vegetation indices as the optimization comparison targets of cost function. Then, the Cab, Cw, and LAI were estimated, based on the particle swarm optimization algorithm and PROSPECT + SAIL model. The results showed that retrieval efficiency with vegetation indices as the optimization comparison targets of cost function was better than that with all spectral reflectance. The correlation coefficients (R²) between the measured and estimated values of Cab, Cw, and LAI were 90.8%, 95.7%, and 99.7%, and the root mean square errors of Cab, Cw, and LAI were 4.73 µg·cm⁻², 0.001 g·cm⁻², and 0.08, respectively. It was suggested that to adopt vegetation indices as the optimization comparison targets of cost function could effectively improve the efficiency and precision of the retrieval of biochemical parameters based on PROSPECT + SAIL model.
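The inversion scheme above (a particle swarm minimizing a cost built from vegetation indices) can be sketched as follows. The two-parameter forward model and the NDVI-style index below are toy stand-ins for PROSPECT + SAIL; every constant is an illustrative assumption:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch for model inversion.
# A toy two-parameter forward model stands in for PROSPECT + SAIL, and
# the cost compares a vegetation index computed from simulated vs.
# "observed" reflectance. All numbers are illustrative assumptions.
rng = np.random.default_rng(0)

def forward(p):
    """Toy reflectance [red, nir] as a function of (Cab, LAI)."""
    cab, lai = p
    red = 0.3 * np.exp(-0.05 * cab) * np.exp(-0.4 * lai)
    nir = 0.5 * (1 - np.exp(-0.6 * lai))
    return np.array([red, nir])

def ndvi(r):
    """One vegetation index used as the comparison target."""
    red, nir = r
    return (nir - red) / (nir + red)

truth = np.array([40.0, 3.0])             # synthetic "measured" Cab, LAI
obs = ndvi(forward(truth))

def cost(p):
    return (ndvi(forward(p)) - obs) ** 2

lo, hi = np.array([10.0, 0.5]), np.array([80.0, 6.0])
n = 30
pos = rng.uniform(lo, hi, size=(n, 2))    # particle positions
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_c = np.array([cost(p) for p in pos])
gbest = pbest[pbest_c.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    c = np.array([cost(p) for p in pos])
    better = c < pbest_c
    pbest[better], pbest_c[better] = pos[better], c[better]
    gbest = pbest[pbest_c.argmin()].copy()
```

The real study uses three indices and three parameters; the swarm mechanics (personal/global bests, inertia, clipping to bounds) are the same.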
A Survey of Reliability, Maintainability, Supportability, and Testability Software Tools
1991-04-01
designs in terms of their contributions toward forced mission termination and vehicle or function loss. Includes the ability to treat failure modes of... ABSTRACT: Inputs: MTBFs, MTTRs, support equipment costs, equipment weights and costs, available targets, military occupational specialty skill level and... US Army CECOM NAME: SPARECOST ABSTRACT: Calculates expected number of failures and performs spares holding optimization based on cost, weight, or
Ross, Robert H; Callas, Peter W; Sargent, Jesse Q; Amick, Benjamin C; Rooney, Ted
2006-12-01
Work related musculoskeletal disorders (WRMSDs) remain costly. The Worker-Based Outcomes Assessment System (WBOAS) is an injury treatment improvement tool. Its purpose is to increase treatment effectiveness and decrease the cost of care delivered in Occupational Health Service clinics. The study used a non-randomized (parallel cohort) control trial design to test the effects on injured employee outcomes of augmenting the standard care delivered by physical and occupational therapists (PT/OTs) with the WBOAS. The WBOAS works by putting patient-reported functional health status, pain symptom, and work role performance outcomes data into the hands of PT/OTs and their patients. Test clinic therapists were trained to incorporate WBOAS trends data into standard practice. Control clinic therapists delivered standard care alone. WBOAS-augmented PT/OT care did improve (p ≤ 0.05) physical functioning and new injury/re-injury avoidance and, on these same dimensions, cost-adjusted outcome. It did not improve (p > 0.05) mental health or pain symptoms or return-to-work or stay-at-work success nor, on these same dimensions, cost-adjusted outcome. Training PT/OTs to incorporate patient-reported health status, pain symptom, and work role performance outcomes trends data into standard practice does appear to improve treatment effectiveness and cost on some (e.g. physical functioning) but not other (e.g. mental health, pain symptoms) outcomes.
Cost drivers and resource allocation in military health care systems.
Fulton, Larry; Lasdon, Leon S; McDaniel, Reuben R
2007-03-01
This study illustrates the feasibility of incorporating technical efficiency considerations in the funding of military hospitals and identifies the primary drivers for hospital costs. Secondary data collected for 24 U.S.-based Army hospitals and medical centers for the years 2001 to 2003 are the basis for this analysis. Technical efficiency was measured by using data envelopment analysis; subsequently, efficiency estimates were included in logarithmic-linear cost models that specified cost as a function of volume, complexity, efficiency, time, and facility type. These logarithmic-linear models were compared against stochastic frontier analysis models. A parsimonious, three-variable, logarithmic-linear model composed of volume, complexity, and efficiency variables exhibited a strong linear relationship with observed costs (R² = 0.98). This model also proved reliable in forecasting (R² = 0.96). Based on our analysis, as much as $120 million might be reallocated to improve the United States-based Army hospital performance evaluated in this study.
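A logarithmic-linear cost model of the kind described can be fit by ordinary least squares on the logs of the predictors. The synthetic hospital-year data, coefficients, and noise level below are illustrative assumptions, not the study's data:

```python
import numpy as np

# Sketch of the three-variable logarithmic-linear cost model:
#   log(cost) = b0 + b1*log(volume) + b2*log(complexity) + b3*log(efficiency)
# fit by ordinary least squares on synthetic data. The coefficients and
# noise level are illustrative assumptions.
rng = np.random.default_rng(1)
n = 72                                    # e.g. 24 hospitals x 3 years
volume = rng.uniform(1e3, 5e4, n)
complexity = rng.uniform(0.8, 2.5, n)     # case-mix style index
efficiency = rng.uniform(0.5, 1.0, n)     # DEA score in (0, 1]

true_b = np.array([2.0, 0.9, 0.6, -0.4])  # assumed "true" coefficients
X = np.column_stack([np.ones(n), np.log(volume),
                     np.log(complexity), np.log(efficiency)])
log_cost = X @ true_b + rng.normal(0, 0.05, n)

b, *_ = np.linalg.lstsq(X, log_cost, rcond=None)   # OLS fit
pred = X @ b
r2 = 1 - np.sum((log_cost - pred)**2) / np.sum((log_cost - log_cost.mean())**2)
```

Exponentiating the fitted model recovers the multiplicative form cost = e^b0 * volume^b1 * complexity^b2 * efficiency^b3, which is how such models are typically interpreted.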
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of the NASA In-Space Technology Experiments Program, based on a concept of inflatable deployable structures, is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit; gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
Darbin, Olivier; Gubler, Coral; Naritoku, Dean; Dees, Daniel; Martino, Anthony; Adams, Elizabeth
2016-01-01
This study describes a cost-effective screening protocol for parkinsonism based on combined objective and subjective monitoring of balance function. Objective evaluation of balance function was performed using a game-industry balance board and automated analysis of the dynamics of the center of pressure in the time, frequency, and non-linear domains, collected during short series of stand-up tests with different modalities and severities of sensory deprivation. The subjective measurement of balance function was performed using the Dizziness Handicap Inventory questionnaire. Principal component analyses on both objective and subjective measurements of balance function yielded a specificity and selectivity for parkinsonian patients (vs. healthy subjects) of 0.67 and 0.71, respectively. The findings are discussed regarding the relevance of a cost-effective balance-based screening system as a strategy to meet the need for broader and earlier screening for parkinsonism in communities with limited access to healthcare.
NASA Astrophysics Data System (ADS)
Chaudhuri, Anirban
Global optimization based on expensive and time-consuming simulations or experiments usually cannot be carried out to convergence, but must be stopped because of time constraints, or because the cost of additional function evaluations exceeds the benefits of improving the objective(s). This dissertation sets out to explore the implications of such budget and time constraints on the balance between exploration and exploitation and on the decision of when to stop. Three different aspects are considered in terms of their effects on the balance between exploration and exploitation: 1) history of optimization, 2) fixed evaluation budget, and 3) cost as a part of the objective function. To this end, this research develops modifications to the surrogate-based optimization technique Efficient Global Optimization that better control the balance between exploration and exploitation, along with stopping criteria facilitated by these modifications. The focus then shifts to examining experimental optimization, which shares the issues of cost and time constraints. Through a study on optimization of thrust and power for a small flapping wing for micro air vehicles, important differences and similarities between experimental and simulation-based optimization are identified. The most important difference is that reduction of noise in experiments becomes a major time and cost issue; a second difference is that parallelism as a way to cut cost is more challenging. The experimental optimization reveals the tendency of the surrogate to display optimistic bias near the surrogate optimum, a tendency then verified to also occur in simulation-based optimization.
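The Efficient Global Optimization algorithm mentioned above trades off exploration and exploitation through the expected-improvement criterion. Below is the standard textbook form of that criterion for a Gaussian surrogate prediction (a sketch of the baseline EGO acquisition, not the dissertation's modified versions):

```python
import math

# Standard expected improvement (EI) for minimization: given a surrogate
# prediction with mean mu and standard deviation sigma at a candidate
# point, and the best observed value f_min, EI balances exploitation
# (low mu) against exploration (high sigma).
def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:                      # deterministic prediction
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm_cdf(z) + sigma * norm_pdf(z)
```

EGO maximizes this acquisition over the design space to pick the next expensive evaluation; the dissertation's modifications adjust how strongly the sigma (exploration) term is weighted as the budget runs out.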
Cost estimation model for advanced planetary programs, fourth edition
NASA Technical Reports Server (NTRS)
Spadoni, D. J.
1983-01-01
The development of the planetary program cost model is discussed. The model was updated to incorporate cost data from the most recent US planetary flight projects and extensively revised to more accurately capture the information in the historical cost data base. This data base comprises the historical cost data for 13 unmanned lunar and planetary flight programs. The revision was made with a twofold objective: to increase the flexibility of the model in its ability to deal with the broad scope of scenarios under consideration for future missions, and to maintain and possibly improve the confidence in the model's capabilities, with an expected accuracy of 20%. The model development included a labor/cost proxy analysis, selection of the functional forms of the estimating relationships, and test statistics. An analysis of the model is discussed and two sample applications of the cost model are presented.
High performance, low cost, self-contained, multipurpose PC based ground systems
NASA Technical Reports Server (NTRS)
Forman, Michael; Nickum, William; Troendly, Gregory
1993-01-01
The use of embedded processors greatly enhances the capabilities of personal computers when used for telemetry processing and command control center functions. Parallel architectures based on the use of transputers are shown to be very versatile and reusable, and the synergism between the PC and the embedded processor with transputers results in single-unit, low-cost workstations in the range 20 < MIPS ≤ 1000.
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
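The mechanism described above (agents stepping down the cost gradient while the output gradient keeps their output fixed, then interpolating the optima into an inverse function) can be sketched for a toy two-input, single-output system. The quadratic cost, linear output map, and use of np.interp in place of spline interpolation are illustrative assumptions:

```python
import numpy as np

# Sketch of building an optimal inverse function: for each desired
# output y, find the input u minimizing a cost subject to output(u) = y,
# then interpolate the optimal inputs over y. Toy cost and output map.
def cost(u):      return u[0]**2 + 2.0 * u[1]**2
def cost_grad(u): return np.array([2.0 * u[0], 4.0 * u[1]])
def output(u):    return u[0] + u[1]          # single output
out_grad = np.array([1.0, 1.0])               # constant output gradient

def optimal_input(y, steps=500, lr=0.05):
    u = np.array([y, 0.0])                    # start on the level set
    for _ in range(steps):
        g = cost_grad(u)
        # Remove the component along the output gradient, so each step
        # lowers the cost while keeping output(u) = y unchanged.
        g -= out_grad * (g @ out_grad) / (out_grad @ out_grad)
        u = u - lr * g
    return u

ys = np.linspace(-2.0, 2.0, 21)               # sampled desired outputs
us = np.array([optimal_input(y) for y in ys]) # associated optimal inputs

def inverse(y_des):
    """Interpolated inverse function: desired output -> optimal input."""
    return np.array([np.interp(y_des, ys, us[:, 0]),
                     np.interp(y_des, ys, us[:, 1])])
```

An operator would then adjust the single set point y and read off the optimal multi-dimensional input from inverse(y), as the abstract describes.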
Ku, Li-Jung Elizabeth; Pai, Ming-Chyi; Shih, Pei-Yu
2016-01-01
Objective: Given the shortage of cost-of-illness studies in dementia outside of Western populations, the current study estimated the annual cost of dementia in Taiwan and assessed whether different categories of care costs vary by severity using multiple disease-severity measures. Methods: This study included 231 dementia patient–caregiver dyads in a dementia clinic at a national university hospital in southern Taiwan. Three disease measures, covering cognitive, functional, and behavioral disturbances, were obtained from patients based on medical history. A societal perspective was used to estimate the total costs of dementia according to three cost sub-categories. The association between dementia severity and cost of care was examined through bivariate and multivariate analyses. Results: Total costs of care for moderate dementia patients were 1.4 times the costs for mild dementia, and doubled from mild to severe dementia, among our community-dwelling dementia sample. Multivariate analysis indicated that functional decline had a greater impact on all cost outcomes than behavioral disturbance, which showed no impact on any costs. Informal care costs accounted for the greatest share of the total cost of care for both mild (42%) and severe (43%) dementia patients. Conclusions: Since the total costs of dementia increased with severity, providing care to delay disease progression, with a focus on maintaining patient physical function, may reduce the overall cost of dementia. The greater contribution of informal care to total costs, as opposed to social care, also suggests a need for more publicly-funded long-term care services to assist family caregivers of dementia patients in Taiwan. PMID:26859891
Development of Protection and Control Unit for Distribution Substation
NASA Astrophysics Data System (ADS)
Iguchi, Fumiaki; Hayashi, Hideyuki; Takeuchi, Motohiro; Kido, Mitsuyasu; Kobayashi, Takashi; Yanaoka, Atsushi
Recently, electronics and IT technologies have been rapidly innovated and have been introduced into power system protection and control systems to achieve high reliability, maintainability and greater functionality. Concerning the distribution substation application, digital relays have been applied for more than 10 years. Because of the number of electronic devices used, product cost is high. Also, products installed during the past high-growth period will reach the end of their lifetime and will be replaced. Therefore, the replacement market is expected to grow, and cost reduction is demanded. Considering the above background, a second-generation digital protection and control unit was designed as a successor with the following concepts: functional integration based on advanced digital technologies, an Ethernet-LAN-based indoor communication network, cost reduction, and downsizing. Following these concepts, integration of protection and control functions is adopted, in contrast to the functional segregation applied in the previous system, in order to achieve a one-unit design. The adoption of Ethernet LAN for inter-unit communication is a further objective. This report describes the development of the second-generation digital relay for distribution substations, which is equipped with control functions and an Ethernet LAN; by reducing the size of the auxiliary transformer unit, the same overall size as the previous product is realized.
The costs of heparin-induced thrombocytopenia: a patient-based cost of illness analysis.
Wilke, T; Tesch, S; Scholz, A; Kohlmann, T; Greinacher, A
2009-05-01
BACKGROUND AND OBJECTIVES: Due to the complexity of heparin-induced thrombocytopenia (HIT), currently available cost analyses are rough estimates. The objectives of this study were quantification of the costs involved in HIT and identification of the main cost drivers based on a patient-oriented approach. Patients diagnosed with HIT (1995-2004, University Hospital Greifswald, Germany) based on a positive functional assay (HIPA test) were retrieved from the laboratory records and scored (4T-score) by two medical experts using the patient file. For the cost of illness analysis, predefined HIT-relevant cost parameters (medication costs, prolonged in-hospital stay, diagnostic and therapeutic interventions, laboratory tests, blood transfusions) were retrieved from the patient files. The data were analysed by linear regression estimates with the log of costs and a gamma regression model. Mean length of stay data of non-HIT patients were obtained from the German Federal Statistical Office, adjusted for patient characteristics, comorbidities and year of treatment. Hospital costs were provided by the controlling department. One hundred and thirty HIT cases with a 4T-score ≥ 4 and a positive HIPA test were analyzed. Mean additional costs of a HIT case were €9008. The main cost drivers were prolonged in-hospital stay (70.3%) and costs of alternative anticoagulants (19.7%). HIT was more costly in surgical patients compared with medical patients and in patients with thrombosis. Early start of alternative anticoagulation did not increase HIT costs despite the high medication costs, indicating prevention of costly complications. An HIT cost calculator is provided, allowing online calculation of HIT costs based on local cost structures and different currencies.
Operations analysis (study 2.1): Shuttle upper stage software requirements
NASA Technical Reports Server (NTRS)
Wolfe, R. R.
1974-01-01
An investigation of software costs related to space shuttle upper stage operations, with emphasis on the additional costs attributable to space servicing, was conducted. The questions and problem areas include the following: (1) the key parameters involved with software costs; (2) historical data for extrapolation of future costs; (3) elements of the basic software development effort that are applicable to servicing functions; (4) the effect of multiple servicing on the complexity of the operation; and (5) the significance of recurring software costs. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.
NASA Astrophysics Data System (ADS)
Daniell, James; Wenzel, Friedemann
2014-05-01
Over the past decade, the production of economic indices behind the CATDAT Damaging Earthquakes Database has allowed for the conversion of historical earthquake economic loss and cost events into today's terms using long-term spatio-temporal series of consumer price index (CPI), construction costs, wage indices, and GDP from 1900-2013. As part of the doctoral thesis of Daniell (2014), databases and GIS layers for a country and sub-country level have been produced for population, GDP per capita, net and gross capital stock (depreciated and non-depreciated) using studies, census information and the perpetual inventory method. In addition, a detailed study has been undertaken to collect and reproduce as many historical isoseismal maps, macroseismic intensity results and reproductions of earthquakes as possible out of the 7208 damaging events in the CATDAT database from 1900 onwards. a) The isoseismal database and population bounds from 3000+ collected damaging events were compared with the output parameters of GDP and net and gross capital stock per intensity bound and administrative unit, creating a spatial join for analysis. b) The historical costs were divided into shaking/direct ground motion effects, and secondary effects costs. The shaking costs were further divided into gross capital stock related and GDP related costs for each administrative unit, intensity bound couplet. c) Costs were then estimated based on the optimisation of the function in terms of costs vs. gross capital stock and costs vs. GDP via the regression of the function. Losses were estimated based on net capital stock, looking at the infrastructure age and value at the time of the event. This dataset was then used to develop an economic exposure for each historical earthquake in comparison with the loss recorded in the CATDAT Damaging Earthquakes Database. 
The production of economic fragility functions for each country was possible using a temporal regression based on the parameters of macroseismic intensity, capital stock estimate, GDP estimate, year and the combined seismic building index (a created combination of the global seismic code index, building practice factor, building age and infrastructure vulnerability). The analysis provided three key results: a) The production of economic fragility functions from the 1900-2008 events showed very good correlation to the economic loss and cost from earthquakes from 2009-2013, in real-time. This methodology has been extended to other natural disaster types (typhoon, flood, drought). b) The reanalysis of historical earthquake events in order to check associated historical loss and costs versus the expected exposure in terms of intensities. The 1939 Chillan, 1948 Turkmenistan, 1950 Iran, 1972 Managua, 1980 Western Nepal and 1992 Erzincan earthquake events were seen as huge outliers compared with the modelled capital stock and GDP and thus additional studies were undertaken to check the original loss results. c) A worldwide GIS layer database of capital stock (gross and net), GDP, infrastructure age and economic indices over the period 1900-2013 have been created in conjunction with the CATDAT database in order to define correct economic loss and costs.
48 CFR 9904.418-60 - Illustrations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... of which perform various functions on units of the work-in-process of multiple final cost objectives... assembly overhead cost pool. The business unit finds it impractical to use an allocation measure based on... occasionally does significant amounts of work for other activities of the business unit. The labor used in...
48 CFR 9904.418-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... of which perform various functions on units of the work-in-process of multiple final cost objectives... assembly overhead cost pool. The business unit finds it impractical to use an allocation measure based on... occasionally does significant amounts of work for other activities of the business unit. The labor used in...
48 CFR 9904.418-60 - Illustrations.
Code of Federal Regulations, 2013 CFR
2013-10-01
... of which perform various functions on units of the work-in-process of multiple final cost objectives... assembly overhead cost pool. The business unit finds it impractical to use an allocation measure based on... occasionally does significant amounts of work for other activities of the business unit. The labor used in...
48 CFR 9904.409-40 - Fundamental requirement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... service life shall reflect the pattern of consumption of services over the life of the asset. (4) The gain... function as, an organizational unit whose costs are charged to other cost objectives based on measurement... pools. (4) The gain or loss which is recognized upon disposition of a tangible capital asset, where...
48 CFR 9904.409-40 - Fundamental requirement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... service life shall reflect the pattern of consumption of services over the life of the asset. (4) The gain... function as, an organizational unit whose costs are charged to other cost objectives based on measurement... pools. (4) The gain or loss which is recognized upon disposition of a tangible capital asset, where...
48 CFR 9904.409-40 - Fundamental requirement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... service life shall reflect the pattern of consumption of services over the life of the asset. (4) The gain... function as, an organizational unit whose costs are charged to other cost objectives based on measurement... pools. (4) The gain or loss which is recognized upon disposition of a tangible capital asset, where...
48 CFR 52.215-23 - Limitations on Pass-Through Charges.
Code of Federal Regulations, 2010 CFR
2010-10-01
... indirect costs and associated profit/fee based on such costs). No or negligible value means the Contractor... value means that the Contractor performs subcontract management functions that the Contracting Officer... Contractor or subcontractor that adds no or negligible value to a contract or subcontract, means a charge to...
A cost-analysis of two approaches to infection control in a lung function laboratory.
Side, E A; Harrington, G; Thien, F; Walters, E H; Johns, D P
1999-02-01
The Thoracic Society of Australia and New Zealand (TSANZ) guidelines for infection control in respiratory laboratories are based on a 'Universal Precautions' approach to patient care. This requires that one-way breathing valves, flow sensors, and other items be cleaned and disinfected between patient uses. However, this is impractical in a busy laboratory. The recent introduction of disposable barrier filters may provide a practical solution to this problem, although most consider this approach an expensive option. To compare the cost of implementing the TSANZ infection control guidelines with the cost of using disposable barrier filters. Costs were based on the standard tests and equipment currently used in the lung function laboratory at The Alfred Hospital. We have assumed that a barrier filter offers the same degree of protection against cross-infection between patients as the TSANZ infection control guidelines. Time and motion studies were performed on the dismantling, cleaning, disinfecting, reassembling and re-calibrating of equipment. Conservative estimates were made of the frequency of replacing pneumotachographs and rubber mouthpieces based on previous equipment turnover. Labour costs for a scientist to reprocess the equipment were based on $20.86/hour. The cost of employing a casual cleaner at an hourly rate of $14.07 to assist in reprocessing equipment was also investigated. The new high-efficiency HyperFilter disposable barrier filter, costing $2.95, was used in this cost-analysis. The cost of reprocessing the equipment required for spirometry alone was $17.58 per test if a scientist reprocesses the equipment, and $15.56 per test if a casual cleaner is employed to assist the scientist in performing these duties. In contrast, using a disposable filter would cost only $2.95 per test. Using a filter was considerably less expensive than following the TSANZ guidelines for all tests and equipment used in this cost-analysis.
The TSANZ infection control guidelines are expensive and impractical to implement. However, disposable barrier filters provide a practical and inexpensive method of infection control.
A novel edge-preserving nonnegative matrix factorization method for spectral unmixing
NASA Astrophysics Data System (ADS)
Bao, Wenxing; Ma, Ruishi
2015-12-01
Spectral unmixing is one of the key techniques for identifying and classifying materials in hyperspectral image processing. A novel robust spectral unmixing method based on nonnegative matrix factorization (NMF) is presented in this paper. An edge-preserving function is used as the hypersurface cost function for the nonnegative matrix factorization. To minimize the hypersurface cost function, updating functions for the end-member signature matrix and the abundance fractions are constructed and applied alternately. For evaluation purposes, both synthetic and real data are used. The synthetic data are based on end-members from the USGS digital spectral library, and the AVIRIS Cuprite dataset is used as real data. The spectral angle distance (SAD) and abundance angle distance (AAD) are used to assess the performance of the proposed method. The experimental results show that this method obtains better and more accurate spectral unmixing results than existing methods.
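The alternating-update scheme described above can be sketched with the standard multiplicative NMF updates for the squared-error cost; the paper's edge-preserving hypersurface cost replaces this objective, so the update rules below are an illustrative baseline, not the authors' exact formulas.

```python
import numpy as np

def nmf_multiplicative(X, k, n_iter=200, eps=1e-9, seed=0):
    """Alternating multiplicative updates for X ~= W @ H with W, H >= 0.

    W: end-member signatures (bands x k); H: abundance fractions (k x pixels).
    This sketch minimizes the plain squared-error cost; the paper instead
    minimizes an edge-preserving hypersurface cost function.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update abundance fractions
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update end-member signatures
    return W, H

def spectral_angle_distance(a, b):
    """SAD between two spectra (radians), used to assess end-member recovery."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the updates are multiplicative, nonnegativity of W and H is preserved automatically, which is why this family of update rules is a common starting point for unmixing variants.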
Societal and Family Lifetime Cost of Dementia: Implications for Policy.
Jutkowitz, Eric; Kane, Robert L; Gaugler, Joseph E; MacLehose, Richard F; Dowd, Bryan; Kuntz, Karen M
2017-10-01
To estimate the cost of dementia and the extra cost of caring for someone with dementia over the cost of caring for someone without dementia. We developed an evidence-based mathematical model to simulate disease progression for newly diagnosed individuals with dementia. Data-driven trajectories of cognition, function, and behavioral and psychological symptoms were used to model disease progression and predict costs. Using modeling, we evaluated lifetime and annual costs of individuals with dementia, compared costs of those with and without clinical features of dementia, and evaluated the effect of reducing functional decline or behavioral and psychological symptoms by 10% for 12 months (implemented when Mini-Mental State Examination score ≤21). Mathematical model. Representative simulated U.S. incident dementia cases. Value of informal care, out-of-pocket expenditures, Medicaid expenditures, and Medicare expenditures. From time of diagnosis (mean age 83), discounted total lifetime cost of care for a person with dementia was $321,780 (2015 dollars). Families incurred 70% of the total cost burden ($225,140), Medicaid accounted for 14% ($44,090), and Medicare accounted for 16% ($52,540). Costs for a person with dementia over a lifetime were $184,500 greater (86% incurred by families) than for someone without dementia. Total annual cost peaked at $89,000, and net cost peaked at $72,400. Reducing functional decline or behavioral and psychological symptoms by 10% resulted in $3,880 and $680 lower lifetime costs than natural disease progression. Dementia substantially increases lifetime costs of care. Long-lasting, effective interventions are needed to support families because they incur the most dementia cost. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
Chinta, Ravi; Burns, David J; Manolis, Chris; Nighswander, Tristan
2013-01-01
The expectation that aging leads to a progressive deterioration of biological functions leading to higher healthcare costs is known as the healthcare cost creep due to age creep phenomenon. The authors empirically test the validity of this phenomenon in the context of hospitalization costs based on more than 8 million hospital inpatient records from 1,056 hospitals in the United States. The results question the existence of cost creep due to age creep after the age of 65 years as far as average hospitalization costs are concerned. The authors discuss implications for potential knowledge transfer for cost minimization and medical tourism.
NASA Astrophysics Data System (ADS)
Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo
2006-03-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions under the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth-best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.
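The keep-then-rescore strategy described above can be illustrated with a small sketch; the two cost functions and the toy tour problem here are stand-ins chosen for illustration, not the paper's directed-paths cost.

```python
import heapq
from itertools import permutations

def rescore_top_k(candidates, approx_cost, full_cost, k=10):
    """Rank all candidates with a cheap approximate cost, keep the k best,
    then re-assess only those with the expensive full cost function."""
    top = heapq.nsmallest(k, candidates, key=approx_cost)
    return min(top, key=full_cost)

# Toy example: tours over 6 cities; the "approximate" cost drops a
# correction term that the full cost includes (both are stand-ins).
dist = {(i, j): abs(i - j) + (i * j) % 3 for i in range(6) for j in range(6)}

def full_cost(tour):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def approx_cost(tour):  # ignores the (i*j) % 3 correction
    return sum(abs(tour[i] - tour[(i + 1) % len(tour)]) for i in range(len(tour)))

tours = list(permutations(range(6)))
best_approx = rescore_top_k(tours, approx_cost, full_cost, k=50)
```

The quality question the abstract raises is exactly what this sketch leaves open: how far `best_approx` sits from the true optimum when only the top k candidates are re-scored, which is what the proposed scaling-function method quantifies.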
NASA Technical Reports Server (NTRS)
1973-01-01
Power subsystem cost/weight tradeoffs are discussed for the Venus probe spacecraft. The cost estimations of power subsystem units were based upon DSCS-2, DSP, and Pioneer 10 and 11 hardware design and development and manufacturing experience. Parts count and degree of modification of existing hardware were factored into the estimate of manufacturing and design and development costs. Cost data includes sufficient quantities of units to equip probe bus and orbiter versions. It was based on the orbiter complement of equipment, but the savings in fewer slices for the probe bus balance the cost of the different probe bus battery. The preferred systems for the Thor/Delta and for the Atlas/Centaur are discussed. The weights of the candidate designs were based upon slice or tray weights for functionally equivalent circuitry measured on existing hardware such as Pioneers 10 and 11, Intelsat 3, DSCS-2, or DSP programs. Battery weights were based on measured cell weight data adjusted for case weight or off-the-shelf battery weights. The solar array weight estimate was based upon recent hardware experience on DSCS-2 and DSP arrays.
Straub, Niels; Beivers, Andreas; Lenk, Ekaterina; Aradi, Daniel; Sibbing, Dirk
2014-02-01
Although some observational studies have reported that the measured level of P2Y12 inhibition is predictive of thrombotic events, the clinical and economic benefit of incorporating platelet function testing (PFT) to personalize P2Y12-receptor-directed antiplatelet treatment is unknown. Here, we assessed the clinical impact and cost-effectiveness of selecting P2Y12 inhibitors based on PFT in acute coronary syndrome (ACS) patients undergoing PCI. A decision model was developed to analyse the health economic effects of the different strategies. PFT-guided treatment was compared with the three options of general clopidogrel, prasugrel or ticagrelor treatment. In the PFT arm, low responders to clopidogrel received prasugrel, while normal responders carried on with clopidogrel. The endpoints in the model were cardiovascular death, stent thrombosis and major bleeding. With a simulated cohort of 10,000 patients treated for one year, there were 93 fewer events in the PFT arm than with general clopidogrel. In the prasugrel and ticagrelor arms, 110 and 86 events were prevented compared to clopidogrel treatment, respectively. The total expected costs (including event costs, drug costs and PFT costs) for generic clopidogrel therapy were US$ 1,059/patient. In the PFT arm, total costs were US$ 1,494, while in the prasugrel and ticagrelor branches they were US$ 3,102 and US$ 3,771, respectively. The incremental cost-effectiveness ratio (ICER) was US$ 46,770 for PFT-guided therapy, US$ 185,783 for prasugrel and US$ 315,360 for ticagrelor. In this model-based analysis, a PFT-guided therapy may have fewer adverse outcomes than general treatment with clopidogrel and may be more cost-effective than prasugrel or ticagrelor treatment in ACS patients undergoing PCI.
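The ICERs reported above follow directly from the per-patient cost differences and the events prevented per patient; the sketch below approximately reproduces them from the abstract's figures (the interpretation of the effect denominator as events prevented per patient is our reading, not stated explicitly in the abstract).

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Figures from the abstract (per patient, over one year); effects are
# events prevented per patient relative to general clopidogrel.
ref_cost = 1059.0
strategies = {
    "PFT-guided": (1494.0, 93 / 10_000),
    "prasugrel": (3102.0, 110 / 10_000),
    "ticagrelor": (3771.0, 86 / 10_000),
}
icers = {name: icer(cost, ref_cost, prevented, 0.0)
         for name, (cost, prevented) in strategies.items()}
```

Running this gives roughly US$ 46,774, 185,727 and 315,349 per event prevented, matching the reported US$ 46,770, 185,783 and 315,360 up to rounding in the abstract's inputs.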
In-depth investigation of spin-on doped solar cells with thermally grown oxide passivation
NASA Astrophysics Data System (ADS)
Ahmad, Samir Mahmmod; Cheow, Siu Leong; Ludin, Norasikin A.; Sopian, K.; Zaidi, Saleem H.
Solar cell industrial manufacturing, based largely on proven semiconductor processing technologies supported by significant advancements in automation, has reached a plateau in terms of cost and efficiency. However, solar cell manufacturing cost (dollars/watt) is still substantially higher than that of fossil fuels. The route to lowering cost may not lie with continued automation and economies of scale. Alternate fabrication processes with lower cost and environmental sustainability, coupled with self-reliance, simplicity, and affordability, may lead to price compatibility with carbon-based fuels. In this paper, a custom-designed formulation of phosphoric acid has been investigated for n-type doping of p-type substrates as a function of concentration and drive-in temperature. For post-diffusion surface passivation and anti-reflection, thermally grown oxide films 50-150 nm thick were grown. These fabrication methods offer process simplicity, reduced costs, and environmental sustainability through the elimination of poisonous chemicals and toxic gases (POCl3, SiH4, NH3). A simultaneous fire-through contact formation process, based on screen-printed front-surface Ag and back-surface contacts through the thermally grown oxide films, was optimized as a function of the peak temperature in a conveyor-belt furnace. The best solar cells fabricated exhibited an efficiency of ∼13%. Analysis of results based on internal quantum efficiency and minority carrier lifetime measurements reveals three contributing factors: high front-surface recombination, low minority carrier lifetime, and higher reflection. Solar cell simulations based on PC1D showed that, with improved passivation, lower reflection, and higher lifetimes, the efficiency can be enhanced to match commercially produced PECVD SiN-coated solar cells.
NASA Astrophysics Data System (ADS)
Lipiński, Seweryn; Olkowski, Tomasz
2017-10-01
The cost of electro-mechanical equipment for new small hydropower plants most often amounts to about 30-40% of the total budget; in the case of modernization of existing installations, it represents the main cost. This estimation problem has been a subject of research for at least a few decades, and many models have been developed for the purpose. The aim of our work was to collect and analyse formulas for estimating the cost of investment in electro-mechanical equipment for small hydropower plants. Over a dozen functions were analysed. To achieve the aim of our work, these functions were converted into a form allowing their comparison. The costs were then simulated with respect to plant power and net head; such an approach is novel and allows deeper discussion of the problem, as well as broader conclusions. The following conclusions can be drawn: significant differences were observed between the results obtained with the various formulas; there is a need for a wide study based on national investments in small hydropower plants that would allow equations to be developed from local data; the resulting formulas would make it possible to determine the cost of modernization or new construction of a small hydropower plant more precisely; special attention should be paid to formulas that consider turbine type.
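Published electro-mechanical cost formulas of the kind the paper compares commonly take a power-law form in installed power and net head; the sketch below uses that general shape with placeholder coefficients (the values of a, b, c and the grids are illustrative assumptions, not taken from the paper).

```python
def electromechanical_cost(power_kw, head_m, a=20_000.0, b=0.7, c=-0.35):
    """Illustrative cost formula of the common power-law form C = a * P**b * H**c.

    power_kw: installed power [kW]; head_m: net head [m].
    The coefficients a, b, c are placeholders; the paper compares over a
    dozen published formulas of roughly this shape.
    """
    return a * power_kw**b * head_m**c

# Simulating two hypothetical formulas across a grid of powers and heads,
# the kind of comparison that exposes the spread between published estimates.
spreads = {}
for p in (100, 500, 2000):       # kW
    for h in (5, 25):            # m
        c1 = electromechanical_cost(p, h)
        c2 = electromechanical_cost(p, h, a=25_000.0, b=0.65, c=-0.30)
        spreads[(p, h)] = abs(c1 - c2) / min(c1, c2)
```

With b < 1 and c < 0, specific cost falls with scale and with head, which is the qualitative behaviour most formulas in this literature share.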
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents optimal production and distribution management for structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear program (MILP) whose objective is to minimize the overall cost of the integrated DHS while satisfying the operating constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
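The two cost approximations named above can be sketched directly: piecewise linear interpolation between breakpoints for the production cost, and a staircase in off-time for the start-up cost. The breakpoints, costs, and staircase steps below are illustrative placeholders, not the paper's data.

```python
import bisect

def piecewise_linear_cost(q, breakpoints, costs):
    """Production cost via linear interpolation between (q_i, c_i) breakpoints,
    the standard way an MILP approximates a nonlinear cost curve."""
    if not breakpoints[0] <= q <= breakpoints[-1]:
        raise ValueError("load outside modeled range")
    i = bisect.bisect_right(breakpoints, q) - 1
    i = min(i, len(breakpoints) - 2)          # clamp so q == last breakpoint works
    t = (q - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return costs[i] + t * (costs[i + 1] - costs[i])

def stairwise_startup_cost(hours_off, steps=((1, 100.0), (4, 250.0), (12, 400.0))):
    """Start-up cost as a staircase of how long the unit has been off:
    each (threshold_hours, cost) step applies once hours_off >= threshold."""
    cost = 0.0
    for threshold, step_cost in steps:
        if hours_off >= threshold:
            cost = step_cost
    return cost
```

In a full MILP these functions become SOS2/binary-variable constraints rather than Python calls, but the evaluated shapes are the same.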
Unobtrusive monitoring of heart rate using a cost-effective speckle-based SI-POF remote sensor
NASA Astrophysics Data System (ADS)
Pinzón, P. J.; Montero, D. S.; Tapetado, A.; Vázquez, C.
2017-03-01
A novel speckle-based sensing technique for cost-effective heart-rate monitoring is demonstrated. This technique detects periodical changes in the spatial distribution of energy on the speckle pattern at the output of a Step-Index Polymer Optical Fiber (SI-POF) lead by using a low-cost webcam. The scheme operates in reflective configuration thus performing a centralized interrogation unit scheme. The prototype has been integrated into a mattress and its functionality has been tested with 5 different patients lying on the mattress in different positions without direct contact with the fiber sensing lead.
Optimal Portfolio Selection Under Concave Price Impact
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma Jin, E-mail: jinma@usc.edu; Song Qingshuo, E-mail: songe.qingshuo@cityu.edu.hk; Xu Jing, E-mail: xujing8023@yahoo.com.cn
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and, more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
Cost-Efficient and Multi-Functional Secure Aggregation in Large Scale Distributed Application
Zhang, Ping; Li, Wenjun; Sun, Hua
2016-01-01
Secure aggregation is an essential component of modern distributed applications and data mining platforms. Aggregated statistical results are typically adopted in constructing a data cube for data analysis at multiple abstraction levels in data warehouse platforms. Generating different types of statistical results efficiently at the same time (or referred to as enabling multi-functional support) is a fundamental requirement in practice. However, most of the existing schemes support a very limited number of statistics. Securely obtaining typical statistical results simultaneously in the distribution system, without recovering the original data, is still an open problem. In this paper, we present SEDAR, which is a SEcure Data Aggregation scheme under the Range segmentation model. Range segmentation model is proposed to reduce the communication cost by capturing the data characteristics, and different range uses different aggregation strategy. For raw data in the dominant range, SEDAR encodes them into well defined vectors to provide value-preservation and order-preservation, and thus provides the basis for multi-functional aggregation. A homomorphic encryption scheme is used to achieve data privacy. We also present two enhanced versions. The first one is a Random based SEDAR (REDAR), and the second is a Compression based SEDAR (CEDAR). Both of them can significantly reduce communication cost with the trade-off lower security and lower accuracy, respectively. Experimental evaluations, based on six different scenes of real data, show that all of them have an excellent performance on cost and accuracy. PMID:27551747
Reed, Shelby D.; Neilson, Matthew P.; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H.; Polsky, Daniel E.; Graham, Felicia L.; Bowers, Margaret T.; Paul, Sara C.; Granger, Bradi B.; Schulman, Kevin A.; Whellan, David J.; Riegel, Barbara; Levy, Wayne C.
2015-01-01
Background Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. Methods We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure (TEAM-HF) Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics, use of evidence-based medications, and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model (SHFM). Projections of resource use and quality of life are modeled using relationships with time-varying SHFM scores. The model can be used to evaluate parallel-group and single-cohort designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. Results The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. Conclusion The TEAM-HF Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. PMID:26542504
NASA Astrophysics Data System (ADS)
Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.
2010-11-01
A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., a ratio of the three-blade HAWT rotor development/fabrication cost and the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis had to be employed. As far as the turbine blade cost is concerned, it is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, a structural finite element-based and a post-processing life-cycle assessment analyses were employed in order to determine a minimal blade mass which ensures that the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability and blade stiffness are satisfied. To determine the turbine-blade production cost (for the currently prevailing fabrication process, the wet lay-up) available data regarding the industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a lower production cost.
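The cost-of-energy objective defined above (rotor development/fabrication cost divided by annual energy production) can be sketched as follows; the blade-cost coefficients, masses and energy figures are illustrative placeholders, not values from the study.

```python
def cost_of_energy(rotor_cost, annual_energy_mwh):
    """CoE as defined in the study: rotor development/fabrication cost
    divided by annual energy production."""
    return rotor_cost / annual_energy_mwh

def blade_cost(mass_kg, cost_per_kg=15.0, fixed_tooling=250_000.0):
    """Illustrative blade cost: a mass-dependent term (wet lay-up material and
    labor) plus a fixed tooling term; both coefficients are assumptions."""
    return fixed_tooling + cost_per_kg * mass_kg

# Comparing two candidate designs: a lighter blade is cheaper per rotor,
# but only wins on CoE if its energy-capture penalty is small enough.
designs = {
    "baseline":    (17_000, 18_000.0),   # (blade mass [kg], annual energy [MWh])
    "lightweight": (15_500, 17_600.0),
}
coe = {name: cost_of_energy(3 * blade_cost(m), e)   # three-blade rotor
       for name, (m, e) in designs.items()}
```

This is exactly the trade-off the optimization procedure navigates: the structural analyses set the minimal feasible blade mass, and CoE then arbitrates between mass savings and energy capture.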
The real world cost and health resource utilization associated to manic episodes: The MANACOR study.
Hidalgo-Mazzei, Diego; Undurraga, Juan; Reinares, María; Bonnín, Caterina del Mar; Sáez, Cristina; Mur, María; Nieto, Evaristo; Vieta, Eduard
2015-01-01
Bipolar disorder is a relapsing-remitting condition affecting approximately 1-2% of the population. Even though the available treatments are effective, relapses are still very frequent, so the burden and cost associated with every new episode of the disorder have relevant implications for public health. The main objective of this study was to estimate the associated health resource consumption and direct costs of manic episodes in a real-world clinical setting, taking clinical variables into consideration. Bipolar I disorder patients who had recently presented an acute manic episode based on DSM-IV criteria were consecutively included. Sociodemographic variables were retrospectively collected, and clinical variables were prospectively assessed during the following 6 months (YMRS, HDRS-17, FAST and CGI-BP-M). Health resource consumption and the associated cost were estimated based on hospitalization days, pharmacological treatment, emergency department visits and outpatient consultations. One hundred sixty-nine patients from 4 different university hospitals in Catalonia (Spain) were included. The mean direct cost of the manic episodes was €4,771: 77% (€3,651) was attributable to hospitalization costs, while 14% (€684) was related to pharmacological treatment, 8% (€386) to outpatient visits and only 1% (€50) to emergency room visits. Hospitalization days were the main cost driver. An initial FAST score >41 significantly predicted a higher direct cost. Our results show the high cost and burden associated with BD and the need to design more cost-efficient strategies for the prevention and management of manic relapses in order to avoid hospital admissions. Poor baseline functioning predicted high costs, indicating the importance of functional assessment in bipolar disorder. Copyright © 2014 SEP y SEPB. Published by Elsevier España. All rights reserved.
Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A
2018-06-01
Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experience and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach for another. The question is how to define and obtain optimized patient-specific surgical approach pathways. In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In the one case where the actual surgical approach differed, revision surgery was required and was performed utilizing the computationally derived approach pathway.
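The difference between the two methods above is when the surgeon's preferences enter: before the search (weighted sum) or after it (choosing from the Pareto set). A minimal sketch, with hypothetical pathway names and objective scores standing in for the image-derived costs:

```python
def weighted_sum_best(candidates, weights):
    """Pick the pathway minimizing a weighted sum of objective costs
    (surgeon preferences entered as weights before the search)."""
    return min(candidates,
               key=lambda c: sum(w * o for w, o in zip(weights, candidates[c])))

def pareto_optimal_set(candidates):
    """Keep pathways not dominated by another pathway (i.e., no alternative is
    at least as good on every objective and strictly better on one); the
    surgeon then chooses from this set after the search."""
    items = list(candidates.items())
    def dominated(name, objs):
        return any(all(bo <= ao for ao, bo in zip(objs, b_objs)) and b_objs != objs
                   for _, b_objs in items)
    return {name for name, objs in items if not dominated(name, objs)}

# Hypothetical pathways scored on (tissue disruption, distance to target,
# proximity to critical structures); lower is better on each axis.
pathways = {
    "endonasal":    (2.0, 4.0, 3.0),
    "transorbital": (3.0, 2.0, 2.0),
    "craniotomy":   (6.0, 3.0, 5.0),
}
```

Here "craniotomy" is dominated by "transorbital" on every objective, so only the other two survive into the Pareto set, while the weighted-sum answer shifts with the weights the surgeon supplies.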
Economic Outcomes with Anatomic versus Functional Diagnostic Testing for Coronary Artery Disease
Mark, Daniel B.; Federspiel, Jerome J.; Cowper, Patricia A.; Anstrom, Kevin J.; Hoffmann, Udo; Patel, Manesh R.; Davidson-Ray, Linda; Daniels, Melanie R.; Cooper, Lawton S.; Knight, J. David; Lee, Kerry L.; Douglas, Pamela S.
2016-01-01
Background The PROMISE trial found that initial use of ≥64-slice multidetector computed tomographic angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. Objective Economic analysis of PROMISE, a major secondary aim. Design Prospective economic study from the US perspective. Comparisons were made by intention-to-treat. Confidence intervals were calculated using bootstrap methods. Setting 190 U.S. centers. Patients 9649 U.S. patients enrolled in PROMISE. Enrollment began July 2010 and completed September 2013. Median follow-up was 25 months. Measurements Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-to-charge ratios. Physician fees were taken from the Medicare Fee Schedule. Costs were expressed in 2014 US dollars discounted at 3% and estimated out to 3 years using inverse probability weighting methods. Results The mean initial testing costs were: $174 for exercise ECG; $404 for CTA; $501 to $514 for (exercise, pharmacologic) stress echo; $946 to $1132 for (exercise, pharmacologic) stress nuclear. Mean costs at 90 days for the CTA strategy were $2494 versus $2240 for the functional strategy (mean difference $254, 95% CI −$634 to $906). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the arms out to 3 years remained small ($373). Limitations Cost weights for test strategies obtained from sources outside PROMISE. Conclusions CTA and functional diagnostic testing strategies in patients with suspected CAD have similar costs through three years of follow-up. PMID:27214597
Smith, William B; Steinberg, Joni; Scholtes, Stefan; Mcnamara, Iain R
2017-03-01
To compare the age-based cost-effectiveness of total knee arthroplasty (TKA), unicompartmental knee arthroplasty (UKA), and high tibial osteotomy (HTO) for the treatment of medial compartment knee osteoarthritis (MCOA). A Markov model was used to simulate theoretical cohorts of patients 40, 50, 60, and 70 years of age undergoing primary TKA, UKA, or HTO. Costs and outcomes associated with initial and subsequent interventions were estimated by following these virtual cohorts over a 10-year period. Revision and mortality rates, costs, and functional outcome data were estimated from a systematic review of the literature. Probabilistic analysis was conducted to accommodate these parameters' inherent uncertainty, and both discrete and probabilistic sensitivity analyses were utilized to assess the robustness of the model's outputs to changes in key variables. HTO was most likely to be cost-effective in cohorts under 60, and UKA most likely in those 60 and over. Probabilistic results did not indicate one intervention to be significantly more cost-effective than another. The model was exquisitely sensitive to changes in utility (functional outcome), somewhat sensitive to changes in cost, and least sensitive to changes in 10-year revision risk. HTO may be the most cost-effective option when treating MCOA in younger patients, while UKA may be preferred in older patients. Functional utility is the primary driver of the cost-effectiveness of these interventions. For the clinician, this study supports HTO as a competitive treatment option in young patient populations. It also validates each one of the three interventions considered as potentially optimal, depending heavily on patient preferences and functional utility derived over time.
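The cohort simulation described above can be sketched in a few lines. The transition probabilities, costs, and utility weights below are purely hypothetical placeholders, not the study's published inputs:

```python
import numpy as np

# Illustrative three-state Markov cohort model: post-primary, post-revision, dead.
P = np.array([
    [0.96, 0.02, 0.02],   # annual revision and mortality risk after the index procedure
    [0.00, 0.97, 0.03],   # revised knees stay revised or die
    [0.00, 0.00, 1.00],   # death is absorbing
])
annual_cost = np.array([200.0, 1200.0, 0.0])   # follow-up cost per state-year (illustrative)

def run_cohort(init_cost, utility, years=10, disc=0.035):
    """Accumulate discounted costs and QALYs for a cohort over annual cycles."""
    dist = np.array([1.0, 0.0, 0.0])           # whole cohort starts post-primary
    cost, qalys = init_cost, 0.0
    for t in range(years):
        dist = dist @ P                        # advance one annual cycle
        d = (1.0 + disc) ** -(t + 1)
        cost += d * (dist @ annual_cost)
        qalys += d * (dist @ utility)
    return cost, qalys

# Hypothetical intervention profiles (upfront cost, per-state utility weights)
tka = run_cohort(9000.0, np.array([0.80, 0.70, 0.0]))
hto = run_cohort(6000.0, np.array([0.75, 0.68, 0.0]))
icer = (tka[0] - hto[0]) / (tka[1] - hto[1])   # incremental cost per QALY gained
```

A probabilistic analysis like the paper's would wrap this in a loop that redraws the transition, cost, and utility parameters from their uncertainty distributions on each run.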
Axiomatic foundations for cost-effectiveness analysis.
Canning, David
2013-12-01
We show that individual utilities can be measured in units of healthy life years. Social preferences over these life metric utilities are assumed to satisfy the Pareto principle, anonymity, and invariance to a change in origin. These axioms generate a utilitarian social welfare function implying the use of cost-effectiveness analysis in ordering health projects, based on maximizing the healthy years equivalents gained from a fixed health budget. For projects outside the health sector, our cost-effectiveness axioms imply a form of cost-benefit analysis where both costs and benefits are measured in equivalent healthy life years. Copyright © 2013 John Wiley & Sons, Ltd.
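The budget-allocation rule implied by these axioms — maximize healthy life years gained from a fixed budget — reduces, at the margin, to ranking projects by health gain per dollar. A toy sketch with entirely made-up project names and figures:

```python
# Hypothetical projects: (name, cost in dollars, healthy life years gained)
projects = [
    ("vaccination", 40_000, 120.0),
    ("screening", 90_000, 150.0),
    ("surgery", 60_000, 60.0),
    ("outreach", 25_000, 90.0),
]
budget = 100_000

# Rank by cost-effectiveness ratio (healthy life years per dollar) and fund
# greedily until the budget is exhausted.
ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)
funded, spent, gained = [], 0, 0.0
for name, cost, hly in ranked:
    if spent + cost <= budget:
        funded.append(name)
        spent += cost
        gained += hly
```

This greedy ranking is exactly optimal only when projects are divisible at the margin; with indivisible projects it is a heuristic for the underlying knapsack problem.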
"Optimal" Size and Schooling: A Relative Concept.
ERIC Educational Resources Information Center
Swanson, Austin D.
Issues in economies of scale and optimal school size are discussed in this paper, which seeks to explain the curvilinear nature of the educational cost curve as a function of "transaction costs" and to establish "optimal size" as a relative concept. Based on the argument that educational consolidation has facilitated diseconomies of scale, the…
A New Approach to Hospital Cost Functions and Some Issues in Revenue Regulation
Friedman, Bernard; Pauly, Mark V.
1983-01-01
An important aspect of hospital revenue regulation at the State level is the use of retroactive allowances for changes in the volume of service. Arguments favoring non-proportional allowances have been based on statistical studies of marginal cost, together with concerns about fairness toward non-profit enterprises or concerns about various inflationary biases in hospital management. This article attempts to review and clarify the regulatory issues and choices, with the aid of new econometric work that explicitly allows for the effects of transitory as well as expected demand changes on hospital expense. The present analysis is also novel in treating length of stay as an endogenous variable in cost functions. We analyzed cost variation for a panel of over 800 hospitals that reported monthly to Hospital Administrative Services between 1973 and 1978. The central results are that marginal cost of unexpected admissions is about half of average cost, while marginal cost of forecasted admissions is about equal to average cost. We obtained relatively low estimates of the cost of an “empty bed.” The study tends to support proportional volume allowances in revenue regulation programs, with perhaps a residual role for selective case review. PMID:10309853
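The central econometric idea — distinct marginal costs for forecasted versus transitory admissions — can be illustrated with a synthetic regression. The data-generating numbers below are invented solely to mirror the reported finding that unexpected admissions cost roughly half as much at the margin:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 800                                        # one observation per hospital
expected = rng.uniform(500, 1500, n)           # forecasted admissions
shock = rng.normal(0, 100, n)                  # transitory (unexpected) admissions
avg_cost = 1000.0

# Generating process: unexpected admissions carry half the marginal cost
# of forecasted ones, plus fixed costs and noise (all figures illustrative).
cost = 2e5 + avg_cost * expected + 0.5 * avg_cost * shock + rng.normal(0, 5e4, n)

# OLS recovers the two marginal costs separately.
X = np.column_stack([np.ones(n), expected, shock])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
# beta[1] ~ marginal cost of forecasted admissions, beta[2] ~ of unexpected ones
```

The paper's actual specification is richer (a panel, endogenous length of stay), but the decomposition of volume into expected and transitory components is the key step this sketch isolates.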
Luzio de Melo, Paulo; da Silva, Miguel Tavares; Martins, Jorge; Newman, Dava
2015-05-01
Functional electrical stimulation (FES) has been used over the last decades as a method to rehabilitate lost motor functions of individuals with spinal cord injury, multiple sclerosis, and post-stroke hemiparesis. Within this field, researchers in need of developing FES-based control solutions for specific disabilities often have to choose between either the acquisition and integration of high-performance industry-level systems, which are rather expensive and hardly portable, or develop custom-made portable solutions, which despite their lower cost, usually require expert-level electronic skills. Here, a flexible low-cost microcontroller-based platform for rapid prototyping of FES neuroprostheses is presented, designed for reduced execution complexity, development time, and production cost. For this reason, the Arduino open-source microcontroller platform was used, together with off-the-shelf components whenever possible. The developed system enables the rapid deployment of portable FES-based gait neuroprostheses, being flexible enough to allow simple open-loop strategies but also more complex closed-loop solutions. The system is based on a modular architecture that allows the development of optimized solutions depending on the desired FES applications, even though the design and testing of the platform were focused toward drop foot correction. The flexibility of the system was demonstrated using two algorithms targeting drop foot condition within different experimental setups. Successful bench testing of the device in healthy subjects demonstrated these neuroprosthesis platform capabilities to correct drop foot. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin
2018-04-01
This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive evaluation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the method uses the non-zero voltage vectors as its finite control resources, excluding the zero voltage vectors, which produce CMVs of ±Vdc/2 in the VSI. In a model-based predictive control (MPC) scheme, excluding the zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method applies two non-zero voltage vectors in each sampling step. In addition, the voltage vectors to be applied are selected directly at every sampling step once the method has calculated the future reference voltage vector, saving the effort of repeatedly evaluating the cost function, and the two non-zero voltage vectors are allocated optimally so that the output current tracks the reference current as closely as possible. Thus, low CMV, rapid current-tracking capability, and satisfactory output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
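The CMV bounds cited above follow directly from enumerating the eight switching states of an idealized two-level VSI, where each leg outputs ±Vdc/2 and the CMV is the average of the three leg voltages. A quick per-unit check:

```python
from itertools import product

# Idealized two-level VSI: each of the three legs outputs +Vdc/2 or -Vdc/2.
Vdc = 1.0
states = list(product([+Vdc / 2, -Vdc / 2], repeat=3))

# Common-mode voltage = average of the three leg voltages.
cmv = {s: sum(s) / 3 for s in states}

# Zero vectors: all three legs identical, i.e. (+,+,+) and (-,-,-).
zero_vectors = [s for s in states if len(set(s)) == 1]
# Active (non-zero) vectors: the remaining six mixed states.
active_vectors = [s for s in states if len(set(s)) == 2]
```

The enumeration confirms the abstract's claim: the two zero vectors yield CMVs of ±Vdc/2, while the six active vectors yield only ±Vdc/6, which is why excluding the zero vectors bounds the CMV.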
Chuang, Kenneth H; Covinsky, Kenneth E; Sands, Laura P; Fortinsky, Richard H; Palmer, Robert M; Landefeld, C Seth
2003-12-01
To determine whether hospital costs are higher in patients with lower functional status at admission, defined as dependence in one or more activities of daily living (ADLs), after adjustment for Medicare Diagnosis-Related Group (DRG) payments. Prospective study. General medical service at a teaching hospital. One thousand six hundred twelve patients aged 70 and older. The hospital cost of care for each patient was determined using a cost management information system, which allocates all hospital costs to individual patients. Hospital costs were higher in patients dependent in ADLs on admission than in patients independent in ADLs on admission ($5,300 vs $4,060, P<.01). Mean hospital costs remained higher in ADL-dependent patients than in ADL-independent patients in an analysis that adjusted for DRG weight ($5,240 vs $4,140, P<.01), and in multivariate analyses adjusting for age, race, sex, Charlson comorbidity score, acute physiology and chronic health evaluation score, and admission from a nursing home as well as for DRG weight ($5,200 vs $4,220, P<.01). This difference represents a 23% (95% confidence interval=15-32%) higher cost to take care of older dependent patients. Hospital cost is higher in patients with worse ADL function, even after adjusting for DRG payments. If this finding is true in other hospitals, DRG-based payments provide hospitals a financial incentive to avoid patients dependent in ADLs and disadvantage hospitals with more patients dependent in ADLs.
Empirical likelihood-based confidence intervals for mean medical cost with censored data.
Jeyarajah, Jenny; Qin, Gengsheng
2017-11-10
In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
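The jackknife ingredient of the proposed intervals can be sketched for the simple uncensored-mean case (the paper's influence-function machinery for censoring is omitted here); the cost sample below is simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
costs = rng.lognormal(mean=8.0, sigma=0.6, size=60)   # hypothetical medical costs

n = len(costs)
theta_hat = costs.mean()

# Leave-one-out estimates and jackknife pseudo-values
loo = np.array([np.delete(costs, i).mean() for i in range(n)])
pseudo = n * theta_hat - (n - 1) * loo

jk_mean = pseudo.mean()                     # equals theta_hat for the sample mean
jk_se = pseudo.std(ddof=1) / np.sqrt(n)     # jackknife standard error
ci = (jk_mean - 1.96 * jk_se, jk_mean + 1.96 * jk_se)
```

For skewed cost data like these, the paper's point is that empirical-likelihood intervals built on such pseudo-values attain better finite-sample coverage than the plain normal-approximation interval shown here.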
NASA Astrophysics Data System (ADS)
Sadjadi, Seyed Jafar; Hamidi Hesarsorkh, Aghil; Mohammadi, Mehdi; Bonyadi Naeini, Ali
2015-06-01
Coordination and harmony between different departments of a company can be an important factor in achieving competitive advantage if the company corrects the alignment between the strategies of its departments. This paper presents an integrated decision model based on recent advances in geometric programming. The demand for a product is modeled as a power function of factors such as the product's price, marketing expenditures, and consumer-service expenditures, while production cost is modeled as a cubic power function of output. The model is solved using recent advances in convex optimization tools. Finally, the solution procedure is illustrated with a numerical example.
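A rough numerical stand-in for such a model — a coarse grid search rather than a geometric-programming solver, with invented parameter values throughout:

```python
import numpy as np

# Hypothetical demand: a power function of price p, marketing M, and service S.
k, a, b, c = 5000.0, 1.6, 0.15, 0.10

def demand(p, M, S):
    return k * p**-a * M**b * S**c

def cost(D):
    # Production cost as a cubic function of output (illustrative coefficients)
    return 100.0 + 2.0 * D + 0.001 * D**2 + 1e-7 * D**3

def profit(p, M, S):
    D = demand(p, M, S)
    return p * D - M - S - cost(D)

# Coarse grid search over price and the two expenditure levels
ps = np.linspace(3, 30, 60)
Ms = np.linspace(10, 500, 50)
Ss = np.linspace(10, 500, 50)
P, Mg, Sg = np.meshgrid(ps, Ms, Ss, indexing="ij")
profits = profit(P, Mg, Sg)
i = np.unravel_index(profits.argmax(), profits.shape)
best = (ps[i[0]], Ms[i[1]], Ss[i[2]], profits[i])   # (price, marketing, service, profit)
```

A geometric-programming formulation exploits the posynomial structure of these terms to solve the same problem globally and far more efficiently than this brute-force grid.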
NASA Astrophysics Data System (ADS)
Stewart, Iris T.; Loague, Keith
2003-12-01
Groundwater vulnerability assessments of nonpoint source agrochemical contamination at regional scales are either qualitative in nature or require prohibitively costly computational efforts. By contrast, the type transfer function (TTF) modeling approach for vadose zone pesticide leaching presented here estimates solute concentrations at a depth of interest, only uses available soil survey, climatic, and irrigation information, and requires minimal computational cost for application. TTFs are soil texture based travel time probability density functions that describe a characteristic leaching behavior for soil profiles with similar soil hydraulic properties. Seven sets of TTFs, representing different levels of upscaling, were developed for six loam soil textural classes with the aid of simulated breakthrough curves from synthetic data sets. For each TTF set, TTFs were determined from a group or subgroup of breakthrough curves for each soil texture by identifying the effective parameters of the function that described the average leaching behavior of the group. The grouping of the breakthrough curves was based on the TTF index, a measure of the magnitude of the peak concentration, the peak arrival time, and the concentration spread. Comparisons to process-based simulations show that the TTFs perform well with respect to mass balance, concentration magnitude, and the timing of concentration peaks. Sets of TTFs based on individual soil textures perform better for all the evaluation criteria than sets that span all textures. As prediction accuracy and computational cost increase with the number of TTFs in a set, the selection of a TTF set is determined by a given application.
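A TTF of the kind described — a travel-time probability density with effective parameters per textural class — can be mimicked with a lognormal density; the scale and spread below are illustrative, not fitted values:

```python
import numpy as np

# Illustrative lognormal travel-time density for one loam textural class:
# median travel time ~120 days, moderate spread.
mu, sigma = np.log(120.0), 0.5

def ttf(t):
    """Lognormal travel-time pdf evaluated at time t (days)."""
    return np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma**2)) / (
        t * sigma * np.sqrt(2 * np.pi)
    )

t = np.linspace(1, 1000, 20000)
pdf = ttf(t)
dt = t[1] - t[0]

mass = pdf.sum() * dt            # mass balance: should integrate to ~1
t_peak = t[pdf.argmax()]         # peak arrival time, analytically exp(mu - sigma^2)
```

The three diagnostics the paper groups breakthrough curves by — peak magnitude, peak arrival time, and spread — are exactly the quantities `pdf.max()`, `t_peak`, and `sigma` summarize for this stand-in density.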
A case study in electricity regulation: Theory, evidence, and policy
NASA Astrophysics Data System (ADS)
Luk, Stephen Kai Ming
This research provides a thorough empirical analysis of the problem of excess capacity found in the electricity supply industry in Hong Kong. I utilize a cost-function based temporary equilibrium framework to investigate empirically whether the current regulatory scheme encourages the two utilities to overinvest in capital, and how much consumers would have saved if the underutilized capacity were eliminated. The research is divided into two main parts. The first part attempts to find evidence of over-investment in capital. As a point of departure from traditional analysis, I treat physical capital as quasi-fixed, which implies a restricted cost function to represent the firm's short-run cost structure. Under this specification, the firm minimizes the cost of employing variable factor inputs subject to predetermined levels of the quasi-fixed factors. Using a transcendental logarithmic restricted cost function, I estimate the cost-side equivalent of the marginal product of capital, commonly referred to as the "shadow value" of capital. The estimation results suggest that the two electric utilities consistently over-invest in generation capacity. The second part of this research focuses on the economies of capital utilization and the estimation of the distortion cost in capital investment. Again, I utilize a translog specification of the cost function to estimate the actual cost of the excess capacity, and to find out how much consumers could have saved if the underutilized generation capacity were brought closer to the international standard. Estimation results indicate that an increase in the utilization rate can significantly reduce the costs of both utilities, and that if the current excess capacity were reduced to the international standard, the combined savings in costs for both firms would reach 4.4 billion. This amount of savings, if redistributed evenly to all consumers, would translate into a 650 rebate per capita.
Finally, two policy recommendations are discussed: a more stringent policy toward capacity expansion, and the creation of a reimbursement program.
van Dongen, J M; Groeneweg, R; Rubinstein, S M; Bosmans, J E; Oostendorp, R A B; Ostelo, R W J G; van Tulder, M W
2016-07-01
To evaluate the cost-effectiveness of manual therapy according to the Utrecht School (MTU) in comparison with physiotherapy (PT) in sub-acute and chronic non-specific neck pain patients from a societal perspective. An economic evaluation was conducted alongside a 52-week randomized controlled trial, in which 90 patients were randomized to the MTU group and 91 to the PT group. Clinical outcomes included perceived recovery (yes/no), functional status (continuous and yes/no), and quality-adjusted life-years (QALYs). Costs were measured from a societal perspective using self-reported questionnaires. Missing data were imputed using multiple imputation. To estimate statistical uncertainty, bootstrapping techniques were used. After 52 weeks, there were no significant between-group differences in clinical outcomes. During follow-up, intervention costs (β: €−32; 95% CI: −54 to −10) and healthcare costs (β: €−126; 95% CI: −235 to −32) were significantly lower in the MTU group than in the PT group, whereas unpaid productivity costs were significantly higher (β: €186; 95% CI: 19 to 557). Societal costs did not significantly differ between groups (β: €−96; 95% CI: −1975 to 2022). For QALYs and functional status (yes/no), the maximum probability of MTU being cost-effective in comparison with PT was low (≤0.54). For perceived recovery (yes/no) and functional status (continuous), a large amount of money must be paid per additional unit of effect to reach a reasonable probability of cost-effectiveness. From a societal perspective, MTU was not cost-effective in comparison with PT in patients with sub-acute and chronic non-specific neck pain for perceived recovery, functional status, and QALYs. As no clear total societal cost and effect differences were found between MTU and PT, the decision about which intervention to administer, reimburse, and/or implement can be based on the preferences of the patient and the decision-maker at hand. ClinicalTrials.gov Identifier: NCT00713843.
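The bootstrap step used to quantify statistical uncertainty in cost differences can be sketched as follows, with simulated (gamma-distributed, right-skewed) per-patient costs standing in for the trial data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical per-patient societal costs in euros; skewed, as cost data usually are.
mtu = rng.gamma(shape=2.0, scale=900.0, size=90)
pt = rng.gamma(shape=2.0, scale=950.0, size=91)

point = mtu.mean() - pt.mean()          # observed mean cost difference

# Nonparametric bootstrap of the between-group mean difference
B = 2000
diffs = np.array([
    rng.choice(mtu, mtu.size, replace=True).mean()
    - rng.choice(pt, pt.size, replace=True).mean()
    for _ in range(B)
])
ci = np.percentile(diffs, [2.5, 97.5])  # percentile 95% confidence interval
```

The trial's actual analysis additionally handles missing data via multiple imputation and pairs each bootstrapped cost difference with an effect difference to build cost-effectiveness acceptability curves; this sketch shows only the resampling core.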
Data on cost analysis of drilling mud displacement during drilling operation.
Okoro, Emeka Emmanuel; Dosunmu, Adewale; Iyuke, Sunny E
2018-08-01
The focus of this research was to present a data article for analyzing the cost of displacing a drilling fluid during the drilling operation. The costs of the conventional Spud, KCl, and Pseudo Oil Based (POBM) muds used in drilling oil and gas wells are compared with that of a Reversible Invert Emulsion Mud. The cost analysis is limited to three sections for an optimal and effective comparison. To optimize drilling operations, it is important to specify the yardstick by which drilling performance is measured; the most relevant yardstick is the cost per foot drilled. The data show that the prices of drilling mud systems are a function of the formulation cost for a particular mud weight and of the maintenance cost per day; these costs differ between mud systems and depend on the base fluid. The reversible invert emulsion drilling fluid eliminates the cost incurred in displacing Pseudo Oil Based Mud (POBM) from the well, the possible formation damage (permeability impairment) resulting from the use of a viscous pill in displacing the POBM from the wellbore, and the risk of taking a kick during mud change-over. With this reversible mud system, the costs of the special fluids that are rarely applied for well-completion purposes (cleaning of a thick mud filter cake) may be reduced to the barest minimum.
Time-driven activity-based costing in health care: A systematic review of the literature.
Keel, George; Savage, Carl; Rafiq, Muhammad; Mazzocato, Pamela
2017-07-01
Health care organizations around the world are investing heavily in value-based health care (VBHC), and time-driven activity-based costing (TDABC) has been suggested as the cost-component of VBHC capable of addressing costing challenges. The aim of this study is to explore why TDABC has been applied in health care, how its application reflects a seven-step method developed specifically for VBHC, and implications for the future use of TDABC. This is a systematic review following the PRISMA statement. Qualitative methods were employed to analyze data through content analyses. TDABC is applicable in health care and can help to efficiently cost processes, and thereby overcome a key challenge associated with current cost-accounting methods. The method's ability to inform bundled payment reimbursement systems and to coordinate delivery across the care continuum remains to be demonstrated in the published literature, and the role of TDABC in this cost-accounting landscape is still developing. TDABC should be gradually incorporated into functional systems, while following and building upon the recommendations outlined in this review. In this way, TDABC will be better positioned to accurately capture the cost of care delivery for conditions and to control cost in the effort to create value in health care. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
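The core TDABC calculation — a capacity cost rate multiplied by the time each activity consumes — fits in a few lines; all figures below are invented for illustration:

```python
# Time-driven ABC: cost of an activity = capacity cost rate x time consumed.
# Practical capacity assumed at 80% of paid time (illustrative convention).
minutes_per_year = 220 * 8 * 60 * 0.8            # working days x hours x minutes x 80%
resource_cost = 120_000.0                        # annual cost of one clinician (hypothetical)
rate = resource_cost / minutes_per_year          # capacity cost rate per minute

# Minutes consumed at each step of a hypothetical care path
care_path = {"intake": 15, "examination": 30, "treatment": 45, "follow-up": 10}
episode_cost = sum(rate * minutes for minutes in care_path.values())
```

Summing such episode costs along a patient's full care cycle is what lets TDABC cost an entire condition rather than a single department, which is the property the VBHC literature leans on.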
NASA Astrophysics Data System (ADS)
Validi, AbdoulAhad
2014-03-01
This study introduces a non-intrusive approach in the context of low-rank separated representation to construct a surrogate of high-dimensional stochastic functions, e.g., PDEs/ODEs, in order to decrease the computational cost of Markov Chain Monte Carlo simulations in Bayesian inference. The surrogate model is constructed via a regularized alternative least-square regression with Tikhonov regularization using a roughening matrix computing the gradient of the solution, in conjunction with a perturbation-based error indicator to detect optimal model complexities. The model approximates a vector of a continuous solution at discrete values of a physical variable. The required number of random realizations to achieve a successful approximation linearly depends on the function dimensionality. The computational cost of the model construction is quadratic in the number of random inputs, which potentially tackles the curse of dimensionality in high-dimensional stochastic functions. Furthermore, this vector-valued separated representation-based model, in comparison to the available scalar-valued case, leads to a significant reduction in the cost of approximation by an order of magnitude equal to the vector size. The performance of the method is studied through its application to three numerical examples including a 41-dimensional elliptic PDE and a 21-dimensional cavity flow.
Martín, Fernando; Moreno, Luis; Garrido, Santiago; Blanco, Dolores
2015-09-16
One of the most important skills desired for a mobile robot is the ability to obtain its own location even in challenging environments. The information provided by the sensing system is used here to solve the global localization problem. In our previous work, we designed different algorithms founded on evolutionary strategies in order to solve the aforementioned task. The latest developments are presented in this paper. The engine of the localization module is a combination of the Markov chain Monte Carlo sampling technique and the Differential Evolution method, which results in a particle filter based on the minimization of a fitness function. The robot's pose is estimated from a set of possible locations weighted by a cost value. The measurements of the perceptive sensors are used together with the predicted ones in a known map to define a cost function to optimize. Although most localization methods rely on quadratic fitness functions, the sensed information is processed asymmetrically in this filter. The Kullback-Leibler divergence is the basis of a cost function that makes it possible to deal with different types of occlusions. The algorithm performance has been checked in a real map. The results are excellent in environments with dynamic and unmodeled obstacles, a fact that causes occlusions in the sensing area.
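The asymmetric, KL-divergence-based cost described above can be sketched by treating normalized range scans as discrete distributions; the beam values below are made up to show the cost at a pose consistent with the scan (up to an occlusion) staying below the cost at an inconsistent pose:

```python
import numpy as np

def kl_cost(sensed, predicted, eps=1e-9):
    """Asymmetric fitness: KL divergence between normalized range profiles.

    Because KL(p || q) is not symmetric, occluded (shorter-than-predicted)
    readings are penalized differently from mismatches in free space.
    """
    p = sensed / (sensed.sum() + eps)
    q = predicted / (predicted.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

sensed = np.array([2.0, 2.1, 0.5, 2.0])      # third beam shortened by an occlusion
good_pose = np.array([2.0, 2.0, 2.0, 2.0])   # map-predicted ranges at the true pose
bad_pose = np.array([4.0, 1.0, 3.0, 0.5])    # map-predicted ranges at a wrong pose

costs = {"good": kl_cost(sensed, good_pose), "bad": kl_cost(sensed, bad_pose)}
```

In the full filter, each Differential Evolution particle would carry such a cost, and minimizing it over the particle set yields the pose estimate even when dynamic obstacles occlude part of the scan.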
Evensen, Stig; Wisløff, Torbjørn; Lystad, June Ullevoldsæter; Bull, Helen; Ueland, Torill; Falkum, Erik
2016-01-01
Schizophrenia is associated with recurrent hospitalizations, need for long-term community support, poor social functioning, and low employment rates. Despite the wide-ranging financial and social burdens associated with the illness, there is great uncertainty regarding prevalence, employment rates, and the societal costs of schizophrenia. The current study investigates 12-month prevalence of patients treated for schizophrenia, employment rates, and cost of schizophrenia using a population-based top-down approach. Data were obtained from comprehensive and mandatory health and welfare registers in Norway. We identified a 12-month prevalence of 0.17% for the entire population. The employment rate among working-age individuals was 10.24%. The societal costs for the 12-month period were USD 890 million. The average cost per individual with schizophrenia was USD 106 thousand. Inpatient care and lost productivity due to high unemployment represented 33% and 29%, respectively, of the total costs. The use of mandatory health and welfare registers enabled a unique and informative analysis on true population-based datasets. PMID:26433216
Feasibility and operating costs of an air cycle for CCHP in a fast food restaurant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez-Blanco, Horacio; Vineyard, Edward
This work considers the possibilities of an air-based Brayton cycle to provide the power, heating and cooling needs of fast-food restaurants. A model of the cycle based on conventional turbomachinery loss coefficients is formulated. The heating, cooling and power capabilities of the cycle are extracted from simulation results. Power and thermal loads for restaurants in Knoxville, TN and in International Falls, MN, are considered. It is found that the cycle can meet the loads by setting speed and mass flow-rate apportionment between the power and cooling functional sections. The associated energy costs appear elevated when compared to the cost of operating individual components or a more conventional, absorption-based CHP system. Lastly, a first-order estimate of capital investments is provided. Suggestions for future work whereby the operational costs could be reduced are given in the conclusions.
On spatial mutation-selection models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de; Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu; Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139
2013-11-15
We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process that includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states, or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including its limiting behavior, is studied for two types of mutation-selection models.
Network formation: neighborhood structures, establishment costs, and distributed learning.
Chasparis, Georgios C; Shamma, Jeff S
2013-12-01
We consider the problem of network formation in a distributed fashion. Network formation is modeled as a strategic-form game, where agents represent nodes that form and sever unidirectional links with other nodes and derive utilities from these links. Furthermore, agents can form links only with a limited set of neighbors. Agents trade off the benefit from links, which is determined by a distance-dependent reward function, and the cost of maintaining links. When each agent acts independently, trying to maximize its own utility function, we can characterize “stable” networks through the notion of Nash equilibrium. In fact, the introduced reward and cost functions lead to Nash equilibria (networks), which exhibit several desirable properties such as connectivity, bounded-hop diameter, and efficiency (i.e., minimum number of links). Since Nash networks may not necessarily be efficient, we also explore the possibility of “shaping” the set of Nash networks through the introduction of state-based utility functions. Such utility functions may represent dynamic phenomena such as establishment costs (either positive or negative). Finally, we show how Nash networks can be the outcome of a distributed learning process. In particular, we extend previous learning processes to so-called “state-based” weakly acyclic games, and we show that the proposed network formation games belong to this class of games.
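The reward-minus-cost utility driving the game can be sketched for a small line network; the distance-decay factor and link cost below are arbitrary choices, not values from the paper:

```python
import itertools

def hop_distances(n, links):
    """All-pairs hop counts over the undirected closure of the link set."""
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, j in links:
        d[i][j] = min(d[i][j], 1)
        d[j][i] = min(d[j][i], 1)
    for k, i, j in itertools.product(range(n), repeat=3):   # Floyd-Warshall
        d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

def utility(i, n, links, c=0.4):
    """Distance-decaying reward from reachable nodes minus maintenance cost."""
    d = hop_distances(n, links)
    reward = sum(0.5 ** d[i][j] for j in range(n)
                 if j != i and d[i][j] < float("inf"))
    maintained = sum(1 for a, _ in links if a == i)   # unidirectional links i pays for
    return reward - c * maintained

# Line network of 4 nodes: each node maintains one link to its right neighbor.
line = [(0, 1), (1, 2), (2, 3)]
u1 = utility(1, 4, line)   # node 1: reward 0.5 + 0.5 + 0.25, cost 1 link x 0.4
```

Checking whether any unilateral link addition or removal raises some node's utility is then the Nash-equilibrium test the paper uses to characterize stable networks.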
Reed, Shelby D; Neilson, Matthew P; Gardner, Matthew; Li, Yanhong; Briggs, Andrew H; Polsky, Daniel E; Graham, Felicia L; Bowers, Margaret T; Paul, Sara C; Granger, Bradi B; Schulman, Kevin A; Whellan, David J; Riegel, Barbara; Levy, Wayne C
2015-11-01
Heart failure disease management programs can influence medical resource use and quality-adjusted survival. Because projecting long-term costs and survival is challenging, a consistent and valid approach to extrapolating short-term outcomes would be valuable. We developed the Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model, a Web-based simulation tool designed to integrate data on demographic, clinical, and laboratory characteristics; use of evidence-based medications; and costs to generate predicted outcomes. Survival projections are based on a modified Seattle Heart Failure Model. Projections of resource use and quality of life are modeled using relationships with time-varying Seattle Heart Failure Model scores. The model can be used to evaluate parallel-group and single-cohort study designs and hypothetical programs. Simulations consist of 10,000 pairs of virtual cohorts used to generate estimates of resource use, costs, survival, and incremental cost-effectiveness ratios from user inputs. The model demonstrated acceptable internal and external validity in replicating resource use, costs, and survival estimates from 3 clinical trials. Simulations to evaluate the cost-effectiveness of heart failure disease management programs across 3 scenarios demonstrate how the model can be used to design a program in which short-term improvements in functioning and use of evidence-based treatments are sufficient to demonstrate good long-term value to the health care system. The Tools for Economic Analysis of Patient Management Interventions in Heart Failure Cost-Effectiveness Model provides researchers and providers with a tool for conducting long-term cost-effectiveness analyses of disease management programs in heart failure. Copyright © 2015 Elsevier Inc. All rights reserved.
LMDS Lightweight Modular Display System.
1982-02-16
based on standard functions. This means that the cost to produce a particular display function can be met in the most economical fashion and at the same...not mean that the NTDS interface would be eliminated. What is anticipated is the use of ETHERNET at a low level of system interface, i.e., internal to...The architecture of the unit's (fig 3-4) input circuitry is based on a video table look-up ROM. The function
Candidate Mission from Planet Earth control and data delivery system architecture
NASA Technical Reports Server (NTRS)
Shapiro, Phillip; Weinstein, Frank C.; Hei, Donald J., Jr.; Todd, Jacqueline
1992-01-01
Using a structured, experience-based approach, Goddard Space Flight Center (GSFC) has assessed the generic functional requirements for a lunar mission control and data delivery (CDD) system. This analysis was based on lunar mission requirements outlined in GSFC-developed user traffic models. The CDD system will facilitate data transportation among user elements, element operations, and user teams by providing functions such as data management, fault isolation, fault correction, and link acquisition. The CDD system for the lunar missions must not only satisfy lunar requirements but also facilitate and provide early development of data system technologies for Mars. Reuse and evolution of existing data systems can help to maximize system reliability and minimize cost. This paper presents a set of existing and currently planned NASA data systems that provide the basic functionality. Reuse of such systems can have an impact on mission design and significantly reduce CDD and other system development costs.
Electrical cable utilization for wave energy converters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bull, Diana; Baca, Michael; Schenkman, Benjamin
2018-04-27
Here, this paper investigates the suitability of sizing the electrical export cable based on the rating of the contributing WECs within a farm. These investigations have produced a new methodology to evaluate the probabilities associated with peak power values on an annual basis. It has been shown that the peaks in pneumatic power production will follow an exponential probability function for a linear model. A methodology to combine all the individual probability functions into an annual view has been demonstrated on pneumatic power production by a Backward Bent Duct Buoy (BBDB). These investigations have also resulted in a highly simplified and perfunctory model of installed cable cost as a function of voltage and conductor cross-section. This work solidifies the need to determine the electrical export cable rating based on expected energy delivery, as opposed to device rating, as small decreases in energy delivery can result in cost savings.
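The annual-peak methodology described above — per-condition exponential peak distributions combined into one annual view — can be sketched numerically. This is a minimal illustration under assumed exponential models; the scale parameters and peak counts below are hypothetical, not values from the paper:

```python
import numpy as np

def annual_exceedance(x, scales, counts):
    """Probability that the annual maximum peak power exceeds x, given
    per-sea-state exponential peak distributions.

    scales: mean peak power (exponential scale) for each sea state
    counts: expected number of power peaks per year in each sea state
    """
    # P(one peak <= x) under an exponential model is 1 - exp(-x/scale);
    # independence across peaks lets us multiply the non-exceedance
    # probabilities, which we accumulate in log space for stability.
    log_non_exceed = 0.0
    for scale, n in zip(scales, counts):
        cdf = 1.0 - np.exp(-x / scale)
        log_non_exceed += n * np.log(cdf)
    return 1.0 - np.exp(log_non_exceed)
```

Export-cable rating can then be read off as the smallest power level whose annual exceedance probability falls below an acceptable risk, rather than as the sum of device ratings.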
Economic analysis of the space shuttle system, volume 1
NASA Technical Reports Server (NTRS)
1972-01-01
An economic analysis of the space shuttle system is presented. The analysis is based on economic benefits, recurring costs, non-recurring costs, and economic tradeoff functions. The most economic space shuttle configuration is determined on the basis of: (1) the objectives of a reusable space transportation system, (2) the various space transportation systems considered, and (3) alternative space shuttle systems.
A Class of Manifold Regularized Multiplicative Update Algorithms for Image Clustering.
Yang, Shangming; Yi, Zhang; He, Xiaofei; Li, Xuelong
2015-12-01
Multiplicative update algorithms are important tools for information retrieval, image processing, and pattern recognition. However, when the graph regularization is added to the cost function, different classes of sample data may be mapped to the same subspace, which leads to the increase of data clustering error rate. In this paper, an improved nonnegative matrix factorization (NMF) cost function is introduced. Based on the cost function, a class of novel graph regularized NMF algorithms is developed, which results in a class of extended multiplicative update algorithms with manifold structure regularization. Analysis shows that in the learning, the proposed algorithms can efficiently minimize the rank of the data representation matrix. Theoretical results presented in this paper are confirmed by simulations. For different initializations and data sets, variation curves of cost functions and decomposition data are presented to show the convergence features of the proposed update rules. Basis images, reconstructed images, and clustering results are utilized to present the efficiency of the new algorithms. Last, the clustering accuracies of different algorithms are also investigated, which shows that the proposed algorithms can achieve state-of-the-art performance in applications of image clustering.
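A common instantiation of graph-regularized NMF (in the spirit of the cost function discussed above, though not necessarily this paper's exact update rules) minimizes ||X − UVᵀ||² + λ·Tr(VᵀLV) with L = D − W via multiplicative updates. A minimal sketch:

```python
import numpy as np

def gnmf(X, W, k, lam=0.1, iters=300, seed=0):
    """Graph-regularized NMF sketch: minimize ||X - U V^T||_F^2
    + lam * Tr(V^T L V) with L = D - W, via multiplicative updates
    that preserve nonnegativity of both factors."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = rng.random((m, k)) + 1e-3
    V = rng.random((n, k)) + 1e-3
    D = np.diag(W.sum(axis=1))   # degree matrix of the affinity graph
    eps = 1e-12                  # guards against division by zero
    for _ in range(iters):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V
```

Each update multiplies the current factor by a ratio of nonnegative terms, so iterates stay nonnegative without projection; the graph term pulls the representations of affine-connected samples together.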
Text-line extraction in handwritten Chinese documents based on an energy minimization framework.
Koo, Hyung Il; Cho, Nam Ik
2012-03-01
Text-line extraction in unconstrained handwritten documents remains a challenging problem due to nonuniform character scale, spatially varying text orientation, and the interference between text lines. In order to address these problems, we propose a new cost function that considers the interactions between text lines and the curvilinearity of each text line. Precisely, we achieve this goal by introducing normalized measures for them, which are based on an estimated line spacing. We also present an optimization method that exploits the properties of our cost function. Experimental results on a database consisting of 853 handwritten Chinese document images have shown that our method achieves a detection rate of 99.52% and an error rate of 0.32%, which outperforms conventional methods.
Optimality Principles for Model-Based Prediction of Human Gait
Ackermann, Marko; van den Bogert, Antonie J.
2010-01-01
Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736
Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.
Chen, Ke; Wang, Shihai
2011-01-01
Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to previous work.
Smoothing of cost function leads to faster convergence of neural network learning
NASA Astrophysics Data System (ADS)
Xu, Li-Qun; Hall, Trevor J.
1994-03-01
One of the major problems in supervised learning of neural networks is the inevitable local minima inherent in the cost function f(W, D). This often renders powerless the classic gradient-descent-based learning algorithms that calculate the weight update at each iteration as ΔW(t) = -η ∇_W f(W, D). In this paper we describe a new strategy to solve this problem, which adaptively changes the learning rate and manipulates the gradient estimator simultaneously. The idea is to implicitly convert the local-minima-laden cost function f(·) into a sequence of its smoothed versions {f_{β_t}}, t = 1, …, T, which, subject to the parameter β_t, bears less detail at time t = 1 and gradually more later on; the learning is actually performed on this sequence of functionals. The corresponding smoothed global minima obtained in this way, {W_t}, t = 1, …, T, thus progressively approximate W, the desired global minimum. Experimental results on a nonconvex function minimization problem and a typical neural network learning task are given; analyses and discussions of some important issues are provided.
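The annealing idea — descend on progressively less-smoothed versions f_{β_t} of the cost — can be sketched with Gaussian smoothing and a Monte Carlo gradient estimator. This is an illustration of the general strategy, not the paper's exact algorithm, and it is written for a scalar weight:

```python
import numpy as np

def smoothed_descent(f, w0, betas, lr=0.05, steps=100, samples=200, seed=0):
    """Gradient descent on a sequence of Gaussian-smoothed versions of a
    scalar cost f: f_beta(w) = E[f(w + beta*eps)], eps ~ N(0, 1).
    By Stein's lemma the smoothed gradient is E[f(w + beta*eps) * eps] / beta,
    estimated here by Monte Carlo. Large beta washes out local minima;
    shrinking beta restores detail as the iterate nears the global basin."""
    rng = np.random.default_rng(seed)
    w = w0
    for beta in betas:
        for _ in range(steps):
            eps = rng.standard_normal(samples)
            grad = np.mean(f(w + beta * eps) * eps) / beta
            w -= lr * grad
    return w
```

On a cost like w² + 1 − cos(6w), plain descent from w = 2 stalls in an oscillation well, while the annealed sequence first sees only the convex w² envelope and tracks its minimum down to the global basin.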
Saramago, Pedro; Cooper, Nicola J; Sutton, Alex J; Hayes, Mike; Dunn, Ken; Manca, Andrea; Kendrick, Denise
2014-05-16
The UK has one of the highest rates of deaths from fire and flames in children aged 0-14 years compared to other high-income countries. Evidence shows that smoke alarms can reduce the risk of fire-related injury, but little exists on their cost-effectiveness. We aimed to compare the cost-effectiveness of different interventions for the uptake of 'functioning' smoke alarms, and consequently for the prevention of fire-related injuries in children in the UK. We carried out a decision model-based probabilistic cost-effectiveness analysis. We used a hypothetical population of newborns and evaluated the impact of living in a household with or without a functioning smoke alarm during the first 5 years of their life on overall lifetime costs and quality of life from a public health perspective. We compared seven interventions, ranging from usual care to more complex interventions comprising education, free/low-cost equipment giveaway, equipment fitting, and/or home safety inspection. Education and free/low-cost equipment was the most cost-effective intervention, with an estimated incremental cost-effectiveness ratio of £34,200 per QALY gained compared to usual care. This was reduced to approximately £4,500 per QALY gained when 1.8 children under the age of 5 were assumed per household. Assessing cost-effectiveness, as well as effectiveness, is important in a public sector system operating under a fixed budget constraint. As highlighted in this study, the more effective interventions (in this case the more complex interventions) may not necessarily be the ones considered the most cost-effective.
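The incremental cost-effectiveness ratio reported above is simply the extra cost per extra QALY relative to the comparator:

```python
def icer(cost_new, qaly_new, cost_usual, qaly_usual):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the new intervention over usual care."""
    return (cost_new - cost_usual) / (qaly_new - qaly_usual)
```

For instance, an intervention costing £342 more per person and gaining 0.01 QALYs yields £34,200/QALY; these inputs are illustrative numbers chosen to match the ratio above, not figures from the study's cost tables.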
Thakar, Sumit; Dadlani, Ravi; Sivaraju, Laxminadh; Aryan, Saritha; Mohan, Dilip; Sai Kiran, Narayanam Anantha; Rajarathnam, Ravikiran; Shyam, Maya; Sadanand, Venkatraman; Hegde, Alangar S.
2015-01-01
Background: It is well-accepted that the current healthcare scenario worldwide is due for a radical change, given that it is fraught with mounting costs and varying quality. Various modifications in health policies have been instituted toward this end. An alternative model, the low-cost, value-based health model, focuses on maximizing value for patients by moving away from a physician-centered, supply-driven system to a patient-centered system. Methods: The authors discuss the successful inception, functioning, sustainability, and replicability of a novel health model in neurosurgery built and sustained by inspired humanitarianism and that provides all treatment at no cost to the patients irrespective of their socioeconomic strata, color or creed. Results: The Sri Sathya Sai Institute of Higher Medical Sciences (SSSIHMS) at Whitefield, Bengaluru, India, a private charitable hospital established in 2001, functions on the ideals of providing free state-of-the-art healthcare to all in a compassionate and holistic manner. With modern equipment and respectable outcome benchmarks, its neurosurgery unit has operated on around 18,000 patients since its inception, and as such, has contributed INR 5310 million (USD 88.5 million) to society from an economic standpoint. Conclusions: The inception and sustainability of the SSSIHMS model are based on self-perpetuating philanthropy, a cost-conscious culture and the dissemination of human values. Replicated worldwide, at least in the developing nations, this unique healthcare model may well change the face of healthcare economics. PMID:26322241
NASA Astrophysics Data System (ADS)
Fuchs, Erica R. H.; Bruce, E. J.; Ram, R. J.; Kirchain, Randolph E.
2006-08-01
The monolithic integration of components holds promise to increase network functionality and reduce packaging expense. Integration also drives down yield due to manufacturing complexity and the compounding of failures across devices. Consensus is lacking on the economically preferred extent of integration. Previous studies on the cost feasibility of integration have used high-level estimation methods. This study instead focuses on accurate-to-industry detail, basing a process-based cost model of device manufacture on data collected from 20 firms across the optoelectronics supply chain. The model presented allows for the definition of process organization, including testing, as well as processing conditions, operational characteristics, and level of automation at each step. This study focuses on the cost implications of integration of a 1550-nm DFB laser with an electroabsorptive modulator on an InP platform. Results show the monolithically integrated design to be more cost-competitive than discrete component options regardless of production scale. Dominant cost drivers are packaging, testing, and assembly. Leveraging the technical detail underlying model projections, component alignment, bonding, and metal-organic chemical vapor deposition (MOCVD) are identified as processes where technical improvements are most critical to lowering costs. Such results should encourage exploration of the cost advantages of further integration and focus cost-driven technology development.
Parikh, Kushal R; Davenport, Matthew S; Viglianti, Benjamin L; Hubers, David; Brown, Richard K J
2016-07-01
To determine the financial implications of switching technetium (Tc)-99m mercaptoacetyltriglycine (MAG-3) to Tc-99m diethylene triamine penta-acetic acid (DTPA) at certain renal function thresholds before renal scintigraphy. Institutional review board approval was obtained, and informed consent was waived for this HIPAA-compliant, retrospective, cohort study. Consecutive adult subjects (27 inpatients; 124 outpatients) who underwent MAG-3 renal scintigraphy, in the period from July 1, 2012 to June 30, 2013, were stratified retrospectively by hypothetical serum creatinine and estimated glomerular filtration rate (eGFR) thresholds, based on pre-procedure renal function. Thresholds were used to estimate the financial effects of using MAG-3 when renal function was at or worse than a given cutoff value, and DTPA otherwise. Cost analysis was performed with consideration of raw material and preparation costs, with radiotracer costs estimated by both vendor list pricing and proprietary institutional pricing. The primary outcome was a comparison of each hypothetical threshold to the clinical reality in which all subjects received MAG-3, and the results were supported by univariate sensitivity analysis. Annual cost savings by serum creatinine threshold were as follows (threshold given in mg/dL): $17,319 if ≥1.0; $33,015 if ≥1.5; and $35,180 if ≥2.0. Annual cost savings by eGFR threshold were as follows (threshold given in mL/min/1.73 m(2)): $21,649 if ≤60; $28,414 if ≤45; and $32,744 if ≤30. Cost-savings inflection points were approximately 1.25 mg/dL (serum creatinine) and 60 mL/min/1.73 m(2) (eGFR). Secondary analysis by proprietary institutional pricing revealed similar trends, and cost savings of similar magnitude. Sensitivity analysis confirmed cost savings at all tested thresholds. Reserving MAG-3 utilization for patients who have impaired renal function can impart substantial annual cost savings to a radiology department. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
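The threshold policy evaluated above — switch to DTPA whenever pre-procedure renal function is better than the cutoff — reduces to simple arithmetic once per-dose costs are known. A sketch with hypothetical unit costs (the study's actual radiotracer prices are proprietary):

```python
def annual_savings(creatinine_values, mag3_cost, dtpa_cost, threshold):
    """Savings from substituting DTPA for MAG-3 whenever pre-procedure
    serum creatinine is below the threshold (i.e., renal function is
    good enough that the cheaper tracer suffices). Unit costs here are
    hypothetical per-dose prices, not the study's figures."""
    switched = sum(1 for c in creatinine_values if c < threshold)
    return switched * (mag3_cost - dtpa_cost)
```

Raising the creatinine threshold switches more patients and increases savings, which is the monotone trend visible in the study's reported figures.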
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the effect of a test and post-test redesign on the trade-off between reliability and cost, and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously selecting the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing solely on locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima.
The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
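The core of the first methodology — assumed distributions of computational and experimental error, a redesign rule, and a Monte Carlo estimate of post-test reliability — can be sketched as follows. All distributions, margins, and rules here are illustrative assumptions, not the dissertation's values:

```python
import numpy as np

def simulate_test_redesign(n=100_000, calc_margin=0.1, err_calc=0.1,
                           err_test=0.02, redesign_to=0.2, seed=1):
    """Monte Carlo estimate of reliability and redesign probability when
    a single future test may trigger redesign (illustrative sketch).

    The true safety margin is the calculated margin minus an unknown
    computational error. The future test measures the true margin with a
    small experimental error; if the measured margin is negative, the
    design is redesigned to a larger calculated margin."""
    rng = np.random.default_rng(seed)
    e_calc = rng.normal(0.0, err_calc, n)          # unknown model error
    true_margin = calc_margin - e_calc
    measured = true_margin + rng.normal(0.0, err_test, n)
    redesign = measured < 0.0
    # redesign raises the calculated margin; the same model error applies
    final_margin = np.where(redesign, redesign_to - e_calc, true_margin)
    reliability = np.mean(final_margin > 0.0)
    return reliability, np.mean(redesign)
```

Running the two probabilities against redesign cost then exposes the trade-off the dissertation studies: a cheap test plus occasional redesign buys reliability that a larger up-front margin would otherwise have to provide.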
Dual-use micromechanical inertial sensors
NASA Astrophysics Data System (ADS)
Elwell, John M., Jr.
1995-03-01
A new industry, which will provide low-cost silicon-based inertial sensors to the commercial and military markets, is being created. Inertial measurement units are used extensively in military systems, and new versions are expected to find their way into commercial products, such as automobiles, as technology advances and production costs fall. An automotive inertial measurement unit can be expected to perform a complete range of control, diagnostic, and navigation functions. These functions are expected to provide significant active safety, performance, comfort, convenience, and fuel economy advantages to the automotive consumer. An inertial measurement unit applicable to the automobile industry would meet many of the performance requirements for the military in important areas, such as antenna and image stabilization, autopilot control, and the guidance of smart weapons. Such a new industrial base will significantly reduce the acquisition cost of many future tactical weapons systems. An alliance, consisting of the Charles Stark Draper Laboratory and Rockwell International, has been created to develop inertial products for this new industry.
Price dynamics and market power in an agent-based power exchange
NASA Astrophysics Data System (ADS)
Cincotti, Silvano; Guerci, Eric; Raberto, Marco
2005-05-01
This paper presents an agent-based model of a power exchange. Supply of electric power is provided by competing generating companies, whereas demand is assumed to be inelastic with respect to price and is constant over time. The transmission network topology is assumed to be a fully connected graph and no transmission constraints are taken into account. The price formation process follows a common scheme for real power exchanges: a clearing house mechanism with uniform price, i.e., with price set equal across all matched buyer-seller pairs. A single class of generating companies is considered, characterized by linear cost function for each technology. Generating companies compete for the sale of electricity through repeated rounds of the uniform auction and determine their supply functions according to production costs. However, an individual reinforcement learning algorithm characterizes generating companies behaviors in order to attain the expected maximum possible profit in each auction round. The paper investigates how the market competitive equilibrium is affected by market microstructure and production costs.
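The clearing-house mechanism with uniform price described above can be sketched directly: supply offers are stacked in merit order and every matched buyer-seller pair trades at the marginal offer's price. This is a minimal illustration of the market-clearing step only; the paper's reinforcement-learning dynamics are not reproduced here:

```python
def uniform_clearing_price(supply_bids, demand):
    """Clear a uniform-price power auction (sketch).
    supply_bids: list of (price, quantity) offers
    demand:      inelastic load to be served
    Returns (clearing_price, dispatched), where dispatched lists the
    (price, quantity) pairs taken in merit order and every trade settles
    at the single marginal price."""
    dispatched = []
    remaining = demand
    for price, qty in sorted(supply_bids):   # merit order: cheapest first
        if remaining <= 0:
            break
        take = min(qty, remaining)
        dispatched.append((price, take))
        remaining -= take
    if remaining > 0:
        raise ValueError("demand exceeds offered supply")
    clearing_price = dispatched[-1][0]       # price of the marginal unit
    return clearing_price, dispatched
```

For example, offers of 50 MWh at 10, 50 at 20, and 50 at 30 against an inelastic demand of 80 MWh clear at price 20, with the cheapest generator fully dispatched.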
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
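For the directed-paths-in-random-media setting, the solvable class is the one where the cost decomposes along rows, so dynamic programming (equivalently, the transfer matrix technique) finds the optimum in polynomial time rather than by exhaustive enumeration of paths. A minimal sketch on a lattice where the path moves one row down per step, to the same or an adjacent column:

```python
import numpy as np

def min_directed_path_cost(cost):
    """Minimal cost of a directed path from the top row to the bottom
    row of a random-cost lattice, moving one row down per step to the
    same column or an adjacent one (dynamic programming).

    cost: (rows, cols) array of site costs."""
    rows, cols = cost.shape
    best = cost[0].copy()                       # best cost ending at row 0
    for r in range(1, rows):
        prev = best
        best = np.empty(cols)
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols, c + 2)
            best[c] = cost[r, c] + prev[lo:hi].min()
    return best.min()
```

Keeping the k smallest values per column instead of only the minimum would yield the high-ranking solution lists the significance analysis above operates on.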
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
Poor quality affects project makespan and total costs negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource-constrained project scheduling problem, considering repair works. In order to obtain satisfactory quality without a large increase in project cost, the objective is to minimize total quality cost, which consists of the prevention cost and failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe activity quality; cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely, the coordinative and precedence relationships. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem, based on an adaptive serial schedule generation scheme and an adjusted activity list. In the program of the algorithm, the frog-leaping progress combines the crossover operator of a genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm's performance and to assist decision making in searching for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
A reliable algorithm for optimal control synthesis
NASA Technical Reports Server (NTRS)
Vansteenwyk, Brett; Ly, Uy-Loi
1992-01-01
In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H2-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes illustrate positively the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
Bertoldi, Eduardo G; Stella, Steffan F; Rohde, Luis E; Polanczyk, Carisi A
2016-05-01
Several tests exist for diagnosing coronary artery disease, with varying accuracy and cost. We sought to provide cost-effectiveness information to aid physicians and decision-makers in selecting the most appropriate testing strategy. We used a state-transition (Markov) model from the Brazilian public health system perspective with a lifetime horizon. Diagnostic strategies were based on exercise electrocardiography (Ex-ECG), stress echocardiography (ECHO), single-photon emission computed tomography (SPECT), computed tomography coronary angiography (CTA), or stress cardiac magnetic resonance imaging (C-MRI) as the initial test. Systematic review provided input data for test accuracy and long-term prognosis. Cost data were derived from the Brazilian public health system. Diagnostic test strategy had a small but measurable impact in quality-adjusted life-years gained. Switching from Ex-ECG to CTA-based strategies improved outcomes at an incremental cost-effectiveness ratio of 3100 international dollars per quality-adjusted life-year. ECHO-based strategies resulted in cost and effectiveness almost identical to CTA, and SPECT-based strategies were dominated because of their much higher cost. Strategies based on stress C-MRI were most effective, but the incremental cost-effectiveness ratio vs CTA was higher than the proposed willingness-to-pay threshold. Invasive strategies were dominant in the high pretest probability setting. Sensitivity analysis showed that results were sensitive to costs of CTA, ECHO, and C-MRI. Coronary CT is cost-effective for the diagnosis of coronary artery disease and should be included in the Brazilian public health system. Stress ECHO has a similar performance and is an acceptable alternative for most patients, but invasive strategies should be reserved for patients at high risk. © 2016 Wiley Periodicals, Inc.
Reliability and Validity in Hospital Case-Mix Measurement
Pettengill, Julian; Vertrees, James
1982-01-01
There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
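The case-mix construction described above reduces to a discharge-weighted average of relative cost weights; a hospital's expected cost per case is then the index times the average cost of a weight-1.0 case. A minimal sketch with hypothetical patient groups:

```python
def case_mix_index(case_counts, cost_weights):
    """Hospital case-mix index: discharge-weighted mean of the relative
    cost weights of the patient categories (DRG-style sketch).

    case_counts:  {category: number of cases treated at this hospital}
    cost_weights: {category: estimated relative cost weight, 1.0 = the
                   cost of the average case across all hospitals}
    """
    total = sum(case_counts.values())
    return sum(n * cost_weights[g] for g, n in case_counts.items()) / total
```

A hospital treating 50 cases of a 0.8-weight group and 50 of a 1.6-weight group has an index of 1.2: its mix is expected to cost 20% more per case than the national average. The proportionality hypothesis tested in the paper is exactly that observed cost per case scales linearly with this index.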
NASA Astrophysics Data System (ADS)
Sutrisno, Widowati, Tjahjana, R. Heru
2017-12-01
The future cost in many industrial problems is inherently uncertain, so a mathematical analysis for problems with uncertain costs is needed. In this article, we deal with fuzzy expected-value analysis to solve an integrated supplier selection and inventory control problem with uncertain cost, where the cost uncertainty is modeled by a fuzzy variable. We formulate the problem as a fuzzy expected-value-based quadratic optimization with a total-cost objective function and solve it using expected-value-based fuzzy programming. In the numerical examples, the supplier selection problem was solved: the optimal supplier was selected for each time period, the optimal volume of each product to be purchased from each supplier in each time period was determined, and the product stock level was controlled to follow the given reference level.
Elbasha, Elamin H
2005-05-01
The availability of patient-level data from clinical trials has spurred a lot of interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices from healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme, when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and the issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd
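Under exponential utility the expected utility of a normally distributed net benefit follows directly from the normal moment-generating function, which recovers the mean-variance trade-off as a special case. A minimal sketch of the resulting programme-ranking rule (illustrative, with hypothetical programme figures):

```python
def certainty_equivalent_normal(mu, sigma, r):
    """Certainty equivalent of a normally distributed net benefit under
    exponential utility u(x) = -exp(-r x) (constant absolute risk
    aversion r > 0). Via the normal MGF,
    E[u(B)] = -exp(-r*mu + r**2 * sigma**2 / 2),
    so maximizing expected utility is equivalent to maximizing
    CE = mu - r * sigma**2 / 2."""
    return mu - r * sigma**2 / 2.0

def preferred(prog_a, prog_b, r):
    """Pick the programme with the higher certainty equivalent.
    Each programme is (mean net benefit, std of net benefit)."""
    ca = certainty_equivalent_normal(*prog_a, r)
    cb = certainty_equivalent_normal(*prog_b, r)
    return "A" if ca >= cb else "B"
```

As the abstract argues, the ranking depends on risk preferences: a nearly risk-neutral decision-maker picks the higher-mean programme, while a strongly risk-averse one accepts a lower mean in exchange for lower variance.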
Communications network design and costing model users manual
NASA Technical Reports Server (NTRS)
Logan, K. P.; Somes, S. S.; Clark, C. A.
1983-01-01
The information and procedures needed to exercise the communications network design and costing model for performing network analysis are presented. Specific procedures are included for executing the model on the NASA Lewis Research Center IBM 3033 computer. The concepts, functions, and data bases relating to the model are described. Model parameters and their format specifications for running the model are detailed.
Bu, Jiyoon; Kim, Young Jun; Kang, Yoon-Tae; Lee, Tae Hee; Kim, Jeongsuk; Cho, Young-Ho; Han, Sae-Won
2017-05-01
The metastasis of cancer is strongly associated with the spread of circulating tumor cells (CTCs). Based on microfluidic devices, which offer rapid recovery of CTCs, a number of studies have demonstrated the potential of CTCs as a diagnostic tool. However, not only the insufficient specificity and sensitivity arising from the rarity and heterogeneity of CTCs, but also high-cost fabrication processes, limit the commercial use of CTC-based medical devices. Here, we present low-cost fabric sheet layers for CTC isolation, composed of polyester monofilament yarns. The fabric sheet layers are easily functionalized with graphene oxide (GO), which is beneficial for improving both sensitivity and specificity. The GO modification of the low-cost fabrics enhances the binding of anti-EpCAM antibodies, resulting in a 10-25% increase in capture efficiency compared to the surface without GO (anti-EpCAM antibodies applied directly to the fabric sheets), while achieving high purity by isolating only 50-300 leukocytes from 1 mL of human blood. We investigated CTCs in ten human blood samples and successfully isolated 4-42 CTCs/mL from cancer patients, while no cancerous cells were found among healthy donors. These remarkable results show the feasibility of using GO-functionalized fabric sheet layers in various CTC-based clinical applications with high sensitivity and selectivity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Index to Estimate the Efficiency of an Ophthalmic Practice.
Chen, Andrew; Kim, Eun Ah; Aigner, Dennis J; Afifi, Abdelmonem; Caprioli, Joseph
2015-08-01
A metric of efficiency, a function of the ratio of quality to cost per patient, will allow the health care system to better measure the impact of specific reforms and compare the effectiveness of each. To develop and evaluate an efficiency index that estimates the performance of an ophthalmologist's practice as a function of cost, number of patients receiving care, and quality of care. Retrospective review of 36 ophthalmology subspecialty practices from October 2011 to September 2012 at a university-based eye institute. The efficiency index (E) was defined as a function of the adjusted number of patients (Na), total practice adjusted costs (Ca), and a preliminary measure of quality (Q). The constant b limits E between 0 and 1; the constant y modifies the influence of Q on E. Relative value units and geographic cost indices determined by the Centers for Medicare and Medicaid for 2012 were used to calculate adjusted costs. The efficiency index is expressed as E = b(Na/Ca)Q^y. Independent, masked auditors reviewed 20 random patient medical records for each practice and filled out 3 questionnaires to obtain a process-based quality measure. The adjusted number of patients, adjusted costs, quality, and efficiency index were calculated for 36 ophthalmology subspecialties. The median adjusted number of patients was 5516 (interquartile range, 3450-11,863), the median adjusted cost was 1.34 (interquartile range, 0.99-1.96), the median quality was 0.89 (interquartile range, 0.79-0.91), and the median value of the efficiency index was 0.26 (interquartile range, 0.08-0.42). The described efficiency index is a metric that provides a broad overview of performance for a variety of ophthalmology specialties as estimated by resources used and a preliminary measure of quality of care provided.
The results of the efficiency index could be used in future investigations to determine its sensitivity to detect the impact of interventions on a practice such as training modules or practice restructuring.
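A direct transcription of the index might look like the following sketch. The constants b and y here are illustrative placeholders (the paper calibrates b so that E falls in [0, 1]), and the abstract's median values are plugged in only as a plausibility check.

```python
def efficiency_index(n_adj, c_adj, quality, b=1e-4, y=1.0):
    """Efficiency index E = b * (Na / Ca) * Q**y, clamped to [0, 1].
    n_adj: adjusted number of patients; c_adj: total practice adjusted cost;
    quality: process-based quality measure in [0, 1].
    b and y are illustrative placeholders, not the paper's calibrated values."""
    e = b * (n_adj / c_adj) * quality**y
    return min(max(e, 0.0), 1.0)

# Plausibility check with the abstract's median values
e_median = efficiency_index(5516, 1.34, 0.89)
```

With this placeholder b, the median practice lands in roughly the same range as the reported median index, but the exact value depends entirely on calibration.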
Suzuki, Keishiro; Hirasawa, Yukinori; Yaegashi, Yuji; Miyamoto, Hideki; Shirato, Hiroki
2009-01-01
We developed a web-based, remote radiation treatment planning system which allowed staff at an affiliated hospital to obtain support from a fully staffed central institution. Network security was based on a firewall and a virtual private network (VPN). Client computers were installed at a cancer centre, at a university hospital and at a staff home. We remotely operated the treatment planning computer using the Remote Desktop function built into the Windows operating system. Except for the initial setup of the VPN router, no special knowledge was needed to operate the remote radiation treatment planning system. There was a time lag that seemed to depend on the volume of data traffic on the Internet, but it did not affect smooth operation. The initial cost and running cost of the system were reasonable.
Cost function approach for estimating derived demand for composite wood products
T. C. Marcin
1991-01-01
A cost function approach was examined for using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and derived conditional factor demand equations. Alternative models were derived from the translog cost function by imposing parameter restrictions.
NASA Technical Reports Server (NTRS)
Hanagud, S.; Uppaluri, B.
1975-01-01
This paper describes a methodology for making cost effective fatigue design decisions. The methodology is based on a probabilistic model for the stochastic process of fatigue crack growth with time. The development of a particular model for the stochastic process is also discussed in the paper. The model is based on the assumption of continuous time and discrete space of crack lengths. Statistical decision theory and the developed probabilistic model are used to develop the procedure for making fatigue design decisions on the basis of minimum expected cost or risk function and reliability bounds. Selections of initial flaw size distribution, NDT, repair threshold crack lengths, and inspection intervals are discussed.
Alexandrescu, Roxana; Siegert, Richard John; Turner-Stokes, Lynne
2014-01-01
Objectives To describe functional outcomes, care needs and cost-efficiency of hospital rehabilitation for a UK cohort of inpatients with complex rehabilitation needs arising from inflammatory polyneuropathies. Subjects and Setting 186 patients consecutively admitted to specialist neurorehabilitation centres in England with Guillain-Barré Syndrome (n = 118 (63.4%)) or other inflammatory polyneuropathies, including chronic inflammatory demyelinating polyneuropathy (n = 15 (8.1%)) or critical illness neuropathy (n = 32 (17.2%)). Methods Cohort analysis of data from the UK Rehabilitation Outcomes Collaborative national clinical dataset. Outcome measures include the UK Functional Assessment Measure, Northwick Park Dependency Score (NPDS) and Care Needs Assessment (NPCNA). Patients were analysed in three groups of dependency based on their admission NPDS score: ‘low’ (NPDS<10), ‘medium’ (NPDS 10–24) and ‘high’ (NPDS ≥25). Cost-efficiency was measured as the time taken to offset the cost of rehabilitation by savings in NPCNA-estimated costs of on-going care in the community. Results The mean rehabilitation length of stay was 72.2 (sd = 66.6) days. Significant differences were seen between the diagnostic groups on admission, but all showed significant improvements between admission and discharge, in both motor and cognitive function (p<0.0001). Patients who were highly dependent on admission had the longest lengths of stay (mean 97.0 (SD 79.0) days), but also showed the greatest reduction in on-going care costs (£1049 per week (SD £994)), so that overall they were the most cost-efficient to treat. Conclusions Patients with polyneuropathies have both physical and cognitive disabilities that are amenable to change with rehabilitation, resulting in significant reduction in on-going care-costs, especially for highly dependent patients. PMID:25402491
Economic Outcomes With Anatomical Versus Functional Diagnostic Testing for Coronary Artery Disease.
Mark, Daniel B; Federspiel, Jerome J; Cowper, Patricia A; Anstrom, Kevin J; Hoffmann, Udo; Patel, Manesh R; Davidson-Ray, Linda; Daniels, Melanie R; Cooper, Lawton S; Knight, J David; Lee, Kerry L; Douglas, Pamela S
2016-07-19
PROMISE (PROspective Multicenter Imaging Study for Evaluation of Chest Pain) found that initial use of at least 64-slice multidetector computed tomography angiography (CTA) versus functional diagnostic testing strategies did not improve clinical outcomes in stable symptomatic patients with suspected coronary artery disease (CAD) requiring noninvasive testing. To conduct an economic analysis for PROMISE (a major secondary aim of the study). Prospective economic study from the U.S. perspective. Comparisons were made according to the intention-to-treat principle, and CIs were calculated using bootstrap methods. (ClinicalTrials.gov: NCT01174550). 190 U.S. centers. 9649 U.S. patients enrolled in PROMISE between July 2010 and September 2013. Median follow-up was 25 months. Technical costs of the initial (outpatient) testing strategy were estimated from Premier Research Database data. Hospital-based costs were estimated using hospital bills and Medicare cost-charge ratios. Physician fees were taken from the Medicare Physician Fee Schedule. Costs were expressed in 2014 U.S. dollars, discounted at 3% annually, and estimated out to 3 years using inverse probability weighting methods. The mean initial testing costs were $174 for exercise electrocardiography; $404 for CTA; $501 to $514 for pharmacologic and exercise stress echocardiography, respectively; and $946 to $1132 for exercise and pharmacologic stress nuclear testing, respectively. Mean costs at 90 days were $2494 for the CTA strategy versus $2240 for the functional strategy (mean difference, $254 [95% CI, -$634 to $906]). The difference was associated with more revascularizations and catheterizations (4.25 per 100 patients) with CTA use. After 90 days, the mean cost difference between the groups out to 3 years remained small. Cost weights for test strategies were obtained from sources outside PROMISE. 
Computed tomography angiography and functional diagnostic testing strategies in patients with suspected CAD have similar costs through 3 years of follow-up. National Heart, Lung, and Blood Institute.
NASA Astrophysics Data System (ADS)
Javadi, Maryam; Shahrabi, Jamal
2014-03-01
The problems of facility location and the allocation of demand points to facilities are crucial research issues in spatial data analysis and urban planning. It is very important for an organization or government to locate its resources and facilities optimally and to manage them efficiently, ensuring that all demand points are covered and all needs are met. Most recent studies, which focused on solving facility location problems by performing spatial clustering, have used the Euclidean distance between two points as the dissimilarity function. Natural obstacles, such as mountains and rivers, can have drastic impacts on the distance that needs to be traveled between two geographical locations. While calculating the distance between various supply chain entities (including facilities and demand points), it is necessary to take such obstacles into account to obtain better and more realistic location-allocation results. In this article, new models are presented for locating urban facilities while taking geographical obstacles into account. In these models, three new distance functions are proposed. The first function is based on shortest-path analysis in a linear network and is called the SPD function. The other two functions, namely PD and P2D, are based on algorithms that deal with robot geometry and route-based robot navigation in the presence of obstacles. The models were implemented in ArcGIS Desktop 9.2 software using the Visual Basic programming language and were evaluated using synthetic and real data sets. The overall performance was evaluated based on the sum of distances from demand points to their corresponding facilities. Because the distances between demand points and facilities become more realistic with the proposed functions, the results indicate the desired quality of the proposed models in terms of allocating points to centers and of logistics cost.
The obtained results show promising improvements in allocation, logistics costs and response time. It can also be inferred from this study that the P2D-based model and the SPD-based model yield similar results in terms of facility location and demand allocation. It is noted that the P2D-based model showed better execution time than the SPD-based model. Considering logistics costs, facility location and response time, the P2D-based model is the appropriate choice for the urban facility location problem in the presence of geographical obstacles.
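The gap between the Euclidean dissimilarity and an SPD-style network distance can be sketched with a toy Dijkstra computation; the graph, node names and edge weights below are invented for illustration and are not from the article's data sets.

```python
import heapq
import math

def shortest_path_distance(graph, src, dst):
    """SPD-style distance: Dijkstra over a linear (road) network, so that
    obstacles are respected because travel is only possible along edges."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return math.inf

# Toy network: a river between A and B forces a detour over a bridge.
graph = {
    "A": [("bridge", 4.0)],
    "bridge": [("A", 4.0), ("B", 4.0)],
    "B": [("bridge", 4.0)],
}
euclidean = 2.0  # straight-line distance A-B (directly across the river)
network = shortest_path_distance(graph, "A", "B")  # realistic travel distance
```

Allocating demand point A to a facility at B by Euclidean distance would underestimate the true travel cost fourfold in this toy case, which is exactly the distortion the proposed functions address.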
Heuristic Approach for Configuration of a Grid-Tied Microgrid in Puerto Rico
NASA Astrophysics Data System (ADS)
Rodriguez, Miguel A.
The high electricity rates that consumers are charged by the utility grid in Puerto Rico have created an energy crisis around the island. This situation is due to the island's dependence on imported fossil fuels. In order to aid in the transition from fossil-fuel-based electricity to electricity from renewable and alternative sources, this research work focuses on reducing the cost of electricity for Puerto Rico by finding the optimal microgrid configuration for a set number of consumers from the residential sector. The Hybrid Optimization Modeling for Energy Renewables (HOMER) software, developed by NREL, is utilized as an aid in determining the optimal microgrid setting. The problem is also approached via convex optimization; specifically, an objective function C(t) is formulated to be minimized. The cost function depends on the energy supplied by the grid, the energy supplied by renewable sources, the energy not supplied due to outages, and any excess energy sold to the utility on a yearly basis. A term for the social cost of carbon is also included in the cost function. Once the microgrid settings from HOMER are obtained, they are evaluated via the optimized function C(t), which in turn assesses the true optimality of the microgrid configuration. A microgrid to supply 10 consumers is considered; each consumer can possess a different microgrid configuration. The cost function C(t) is minimized, and the Net Present Value (NPV) and Cost of Electricity are computed for each configuration in order to assess its true feasibility. Results show that the greater the penetration of components into the microgrid and the greater the energy produced by the renewable sources in the microgrid, the greater the energy not supplied due to outages.
The proposed method demonstrates that adding large amounts of renewable components to a microgrid does not necessarily translate into economic benefits for the consumer; in fact, there is a trade-off between cost and the addition of elements that must be considered. Any configurations that consider further increases in microgrid components will result in increased NPV and increased costs of electricity, which deems those configurations unfeasible.
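The feasibility assessment described above rests on standard NPV and levelized-cost arithmetic, which can be sketched as follows; the cash flows and discount rate below are hypothetical, not the study's data.

```python
def npv(cashflows, rate):
    """Net Present Value of yearly cash flows; cashflows[0] occurs in year 0."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def cost_of_electricity(total_discounted_cost, total_discounted_kwh):
    """Levelized cost of electricity: discounted cost per discounted kWh."""
    return total_discounted_cost / total_discounted_kwh

# Hypothetical configuration: $20,000 upfront, then $3,000/yr net savings
# for 10 years, discounted at 5%.
flows = [-20000.0] + [3000.0] * 10
project_npv = npv(flows, 0.05)
```

A configuration is only worth adopting if the extra components push the NPV of savings above the added capital cost, which is the trade-off the abstract's conclusion describes.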
Sørensen, J; Primdahl, J; Horn, H C; Hørslev-Petersen, K
2015-01-01
To compare the cost-effectiveness of three types of follow-up for outpatients with stable low-activity rheumatoid arthritis (RA). In total, 287 patients were randomized to either planned rheumatologist consultations, shared care without planned consultations, or planned nurse consultations. Effectiveness measures included disease activity (Disease Activity Score based on 28 joint counts and C-reactive protein, DAS28-CRP), functional status (Health Assessment Questionnaire, HAQ), and health-related quality of life (EuroQol EQ-5D). Cost measures included activities in outpatient clinics and general practice, prescription and non-prescription medicine, dietary supplements, other health-care resources, and complementary and alternative care. Measures of effectiveness and costs were collected by self-reported questionnaires at inclusion and after 12 and 24 months. Incremental cost-effectiveness ratios (ICERs) were estimated in comparison with rheumatologist consultations. Changes in disease activity, functional status, and health-related quality of life were not statistically significantly different for the three groups, although the mean scores were better for the shared care and nurse care groups compared with the rheumatologist group. Shared care and nurse care were non-significantly less costly than rheumatologist care. As both shared care and nurse care were associated with slightly better EQ-5D improvements and lower costs, they dominated rheumatologist care. At a EUR 10,000 per quality-adjusted life year (QALY) threshold, shared care and nurse care were cost-effective with more than 90% probability. Nurse care was cost-effective in comparison with shared care with 75% probability. Shared care and nurse care seem to cost less but provide broadly similar health outcomes compared with rheumatologist outpatient care. However, it is still uncertain whether nurse care and shared care are cost-effective in comparison with rheumatologist outpatient care.
Voronoi Diagram Based Optimization of Dynamic Reactive Power Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Weihong; Sun, Kai; Qi, Junjian
2015-01-01
Dynamic var sources can effectively mitigate fault-induced delayed voltage recovery (FIDVR) issues or even voltage collapse. This paper proposes a new approach to optimizing the sizes of dynamic var sources at candidate locations via a Voronoi-diagram-based algorithm. It first disperses sample points of potential solutions in a searching space, evaluates a cost function at each point by barycentric interpolation over the subspaces around the point, and then constructs a Voronoi diagram of cost function values over the entire space. Accordingly, the final optimal solution can be obtained. Case studies on the WSCC 9-bus system and the NPCC 140-bus system have validated that the new approach can quickly identify the boundary of feasible solutions in the searching space and converge to the global optimal solution.
Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain
Osechinskiy, Sergey; Kruggel, Frithjof
2011-01-01
Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function. PMID:22567290
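A correlation-coefficient-based cost of the kind this study found best can be sketched in a few lines. This is a generic sketch of the negative Pearson correlation between a slice and the volume resampled at the current transform, not the authors' implementation.

```python
import numpy as np

def correlation_cost(slice_img, resampled_mri):
    """Negative Pearson correlation coefficient between a histological slice
    and the MR volume resampled at the current transform (lower is better,
    so an optimizer such as NEWUOA can minimize it directly)."""
    a = slice_img.ravel().astype(float)
    b = resampled_mri.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return -(a @ b) / denom
```

Because the correlation is invariant to linear intensity rescaling, this cost tolerates the global intensity differences typical of histology-to-MRI registration, which is one reason it can outperform simpler intensity-difference costs.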
Ant brood function as life preservers during floods.
Purcell, Jessica; Avril, Amaury; Jaffuel, Geoffrey; Bates, Sarah; Chapuisat, Michel
2014-01-01
Social organisms can surmount many ecological challenges by working collectively. An impressive example of such collective behavior occurs when ants physically link together into floating 'rafts' to escape from flooded habitat. However, raft formation may represent a social dilemma, with some positions posing greater individual risks than others. Here, we investigate the position and function of different colony members, and the costs and benefits of this functional geometry in rafts of the floodplain-dwelling ant Formica selysi. By causing groups of ants to raft in the laboratory, we observe that workers are distributed throughout the raft, queens are always in the center, and 100% of brood items are placed on the base. Through a series of experiments, we show that workers and brood are extremely resistant to submersion. Both workers and brood exhibit high survival rates after they have rafted, suggesting that occupying the base of the raft is not as costly as expected. The placement of all brood on the base of one cohesive raft confers several benefits: it preserves colony integrity, takes advantage of brood buoyancy, and increases the proportion of workers that immediately recover after rafting.
Incremental cost of postacute care in nursing homes.
Spector, William D; Limcangco, Maria Rhona; Ladd, Heather; Mukamel, Dana
2011-02-01
To determine whether the case mix index (CMI) based on the 53-Resource Utilization Groups (RUGs) captures all the cross-sectional variation in nursing home (NH) costs or whether NHs that have a higher percent of Medicare skilled care days (%SKILLED) have additional costs. DATA AND SAMPLE: Nine hundred and eighty-eight NHs in California in 2005. Data are from Medicaid cost reports, the Minimum Data Set, and the Economic Census. We estimate hybrid cost functions, which include in addition to outputs, case mix, ownership, wages, and %SKILLED. Two-stage least-square (2SLS) analysis was used to deal with the potential endogeneity of %SKILLED and CMI. On average 11 percent of NHs days were due to skilled care. Based on the 2SLS model, %SKILLED is associated with costs even when controlling for CMI. The marginal cost of a one percentage point increase in %SKILLED is estimated at U.S.$70,474 or about 1.2 percent of annual costs for the average cost facility. Subanalyses show that the increase in costs is mainly due to additional expenses for nontherapy ancillaries and rehabilitation. The 53-RUGs case mix does not account completely for all the variation in actual costs of care for postacute patients in NHs. © Health Research and Educational Trust.
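The 2SLS step used to handle the endogeneity of %SKILLED and CMI can be sketched generically; the toy data below are constructed so that the instrument is exact, purely to illustrate the two regression stages, and bear no relation to the study's cost reports.

```python
import numpy as np

def two_stage_ls(y, X, Z):
    """Two-stage least squares: regress the (possibly endogenous) regressors X
    on the instruments Z (stage 1), then regress y on the fitted values
    (stage 2). X and Z should each include a constant column."""
    # Stage 1: projection of X onto the instrument space
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: OLS of y on the projected regressors
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta

# Toy data: instrument z, regressor x identical to z (so stage 1 is exact),
# and y generated as 3 + 5x.
z = np.array([0.0, 1.0, 2.0, 3.0])
Z = np.column_stack([np.ones(4), z])
X = Z.copy()
y = 3.0 + 5.0 * z
beta = two_stage_ls(y, X, Z)
```

In the study's application, valid instruments purge %SKILLED and CMI of their correlation with the cost equation's error term before the hybrid cost function is estimated.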
On a cost functional for H2/H(infinity) minimization
NASA Technical Reports Server (NTRS)
Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis
1990-01-01
A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.
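For a state-space realization, the H2 cost that the proposed functional overbounds can be computed from the controllability Gramian; the system matrices below are an arbitrary stable example, not taken from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    """H2 norm of a stable LTI system (A, B, C): solve the controllability
    Gramian equation A P + P A^T + B B^T = 0, then ||G||_H2 = sqrt(tr(C P C^T))."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Arbitrary stable two-state example
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
```

For a lightly damped structure, the H2 norm corresponds to the output energy excited by white noise, which is the quantity the energy-motivated cost functional is designed to bound.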
Zero Base Budgeting: A New Planning Tool for New Colleges.
ERIC Educational Resources Information Center
Adamson, Willie D.
Zero-base budgeting is presented as the functional alternative to the community college funding crisis which may be precipitated by passage in June 1978 of the Jarvis Amendment (Proposition 13) in California. Defined as the management of scarce resources on a cost/benefit basis to achieve pre-determined goals, zero-base budgeting emphasizes…
Method to fabricate functionalized conical nanopores
Small, Leo J.; Spoerke, Erik David; Wheeler, David R.
2016-07-12
A pressure-based chemical etch method is used to shape polymer nanopores into cones. By varying the pressure, the pore tip diameter can be controlled, while the pore base diameter is largely unaffected. The method provides an easy, low-cost approach for conically etching high density nanopores.
Exploring information systems outsourcing in U.S. hospital-based health care delivery systems.
Diana, Mark L
2009-12-01
The purpose of this study is to explore the factors associated with outsourcing of information systems (IS) in hospital-based health care delivery systems, and to determine if there is a difference in IS outsourcing activity based on the strategic value of the outsourced functions. IS sourcing behavior is conceptualized as a case of vertical integration. A synthesis of strategic management theory (SMT) and transaction cost economics (TCE) serves as the theoretical framework. The sample consists of 1,365 hospital-based health care delivery systems that own 3,452 hospitals operating in 2004. The findings indicate that neither TCE nor SMT predicted outsourcing better than the other did. The findings also suggest that health care delivery system managers may not be considering significant factors when making sourcing decisions, including the relative strategic value of the functions they are outsourcing. It is consistent with previous literature to suggest that the high cost of IS may be the main factor driving the outsourcing decision.
NASA Astrophysics Data System (ADS)
Jolivet, L.; Cohen, M.; Ruas, A.
2015-08-01
Landscape influences fauna movement at different levels, from habitat selection to choices of movement direction. Our goal is to provide a development framework in which to test simulation functions for animal movement. We describe our approach for such simulations and compare two types of functions for calculating trajectories. To do so, we first modelled the role of landscape elements to differentiate between elements that facilitate movement and those that hinder it. Different influences are identified depending on the landscape elements and on the animal species. Knowledge was gathered from ecologists, the literature and observation datasets. Second, we analysed descriptions of animal movement recorded with GPS at fine scale, corresponding to high temporal frequency and good location accuracy. Analysing this type of data provides information on the relation between landscape features and movements. We implemented an agent-based simulation approach to calculate potential trajectories constrained by the spatial environment and the individual's behaviour. We tested two functions that consider space differently: one function takes into account the geometry and the types of landscape elements, and one cost function sums up the spatial surroundings of an individual. Results highlight the fact that the cost function exaggerates the distances travelled by an individual and simplifies movement patterns. The geometrically accurate function represents a good bottom-up approach for discovering interesting areas or obstacles for movements.
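A minimal version of a cost-function agent step might look like this; the landscape raster, cost values and greedy neighbour rule are invented for illustration and are far simpler than the article's simulation.

```python
import random

# Hypothetical landscape raster: cost of crossing each cell
# (1 = open field, 5 = hedge, 100 = river, i.e. a hindrance)
COST = {
    (0, 0): 1, (1, 0): 1, (2, 0): 1,
    (0, 1): 5, (1, 1): 100, (2, 1): 1,
    (0, 2): 1, (1, 2): 1, (2, 2): 1,
}

def step(pos):
    """One move of a cost-function agent: among the 4-neighbours inside the
    raster, prefer a cell whose landscape cost is lowest (ties broken at
    random). A geometry-aware function would instead reason about the shapes
    and types of landscape elements along the path."""
    x, y = pos
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    candidates = [n for n in neighbours if n in COST]
    lowest = min(COST[n] for n in candidates)
    return random.choice([n for n in candidates if COST[n] == lowest])
```

Because the agent only ever sees a summed local cost, such an agent skirts hindrances cell by cell, which is one mechanism by which cost functions can inflate travelled distance relative to geometry-aware trajectories.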
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.
Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
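The Nadaraya-Watson estimator itself is short enough to sketch. This version uses a Gaussian kernel rather than the orthonormal-system construction analysed in the report, and the bandwidth and target function are illustrative.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h=0.3):
    """Nadaraya-Watson kernel regression with a Gaussian kernel:
    f_hat(x) = sum_i K((x - x_i)/h) * y_i / sum_i K((x - x_i)/h)."""
    d = (x_query[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d**2)  # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# Noiseless smooth target for illustration
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x)
est = nadaraya_watson(x, y, np.array([0.25]), h=0.05)
```

The estimate is a locally weighted average of the training responses; with a small bandwidth it tracks the peak of the sine closely, at the cost of higher variance under noise.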
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crawford, Aladsair J.; Viswanathan, Vilayanur V.; Stephenson, David E.
A robust performance-based cost model is developed for all-vanadium, iron-vanadium and iron-chromium redox flow batteries. System aspects such as shunt current losses, pumping losses and thermal management are accounted for. The objective function, set to minimize system cost, allows determination of stack design and operating parameters such as current density, flow rate and depth of discharge (DOD). Component costs obtained from vendors are used to calculate system costs for various time frames. Data from a 2 kW stack were used to estimate unit energy costs and compared with model estimates for the same size electrodes. The tool has been shared with the redox flow battery community to both validate their stack data and guide future directions.
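The kind of objective-function trade-off such a model resolves can be sketched with a toy cost model. Every coefficient below is made up (the actual model uses vendor component costs and accounts for shunt currents, pumping and thermal management), so only the shape of the trade-off is meaningful.

```python
def system_cost_per_kwh(j, dod, stack_k=8000.0, electrolyte_k=150.0, loss_k=0.02):
    """Hypothetical $/kWh as a function of current density j (mA/cm^2) and
    depth of discharge. Higher j shrinks the (costly) stack but raises
    resistive losses; deeper DOD uses the electrolyte better."""
    stack = stack_k / j                # smaller stack at higher current density
    electrolyte = electrolyte_k / dod  # less electrolyte needed at deeper DOD
    losses = loss_k * j**2             # efficiency penalty from ohmic losses
    return stack + electrolyte + losses

# Brute-force search over the design space, a stand-in for the model's
# objective-function minimization
best = min(
    (system_cost_per_kwh(j, d), j, d)
    for j in range(20, 201, 5)
    for d in (0.4, 0.5, 0.6, 0.7, 0.8)
)
best_cost, best_j, best_d = best
```

The optimum sits where the marginal saving from a smaller stack equals the marginal cost of the extra losses, which is the balance the real model's optimizer finds over its full parameter set.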
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model; in fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data, applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
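The CER structure described, a power law in aperture diameter combined with a 50%-per-17-years technology factor, can be sketched as follows; the coefficient a, the exponent b and the base year are placeholders, not the paper's fitted values.

```python
def telescope_cost(aperture_m, year, a=1.0, b=1.7, base_year=1990):
    """Illustrative cost estimating relationship (CER):
    cost ~ a * D**b, discounted 50% per 17 years of technology advance.
    a, b and base_year are placeholders, not the paper's fitted values."""
    tech_factor = 0.5 ** ((year - base_year) / 17.0)
    return a * aperture_m**b * tech_factor

# A diameter exponent below 2 means cost grows slower than collecting
# area (D**2), i.e. a large telescope costs less per square meter of
# aperture than a small one.
big = telescope_cost(4.0, 2010) / 4.0**2    # cost per unit area, 4 m
small = telescope_cost(1.0, 2010) / 1.0**2  # cost per unit area, 1 m
```

Any exponent b < 2 reproduces the qualitative finding about cost per square meter; the quantitative value must come from fitting the historical data.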
Military Base Realignments and Closures: Updated Costs and Savings Estimates from BRAC 2005
2012-06-29
[Cost-table fragments from the report: "... AK 14.2; Realign medical functions at McChord Air Force Base, WA 13.7; Realign commodity management privatization 13.5; Close Kulis Air Guard ..."] The report notes that the Defense Base Closure and Realignment Act of 1990 was amended by inserting a new section, § 2913, which established “military value” as the primary consideration for BRAC recommendations, and discusses the recommendation to close Fort Monmouth, New Jersey ($1.1 billion increase): this recommendation closed Fort Monmouth and realigned various functions such as information...
CMOS-based Stochastically Spiking Neural Network for Optimization under Uncertainties
2017-03-01
inverse tangent characteristics at varying input voltage (VIN) [Fig. 3], thereby it is suitable for Kernel function implementation. By varying bias...cost function/constraint variables are generated based on inverse transform on CDF. In Fig. 5, F-1(u) for uniformly distributed random number u [0, 1...extracts random samples of x varying with CDF of F(x). In Fig. 6, we present a successive approximation (SA) circuit to evaluate inverse
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of two nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
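The threshold logic described above can be sketched numerically: pick the threshold that maximizes gross income minus pest control cost. The functional forms and coefficients below are hypothetical placeholders for illustration, not the paper's estimated equations.

```python
# Sketch of profit-maximizing economic threshold selection.
# Both functions below are HYPOTHETICAL stand-ins: income falls as the
# threshold is relaxed, while control cost also falls; profit peaks where
# their difference is largest.

def gross_income(threshold):
    # Hypothetical declining yield response to higher (less strict) thresholds.
    return 1000.0 - 40.0 * threshold

def control_cost(threshold):
    # Hypothetical control cost: strict (low) thresholds need frequent sprays.
    return 600.0 / (1.0 + threshold)

def best_threshold(grid):
    # Economic threshold maximizing profit = gross income - control cost.
    return max(grid, key=lambda t: gross_income(t) - control_cost(t))

thresholds = [i * 0.1 for i in range(1, 51)]  # candidate thresholds 0.1..5.0
t_star = best_threshold(thresholds)
```

A continuous solution of the same toy problem (setting the derivative to zero) gives t = sqrt(15) - 1, about 2.87, so the grid search lands on the nearest candidate.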
Biological filters and their use in potable water filtration systems in spaceflight conditions
NASA Astrophysics Data System (ADS)
Thornhill, Starla G.; Kumar, Manish
2018-05-01
Providing drinking water to space missions such as the International Space Station (ISS) is a costly requirement for human habitation. To limit the costs of water transport, wastewater is collected and purified using a variety of physical and chemical means. To date, sand-based biofilters have been designed to function against gravity, and biofilms have been shown to form in microgravity conditions. Development of a universal silver-recycling biological filter system that is able to function in both microgravity and full gravity conditions would reduce the costs incurred in removing organic contaminants from wastewater by limiting the energy and chemical inputs required. This paper aims to propose the use of a sand-substrate biofilter to replace chemical means of water purification on manned spaceflights.
Borisenko, Oleg; Haude, Michael; Hoppe, Uta C; Siminiak, Tomasz; Lipiecki, Janusz; Goldberg, Steve L; Mehta, Nawzer; Bouknight, Omari V; Bjessmo, Staffan; Reuter, David G
2015-05-14
To determine the cost-effectiveness of percutaneous mitral valve repair (PMVR) using the Carillon® Mitral Contour System® (Cardiac Dimensions Inc., Kirkland, WA, USA) in patients with congestive heart failure accompanied by moderate to severe functional mitral regurgitation (FMR), compared to the prolongation of optimal medical treatment (OMT). Cost-utility analysis using a combination of a decision tree and Markov process was performed. The clinical effectiveness was determined based on the results of the Transcatheter Implantation of Carillon Mitral Annuloplasty Device (TITAN) trial. The mean age of the target population was 62 years, 77% of the patients were males, 64% of the patients had severe FMR and all patients had New York Heart Association functional class III. The epidemiological, cost and utility data were derived from the literature. The analysis was performed from the German statutory health insurance perspective over a 10-year time horizon. Over 10 years, the total cost was €36,785 in the PMVR arm and €18,944 in the OMT arm. However, PMVR provided additional benefits to patients, with 1.15 incremental quality-adjusted life years (QALYs) and 1.41 incremental life years. The percutaneous procedure was cost-effective in comparison to OMT, with an incremental cost-effectiveness ratio of €15,533/QALY. Results were robust in the deterministic sensitivity analysis. In the probabilistic sensitivity analysis with a willingness-to-pay threshold of €35,000/QALY, PMVR had an 84% probability of being cost-effective. Percutaneous mitral valve repair may be cost-effective in inoperable patients with FMR due to heart failure.
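The reported incremental cost-effectiveness ratio follows directly from the abstract's figures as incremental cost divided by incremental QALYs; the small gap versus the published €15,533/QALY reflects the incremental QALYs being rounded to 1.15 in the abstract.

```python
# ICER = (cost of intervention - cost of comparator) / incremental QALYs,
# using the 10-year totals reported in the abstract.
cost_pmvr, cost_omt = 36785.0, 18944.0
qaly_gain = 1.15  # incremental QALYs as reported (rounded)

icer = (cost_pmvr - cost_omt) / qaly_gain  # EUR per QALY gained
```

At a willingness-to-pay threshold of €35,000/QALY, any ICER below the threshold (as here) counts the intervention as cost-effective.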
Adaptive non-linear control for cancer therapy through a Fokker-Planck observer.
Shakeri, Ehsan; Latif-Shabgahi, Gholamreza; Esmaeili Abharian, Amir
2018-04-01
In recent years, many efforts have been made to present optimal strategies for cancer therapy through the mathematical modelling of tumour-cell population dynamics and optimal control theory. In many cases, therapy effect is included in the drift term of the stochastic Gompertz model. By fitting the model with empirical data, the parameters of therapy function are estimated. The reported research works have not presented any algorithm to determine the optimal parameters of therapy function. In this study, a logarithmic therapy function is entered in the drift term of the Gompertz model. Using the proposed control algorithm, the therapy function parameters are predicted and adaptively adjusted. To control the growth of tumour-cell population, its moments must be manipulated. This study employs the probability density function (PDF) control approach because of its ability to control all the process moments. A Fokker-Planck-based non-linear stochastic observer will be used to determine the PDF of the process. A cost function based on the difference between a predefined desired PDF and PDF of tumour-cell population is defined. Using the proposed algorithm, the therapy function parameters are adjusted in such a manner that the cost function is minimised. The existence of an optimal therapy function is also proved. The numerical results are finally given to demonstrate the effectiveness of the proposed method.
Implementation of Arithmetic and Nonarithmetic Functions on a Label-free and DNA-based Platform
NASA Astrophysics Data System (ADS)
Wang, Kun; He, Mengqi; Wang, Jin; He, Ronghuan; Wang, Jianhua
2016-10-01
A series of complex logic gates were constructed based on graphene oxide and DNA-templated silver nanoclusters to perform both arithmetic and nonarithmetic functions. For the purpose of satisfying the requirements of progressive computational complexity and cost-effectiveness, a label-free and universal platform was developed by integration of various functions, including half adder, half subtractor, multiplexer and demultiplexer. The label-free system avoided laborious modification of biomolecules. The designed DNA-based logic gates can be implemented with readout of near-infrared fluorescence, and exhibit great potential applications in the field of bioimaging as well as disease diagnosis.
Modeling U.S. Air Force Occupational Health Costs
2009-03-01
for the 75th Aerospace Medicine Group, Hill Air Force Base, Utah. Bioenvironmental engineers sought a more robust cost comparison tool, allowing...to Major Feltenberger and Major Johns at the Air Force Medical Operations Agency and Captain Batchellor from the 75th Aerospace Medicine Squadron...resources on support functions is challenging, and rightly so. In a sense, commanders are fiduciaries to the taxpayers and must responsibly spend
The Einstein Suite: A Web-Based Tool for Rapid and Collaborative Engineering Design and Analysis
NASA Technical Reports Server (NTRS)
Palmer, Richard S.
1997-01-01
Taken together, the components of the Einstein Suite provide two revolutionary capabilities - they have the potential to change the way engineering and financial engineering are performed by: (1) providing currently unavailable functionality, and (2) providing a 10-100 times improvement over currently available but impractical or costly functionality.
Natural selection for costly nutrient recycling in simulated microbial metacommunities.
Boyle, Richard A; Williams, Hywel T P; Lenton, Timothy M
2012-11-07
Recycling of essential nutrients occurs at scales from microbial communities to global biogeochemical cycles, often in association with ecological interactions in which two or more species utilise each other's metabolic by-products. However, recycling loops may be unstable; sequences of reactions leading to net recycling may be parasitised by side-reactions causing nutrient loss, while some reactions in any closed recycling loop are likely to be costly to participants. Here we examine the stability of nutrient recycling loops in an individual-based ecosystem model based on microbial functional types that differ in their metabolism. A supplied nutrient is utilised by a "source" functional type, generating a secondary nutrient that is subsequently used by two other types-a "mutualist" that regenerates the initial nutrient at a growth rate cost, and a "parasite" that produces a refractory waste product but does not incur any additional cost. The three functional types are distributed across a metacommunity in which separate patches are linked by a stochastic diffusive migration process. Regions of high mutualist abundance feature high levels of nutrient recycling and increased local population density leading to greater export of individuals, allowing the source-mutualist recycling loop to spread across the system. Individual-level selection favouring parasites is balanced by patch-level selection for high productivity, indirectly favouring mutualists due to the synergistic productivity benefits of the recycling loop they support. This suggests that multi-level selection may promote nutrient cycling and thereby help to explain the apparent ubiquity and stability of nutrient recycling in nature.
NASA Technical Reports Server (NTRS)
Keyes, David E.; Smooke, Mitchell D.
1987-01-01
A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.
NASA Astrophysics Data System (ADS)
Aydogdu, Ibrahim
2017-03-01
In this article, a new version of a biogeography-based optimization algorithm with Levy flight distribution (LFBBO) is introduced and used for the optimum design of reinforced concrete cantilever retaining walls under seismic loading. The cost of the wall is taken as an objective function, which is minimized under the constraints implemented by the American Concrete Institute (ACI 318-05) design code and geometric limitations. The influence of peak ground acceleration (PGA) on optimal cost is also investigated. The solution of the problem is attained by the LFBBO algorithm, which is developed by adding Levy flight distribution to the mutation part of the biogeography-based optimization (BBO) algorithm. Five design examples, of which two are used in literature studies, are optimized in the study. The results are compared to test the performance of the LFBBO and BBO algorithms, to determine the influence of the seismic load and PGA on the optimal cost of the wall.
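The abstract does not specify how the Levy-flight steps added to the BBO mutation are generated; Mantegna's algorithm is a common choice in Levy-flight metaheuristics and is sketched here as an assumption, not as the paper's exact scheme.

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step via Mantegna's algorithm (a common choice
    in Levy-flight metaheuristics; the paper's exact scheme is not given
    in the abstract)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

# In an LFBBO-style mutation, a candidate wall design variable x would be
# perturbed as x + step_scale * levy_step(): mostly small moves, with
# occasional long jumps that help escape local optima.
steps = [levy_step() for _ in range(1000)]
```

The heavy-tailed step distribution is what distinguishes this mutation from a plain Gaussian perturbation in standard BBO.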
Alrashdan, Abdalla; Momani, Amer; Ababneh, Tamador
2012-01-01
One of the most challenging problems facing healthcare providers is to determine the actual cost for their procedures, which is important for internal accounting and price justification to insurers. The objective of this paper is to find suitable categories to identify the diagnostic outpatient medical procedures and translate them from functional orientation to process orientation. A hierarchical task tree is developed based on a classification schema of procedural activities. Each procedure is seen as a process consisting of a number of activities. This makes a powerful foundation for activity-based cost/management implementation and provides enough information to discover the value-added and non-value-added activities that assist in process improvement and eventually may lead to cost reduction. Work measurement techniques are used to identify the standard time of each activity at the lowest level of the task tree. A real case study at a private hospital is presented to demonstrate the proposed methodology. © 2011 National Association for Healthcare Quality.
Menachemi, Nir; Burkhardt, Jeffrey; Shewchuk, Richard; Burke, Darrell; Brooks, Robert G
2007-01-01
Outsourcing of information technology (IT) functions is a popular strategy with both potential benefits and risks for hospitals. Anecdotal evidence, based on case studies, suggests that outsourcing may be associated with significant cost savings. However, no generalizable evidence exists to support such assertions. This study examines whether outsourcing IT functions is related to improved financial performance in hospitals. Primary survey data on IT outsourcing behavior were combined with secondary data on hospital financial performance. Regression analyses examined the relationship between outsourcing and various measures of financial performance while controlling for bed size, average patient acuity, geographic location, and overall IT adoption. Complete data from a total of 83 Florida hospitals were available for analyses. Findings suggest that the decision to outsource IT functions is not related to any of the hospital financial performance measures that were examined. Specifically, outsourcing of IT functions did not correlate with net inpatient revenue, net patient revenue, hospital expenses, total expenses, cash flow ratio, operating margin, or total margin. In most cases, IT outsourcing is not necessarily a cost-lowering strategy, but instead, a cost-neutral manner in which to accomplish an organizational strategy.
Soto, David; Rotshtein, Pia; Kanai, Ryota
2014-04-01
Recent research indicates that human attention appears inadvertently biased by items that match the contents of working memory (WM). WM-biases can lead to attentional costs when the memory content matches goal-irrelevant items and to attentional benefits when it matches the sought target. Here we used functional and structural MRI data to determine the neural basis of human variation in WM biases. We asked whether human variation in WM-benefits and WM-costs merely reflects the process of attentional capture by the contents of WM or whether variation in WM biases may be associated with distinct forms of cognitive control over internal WM signals based on selection goals. Human ability to use WM contents to facilitate selection was positively correlated with gray matter volume in the left superior posterior parietal cortex (PPC), while the ability to overcome interference by WM-matching distracters was associated with the left inferior PPC in the anterior IPS. Functional activity in the left PPC, measured by functional MRI, also predicted the magnitude of WM-costs on selection. Both structure and function of left PPC mediate the expression of WM biases in human visual attention. Copyright © 2013 Elsevier Inc. All rights reserved.
Efficient method of evaluation for Gaussian Hartree-Fock exchange operator for Gau-PBE functional
NASA Astrophysics Data System (ADS)
Song, Jong-Won; Hirao, Kimihiko
2015-07-01
We previously developed an efficient screened hybrid functional called Gaussian-Perdew-Burke-Ernzerhof (Gau-PBE) [Song et al., J. Chem. Phys. 135, 071103 (2011)] for large molecules and extended systems, which is characterized by the usage of a Gaussian function as a modified Coulomb potential for the Hartree-Fock (HF) exchange. We found that the adoption of a Gaussian HF exchange operator considerably decreases the calculation time cost of periodic systems while improving the reproducibility of the bandgaps of semiconductors. We present a distance-based screening scheme here that is tailored for the Gaussian HF exchange integral that utilizes multipole expansion for the Gaussian two-electron integrals. We found a new multipole screening scheme helps to save the time cost for the HF exchange integration by efficiently decreasing the number of integrals of, specifically, the near field region without incurring substantial changes in total energy. In our assessment on the periodic systems of seven semiconductors, the Gau-PBE hybrid functional with a new screening scheme has 1.56 times the time cost of a pure functional while the previous Gau-PBE was 1.84 times and HSE06 was 3.34 times.
Demystifying Results-Based Performance Measurement.
ERIC Educational Resources Information Center
Jorjani, Hamid
Many evaluators are convinced that Results-based Performance Measurement (RBPM) is an effective tool to improve service delivery and cost effectiveness in both public and private sectors. Successful RBPM requires self-directed and cross-functional work teams and the supporting infrastructure to make it work. There are many misconceptions and…
Timonen, L; Rantanen, T; Mäkinen, E; Timonen, T E; Törmäkangas, T; Sulkava, R
2008-12-01
The aim of this study was to analyze social welfare and healthcare costs and fall-related healthcare costs after a group-based exercise program. The 10-week exercise program, which started after discharge from the hospital, was designed to improve physical fitness, mood, and functional abilities in frail elderly women. Sixty-eight acutely hospitalized and mobility-impaired women (mean age 83.0, SD 3.9 years) were randomized into either group-based (intervention) or home exercise (control) groups. Information on costs was collected during 1 year after hospital discharge. There were no differences between the intervention and control groups in the mean individual healthcare costs: 4381 euros (SD 3829 euros) vs 3539 euros (SD 3967 euros), P=0.477, in the social welfare costs: 3336 euros (SD 4418 euros) vs 4073 euros (SD 5973 euros), P=0.770, or in the fall-related healthcare costs: 996 euros (SD 2612 euros) vs 306 euros (SD 915), P=0.314, respectively. This exercise intervention, which has earlier proved to be effective in improving physical fitness and mood, did not result in any financial savings in municipal costs. These results serve as a pilot study and further studies are needed to establish the cost-effectiveness of this exercise intervention for elderly people.
Cost-effective conservation of amphibian ecology and evolution
Campos, Felipe S.; Lourenço-de-Moraes, Ricardo; Llorente, Gustavo A.; Solé, Mirco
2017-01-01
Habitat loss is the most important threat to species survival, and the efficient selection of priority areas is fundamental for good systematic conservation planning. Using amphibians as a conservation target, we designed an innovative assessment strategy, showing that prioritization models focused on functional, phylogenetic, and taxonomic diversity can include cost-effectiveness–based assessments of land values. We report new key conservation sites within the Brazilian Atlantic Forest hot spot, revealing a congruence of ecological and evolutionary patterns. We suggest payment for ecosystem services through environmental set-asides on private land, establishing potential trade-offs for ecological and evolutionary processes. Our findings introduce additional effective area-based conservation parameters that set new priorities for biodiversity assessment in the Atlantic Forest, validating the usefulness of a novel approach to cost-effectiveness–based assessments of conservation value for other species-rich regions. PMID:28691084
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
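The "simple weighted survival estimator" idea can be illustrated with a minimal inverse-probability-weighted mean-cost sketch: each uncensored subject's cost is weighted by a Kaplan-Meier estimate of the censoring survival at that subject's follow-up time. This is a toy version under simplifying assumptions (production code would use the left-continuous K(T-) and handle tied times more carefully).

```python
def km_censoring_survival(times, deltas):
    """Kaplan-Meier estimate of the censoring survival K(t), treating
    censoring (delta == 0) as the event of interest. Returns a step function."""
    pts = sorted(set(times))
    surv, jumps = 1.0, {}
    for t in pts:
        at_risk = sum(1 for ti in times if ti >= t)
        cens_here = sum(1 for ti, di in zip(times, deltas) if ti == t and di == 0)
        if at_risk > 0:
            surv *= 1.0 - cens_here / at_risk
        jumps[t] = surv

    def K(t):
        s = 1.0
        for u in pts:
            if u <= t:
                s = jumps[u]
        return s
    return K

def weighted_mean_cost(costs, times, deltas):
    """IPW mean cost: average of delta_i * M_i / K(T_i) over all subjects.
    Censored subjects (delta == 0) contribute zero to the numerator; the
    weights compensate for the costs lost to censoring."""
    K = km_censoring_survival(times, deltas)
    n = len(costs)
    return sum(d * m / K(t) for m, t, d in zip(costs, times, deltas)) / n

# Toy data: three subjects, the second censored at time 2.
mu = weighted_mean_cost([10.0, 20.0, 30.0], [1, 2, 3], [1, 0, 1])
```

With no censoring, every weight is one and the estimator reduces to the ordinary sample mean, which is a useful sanity check.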
Space-based solar power conversion and delivery systems study. Volume 5: Economic analysis
NASA Technical Reports Server (NTRS)
1977-01-01
Space-based solar power conversion and delivery systems are studied along with a variety of economic and programmatic issues relevant to their development and deployment. The costs, uncertainties and risks associated with the current photovoltaic Satellite Solar Power System (SSPS) configuration, and issues affecting the development of an economically viable SSPS development program, are addressed. In particular, the desirability of low Earth orbit (LEO) and geosynchronous (GEO) test satellites is examined and critical technology areas are identified. The development of SSPS unit production (nth item) and operation and maintenance cost models suitable for incorporation into a risk assessment (Monte Carlo) model (RAM) is reported. The RAM was then used to evaluate the current SSPS configuration's expected costs and the cost-risk associated with this configuration. By examining differential costs and cost-risk as a function of postulated technology developments, the critical technologies, that is, those which drive costs and/or cost-risk, are identified. It is shown that the key technology area deals with productivity in space, that is, the ability to fabricate and assemble large structures in space, not, as might be expected, with some hardware component technology.
Dual-Use Aspects of System Health Management
NASA Technical Reports Server (NTRS)
Owens, P. R.; Jambor, B. J.; Eger, G. W.; Clark, W. A.
1994-01-01
System Health Management functionality is an essential part of any space launch system. Health management functionality is an integral part of mission reliability, since it is needed to verify the reliability before the mission starts. Health Management is also a key factor in life cycle cost reduction and in increasing system availability. The degree of coverage needed by the system and the degree of coverage made available at a reasonable cost are critical parameters of a successful design. These problems are not unique to the launch vehicle world. In particular, the Intelligent Vehicle Highway System, commercial aircraft systems, train systems, and many types of industrial production facilities require various degrees of system health management. In all of these applications, too, the designers must balance the benefits and costs of health management in order to optimize costs. The importance of an integrated system is emphasized. That is, we present the case for considering health management as an integral part of system design, rather than functionality to be added on at the end of the design process. The importance of maintaining the system viewpoint is discussed in making hardware and software tradeoffs and in arriving at design decisions. We describe an approach to determine the parameters to be monitored in any system health management application. This approach is based on Design of Experiments (DOE), prototyping, failure modes and effects analyses, cost modeling and discrete event simulation. The various computer-based tools that facilitate the approach are discussed. The approach described originally was used to develop a fault tolerant avionics architecture for launch vehicles that incorporated health management as an integral part of the system. Finally, we discuss generalizing the technique to apply it to other domains. Several illustrations are presented.
Hyde, Christopher; Peters, Jaime; Bond, Mary; Rogers, Gabriel; Hoyle, Martin; Anderson, Rob; Jeffreys, Mike; Davis, Sarah; Thokala, Praveen; Moxham, Tiffany
2013-01-01
In 2007 the National Institute for Health and Clinical Excellence (NICE) restricted the use of acetylcholinesterase inhibitors and memantine. We conducted a health technology assessment (HTA) of the effectiveness and cost-effectiveness of donepezil, galantamine, rivastigmine and memantine for the treatment of Alzheimer's disease (AD) to reconsider and update the evidence base used to inform the 2007 NICE decision. The systematic review of effectiveness targeted randomised controlled trials. A comprehensive search, including MEDLINE, Embase and the Cochrane Library, was conducted from January 2004 to March 2010. All key review steps were done by two reviewers. Random-effects meta-analysis was conducted. The cost-effectiveness was assessed using a cohort-based model with three health states: pre-institutionalised, institutionalised and dead. The perspective was NHS and Personal Social Services and the cost year 2009. Confidence about the size and statistical significance of the estimates of effect of galantamine, rivastigmine and memantine improved, on function and global impact in particular. Cost-effectiveness also changed. For donepezil, galantamine and rivastigmine, the incremental cost per quality-adjusted life year (QALY) in 2004 was above £50,000; in 2010 the same drugs 'dominated' best supportive care (improved clinical outcome at reduced cost). This was primarily because of changes in the modelled costs of introducing the drugs. For memantine, the cost-effectiveness also improved from a range of £37,000-53,000 per QALY gained to a base case of £32,000. There has been a change in the evidence base between 2004 and 2010 consistent with the change in NICE guidance. Further evolution in cost-effectiveness estimates is possible, particularly if there are changes in drug prices.
Luma-chroma space filter design for subpixel-based monochrome image downsampling.
Fang, Lu; Au, Oscar C; Cheung, Ngai-Man; Katsaggelos, Aggelos K; Li, Houqiang; Zou, Feng
2013-10-01
In general, subpixel-based downsampling can achieve higher apparent resolution of the down-sampled images on LCD or OLED displays than pixel-based downsampling. With the frequency domain analysis of subpixel-based downsampling, we discover special characteristics of the luma-chroma color transform choice for monochrome images. With these, we model the anti-aliasing filter design for subpixel-based monochrome image downsampling as a human visual system-based optimization problem with a two-term cost function and obtain a closed-form solution. One cost term measures the luminance distortion and the other term measures the chrominance aliasing in our chosen luma-chroma space. Simulation results suggest that the proposed method can achieve sharper down-sampled gray/font images compared with conventional pixel and subpixel-based methods, without noticeable color fringing artifacts.
2011-01-01
Background Concomitant chemo-radiotherapy (CCRT) has become an indispensable organ-preserving, but not always function-preserving, treatment modality for advanced head and neck cancer. To prevent/limit the functional side effects of CCRT, special exercise programs are increasingly explored. This study presents cost-effectiveness analyses of a preventive (swallowing) exercise program (PREP) compared to usual care (UC) from a health care perspective. Methods A Markov decision model of PREP versus UC was developed for CCRT in advanced head and neck cancer. Main outcome variables were tube dependency at one year and number of post-CCRT hospital admission days. Primary outcome was costs per quality-adjusted life year (cost/QALY), with an incremental cost-effectiveness ratio (ICER) as outcome parameter. The Expected Value of Perfect Information (EVPI) was calculated to obtain the value of further research. Results PREP resulted in less tube dependency (3% and 25%, respectively), and in fewer hospital admission days than UC (3.2 and 4.5 days, respectively). Total costs for UC amounted to €41,986 and for PREP to €42,271. Quality-adjusted life years for UC amounted to 0.68 and for PREP to 0.77. Based on costs per QALY, PREP has a higher probability of being cost-effective as long as the willingness-to-pay threshold for 1 additional QALY is at least €3,200/QALY. At the prevailing threshold of €20,000/QALY the probability of PREP being cost-effective compared to UC was 83%. The EVPI demonstrated potential value in undertaking additional research to reduce the existing decision uncertainty. Conclusions Based on current evidence, PREP for CCRT in advanced head and neck cancer has the higher probability of being cost-effective when compared to UC. Moreover, the majority of sensitivity analyses produced ICERs that are well below the prevailing willingness-to-pay threshold for an additional QALY (ranging from dominance to €45,906/QALY). PMID:22051143
NASA Astrophysics Data System (ADS)
Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.
2017-02-01
Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these nonvisible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The varying registration parameters are cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), interpolation method (linear, windowed-sinc and B-spline) and sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using 100% sampling percentage yielded the highest DSC and smallest HD (0.828 ± 0.021 and 0.25 ± 0.09 mm, respectively). Therefore, B-spline registration with cost function NC, interpolation B-spline and sampling percentage 100% can be the foundation of developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
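The Dice similarity coefficient used for the quantitative comparison is simple to compute; a minimal sketch over voxel-index sets follows (the toy masks are illustrative, not the study's data).

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two segmentations given as
    sets of voxel indices: DSC = 2|A intersect B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = set(a), set(b)
    denom = len(a) + len(b)
    return 2.0 * len(a & b) / denom if denom else 1.0

# Two toy "segmentations" on a grid: 4 and 6 voxels, overlapping in 4.
m1 = {(r, c) for r in (1, 2) for c in (1, 2)}      # 4 voxels
m2 = {(r, c) for r in (1, 2) for c in (1, 2, 3)}   # 6 voxels
dsc = dice_coefficient(m1, m2)  # 2*4 / (4+6) = 0.8
```

A DSC of 1 means perfect overlap and 0 means none, which is why the transforms with the highest DSC (here, those at 100% sampling) are preferred.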
Diedrich, Karl T; Roberts, John A; Schmidt, Richard H; Parker, Dennis L
2012-12-01
Attributes like length, diameter, and tortuosity of tubular anatomical structures such as blood vessels in medical images can be measured from centerlines. This study develops methods for comparing the accuracy and stability of centerline algorithms. Sample data included numeric phantoms simulating arteries and clinical human brain artery images. Centerlines were calculated from segmented phantoms and arteries with shortest paths centerline algorithms developed with different cost functions. The cost functions were the inverse modified distance from edge (MDFE(i)), the center of mass (COM), the binary-thinned (BT)-MDFE(i), and the BT-COM. The accuracy of the centerline algorithms was measured by the root mean square error from known centerlines of phantoms. The stability of the centerlines was measured by starting the centerline tree from different points and measuring the differences between trees. The accuracy and stability of the centerlines were visualized by overlaying centerlines on vasculature images. The BT-COM cost function centerline was the most stable in numeric phantoms and human brain arteries. The MDFE(i)-based centerline was most accurate in the numeric phantoms. The COM-based centerline correctly handled the "kissing" artery in 16 of 16 arteries in eight subjects, whereas the BT-COM was correct in 10 of 16 and MDFE(i) was correct in 6 of 16. The COM-based centerline algorithm was selected for future use based on its ability to handle arteries where the initial binary vessel segmentation exhibits closed loops. The selected COM centerline was found to measure numerical phantoms to within 2% of the known length. Copyright © 2012 Wiley Periodicals, Inc.
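The shortest-paths-with-cost-function idea can be sketched with Dijkstra's algorithm on a tiny grid, using a plain inverse distance-from-edge cost (1/DFE). The paper's MDFE(i) is a modified variant; the grid, connectivity, and values below are illustrative assumptions only:

```python
import heapq

def centerline_path(mask, dfe, start, goal):
    """Dijkstra over 4-connected voxels; stepping onto voxel v costs 1/dfe[v]."""
    pq = [(0.0, start, (start,))]
    done = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, list(path)
        if node in done:
            continue
        done.add(node)
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in mask and nb not in done:
                heapq.heappush(pq, (cost + 1.0 / dfe[nb], nb, path + (nb,)))
    return None

# A straight 3-voxel-wide "vessel": DFE is 2 on the center row, 1 on the walls,
# so voxels far from the boundary are cheap and the minimum-cost path hugs the center.
mask = {(r, c) for r in range(3) for c in range(6)}
dfe = {(r, c): 2 if r == 1 else 1 for (r, c) in mask}
cost, path = centerline_path(mask, dfe, (1, 0), (1, 5))
print(all(r == 1 for r, c in path))  # True -- the cheap path runs along the center row
```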
Building devices from colloidal quantum dots.
Kagan, Cherie R; Lifshitz, Efrat; Sargent, Edward H; Talapin, Dmitri V
2016-08-26
The continued growth of mobile and interactive computing requires devices manufactured with low-cost processes, compatible with large-area and flexible form factors, and with additional functionality. We review recent advances in the design of electronic and optoelectronic devices that use colloidal semiconductor quantum dots (QDs). The properties of materials assembled of QDs may be tailored not only by the atomic composition but also by the size, shape, and surface functionalization of the individual QDs and by the communication among these QDs. The chemical and physical properties of QD surfaces and the interfaces in QD devices are of particular importance, and these enable the solution-based fabrication of low-cost, large-area, flexible, and functional devices. We discuss challenges that must be addressed in the move to solution-processed functional optoelectronic nanomaterials. Copyright © 2016, American Association for the Advancement of Science.
Hospital consolidation and costs: another look at the evidence.
Dranove, David; Lindrooth, Richard
2003-11-01
We investigate whether pairwise hospital consolidation leads to cost savings. We use a unified empirical methodology to assess both systems and mergers. Our comparison group for each consolidation consists of 10 'pseudo-mergers' chosen based on propensity scores. Cost function estimates reveal that consolidation into systems does not generate savings, even after 4 years. Mergers in which hospitals consolidate financial reporting and licenses generate savings of approximately 14% at 2, 3, and 4 years after merger. The system consolidation and merger results are very robust to changes in the specification and the sample.
High efficiency low cost monolithic module for SARSAT distress beacons
NASA Technical Reports Server (NTRS)
Petersen, Wendell C.; Siu, Daniel P.
1992-01-01
The program objectives were to develop a highly efficient, low cost RF module for SARSAT beacons; achieve significantly lower battery current drain, amount of heat generated, and size of battery required; utilize MMIC technology to improve efficiency, reliability, packaging, and cost; and provide a technology database for GaAs based UHF RF circuit architectures. Presented in viewgraph form are functional block diagrams of the SARSAT distress beacon and beacon RF module as well as performance goals, schematic diagrams, predicted performances, and measured performances for the phase modulator and power amplifier.
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H.
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal-to-noise ratio, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method. PMID:24000283
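Two of the quantitative measures named above, mean squared error and peak signal-to-noise ratio, can be sketched in plain Python; flattening images to lists of pixel values is purely for illustration:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)

ref = [10, 20, 30, 40]   # reference (high resolution) pixels
out = [12, 18, 30, 44]   # upsampled approximation
print(mse(ref, out))             # 6.0
print(round(psnr(ref, out), 1))  # 40.3
```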
Sail Plan Configuration Optimization for a Modern Clipper Ship
NASA Astrophysics Data System (ADS)
Gerritsen, Margot; Doyle, Tyler; Iaccarino, Gianluca; Moin, Parviz
2002-11-01
We investigate the use of gradient-based and evolutionary algorithms for sail shape optimization. We present preliminary results for the optimization of sheeting angles for the rig of the future three-masted clipper yacht Maltese Falcon. This yacht will be equipped with square-rigged masts made up of yards of circular arc cross sections. This design is especially attractive for megayachts because it provides a large sail area while maintaining aerodynamic and structural efficiency. The rig remains almost rigid in a large range of wind conditions, and therefore a simple geometrical model can be constructed without accounting for the true flying shape. The sheeting angle optimization studies are performed using both gradient-based cost function minimization and evolutionary algorithms. The fluid flow is modeled by the Reynolds-averaged Navier-Stokes equations with the Spalart-Allmaras turbulence model. Unstructured non-conforming grids are used to increase robustness and computational efficiency. The optimization process is automated by integrating the system components (geometry construction, grid generation, flow solver, force calculator, optimization). We compare the optimization results to those obtained previously through user-controlled parametric studies using simple cost functions and user intuition. We also investigate the effectiveness of various cost functions in the optimization (driving force maximization, ratio of driving force to heeling force maximization).
National law enforcement telecommunications network
NASA Technical Reports Server (NTRS)
Reilly, N. B.; Garrison, G. W.; Sohn, R. L.; Gallop, D. L.; Goldstein, B. L.
1975-01-01
Alternative approaches to a National Law Enforcement Telecommunications Network (NALECOM), designed to service all state-to-state and state-to-national criminal justice communications traffic needs in the United States, are analyzed. Network topology options were analyzed, and equipment and personnel requirements for each option were defined in accordance with NALECOM functional specifications and design guidelines. Evaluation criteria were developed and applied to each of the options, leading to specific conclusions. Detailed treatments of methods for determining traffic requirements, communication line costs, switcher configurations and costs, microwave costs, satellite system configurations and costs, facilities, operations and engineering costs, network delay analysis and network availability analysis are presented. It is concluded that a single regional switcher configuration is the optimum choice based on cost and technical factors. A two-region configuration is competitive. Multiple-region configurations are less competitive due to increasing costs without attendant benefits.
Economic lot sizing in a production system with random demand
NASA Astrophysics Data System (ADS)
Lee, Shine-Der; Yang, Chin-Ming; Lan, Shu-Chuan
2016-04-01
An extended economic production quantity model that copes with random demand is developed in this paper. A unique feature of the proposed study is the consideration of transient shortage during the production stage, which has not been explicitly analysed in the existing literature. The considered costs include the set-up cost for the batch production, the inventory carrying cost during the production and depletion stages in one replenishment cycle, and the shortage cost when demand cannot be satisfied from the shop floor immediately. Based on a renewal reward process, a per-unit-time expected cost model is developed and analysed. Under some mild conditions, it can be shown that the approximate cost function is convex. Computational experiments have demonstrated that the average reduction in total cost is significant when the proposed lot sizing policy is compared with policies based on deterministic demand.
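The convexity claim is what makes such lot-sizing problems easy to optimize numerically. As a stand-in, here is the classic deterministic economic production quantity (EPQ) per-unit-time cost curve, not the paper's stochastic model (which adds shortage terms); all parameter values are assumptions:

```python
# d = demand rate, p = production rate (p > d), k = setup cost, h = holding cost.
def epq_cost(q, d=100.0, p=250.0, k=500.0, h=2.0):
    # setup cost per unit time + average holding cost during a cycle
    return k * d / q + 0.5 * h * q * (1.0 - d / p)

# Convexity means a coarse grid scan already lands on the minimizer.
q_star = min(range(100, 1001, 10), key=epq_cost)
print(q_star)  # 290 -- close to the analytic EPQ sqrt(2kd / (h(1 - d/p))) of about 289
```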
Longitudinal study of effects of patient characteristics on direct costs in Alzheimer disease.
Zhu, C W; Scarmeas, N; Torgan, R; Albert, M; Brandt, J; Blacker, D; Sano, M; Stern, Y
2006-09-26
To estimate long-term trajectories of direct cost of caring for patients with Alzheimer disease (AD) and examine the effects of patients' characteristics on cost longitudinally. The sample is drawn from the Predictors Study, a large, multicenter cohort of patients with probable AD, prospectively followed up annually for up to 7 years in three university-based AD centers in the United States. Random effects models estimated the effects of patients' clinical and sociodemographic characteristics on direct cost of care. Direct cost included cost associated with medical and nonmedical care. Clinical characteristics included cognitive status (measured by Mini-Mental State Examination), functional capacity (measured by Blessed Dementia Rating Scale [BDRS]), psychotic symptoms, behavioral problems, depressive symptoms, extrapyramidal signs, and comorbidities. The model also controlled for patients' sex, age, and living arrangements. Total direct cost increased from approximately 9,239 dollars per patient per year at baseline, when all patients were at the early stages of the disease, to 19,925 dollars by year 4. After controlling for other variables, a one-point increase in the BDRS score increased total direct cost by 7.7%. One more comorbid condition increased total direct cost by 14.3%. Total direct cost was 20.8% lower for patients living at home compared with those living in an institutional setting. Total direct cost of caring for patients with Alzheimer disease increased substantially over time. Much of the cost increases were explained by patients' clinical and demographic variables. Comorbidities and functional capacity were associated with higher direct cost over time.
A Low Cost Micro-Computer Based Local Area Network for Medical Office and Medical Center Automation
Epstein, Mel H.; Epstein, Lynn H.; Emerson, Ron G.
1984-01-01
A low cost microcomputer-based local area network for medical office automation is described which makes use of an array of multiple and different personal computers interconnected by a local area network. Each computer on the network functions as a fully capable workstation for data entry and report generation. The network allows each workstation complete access to the entire database. Additionally, designated computers may serve as access ports for remote terminals. Through “Gateways” the network may serve as a front end for a large mainframe, or may interface with another network. The system provides for the medical office environment the expandability and flexibility of a multi-terminal mainframe system at a far lower cost without sacrifice of performance.
Technoeconomic study on steam explosion application in biomass processing.
Zimbardi, Francesco; Ricci, Esmeralda; Braccio, Giacobbe
2002-01-01
This work is based on the data collected during trials of a continuous steam explosion (SE) plant, with a treatment capacity of about 350 kg/h, including the biomass fractionation section. The energy and water consumption, equipment cost, and manpower needed to run this plant have been used as the base case for a techno-economic evaluation of productive plants. Three processing plant configurations have been considered: (I) SE pretreatment only; (II) SE followed by the hemicellulose extraction; (III) SE followed by the sequential hemicellulose and lignin extractions. The biomass treatment cost has been evaluated as a function of the plant scale. For each configuration, variable and fixed cost breakdown has been detailed in the case of a 50,000 t/y plant.
NASA Technical Reports Server (NTRS)
Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.
1972-01-01
A systematic and standardized procedure is presented for estimating the life cycle costs of solid rocket motor (SRM) booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system, when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
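A minimal sketch of a single-variable cost estimating relationship (CER) of the power-law form commonly used in parametric cost models, cost = a * weight**b; the coefficients a and b here are invented for illustration, not contractor data:

```python
# Hypothetical power-law CER: cost grows sublinearly in weight when b < 1.
def cer_cost(weight_kg, a=1.2, b=0.85):
    return a * weight_kg ** b

for w in (100, 200, 400):
    print(w, round(cer_cost(w), 1))
# With b = 0.85, doubling weight multiplies cost by 2**0.85 (about 1.8x, not 2x),
# capturing the economies of scale such relationships are meant to encode.
```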
Reliability, Risk and Cost Trade-Offs for Composite Designs
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Singhal, Surendra N.; Chamis, Christos C.
1996-01-01
Risk and cost trade-offs have been simulated using a probabilistic method. The probabilistic method accounts for all naturally-occurring uncertainties, including those in constituent material properties, fabrication variables, structure geometry and loading conditions. The probability density function of the first buckling load for a set of uncertain variables is computed. The probabilistic sensitivity factors of the uncertain variables to the first buckling load are calculated. The reliability-based cost for a composite fuselage panel is defined and minimized with respect to the requisite design parameters. The optimization is achieved by solving a system of nonlinear algebraic equations whose coefficients are functions of the probabilistic sensitivity factors. With optimum design parameters such as the mean and coefficient of variation (representing the range of scatter) of uncertain variables, the most efficient and economical manufacturing procedure can be selected. In this paper, optimum values of the requisite design parameters for a predetermined cost due to failure occurrence are computationally determined. The results for the fuselage panel analysis show that the higher the cost due to failure occurrence, the smaller the optimum coefficient of variation of the fiber modulus (design parameter) in the longitudinal direction.
Cost-Effectiveness of a Community Exercise and Nutrition Program for Older Adults: Texercise Select
Akanni, Olufolake (Odufuwa); Smith, Matthew Lee; Ory, Marcia G.
2017-01-01
The widespread dissemination of evidence-based programs that can improve health outcomes among older populations often requires an understanding of factors influencing community adoption of such programs. One such program is Texercise Select, a community-based health promotion program previously shown to improve functional health, physical activity, nutritional habits and quality of life among older adults. This paper assesses the cost-effectiveness of Texercise Select in the context of supportive environments to facilitate its delivery and statewide sustainability. Participants were surveyed using self-reported instruments distributed at program baseline and conclusion. Program costs were based on actual direct costs of program implementation and included costs of recruitment and outreach, personnel costs and participant incentives. Program effectiveness was measured using quality-adjusted life years (QALYs) gained, as well as health outcomes such as healthy days, weekly physical activity and Timed Up-and-Go (TUG) test scores. Preference-based EuroQol (EQ-5D) scores were estimated from the number of healthy days reported by participants and converted into QALYs. There was a significant increase in the number of healthy days (p < 0.05) over the 12-week program. Cost-effectiveness ratios ranged from $1374 to $1452 per QALY gained. The reported cost-effectiveness ratios are well within the common cost-effectiveness threshold of $50,000 for a gained QALY. Some sociodemographic differences were also observed in program impact and cost. Non-Hispanic whites experienced significant improvements in healthy days from baseline to the follow-up period and had higher cost-effectiveness ratios. Results indicate that the Texercise Select program is a cost-effective strategy for increasing physical activity and improving healthy dietary practices among older adults as compared to similar health promotion interventions.
In line with the significant improvement in healthy days, physical activity and nutrition-related outcomes among participants, this study supports the use of Texercise Select as an intervention with substantial health and cost benefits. PMID:28531094
The clinical and cost-benefits of investing in neurobehavioural rehabilitation: A multi-centre study
Oddy, Michael
2013-01-01
Primary objective The aim of this study was to investigate the cost-benefits of a residential post-acute neurobehavioural rehabilitation programme and its effects on care needs and social participation of adults with acquired brain injury. Research design Retrospective multi-centre design. Methods and procedures Data on occupation, adaptability and level of support required were collected at admission, discharge and 6-months follow-up. Cost analysis was performed on cost estimates based on level of support. Main outcomes and results Significant gains were observed in all areas of functioning, with individuals progressing towards higher levels of independence and more participation in society upon discharge. Conclusions Cost-benefits of up to £1.13 million were demonstrated for individuals admitted to rehabilitation within a year of sustaining a brain injury and of up to £0.86 million for those admitted more than 1 year after injury. Functional gains and reductions in levels of care required upon discharge were maintained 6 months later. These results demonstrate that post-acute neurobehavioural rehabilitation can have a positive impact on the lives of individuals with brain injury and that the associated costs are offset by significant savings in the longer-term. PMID:24087973
Benefits and costs of low thrust propulsion systems
NASA Technical Reports Server (NTRS)
Robertson, R. I.; Rose, L. J.; Maloy, J. E.
1983-01-01
The results of cost/benefit analyses of three chemical propulsion systems that are candidates for transferring high density, low volume STS payloads from LEO to GEO are reported. Separate algorithms were developed for the benefits and costs of primary propulsion systems (PPS) as functions of the required thrust levels. The life cycle costs of each system were computed based on the developmental, production, and deployment costs. A weighted criteria rating approach was taken for the benefits, with each benefit assigned a value commensurate with its relative worth to the overall system. Support costs were included in the cost modeling. Reference missions from NASA, commercial, and DoD catalog payloads were examined. The program was concluded to be reliable and flexible for evaluating the benefits and costs of launch and orbit transfer for any catalog mission, with the most beneficial PPS being a dedicated low thrust configuration using the RL-10 system.
Weight and the Future of Space Flight Hardware Cost Modeling
NASA Technical Reports Server (NTRS)
Prince, Frank A.
2003-01-01
Weight has been used as the primary input variable for cost estimating almost as long as there have been parametric cost models. While there are good reasons for using weight, serious limitations exist. These limitations have been addressed by multi-variable equations and trend analysis in models such as NAFCOM, PRICE, and SEER; however, these models have not been able to address the significant time lags that can occur between the development of similar space flight hardware systems. These time lags make the cost analyst's job difficult because insufficient data exist to perform trend analysis, and the current set of parametric models is not well suited to accommodating process improvements in space flight hardware design, development, build and test. As a result, people of good faith can have serious disagreements over the cost of new systems. To address these shortcomings, new cost modeling approaches are needed. The most promising approach is process-based (sometimes called activity-based) costing. Developing process-based models will require a detailed understanding of the functions required to produce space flight hardware combined with innovative approaches to estimating the necessary resources. Particularly challenging will be the lack of data at the process level. One method for developing a model is to combine notional algorithms with a discrete event simulation and model changes to the total cost as perturbations to the program are introduced. Despite these challenges, the potential benefits are such that efforts should be focused on developing process-based cost models.
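A notional sketch of process-based costing in the spirit described: build total cost up from per-step resource use, then perturb step effort to mimic the simulation experiments suggested above. Every step name, hours figure, and hourly rate below is an invented placeholder, not NAFCOM/PRICE/SEER data:

```python
import random

STEPS = {
    "design": (2000, 150.0),  # (labor hours, hourly rate in dollars) -- placeholders
    "build": (5000, 95.0),
    "test": (1500, 120.0),
}

def total_cost(perturb=0.0, rng=None):
    """Sum hours*rate over process steps, optionally jittering effort by +/-perturb."""
    rng = rng or random.Random(0)
    cost = 0.0
    for hours, rate in STEPS.values():
        factor = 1.0 + (rng.uniform(-perturb, perturb) if perturb else 0.0)
        cost += hours * factor * rate
    return cost

print(total_cost())             # baseline: 2000*150 + 5000*95 + 1500*120 = 955000.0
print(total_cost(perturb=0.1))  # one draw of +/-10% schedule noise on each step
```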
Optimal network alignment with graphlet degree vectors.
Milenković, Tijana; Ng, Weng Leong; Hayes, Wayne; Przulj, Natasa
2010-06-30
Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology.
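A minimal sketch of the underlying optimization: a one-to-one node alignment between two small networks given a topological cost matrix, whose entries here are invented stand-ins for graphlet-degree-vector distances. Brute force over permutations is used for clarity; the paper's Hungarian algorithm solves the same minimum-cost assignment problem in polynomial time:

```python
from itertools import permutations

cost = [
    [0.1, 0.9, 0.8],  # cost[i][j]: cost of mapping node i of network 1 ...
    [0.7, 0.2, 0.9],  # ... to node j of network 2
    [0.8, 0.6, 0.3],
]

def best_alignment(cost):
    """Return the permutation p minimizing sum of cost[i][p[i]]."""
    n = len(cost)
    return min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))

print(best_alignment(cost))  # (0, 1, 2) -- the diagonal is the cheapest matching
```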
Plans-Rubió, Pedro
2004-01-01
To use the social welfare function to decide on allocation of resources between smoking cessation methods and lovastatin treatment of hypercholesterolaemia for the primary prevention of coronary heart disease. Three smoking cessation therapies (medical advice, nicotine gum and nicotine patch) were considered in smokers, and lovastatin 20, 40 and 80 mg/day was considered in individuals with hypercholesterolaemia (total cholesterol > 7.24 mmol/L [> 270 mg/dL]). Multiple logistic regression analysis was used to obtain parameter epsilon determining the exact form of the social welfare function in Catalonia, Spain. The preferable strategy was to give higher priority to the intervention that used one smoking cessation method and lovastatin treatment for hypercholesterolaemia and that was associated with a value of epsilon consistent with the social welfare function. A value of 1.58 (95% CI: 0.75-2.84) was obtained for parameter epsilon of the social welfare function, showing a nonutilitarian form. A higher priority should be given, based on the social welfare function, to the intervention using medical advice for smoking cessation and lovastatin 20-80 mg/day for hypercholesterolaemia, since this approach was associated with epsilon values of 2.8-2.9 in men and 1.8-2.4 in women, while interventions using nicotine substitution therapies were associated with epsilon values of < 0.9 in men and < 0.4 in women. The cost of treating all smokers and individuals with hypercholesterolaemia was 35% lower using medical advice for smoking cessation and lovastatin 20 mg/day, which was associated with epsilon values of 2.9 in men and 2.4 in women, than using a utilitarian solution consisting of nicotine patches for smoking cessation and lovastatin 20 mg/day. These results show that higher priority should be given to lovastatin treatment of hypercholesterolaemia than to nicotine substitution treatments for smoking cessation, based on cost effectiveness and the social welfare function. 
The study also showed the applicability of this method to decisions about resource allocation between competing treatments when society has a nonutilitarian social welfare function.
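For intuition about the role of the parameter epsilon, here is the iso-elastic (Atkinson-type) social welfare function, one common textbook form in which a single epsilon interpolates between utilitarian and inequality-averse welfare. Whether this matches the study's exact functional form is an assumption; the abstract does not spell it out:

```python
import math

def welfare(utilities, eps):
    """Iso-elastic SWF: sum u^(1-eps)/(1-eps); eps = 0 is the utilitarian sum."""
    if eps == 1.0:  # limiting logarithmic case
        return sum(math.log(u) for u in utilities)
    return sum(u ** (1.0 - eps) for u in utilities) / (1.0 - eps)

print(welfare([1.0, 2.0], 0.0))  # 3.0 -- plain utilitarian sum
# Larger eps (cf. the estimated 1.58 above) down-weights better-off groups,
# which is what makes the function nonutilitarian.
```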
Innovative concepts for marginal fields (advanced monotower developments)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.T.; Marks, V.E.
1995-12-01
The braced monotower provides a safe, functional and cost effective solution for topsides up to 500 tonnes, with up to 8 wells and standing in water depths of up to 70 meters. It is both simple in concept and structurally efficient. The superstructure is supported by a single column which is stayed by three symmetrically orientated legs. A broad mudline base is also provided to limit pile loads. The final concept offers complete protection to the risers and conductors from ship impact, as all appurtenances are housed within the central column. The basic design philosophy of the low intervention platform is to minimize the onboard equipment to that vitally needed to produce hydrocarbons. The concept eliminates the life support functions that on a normal North Sea platform can contribute up to 50% of the topside dry weight. A system of Zero Based Engineering is used that ensures each item of equipment contributes more to the NPV of the platform than the fully built-up through-life cost. This effectively eliminates the operator preference factor and the "culture" cost.
Seamless interworking architecture for WBAN in heterogeneous wireless networks with QoS guarantees.
Khan, Pervez; Ullah, Niamat; Ullah, Sana; Kwak, Kyung Sup
2011-10-01
The IEEE 802.15.6 standard is a communication standard optimized for low-power and short-range in-body/on-body nodes to serve a variety of medical, consumer electronics and entertainment applications. Providing high mobility with guaranteed Quality of Service (QoS) to a WBAN user in heterogeneous wireless networks is a challenging task. A WBAN uses a Personal Digital Assistant (PDA) to gather data from body sensors and forward it to a remote server through wide-range wireless networks. In this paper, we present a coexistence study of WBAN with Wireless Local Area Networks (WLAN) and Wireless Wide Area Networks (WWAN). The main issues in interworking of WBAN in heterogeneous wireless networks include seamless handover, QoS, emergency services, cooperation and security. We propose a Seamless Interworking Architecture (SIA) for WBAN in heterogeneous wireless networks based on a cost function. The cost function is based on power consumption and data throughput costs. Our simulation results show that the proposed scheme outperforms typical approaches in terms of throughput, delay and packet loss rate.
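A minimal sketch of a handover cost function of the kind described, combining power consumption and throughput; the weights, units, and candidate figures are assumptions, not values from the paper:

```python
# Lower power draw and higher throughput both lower the cost, so the WBAN
# attaches to the candidate network with the smallest score.
def network_cost(power_mw, throughput_mbps, w_power=0.6, w_rate=0.4):
    return w_power * power_mw + w_rate * (1.0 / throughput_mbps)

# Hypothetical candidates: (power draw in mW, offered throughput in Mbit/s).
candidates = {"WLAN": (120.0, 20.0), "WWAN": (250.0, 5.0)}
best = min(candidates, key=lambda n: network_cost(*candidates[n]))
print(best)  # WLAN -- lower power and higher throughput win on both terms
```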
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alptekin, Gokhan; Jayaraman, Ambalavanan; Dietz, Steven
In this project TDA Research, Inc. (TDA) has developed a new post-combustion carbon capture technology based on a vacuum swing adsorption (VSA) system that uses a steam purge, and demonstrated its technical feasibility and economic viability in laboratory-scale tests and in tests with actual coal-derived flue gas. TDA uses an advanced physical adsorbent to selectively remove CO2 from the flue gas. The sorbent exhibits a much higher affinity for CO2 than for N2, H2O or O2, enabling effective CO2 separation from the flue gas. We also carried out a detailed process design and analysis of the new system as part of both sub-critical and super-critical pulverized coal fired power plants. The new technology uses a low-cost, high-capacity adsorbent that selectively removes CO2 in the presence of moisture at the flue gas temperature, without the need for significant cooling of the flue gas or moisture removal. The sorbent is based on a TDA proprietary mesoporous carbon with surface-functionalized groups that remove CO2 via physical adsorption. The high surface area and favorable porosity of the sorbent also provide a unique platform to introduce additional functionality, such as active groups to remove trace metals (e.g., Hg, As). In collaboration with the Advanced Power and Energy Program of the University of California, Irvine (UCI), TDA developed system simulation models using Aspen Plus(TM) simulation software to assess the economic viability of TDA's VSA-based post-combustion carbon capture technology. The levelized cost of electricity, including the TS&M costs for CO2, is calculated as $116.71/MWh and $113.76/MWh for the TDA system integrated with sub-critical and super-critical pulverized coal fired power plants, much lower than the $153.03/MWh and $147.44/MWh calculated for the corresponding amine-based systems.
The cost of CO2 captured for TDA's VSA-based system is $38.90 and $39.71 per tonne, compared to $65.46 and $66.56 per tonne for the amine-based system on a 2011 $ basis, providing a 40% lower cost of CO2 captured. In this analysis we have used a sorbent life of 4 years. If a longer sorbent life can be maintained (which is not unreasonable for fixed-bed commercial PSA systems), this would lower the cost of CO2 captured by $0.05 per tonne (e.g., to $38.85 and $39.66 per tonne at 5-year sorbent replacement). These system analysis results suggest that TDA's VSA-based post-combustion capture technology can substantially improve the power plant's thermal performance while achieving near-zero emissions, including greater than 90% carbon capture. The higher net plant efficiency and lower capital and operating costs result in a substantial reduction in the cost of carbon capture and the cost of electricity for a power plant equipped with TDA's technology.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from applying the principle of minimizing mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning have exploited properties of non-quadratic error functionals based on the L1 norm or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1)…
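The contrast between quadratic and L1 error functionals can be seen in a one-dimensional location problem: the minimizer of the mean-squared-distance functional is the mean, which a single contaminating point drags far off, while the L1 minimizer is the median, which is robust. A minimal illustration (not code from the paper):

```python
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.05, 50.0])  # one contaminating outlier

l2_fit = data.mean()       # argmin of sum((x - c)**2): dragged toward 50
l1_fit = np.median(data)   # argmin of sum(|x - c|): stays near the bulk
```

Here the quadratic fit lands above 10 while the L1 fit remains at 1.05, which is the kind of sensitivity gap that motivates the sub-linear potentials discussed in the abstract.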
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of the sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. An ADP algorithm based on a single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee that the sliding mode dynamics are stable in the sense of uniform ultimate boundedness. Simulation results are presented to verify the feasibility of the proposed control scheme.
2015-06-01
Acronyms: Martial Arts Program; MCRD, Marine Corps Recruit Depot; PCO, Property Control Office; PSC, permanent change of station; PSE, personnel support… office supplies and materials required for the operations office to function. The Property Control Office (PCO) is another cost under the base-operations subcategory. PCO supports the Marines with non-deployable equipment. PCO garrison property, PSE, collateral equipment (CE) and food preparation
Bartosz, Krzysztof; Denkowski, Zdzisław; Kalita, Piotr
In this paper the sensitivity of optimal solutions to control problems described by second-order evolution subdifferential inclusions under perturbations of the state relations and of the cost functionals is investigated. First we establish a new existence result for a class of such inclusions. Then, based on the theory of sequential Γ-convergence, we recall the abstract scheme concerning convergence of minimal values and minimizers. The abstract scheme works provided we can establish two properties: the Kuratowski convergence of the solution sets for the state relations and some complementary Γ-convergence of the cost functionals. Then these two properties are implemented in the considered case.
Cyber-Physical Attacks With Control Objectives
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-08-18
This paper studies attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection-avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
Minimum noise impact aircraft trajectories
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Melton, R. G.
1981-01-01
Numerical optimization is used to compute the optimum flight paths, based upon a parametric form that implicitly includes some of the problem restrictions. The other constraints are formulated as penalties in the cost function. Various aircraft on multiple trajectories (landing and takeoff) can be considered. The modular design employed allows for the substitution of alternate models of the population distribution, aircraft noise, flight paths, and annoyance, or for the addition of other features (e.g., fuel consumption) to the cost function. A reduction in the required amount of searching over local minima was achieved by exploiting the statistical lateral dispersion present in the flight paths.
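Folding constraints into the cost as penalties, as the abstract describes, can be sketched as follows. The annoyance measure, penalty weight and violation terms are illustrative placeholders, not the report's actual model:

```python
def trajectory_cost(noise_annoyance, constraint_violations, penalty_weight=1e3):
    """Total cost = modelled community annoyance for a candidate flight path
    plus quadratic penalty terms for constraint violations (each violation
    amount is non-negative and zero when the constraint is satisfied)."""
    return noise_annoyance + penalty_weight * sum(v ** 2 for v in constraint_violations)

feasible = trajectory_cost(10.0, [0.0, 0.0])    # no violations: cost = 10.0
infeasible = trajectory_cost(10.0, [0.5, 0.0])  # one violation: cost = 260.0
```

With a large penalty weight, the optimizer is steered toward feasible paths while still trading off annoyance among them.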
Modelling and genetic algorithm based optimisation of inverse supply chain
NASA Astrophysics Data System (ADS)
Bányai, T.
2009-04-01
The design and control of recycling systems for products with environmental risk have long been discussed worldwide. The main reasons to address this subject are the following: reduction of waste volume, intensification of material recycling, closing the loop, use of fewer resources, and reduction of environmental risk [1, 2]. The development of recycling systems is based on the integrated solution of technological and logistic resources and know-how [3]. The financial viability of recycling systems rests partly on the recovery, disassembly and remanufacturing options of the used products [4, 5, 6], but the investment and operation costs of recycling systems are characterised by high logistic costs, caused by a geographically wide collection system with several collection levels and a high number of operation points in the inverse supply chain. The reduction of these costs is a popular area of logistics research. This research includes the design and implementation of comprehensive environmental waste and recycling programmes to suit business strategies (global system), the design and supply of all equipment for production-line collection (external system), and the design of logistics processes to suit economic and ecological requirements (external system) [7]. To the knowledge of the author, there has been no research work on supply chain design problems whose purpose is the logistics-oriented optimisation of the inverse supply chain in the case of a non-linear total cost function consisting not only of operation costs but also of environmental risk costs.
The antecedent of this research is that the author has taken part in several research projects in the field of the closed-loop economy ("Closing the loop of electr(on)ic products and domestic appliances from product planning to end-of-life technologies"), environmentally friendly disassembly ("Concept for logistical and environmental disassembly technologies") and the design of recycling systems for household appliances ("Recycling of household appliances with emphasis on reuse options"). The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and land use caused by an unnecessarily large collection supply chain for recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to waste electric and electronic products), and in the second part a genetic-algorithm-based optimisation method is demonstrated, by the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In the case of this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material-handling-related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) out of time, the number of supply chain objects and the length of transportation routes.
The objective function is the minimisation of the total cost, taking the constraints into consideration. A lot of research work has discussed supply chain design [8], but most of it concentrates on linear cost functions. In this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a suitable solution method. The problem cannot be solved by analytical methods, so a genetic-algorithm-based heuristic optimisation method was chosen to find the optimal solution. The input parameters of the optimisation are the following: specific fixed, unit and environmental risk costs of the collection points of the inverse supply chain, specific warehousing and transportation costs, and environmental risk costs of transportation. The output parameters are the following: the number of objects at the different hierarchical levels of the collection system, infrastructure costs, logistics costs, and environmental risk costs arising from the used infrastructure, transportation, and the number of products recycled out of time. The next step of the research work was the application of the above-mentioned method. The developed application makes it possible to define the input parameters of the real system, to view graphically the chosen optimal solution for the given input parameters, to view graphically the cost structure of the optimal solution, and to set the parameters of the algorithm (e.g. number of individuals, operators and termination conditions). The sensitivity analysis of the objective function and the test results showed that the structure of the inverse supply chain depends on the proportions of the specific costs. In particular, the proportion of the specific environmental risk costs influences the structure of the system and the number of objects at each hierarchical level of the collection system.
The sensitivity analysis of the total cost function was performed for three cases. In the first case the effect of the proportion of specific infrastructure and logistics costs was analysed. If the infrastructure costs are significantly lower than the total costs of warehousing and transportation, then almost all objects of the first hierarchical level of the collection system (collection directly from the users) are set up. With the opposite proportion of costs, the first level of the collection is not necessary, because it can be replaced by the more expensive transportation directly to objects of the second or lower hierarchical levels. In the second case the effect of the proportion of logistics and environmental risk costs was analysed. This analysis led to the following conclusion: if the logistics costs are significantly higher than the total environmental risk costs, then, because the infrastructure costs are constant, the preference for logistics operations depends on the proportion of the environmental risk costs caused by out-of-time recycled products and transportation. In the third case the effect of the proportion of infrastructure and environmental risk costs was examined. If the infrastructure costs are significantly lower than the environmental risk costs, then almost all objects of the first hierarchical level of the collection system (collection directly from the users) are set up. With the opposite proportion of costs, the first collection phase is shifted towards the last hierarchical level of the supply chain to avoid very high infrastructure set-up and operation costs.
The advantages of the presented model and solution method can be summarised as follows: the model makes it possible to decide the structure of the inverse supply chain (which objects to open or close); it reduces infrastructure costs, especially for supply chains with high specific fixed costs; it reduces the environmental risk cost by finding an optimal balance between the number of objects in the system and the number of products recycled out of time; and it reduces the logistics costs by determining the optimal quantitative parameters of the material flow operations. The future of this research work is the use of differentiated lead times, which makes it possible to take into consideration the above-mentioned non-linear infrastructure, transportation, warehousing and environmental risk costs for a given product portfolio segmented by lead time. This publication was supported by the National Office for Research and Technology within the frame of the Pázmány Péter programme. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Office for Research and Technology. Literature: [1] H. F. Lund: McGraw-Hill Recycling Handbook. McGraw-Hill, 2000. [2] P. T. Williams: Waste Treatment and Disposal. John Wiley and Sons Ltd, 2005. [3] M. Christopher: Logistics & Supply Chain Management: Creating Value-Adding Networks. Pearson Education. [4] A. Gungor, S. M. Gupta: Issues in environmentally conscious manufacturing and product recovery: a survey. Computers & Industrial Engineering, Volume 36, Issue 4, 1999, pp. 811-853. [5] H. C. Zhang, T. C. Kuo, H. Lu, S. H. Huang: Environmentally conscious design and manufacturing: A state-of-the-art survey. Journal of Manufacturing Systems, Volume 16, Issue 5, 1997, pp. 352-371. [6] P. Veerakamolmal, S. Gupta: Design for Disassembly, Reuse, and Recycling. Green Electronics/Green Bottom Line, 2000, pp. 69-82. [7] A. Rushton, P. Croucher, P. Baker: The Handbook of Logistics and Distribution Management. Kogan Page Limited, 2006. [8] H. Stadtler, C. Kilger: Supply Chain Management and Advanced Planning: Concepts, Models, Software, and Case Studies. Springer, 2005.
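The heuristic the abstract describes (a genetic algorithm minimizing a non-linear total cost made of infrastructure, material handling and environmental risk terms) can be sketched on a toy two-level problem. All coefficients, the encoding and the GA parameters below are illustrative, not the paper's model:

```python
import random

def total_cost(n1, n2):
    """Toy non-linear total cost for n1 first-level and n2 second-level
    collection points: infrastructure + material handling + environmental
    risk. Coefficients are purely illustrative."""
    infrastructure = 50.0 * n1 + 120.0 * n2
    handling = 4000.0 / (n1 + 1) + 1500.0 / (n2 + 1)  # fewer sites, longer hauls
    env_risk = 30.0 * (n1 + n2) ** 1.2                # non-linear risk term
    return infrastructure + handling + env_risk

def genetic_search(pop_size=30, generations=60, seed=1):
    """Minimal GA: truncation selection, one-point crossover, mutation."""
    rng = random.Random(seed)
    pop = [(rng.randint(0, 40), rng.randint(0, 40)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: total_cost(*ind))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = (a[0], b[1])                      # one-point crossover
            if rng.random() < 0.3:                    # mutation
                child = (max(0, child[0] + rng.randint(-2, 2)),
                         max(0, child[1] + rng.randint(-2, 2)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: total_cost(*ind))

best = genetic_search()
```

The analytical intractability mentioned in the abstract is exactly what such a population search sidesteps: it only ever evaluates the (non-linear) cost function, never inverts it.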
Jessep, Sally A; Walsh, Nicola E; Ratcliffe, Julie; Hurley, Michael V
2009-06-01
Chronic knee pain is a major cause of disability in the elderly. Management guidelines recommend exercise and self-management interventions as effective treatments. The authors previously described a rehabilitation programme integrating exercise and self-management [Enabling Self-management and Coping with Arthritic knee Pain through Exercise (ESCAPE-knee pain)] that produced short-term improvements in pain and physical function, but sustaining these improvements is difficult. Moreover, the programme is untried in clinical environments, where it would ultimately be delivered. To establish the feasibility of ESCAPE-knee pain and compare its clinical effectiveness and costs with outpatient physiotherapy. Pragmatic, randomised controlled trial. Outpatient physiotherapy department and community centre. Sixty-four people with chronic knee pain. Outpatient physiotherapy compared with ESCAPE-knee pain. The primary outcome was physical function assessed using the Western Ontario and McMaster Universities Osteoarthritis Index. Secondary outcomes included pain, objective functional performance, anxiety, depression, exercise-related health beliefs and healthcare utilisation. All outcomes were assessed at baseline and 12 months after completing the interventions (primary endpoint). ANCOVA investigated between-group differences. Both groups demonstrated similar improvements in clinical outcomes. Outpatient physiotherapy cost £130 per person and the healthcare utilisation costs of participants over 1 year were £583. The ESCAPE-knee pain programme cost £64 per person and the healthcare utilisation costs of participants over 1 year were £320. ESCAPE-knee pain can be delivered as a community-based integrated rehabilitation programme for people with chronic knee pain. Both ESCAPE-knee pain and outpatient physiotherapy produced sustained physical and psychosocial benefits, but ESCAPE-knee pain cost less and was more cost-effective.
Make-up wells drilling cost in financial model for a geothermal project
NASA Astrophysics Data System (ADS)
Oktaviani Purwaningsih, Fitri; Husnie, Ruly; Afuar, Waldy; Abdurrahman, Gugun
2017-12-01
After commissioning of a power plant, the geothermal reservoir will encounter pressure decline, which affects well productivity. Therefore, further drilling is carried out to sustain steam production. Make-up wells are production wells drilled inside an already confirmed reservoir to maintain steam production at a certain level. Based on Sanyal (2004), geothermal power cost consists of three components: capital cost, O&M cost and make-up drilling cost. The make-up drilling cost component is a major part of the power cost and has a large influence on the overall economic value of the project. The objective of this paper is to analyse the make-up well drilling cost component in the financial model of a geothermal power project. The research calculates make-up well requirements and drilling costs as a function of time, and how they influence the financial model and affect the power cost. The expected result of this research is the best scenario for determining the make-up well strategy in relation to the project financial model.
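A toy sketch of how a make-up drilling schedule and its discounted cost could enter such a financial model. The exponential decline rate, well capacity, drilling cost and discount rate are all hypothetical figures, not values from the paper or from Sanyal (2004):

```python
import math

def makeup_schedule(capacity_mw, decline, well_mw, demand_mw, years):
    """Yearly count of make-up wells needed to keep field capacity at or
    above demand, assuming a fixed annual fractional decline."""
    schedule = []
    for _ in range(years):
        capacity_mw *= (1.0 - decline)             # reservoir pressure decline
        n = max(0, math.ceil((demand_mw - capacity_mw) / well_mw))
        capacity_mw += n * well_mw                 # drill make-up wells
        schedule.append(n)
    return schedule

def makeup_drilling_cost(schedule, cost_per_well_musd, discount=0.1):
    """Discounted make-up drilling cost (million USD) over the schedule."""
    return sum(n * cost_per_well_musd / (1.0 + discount) ** (t + 1)
               for t, n in enumerate(schedule))

sched = makeup_schedule(capacity_mw=60.0, decline=0.05, well_mw=5.0,
                        demand_mw=55.0, years=10)
cost = makeup_drilling_cost(sched, cost_per_well_musd=7.0)
```

Spreading the drilling cost over time like this is what makes it a distinct, recurring line item in the power cost, rather than part of the up-front capital cost.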
An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Weixuan, E-mail: weixuan.li@usc.edu; Lin, Guang, E-mail: guang.lin@pnnl.gov; Zhang, Dongxiao, E-mail: dxz@pku.edu.cn
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged across Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited to solving inverse problems. The new algorithm was tested with different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
A study of parameter identification
NASA Technical Reports Server (NTRS)
Herget, C. J.; Patterson, R. E., III
1978-01-01
A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
What Do Cost Functions Tell Us about the Cost of an Adequate Education?
ERIC Educational Resources Information Center
Costrell, Robert M.; Hanushek, Eric; Loeb, Susanna
2008-01-01
Econometric cost functions have begun to appear in education adequacy cases with greater frequency. Cost functions are superficially attractive because they give the impression of objectivity, holding out the promise of scientifically estimating the cost of achieving specified levels of performance from actual data on spending. By contrast, the…
Evensen, Stig; Wisløff, Torbjørn; Lystad, June Ullevoldsæter; Bull, Helen; Ueland, Torill; Falkum, Erik
2016-03-01
Schizophrenia is associated with recurrent hospitalizations, need for long-term community support, poor social functioning, and low employment rates. Despite the wide-ranging financial and social burdens associated with the illness, there is great uncertainty regarding prevalence, employment rates, and the societal costs of schizophrenia. The current study investigates the 12-month prevalence of patients treated for schizophrenia, employment rates, and the cost of schizophrenia using a population-based top-down approach. Data were obtained from comprehensive and mandatory health and welfare registers in Norway. We identified a 12-month prevalence of 0.17% for the entire population. The employment rate among working-age individuals was 10.24%. The societal costs for the 12-month period were USD 890 million. The average cost per individual with schizophrenia was USD 106 thousand. Inpatient care and lost productivity due to high unemployment represented 33% and 29%, respectively, of the total costs. The use of mandatory health and welfare registers enabled a unique and informative analysis on true population-based datasets. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved.
National Stormwater Calculator: Low Impact Development ...
The National Stormwater Calculator (NSC) makes it easy to estimate runoff reduction when planning a new development or redevelopment site with low impact development (LID) stormwater controls. The Calculator is currently deployed as a Windows desktop application. It is organized as a wizard-style application that walks the user through the steps necessary to perform runoff calculations on a single urban sub-catchment of 10 acres or less in size. Using an interactive map, the user can select the sub-catchment location, and the Calculator automatically acquires hydrologic data for the site. A new LID cost estimation module has been developed for the Calculator. This project involved programming cost curves into the existing Calculator desktop application. The integration of cost components of LID controls into the Calculator increases functionality and will promote greater use of the Calculator as a stormwater management and evaluation tool. The addition of the cost estimation module allows planners and managers to evaluate LID controls based on a comparison of project cost estimates and predicted LID control performance. Cost estimation is accomplished based on user-identified size (or auto-sizing based on achieving volume control or treatment of a defined design storm), configuration of the LID control infrastructure, and other key project- and site-specific variables, including whether the project is being applied as part of new development or redevelopment.
NASA Technical Reports Server (NTRS)
Maynard, O. E.; Brown, W. C.; Edwards, A.; Haley, J. T.; Meltz, G.; Howell, J. M.; Nathan, A.
1975-01-01
The microwave rectifier technology, approaches to the receiving antenna, topology of rectenna circuits, assembly and construction, and ROM cost estimates are discussed. Analyses and cost estimates are presented for the equipment required to transmit the ground power to an external user. Noise and harmonic considerations are presented for both the amplitron and klystron, and interference limits are identified and evaluated. Risk assessment is discussed, wherein technology risks are rated and ranked with regard to their importance in impacting the microwave power transmission system. The system analyses and evaluation include parametric studies of system relationships pertaining to geometry, materials, specific cost, specific weight, efficiency, converter packing, frequency selection, power distribution, power density, power output magnitude, power source, transportation and assembly. Capital costs per kW and energy costs as functions of rate of return, power source and transportation costs, as well as build cycle time, are presented. The critical technology and ground test program are discussed along with ROM costs and schedule. The orbital test program, with associated critical technology, and the ground-based program based on full implementation of the defined objectives are discussed.
U.S. Balance-of-Station Cost Drivers and Sensitivities (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maples, B.
2012-10-01
With balance-of-system (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from U.S. offshore wind plants.
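A scaling relationship of the kind the abstract mentions is often expressed as a power law in turbine rating. A minimal sketch; the reference cost, rating and exponent are hypothetical placeholders, not NREL's fitted values:

```python
def bos_component_cost(base_cost, base_rating_mw, rating_mw, exponent):
    """Power-law scaling of one BOS component with turbine rating:
    cost = base_cost * (rating / base_rating) ** exponent.
    An exponent below 1 means the component gets cheaper per MW as
    turbines grow; above 1, it gets more expensive per MW."""
    return base_cost * (rating_mw / base_rating_mw) ** exponent

# Example: scaling a foundation cost from a 3.6 MW reference turbine to 5 MW
foundation_5mw = bos_component_cost(base_cost=2.5e6, base_rating_mw=3.6,
                                    rating_mw=5.0, exponent=0.8)
```

Summing such component curves over a project, then perturbing each one, is how a sensitivity analysis of the levelized cost of energy to individual BOS elements can be organized.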
System and method for key generation in security tokens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Philip G.; Humble, Travis S.; Paul, Nathanael R.
Functional randomness in security tokens (FRIST) may achieve improved security in two-factor authentication hardware tokens by improving on the algorithms used to securely generate random data. A system and method in one embodiment according to the present invention may allow for security of a token based on storage cost and computational security. This approach may enable communication where security is no longer based solely on one-time pads (OTPs) generated from a single cryptographic function (e.g., SHA-256).
Finite-fault source inversion using adjoint methods in 3D heterogeneous media
NASA Astrophysics Data System (ADS)
Somala, Surendra Nadh; Ampuero, Jean-Paul; Lapusta, Nadia
2018-04-01
Accounting for lateral heterogeneities in the 3D velocity structure of the crust is known to improve earthquake source inversion, compared to results based on 1D velocity models which are routinely assumed to derive finite-fault slip models. The conventional approach to including known 3D heterogeneity in source inversion involves pre-computing 3D Green's functions, which requires a number of 3D wave propagation simulations proportional to the number of stations or to the number of fault cells. The computational cost of such an approach is prohibitive for the dense datasets that could be provided by future earthquake observation systems. Here, we propose an adjoint-based optimization technique to invert for the spatio-temporal evolution of slip velocity. The approach does not require pre-computed Green's functions. The adjoint method provides the gradient of the cost function, which is used to improve the model iteratively with a gradient-based minimization method. The adjoint approach is shown to be computationally more efficient than the conventional approach based on pre-computed Green's functions in a broad range of situations. We consider data up to 1 Hz from a Haskell source scenario (a steady pulse-like rupture) on a vertical strike-slip fault embedded in an elastic 3D heterogeneous velocity model. The velocity model comprises a uniform background and a 3D stochastic perturbation with the von Karman correlation function. Source inversions based on the 3D velocity model are performed for two different station configurations, a dense and a sparse network with 1 km and 20 km station spacing, respectively. These reference inversions show that our inversion scheme adequately retrieves the rise time when the velocity model is exactly known, and illustrate how dense coverage improves the inference of peak slip velocities.
We investigate the effects of uncertainties in the velocity model by performing source inversions based on an incorrect, homogeneous velocity model. We find that, for velocity uncertainties with standard deviation and correlation length typical of available 3D crustal models, the inverted sources can be severely contaminated by spurious features even if the station density is high. When data from a thousand or more receivers are used in source inversions in 3D heterogeneous media, the computational cost of the method proposed in this work is at least two orders of magnitude lower than that of source inversion based on pre-computed Green's functions.
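The iterative scheme described in this abstract can be caricatured in a few lines: an adjoint simulation supplies the gradient of a least-squares misfit, and steepest descent updates the source model. The sketch below is purely illustrative, assuming a small linear forward operator `G` in place of the 3D wave solver; all names are hypothetical, not the authors' code.

```python
import numpy as np

def misfit(m, d_obs, G):
    """Least-squares waveform misfit 0.5 * ||G m - d_obs||^2.
    In the paper's setting the forward map would be a 3D wave solver;
    here a small linear operator G stands in for it."""
    r = G @ m - d_obs
    return 0.5 * float(r @ r)

def adjoint_gradient(m, d_obs, G):
    """Gradient of the misfit. An adjoint simulation returns this
    quantity without ever forming Green's functions explicitly."""
    return G.T @ (G @ m - d_obs)

def invert(d_obs, G, n_iter=300, step=0.01):
    """Steepest-descent source inversion: no pre-computed Green's
    functions, one forward and one adjoint evaluation per iteration."""
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        m -= step * adjoint_gradient(m, d_obs, G)
    return m

# Synthetic test: recover a known source model from noise-free data.
rng = np.random.default_rng(0)
G = rng.standard_normal((50, 10))
m_true = rng.standard_normal(10)
m_est = invert(G @ m_true, G)
```

With noise-free, full-rank data the descent iterates converge to the true model; in practice a line search or conjugate-gradient update would replace the fixed step.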
Cost effectiveness of conventional versus LANDSAT use data for hydrologic modeling
NASA Technical Reports Server (NTRS)
George, T. S.; Taylor, R. S.
1982-01-01
Six case studies were analyzed to investigate the cost effectiveness of using land use data obtained from LANDSAT as opposed to conventionally obtained data. A procedure was developed to determine the relative effectiveness of the two alternative means of acquiring data for hydrologic modeling. The cost of conventionally acquired data ranged between $3,000 and $16,000 for the six test basins. Information based on LANDSAT imagery cost between $2,000 and $5,000. Results of the effectiveness analysis show that the differences between the two methods are insignificant. From the cost comparison, and the fact that each method, conventional and LANDSAT, is shown to be equally effective in developing land use data for hydrologic studies, the cost effectiveness of the conventional or LANDSAT method is found to be a function of basin size for the six test watersheds analyzed. The LANDSAT approach is cost effective for areas containing more than 10 square miles.
Routing and Scheduling Optimization Model of Sea Transportation
NASA Astrophysics Data System (ADS)
Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman
2018-01-01
This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of crude oil by ships (tankers) to many islands. The main consideration is the cost of transportation, which consists of travel costs and the cost of layover at ports. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model that takes into account several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance visited by the tanker and to minimize port costs. To make the model more realistic and the computed costs more accurate, a parameter is added that represents the multiplier by which cost increases as the tanker is loaded with crude oil.
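The cost structure the abstract describes (travel cost plus per-port layover fees, with a load-dependent multiplier) can be sketched as a simple objective for comparing candidate routes. All parameter names and values below are illustrative assumptions, not taken from the paper.

```python
def route_cost(legs, port_fee, rate_per_km, load_factor):
    """Total cost of one candidate route.

    legs: list of (distance_km, cargo_fraction) pairs; cargo_fraction in
    [0, 1] scales travel cost through the load multiplier, mirroring the
    abstract's cost-increase factor as the tanker fills with crude oil.
    """
    travel = sum(d * rate_per_km * (1.0 + load_factor * c) for d, c in legs)
    ports = port_fee * len(legs)  # one layover fee per visited port
    return travel + ports

# Compare two hypothetical island routes and keep the cheaper one.
route_a = [(120, 1.0), (80, 0.5)]
route_b = [(150, 1.0), (40, 0.25)]
cost_a = route_cost(route_a, port_fee=500, rate_per_km=10, load_factor=0.3)
cost_b = route_cost(route_b, port_fee=500, rate_per_km=10, load_factor=0.3)
best = min(cost_a, cost_b)
```

A full solver would search over feasible routes and schedules subject to the paper's constraints; this only shows how the two cost components combine in the objective.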
Ant Brood Function as Life Preservers during Floods
Purcell, Jessica; Avril, Amaury; Jaffuel, Geoffrey; Bates, Sarah; Chapuisat, Michel
2014-01-01
Social organisms can surmount many ecological challenges by working collectively. An impressive example of such collective behavior occurs when ants physically link together into floating ‘rafts’ to escape from flooded habitat. However, raft formation may represent a social dilemma, with some positions posing greater individual risks than others. Here, we investigate the position and function of different colony members, and the costs and benefits of this functional geometry in rafts of the floodplain-dwelling ant Formica selysi. By causing groups of ants to raft in the laboratory, we observe that workers are distributed throughout the raft, queens are always in the center, and 100% of brood items are placed on the base. Through a series of experiments, we show that workers and brood are extremely resistant to submersion. Both workers and brood exhibit high survival rates after they have rafted, suggesting that occupying the base of the raft is not as costly as expected. The placement of all brood on the base of one cohesive raft confers several benefits: it preserves colony integrity, takes advantage of brood buoyancy, and increases the proportion of workers that immediately recover after rafting. PMID:24586600
Econometrics of inventory holding and shortage costs: the case of refined gasoline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krane, S.D.
1985-01-01
This thesis estimates a model of a firm's optimal inventory and production behavior in order to investigate the link between the role of inventories in the business cycle and the microeconomic incentives for holding stocks of finished goods. The goal is to estimate a set of structural cost function parameters that can be used to infer the optimal cyclical response of inventories and production to shocks in demand. To avoid problems associated with the use of value based aggregate inventory data, an industry level physical unit data set for refined motor gasoline is examined. The Euler equations for a refiner's multiperiod decision problem are estimated using restrictions imposed by the rational expectations hypothesis. The model also embodies the fact that, in most periods, the level of shortages will be zero, and even when positive, the shortages are not directly observable in the data set. These two concerns lead us to use a generalized method of moments estimation technique on a functional form that resembles the formulation of a Tobit problem. The estimation results are disappointing; the model and data yield coefficient estimates incongruous with the cost function interpretations of the structural parameters. There is only some superficial evidence that production smoothing is significant and that marginal inventory shortage costs increase at a faster rate than do marginal holding costs.
Delgado, Juan F; Oliva, Juan; Llano, Miguel; Pascual-Figal, Domingo; Grillo, José J; Comín-Colet, Josep; Díaz, Beatriz; Martínez de La Concha, León; Martí, Belén; Peña, Luz M
2014-08-01
Chronic heart failure is associated with high mortality and utilization of health care and social resources. The objective of this study was to quantify the use of health care and nonhealth care resources and identify variables that help to explain variability in their costs in Spain. This prospective, multicenter, observational study with a 12-month follow-up period included 374 patients with symptomatic heart failure recruited from specialized cardiology clinics. Information was collected on the socioeconomic characteristics of patients and caregivers, health status, health care resources, and professional and nonprofessional caregiving. The monetary cost of the resources used in caring for the health of these patients was evaluated, differentiating among functional classes. The estimated total cost for the 1-year follow-up ranged from €12,995 to €18,220, depending on the scenario chosen (base year, 2010). The largest cost item was informal caregiving (59.1%-69.8% of the total cost), followed by health care costs (26.7%-37.4%), and professional care (3.5%). Of the total health care costs, the largest item corresponded to hospital costs, followed by medication. Total costs differed significantly between patients in functional class II and those in classes III or IV. Heart failure is a disease that requires the mobilization of a considerable amount of resources. The largest item corresponds to informal care. Both health care and nonhealth care costs are higher in the population with more advanced disease. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.
The cost of sustaining a patient-centered medical home: experience from 2 states.
Magill, Michael K; Ehrenberger, David; Scammon, Debra L; Day, Julie; Allen, Tatiana; Reall, Andreu J; Sides, Rhonda W; Kim, Jaewhan
2015-09-01
As medical practices transform to patient-centered medical homes (PCMHs), it is important to identify the ongoing costs of maintaining these "advanced primary care" functions. A key required input is personnel effort. This study's objective was to assess direct personnel costs to practices associated with the staffing necessary to deliver PCMH functions as outlined in the National Committee for Quality Assurance Standards. We developed a PCMH cost dimensions tool to assess costs associated with activities uniquely required to maintain PCMH functions. We interviewed practice managers, nurse supervisors, and medical directors in 20 varied primary care practices in 2 states, guided by the tool. Outcome measures included categories of staff used to perform various PCMH functions, time and personnel costs, and whether practices were delivering PCMH functions. Costs per full-time equivalent primary care clinician associated with PCMH functions varied across practices with an average of $7,691 per month in Utah practices and $9,658 in Colorado practices. PCMH incremental costs per encounter were $32.71 in Utah and $36.68 in Colorado. The average estimated cost per member per month for an assumed panel of 2,000 patients was $3.85 in Utah and $4.83 in Colorado. Identifying costs of maintaining PCMH functions will contribute to effective payment reform and to sustainability of transformation. Maintenance and ongoing support of PCMH functions require additional time and new skills, which may be provided by existing staff, additional staff, or both. Adequate compensation for ongoing and substantial incremental costs is critical for practices to sustain PCMH functions. © 2015 Annals of Family Medicine, Inc.
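The per-member-per-month figures in this abstract follow directly from dividing the monthly PCMH cost per full-time clinician by the assumed 2,000-patient panel, as a quick check shows:

```python
def pmpm(monthly_cost_per_fte, panel_size):
    """Per-member-per-month cost for an assumed patient panel."""
    return monthly_cost_per_fte / panel_size

# Figures reported in the abstract (assumed panel of 2,000 patients).
utah = pmpm(7691, 2000)      # ≈ $3.85, matching the abstract
colorado = pmpm(9658, 2000)  # ≈ $4.83
```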
[Electronic versus paper-based patient records: a cost-benefit analysis].
Neubauer, A S; Priglinger, S; Ehrt, O
2001-11-01
The aim of this study is to compare the costs and benefits of electronic, paperless patient records with conventional paper-based charts. Costs and benefits of planned electronic patient records are calculated for a University eye hospital with 140 beds. Benefit is determined by the direct costs saved by electronic records. In the example shown, the additional benefits of electronic patient records, as far as they can be quantified, total 192,000 DM per year. The costs of the necessary investments are 234,000 DM per year when using a linear depreciation over 4 years. In total, there are additional annual costs for electronic patient records of 42,000 DM. Different scenarios were analyzed. By increasing the time of depreciation to 6 years, the cost deficit reduces to only approximately 9,000 DM. Increased wages reduce the deficit further, while the deficit increases with a loss of functions of the electronic patient record. However, several benefits of electronic records regarding research, teaching, quality control, and better data access cannot be easily quantified and would greatly increase the benefit-to-cost ratio. Only part of the advantages of electronic patient records can easily be quantified in terms of directly saved costs. The small cost deficit calculated in this example is overcompensated by several benefits, which can only be enumerated qualitatively due to problems in quantification.
Imaging performance of an isotropic negative dielectric constant slab.
Shivanand; Liu, Huikan; Webb, Kevin J
2008-11-01
The influence of material and thickness on the subwavelength imaging performance of a negative dielectric constant slab is studied. Resonance in the plane-wave transfer function produces a high spatial frequency ripple that could be useful in fabricating periodic structures. A cost function based on the plane-wave transfer function provides a useful metric to evaluate the planar slab lens performance, and using this, the optimal slab dielectric constant can be determined.
Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F
2017-05-01
This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistics package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results-namely, cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptions to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
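The tutorial above works in R with the mstate package. As a generic, language-agnostic illustration of the same idea, the sketch below implements a minimal discrete-time Markov cohort model in Python; the transition probabilities, costs, and utility weights are invented for illustration and are not from the tutorial.

```python
import numpy as np

# Three states: well, sick, dead (absorbing). Rows are "from", columns "to".
P = np.array([[0.85, 0.10, 0.05],
              [0.00, 0.70, 0.30],
              [0.00, 0.00, 1.00]])
cost_per_cycle = np.array([100.0, 1000.0, 0.0])  # cost incurred per state
utility = np.array([0.9, 0.5, 0.0])              # QALY weight per state

def run_cohort(P, costs, utils, n_cycles, discount=0.03):
    """Accumulate discounted costs and QALYs for a cohort starting well."""
    state = np.array([1.0, 0.0, 0.0])
    total_cost = total_qaly = 0.0
    for t in range(n_cycles):
        df = (1 + discount) ** -t        # discount factor for cycle t
        total_cost += df * state @ costs
        total_qaly += df * state @ utils
        state = state @ P                # advance the cohort one cycle
    return total_cost, total_qaly

cost, qaly = run_cohort(P, cost_per_cycle, utility, n_cycles=40)
```

Running two such models (treatment vs. comparator) and differencing their outputs yields the incremental costs and QALYs that feed a cost-effectiveness analysis; the state-arrival extension in the tutorial additionally conditions transitions on patient history.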
Ademi, Zanfina; Pfeil, Alena M; Hancock, Elizabeth; Trueman, David; Haroun, Rola; Deschaseaux, Celine; Schwenkglenks, Matthias
2017-11-29
We aimed to assess the cost effectiveness of sacubitril/valsartan compared to angiotensin-converting enzyme inhibitors (ACEIs) for the treatment of individuals with chronic heart failure and reduced-ejection fraction (HFrEF) from the perspective of the Swiss health care system. The cost-effectiveness analysis was implemented as a lifelong regression-based cohort model. We compared sacubitril/valsartan with enalapril in chronic heart failure patients with HFrEF and New York-Heart Association Functional Classification II-IV symptoms. Regression models based on the randomised clinical phase III PARADIGM-HF trials were used to predict events (all-cause mortality, hospitalisations, adverse events and quality of life) for each treatment strategy modelled over the lifetime horizon, with adjustments for patient characteristics. Unit costs were obtained from Swiss public sources for the year 2014, and costs and effects were discounted by 3%. The main outcome of interest was the incremental cost-effectiveness ratio (ICER), expressed as cost per quality-adjusted life years (QALYs) gained. Deterministic sensitivity analysis (DSA) and scenario and probabilistic sensitivity analysis (PSA) were performed. In the base-case analysis, the sacubitril/valsartan strategy showed a decrease in the number of hospitalisations (6.0% per year absolute reduction) and lifetime hospital costs by 8.0% (discounted) when compared with enalapril. Sacubitril/valsartan was predicted to improve overall and quality-adjusted survival by 0.50 years and 0.42 QALYs, respectively. Additional net-total costs were CHF 10 926. This led to an ICER of CHF 25 684. In PSA, the probability of sacubitril/valsartan being cost-effective at thresholds of CHF 50 000 was 99.0%. The treatment of HFrEF patients with sacubitril/valsartan versus enalapril is cost effective, if a willingness-to-pay threshold of CHF 50 000 per QALY gained ratio is assumed.
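The headline number in this abstract is the incremental cost-effectiveness ratio, defined as incremental cost divided by incremental QALYs. Note that applying the definition to the rounded figures reported above gives a value near, but not exactly equal to, the published CHF 25,684, since the paper's ratio was computed from unrounded inputs:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: cost per QALY gained."""
    return delta_cost / delta_qaly

# Rounded figures from the abstract: CHF 10,926 extra cost, 0.42 QALYs gained.
value = icer(10926, 0.42)  # ≈ CHF 26,014 per QALY with rounded inputs
```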
Wylde, Vikki; Artz, Neil; Marques, Elsa; Lenguerrand, Erik; Dixon, Samantha; Beswick, Andrew D; Burston, Amanda; Murray, James; Parwez, Tarique; Blom, Ashley W; Gooberman-Hill, Rachael
2016-06-13
Primary total knee replacement is a common operation that is performed to provide pain relief and restore functional ability. Inpatient physiotherapy is routinely provided after surgery to enhance recovery prior to hospital discharge. However, international variation exists in the provision of outpatient physiotherapy after hospital discharge. While evidence indicates that outpatient physiotherapy can improve short-term function, the longer term benefits are unknown. The aim of this randomised controlled trial is to evaluate the long-term clinical effectiveness and cost-effectiveness of a 6-week group-based outpatient physiotherapy intervention following knee replacement. Two hundred and fifty-six patients waiting for knee replacement because of osteoarthritis will be recruited from two orthopaedic centres. Participants randomised to the usual-care group (n = 128) will be given a booklet about exercise and referred for physiotherapy if deemed appropriate by the clinical care team. The intervention group (n = 128) will receive the same usual care and additionally be invited to attend a group-based outpatient physiotherapy class starting 6 weeks after surgery. The 1-hour class will be run on a weekly basis over 6 weeks and will involve task-orientated and individualised exercises. The primary outcome will be the Lower Extremity Functional Scale at 12 months post-operative. Secondary outcomes include: quality of life, knee pain and function, depression, anxiety and satisfaction. Data collection will be by questionnaire prior to surgery and 3, 6 and 12 months after surgery and will include a resource-use questionnaire to enable a trial-based economic evaluation. Trial participation and satisfaction with the classes will be evaluated through structured telephone interviews. The primary statistical and economic analyses will be conducted on an intention-to-treat basis with and without imputation of missing data. 
The primary economic result will estimate the incremental cost per quality-adjusted life year gained from this intervention from a National Health Services (NHS) and personal social services perspective. This research aims to benefit patients and the NHS by providing evidence on the long-term effectiveness and cost-effectiveness of outpatient physiotherapy after knee replacement. If the intervention is found to be effective and cost-effective, implementation into clinical practice could lead to improvement in patients' outcomes and improved health care resource efficiency. ISRCTN32087234 , registered on 11 February 2015.
Modeling Operations Costs for Human Exploration Architectures
NASA Technical Reports Server (NTRS)
Shishko, Robert
2013-01-01
Operations and support (O&S) costs for human spaceflight have not received the same attention in the cost estimating community as have development costs. This is unfortunate as O&S costs typically comprise a majority of life-cycle costs (LCC) in such programs as the International Space Station (ISS) and the now-cancelled Constellation Program. Recognizing this, the Constellation Program and NASA HQs supported the development of an O&S cost model specifically for human spaceflight. This model, known as the Exploration Architectures Operations Cost Model (ExAOCM), provided the operations cost estimates for a variety of alternative human missions to the moon, Mars, and Near-Earth Objects (NEOs) in architectural studies. ExAOCM is philosophically based on the DoD Architecture Framework (DoDAF) concepts of operational nodes, systems, operational functions, and milestones. This paper presents some of the historical background surrounding the development of the model, and discusses the underlying structure, its unusual user interface, and lastly, previous examples of its use in the aforementioned architectural studies.
Internet-based videoconferencing and data collaboration for the imaging community.
Poon, David P; Langkals, John W; Giesel, Frederik L; Knopp, Michael V; von Tengg-Kobligk, Hendrik
2011-01-01
Internet protocol-based digital data collaboration with videoconferencing is not yet well utilized in the imaging community. Videoconferencing, combined with proven low-cost solutions, can provide reliable functionality and speed, which will improve rapid, time-saving, and cost-effective communications, within large multifacility institutions or globally with the unlimited reach of the Internet. The aim of this project was to demonstrate the implementation of a low-cost hardware and software setup that facilitates global data collaboration using WebEx and GoToMeeting Internet protocol-based videoconferencing software. Both products' features were tested and evaluated for feasibility across 2 different Internet networks, including a video quality and recording assessment. Cross-compatibility with an Apple OS is also noted in the evaluations. Departmental experiences with WebEx pertaining to clinical trials are also described. Real-time remote presentation of dynamic data was generally consistent across platforms. A reliable and inexpensive hardware and software setup for complete Internet-based data collaboration/videoconferencing can be achieved.
A classical density-functional theory for describing water interfaces.
Hughes, Jessica; Krebs, Eric J; Roundy, David
2013-01-14
We develop a classical density functional for water which combines the White Bear fundamental-measure theory (FMT) functional for the hard sphere fluid with attractive interactions based on the statistical associating fluid theory variable range (SAFT-VR). This functional reproduces the properties of water at both long and short length scales over a wide range of temperatures and is computationally efficient, comparable to the cost of FMT itself. We demonstrate our functional by applying it to systems composed of two hard rods, four hard rods arranged in a square, and hard spheres in water.
Improved patch-based learning for image deblurring
NASA Astrophysics Data System (ADS)
Dong, Bo; Jiang, Zhiguo; Zhang, Haopeng
2015-05-01
Most recent image deblurring methods use only the valid information found in the input image as the clue to fill the deblurred region. These methods usually suffer from insufficient prior information and relatively poor adaptiveness. Patch-based methods use not only the valid information of the input image itself but also the prior information of sample images to improve adaptiveness. However, the cost function of this method is quite time-consuming, and the method may also produce ringing artifacts. In this paper, we propose an improved non-blind deblurring algorithm based on learning patch likelihoods. On one hand, we consider the effect of the Gaussian mixture model with different weights and normalize the weight values, which optimizes the cost function and reduces running time. On the other hand, a post-processing method is proposed to remove the ringing artifacts produced by the traditional patch-based method. Extensive experiments are performed. Experimental results verify that our method can effectively reduce the execution time, suppress ringing artifacts, and preserve the quality of the deblurred image.
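The weight-normalization step mentioned in the abstract can be illustrated in isolation: mixture weights are rescaled to sum to 1 before the patch prior is evaluated, so low-weight components contribute proportionally less to the cost. The sketch below uses a 1-D Gaussian mixture as a stand-in for the full patch GMM; the parameters are illustrative, not learned from natural-image patches.

```python
import numpy as np

def normalize_weights(w):
    """Rescale mixture weights so they sum to 1."""
    w = np.asarray(w, dtype=float)
    return w / w.sum()

def patch_log_likelihood(x, weights, means, variances):
    """Log-likelihood of a scalar patch feature x under a 1-D Gaussian
    mixture (a diagonal stand-in for the full patch GMM prior)."""
    w = normalize_weights(weights)
    dens = (w * np.exp(-0.5 * (x - means) ** 2 / variances)
            / np.sqrt(2 * np.pi * variances))
    return float(np.log(dens.sum()))

w = normalize_weights([3.0, 1.0])  # unnormalized weights -> [0.75, 0.25]
ll = patch_log_likelihood(0.0, [3.0, 1.0],
                          np.array([0.0, 2.0]), np.array([1.0, 1.0]))
```

In an EPLL-style deblurring cost, terms like this log-likelihood are summed over all image patches and balanced against a data-fidelity term.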
Williams, Bradley S; D'Amico, Ellen; Kastens, Jude H; Thorp, James H; Flotemersch, Joseph E; Thoms, Martin C
2013-09-01
River systems consist of hydrogeomorphic patches (HPs) that emerge at multiple spatiotemporal scales. Functional process zones (FPZs) are HPs that exist at the river valley scale and are important strata for framing whole-watershed research questions and management plans. Hierarchical classification procedures aid in HP identification by grouping sections of river based on their hydrogeomorphic character; however, collecting the data required for such procedures with field-based methods is often impractical. We developed a set of GIS-based tools that facilitate rapid, low-cost riverine landscape characterization and FPZ classification. Our tools, termed RESonate, comprise a custom toolbox designed for ESRI ArcGIS®. RESonate automatically extracts 13 hydrogeomorphic variables from readily available geospatial datasets and datasets derived from modeling procedures. An advanced 2D flood model, FLDPLN, designed for MATLAB®, is used to determine valley morphology by systematically flooding river networks. When used in conjunction with other modeling procedures, RESonate and FLDPLN can assess the character of large river networks quickly and at very low cost. Here we describe tool and model functions in addition to their benefits, limitations, and applications.
NASA Technical Reports Server (NTRS)
Brown, W. C.; Dickinson, R. M.; Nalos, E. J.; Ott, J. H.
1980-01-01
The function of the rectenna in the solar power satellite system is described, and the basic design choices based on the desired microwave field concentration and ground clearance requirements are given. One important area of concern from the EMI point of view, harmonic reradiation and scattering from the rectenna, is also discussed. An optimization of a rectenna system design to minimize costs was performed. The rectenna cost breakdown for a 56 w installation is given as an example.
1979-12-01
used to reduce costs ). The orbital data from the prototype ion composition telescope will not only be of great scientific interest - providing for...active device whose transfer function may be almost arbitrarily defined, and cost and production trends permit contemplation of networks containing...developing solid-state television camera systems based on CCD imagers. RICA hopes to produce a $500 color camera for consumer use. Fairchild and Texas
A Cost Estimation Analysis of U.S. Navy Ship Fuel-Savings Techniques and Technologies
2009-09-01
readings to the boiler operator. The PLC will provide constant automatic trimming of the excess oxygen based upon real time SGA readings. An SCD...the author): The Aegis Combat System is controlled by an advanced, automatic detect-and-track, multi-function three-dimensional passive...subsequently offloaded. An Online Wash System would reduce these maintenance costs and improve fuel efficiency of these engines by keeping the engines
Organizational Cost of Quality Improvement for Depression Care
Liu, Chuan-Fen; Rubenstein, Lisa V; Kirchner, JoAnn E; Fortney, John C; Perkins, Mark W; Ober, Scott K; Pyne, Jeffrey M; Chaney, Edmund F
2009-01-01
Objective We documented organizational costs for depression care quality improvement (QI) to develop an evidence-based, Veterans Health Administration (VA) adapted depression care model for primary care practices that performed well for patients, was sustained over time, and could be spread nationally in VA. Data Sources and Study Setting Project records and surveys from three multistate VA administrative regions and seven of their primary care practices. Study Design Descriptive analysis. Data Collection We documented project time commitments and expenses for 86 clinical QI and 42 technical expert support team participants for 4 years from initial contact through care model design, Plan–Do–Study–Act cycles, and achievement of stable workloads in which models functioned as routine care. We assessed time, salary costs, and costs for conference calls, meetings, e-mails, and other activities. Principal Findings Over an average of 27 months, all clinics began referring patients to care managers. Clinical participants spent 1,086 hours at a cost of $84,438. Technical experts spent 2,147 hours costing $197,787. Eighty-five percent of costs derived from initial regional engagement activities and care model design. Conclusions Organizational costs of the QI process for depression care in a large health care system were significant, and should be accounted for when planning for implementation of evidence-based depression care. PMID:19146566
Using block pulse functions for seismic vibration semi-active control of structures with MR dampers
NASA Astrophysics Data System (ADS)
Rahimi Gendeshmin, Saeed; Davarnia, Daniel
2018-03-01
This article applies the idea of block pulse (BP) functions to the semi-active control of structures. BP functions provide effective tools for approximating complex problems. The control algorithm applied has a major effect on the performance of the controlled system and on the requirements placed on the control devices. In control problems it is important to devise an accurate analytical technique with low computational cost, and BP functions have proved to be fundamental approximation tools that have been applied in disparate areas in recent decades. This study focuses on employing BP functions in the control algorithm to reduce computational cost. Magneto-rheological (MR) dampers are among the best-known semi-active tools for controlling the response of civil structures during earthquakes. For validation purposes, numerical simulations of a 5-story shear building frame with MR dampers are presented. The results of the suggested method were compared with those obtained by controlling the frame with an optimal controller based on linear quadratic regulator theory. The simulation results show that the suggested method can help reduce seismic structural responses; it also has acceptable accuracy, agreeing with the optimal control method at lower computational cost.
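As a minimal illustration of the approximation idea (not the authors' control implementation), the sketch below expands a signal in a block pulse basis: each coefficient is the value of the function over one subinterval, here approximated by a midpoint sample.

```python
import math

def bp_coefficients(f, T, m):
    """Block pulse coefficients of f on [0, T): the value of f over each of m
    subintervals, approximated here by a midpoint sample."""
    h = T / m
    return [f(i * h + h / 2) for i in range(m)]

def bp_reconstruct(coeffs, T, t):
    """Evaluate the (piecewise-constant) block pulse expansion at time t."""
    m = len(coeffs)
    h = T / m
    return coeffs[min(int(t / h), m - 1)]

# Approximate sin on [0, 2*pi) with 64 block pulses and check the error.
T, m = 2 * math.pi, 64
c = bp_coefficients(math.sin, T, m)
err = max(abs(bp_reconstruct(c, T, k * T / 1000) - math.sin(k * T / 1000))
          for k in range(1000))
```

More terms shrink the worst-case error roughly in proportion to the subinterval width, which is why BP expansions trade accuracy against computational cost so directly.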
Maciejewski, Matthew L; Liu, Chuan-Fen; Fihn, Stephan D
2009-01-01
To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R² statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. Administrative data-based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed.
Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.
Kangasmaa, Tuija S; Sohlberg, Antti O
2014-07-01
Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
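Of the cost functions compared, mutual information can be sketched from a joint histogram of the measured and reprojected data; the binning and sample data below are illustrative only, not the paper's implementation.

```python
import math
from collections import Counter

def mutual_information(xs, ys, bins=8):
    """Mutual information between two equal-length samples via a joint histogram:
    MI = sum p(i,j) * log(p(i,j) / (p(i) * p(j)))."""
    def digitize(vs):
        lo, hi = min(vs), max(vs)
        span = (hi - lo) or 1.0
        return [min(int((v - lo) / span * bins), bins - 1) for v in vs]
    bx, by = digitize(xs), digitize(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    return sum((c / n) * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

# Perfectly aligned data share maximal information; unrelated data share none.
xs = [float(i) for i in range(100)]
mi_aligned = mutual_information(xs, xs)
mi_flat = mutual_information(xs, [0.0] * 100)
```

Maximizing a measure like this over candidate motion shifts rewards alignments where the two images predict each other well, which is why it outperforms squared difference when intensities do not match exactly.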
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Twenty-eight geothermal areas in Kenya were evaluated and prioritized for development. The prioritization was based on the potential size, resource temperature, level of exploration risk, location, and exploration/development costs for each geothermal area. Suswa, Eburru and Arus were found to offer the best short-term prospects for successful private power development. The cost per kW developed was found to be significantly lower for the larger (50 MW) projects than for smaller-sized (10 or 20 MW) projects. In addition to plant size, the cost per kW developed is a function of resource temperature, generation mode (binary or flash cycle) and transmission distance.
Wavelength routing beyond the standard graph coloring approach
NASA Astrophysics Data System (ADS)
Blankenhorn, Thomas
2004-04-01
When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
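The economic argument can be illustrated with a sketch: under a concave installation cost, concentrating wavelengths on shared links is cheaper than spreading them, which motivates routing along a spanning tree. The Prim implementation and cost curve below are illustrative, not the paper's algorithm.

```python
import heapq

def prim_mst(n, edges):
    """Prim's algorithm on nodes 0..n-1. edges: {(u, v): weight}.
    Returns the list of tree edges as (parent, child) pairs."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((w, u, v))
        adj[v].append((w, v, u))
    visited = {0}
    heap = adj[0][:]
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < n:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v))
        for e in adj[v]:
            if e[2] not in visited:
                heapq.heappush(heap, e)
    return tree

def install_cost(wavelengths):
    """Illustrative concave cost: each additional wavelength is cheaper."""
    return wavelengths ** 0.7

# With declining marginal cost, 4 wavelengths on one shared link cost less
# than 2 + 2 on two separate links, so aggregating traffic on a tree helps.
saving = 2 * install_cost(2) - install_cost(4)
```

Minimizing wavelength count treats every wavelength as equally expensive; once the cost curve is concave, the tree that aggregates demands can beat the wavelength-minimal routing.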
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges; which include, defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how to best integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) opposed to being contained in a single UAV (monolithic). The case study based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
Low-Cost Bio-Based Carbon Fiber for High-Temperature Processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naskar, Amit K.; Akato, Kokouvi M.; Tran, Chau D.
GrafTech International Holdings Inc. (GTI), worked with Oak Ridge National Laboratory (ORNL) under CRADA No. NFE-15-05807 to develop lignin-based carbon fiber (LBCF) technology and to demonstrate LBCF performance in high-temperature products and applications. This work was unique and different from other reported LBCF work in that this study was application-focused and scalability-focused. Accordingly, the executed work was aimed at meeting criteria for technology development, cost, and application suitability. The focus of this work was to demonstrate lab-scale LBCF from at least 4 different precursor feedstock sources that could meet the estimated production cost of $5.00/pound and have an ash level of less than 500 ppm in the carbonized insulation-grade fiber. Accordingly, a preliminary cost model was developed based on publicly available information. The team demonstrated that 4 lignin samples met the cost criteria, as highlighted in Table 1. In addition, the ash levels for the 4 carbonized lignin samples were below 500 ppm. Processing as-received lignin to produce a high purity lignin fiber was a significant accomplishment in that most industrial lignin, prior to purification, had greater than 4X the ash level needed for this project, and prior to this work there was not a clear path of how to achieve the purity target. The lab scale development of LBCF was performed with a specific functional application in mind, specifically for high temperature rigid insulation. GTI is currently a consumer of foreign-sourced pitch and rayon based carbon fibers for use in its high temperature insulation products, and the motivation was that LBCF had potential to decrease costs and increase product competitiveness in the marketplace through lowered raw material costs, lowered energy costs, and decreased environmental footprint. At the end of this project, the Technology Readiness Level (TRL) remained at 5 for LBCF in high temperature insulation.
Solomon, Daniela; Proudfoot, Judith; Clarke, Janine; Christensen, Helen
2015-11-11
Background The economic cost of depression is becoming an ever more important determinant for health policy and decision makers. Internet-based interventions with and without therapist support have been found to be effective options for the treatment of mild to moderate depression. With increasing demands on health resources and shortages of mental health care professionals, the integration of cost-effective treatment options such as Internet-based programs into primary health care could increase efficiency in terms of resource use and costs. Objective Our aim was to evaluate the cost-effectiveness of an Internet-based intervention (myCompass) for the treatment of mild-to-moderate depression compared to treatment as usual and cognitive behavior therapy in a stepped care model. Methods A decision model was constructed using a cost utility framework to show both costs and health outcomes. In accordance with current treatment guidelines, a stepped care model included myCompass as the first low-intensity step in care for a proportion of the model cohort, with participants moving from a low-intensity intervention to increasing levels of treatment. Model parameters were based on data from the recent randomized controlled trial of myCompass, which showed that the intervention reduced symptoms of depression, anxiety, and stress and improved work and social functioning for people with symptoms in the mild-to-moderate range. Results The average net monetary benefit (NMB) was calculated, identifying myCompass as the strategy with the highest net benefit. The mean incremental NMB per individual for the myCompass group was AUD 1165.88 compared to treatment as usual and AUD 522.58 for the cognitive behavioral therapy model. Conclusions Internet-based interventions can provide cost-effective access to treatment when provided as part of a stepped care model. Widespread dissemination of Internet-based programs can potentially reduce demands on primary and tertiary services and reduce unmet need. PMID:26561555
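The decision rule behind the reported comparison is the standard net monetary benefit, NMB = WTP × QALYs − cost, where WTP is the willingness-to-pay threshold per quality-adjusted life year; the strategy with the highest NMB is preferred. A sketch with hypothetical numbers (not the trial's data):

```python
def net_monetary_benefit(qalys, cost, wtp):
    """Net monetary benefit: NMB = WTP * QALYs - cost."""
    return wtp * qalys - cost

# Hypothetical strategies as (QALYs gained, cost in AUD), valued at an
# assumed willingness-to-pay of AUD 50,000 per QALY.
strategies = {
    "treatment_as_usual": (0.70, 4000.0),
    "stepped_care_internet": (0.72, 3800.0),
}
nmb = {name: net_monetary_benefit(q, c, 50_000)
       for name, (q, c) in strategies.items()}
best = max(nmb, key=nmb.get)  # strategy with the highest net benefit
```

Expressing both costs and outcomes on the monetary scale makes strategies directly comparable, which is how an incremental NMB per individual like the one reported can be computed.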
[The equivalence and interchangeability of medical articles].
Antonov, V S
2013-11-01
The information concerning the interchangeability of medical articles is highly valuable because it makes it possible to correlate most precisely medical articles with medical technologies and medical care standards and to optimize budget costs under public purchasing. The proposed procedure of determination of interchangeability is based on criteria of equivalence of prescriptions, functional technical and technological characteristics and effectiveness of functioning of medical articles.
An Efficient Scheduling Scheme on Charging Stations for Smart Transportation
NASA Astrophysics Data System (ADS)
Kim, Hye-Jin; Lee, Junghoon; Park, Gyung-Leen; Kang, Min-Jae; Kang, Mikyung
This paper proposes a reservation-based scheduling scheme for charging stations to decide the service order of multiple requests, aiming to improve satisfaction for electric vehicle users. The proposed scheme makes it possible for a customer to reduce charging cost and waiting time, while a station can increase the number of clients it serves. A linear rank function is defined based on estimated arrival time, waiting time bound, and the amount of power needed, reducing the scheduling complexity. On receiving requests from clients, the power station decides the charging order by the rank function and then replies to the requesters with the waiting time and cost it can guarantee. Each requester can then decide whether to charge at that station or try another. This scheduler can evolve to integrate new pricing policies and services, enriching the electric vehicle transport system.
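A linear rank function of the kind described can be sketched as follows; the weights and request tuples are hypothetical illustrations, not values from the paper.

```python
def rank(request, weights=(1.0, 1.0, 0.01)):
    """Linear rank over (estimated arrival time, waiting-time bound, needed
    power); the weights are hypothetical tuning parameters."""
    return sum(w * x for w, x in zip(weights, request))

def schedule(requests):
    """Charging order: serve requests in ascending rank."""
    return sorted(requests, key=rank)

# requests: (arrival time in min, waiting bound in min, needed power in kWh)
order = schedule([(10, 5, 20), (2, 3, 10), (5, 1, 30)])
```

Because the rank is a single linear score, ordering n requests costs only a sort, which is the complexity reduction the abstract refers to.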
Neural network-based optimal adaptive output feedback control of a helicopter UAV.
Nodland, David; Zargarzadeh, Hassan; Jagannathan, Sarangapani
2013-07-01
Helicopter unmanned aerial vehicles (UAVs) are widely used for both military and civilian operations. Because the helicopter UAVs are underactuated nonlinear mechanical systems, high-performance controller design for them presents a challenge. This paper introduces an optimal controller design via an output feedback for trajectory tracking of a helicopter UAV, using a neural network (NN). The output-feedback control system utilizes the backstepping methodology, employing kinematic and dynamic controllers and an NN observer. The online approximator-based dynamic controller learns the infinite-horizon Hamilton-Jacobi-Bellman equation in continuous time and calculates the corresponding optimal control input by minimizing a cost function, forward-in-time, without using the value and policy iterations. Optimal tracking is accomplished by using a single NN utilized for the cost function approximation. The overall closed-loop system stability is demonstrated using Lyapunov analysis. Finally, simulation results are provided to demonstrate the effectiveness of the proposed control design for trajectory tracking.
Study on multimodal transport route under low carbon background
NASA Astrophysics Data System (ADS)
Liu, Lele; Liu, Jie
2018-06-01
Low-carbon environmental protection is a worldwide focus of attention, and researchers continue to study carbon emissions from both production and daily life. However, there is little literature, at home or abroad, on multimodal transportation based on carbon emissions. This paper first introduces the theory of multimodal transportation and analyzes multimodal transport models both with and without carbon emissions taken into account. On this basis, a multi-objective 0-1 programming model minimizing both total transportation cost and total carbon emissions is proposed. Weighting is applied within the ideal point method to transform the multi-objective program into a single objective function, and the optimal trade-off between carbon emissions and transportation cost under different weights is determined by this variably weighted single objective. Based on the model and algorithm, an example is given and the results are analyzed.
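The weighted scalarization idea, collapsing cost and emissions into one objective and sweeping the weight, can be sketched with a toy 0-1 mode-choice model (all numbers illustrative, and the simple weighted sum stands in for the paper's ideal point method):

```python
from itertools import product

# Hypothetical route legs: each leg offers (cost, CO2 emissions) per mode.
legs = [
    [(100.0, 50.0), (60.0, 120.0)],  # leg 1: e.g. rail vs road
    [(80.0, 40.0), (50.0, 90.0)],    # leg 2
]

def best_plan(w):
    """Weighted-sum scalarization: pick one mode per leg (a 0-1 choice)
    minimizing w * total_cost + (1 - w) * total_emissions."""
    def objective(plan):
        cost = sum(legs[i][m][0] for i, m in enumerate(plan))
        co2 = sum(legs[i][m][1] for i, m in enumerate(plan))
        return w * cost + (1 - w) * co2
    return min(product(*(range(len(leg)) for leg in legs)), key=objective)
```

Sweeping w from 0 to 1 traces the trade-off between the two objectives: at w = 1 the cheapest modes win, at w = 0 the cleanest ones do.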
Viricel, Clément; de Givry, Simon; Schiex, Thomas; Barbe, Sophie
2018-02-20
Accurate and economic methods to predict the change in protein binding free energy upon mutation are imperative to accelerate the design of proteins for a wide range of applications. Free energy is defined by enthalpic and entropic contributions. Following recent progress in Artificial Intelligence-based algorithms for guaranteed NP-hard energy optimization and partition function computation, it becomes possible to quickly compute minimum energy conformations and to reliably estimate the entropic contribution of side-chains to the change in free energy of large protein interfaces. Using guaranteed Cost Function Network algorithms, Rosetta energy functions and Dunbrack's rotamer library, we developed and assessed EasyE and JayZ, two methods for binding affinity estimation that respectively ignore or include conformational entropic contributions, on a large benchmark of experimental binding affinity measures. While both approaches outperform most established tools, we observe that side-chain conformational entropy brings little or no improvement on most systems but becomes crucial in some rare cases. EasyE and JayZ are available as open-source Python/C++ code at sourcesup.renater.fr/projects/easy-jayz. Contact: thomas.schiex@inra.fr and sophie.barbe@insa-toulouse.fr. Supplementary data are available at Bioinformatics online.
Jia, Xiaofang; Dong, Shaojun; Wang, Erkang
2016-02-15
Electrochemical biosensors have played active roles at the forefront of bioanalysis because they have the potential to achieve sensitive, specific and low-cost detection of biomolecules and many others. Engineering the electrochemical sensing interface with functional nanomaterials leads to novel electrochemical biosensors with improved performances in terms of sensitivity, selectivity, stability and simplicity. Functional nanomaterials possess good conductivity, catalytic activity, biocompatibility and high surface area. Coupled with bio-recognition elements, these features can amplify signal transduction and biorecognition events, resulting in highly sensitive biosensing. Additionally, microfluidic electrochemical biosensors have attracted considerable attention on account of their miniature, portable and low-cost systems as well as high fabrication throughput and ease of scaleup. For example, electrochemical enzymetic biosensors and aptamer biosensors (aptasensors) based on the integrated microchip can be used for portable point-of-care diagnostics and environmental monitoring. This review is a summary of our recent progress in the field of electrochemical biosensors, including aptasensors, cytosensors, enzymatic biosensors and self-powered biosensors based on biofuel cells. We presented the advantages that functional nanomaterials and microfluidic chip technology bring to the electrochemical biosensors, together with future prospects and possible challenges. Copyright © 2015 Elsevier B.V. All rights reserved.
The Cost of Ankylosing Spondylitis in the UK Using Linked Routine and Patient-Reported Survey Data
Cooksey, Roxanne; Husain, Muhammad J.; Brophy, Sinead; Davies, Helen; Rahman, Muhammad A.; Atkinson, Mark D.; Phillips, Ceri J.; Siebert, Stefan
2015-01-01
Background Ankylosing spondylitis (AS) is a chronic inflammatory arthritis which typically begins in early adulthood and impacts on healthcare resource utilisation and the ability to work. Previous studies examining the cost of AS have relied on patient-reported questionnaires based on recall. This study uses a combination of patient-reported and linked-routine data to examine the cost of AS in Wales, UK. Methods Participants in an existing AS cohort study (n = 570) completed questionnaires regarding work status, out-of-pocket expenses, visits to health professionals and disease severity. Participants gave consent for their data to be linked to routine primary and secondary care clinical datasets. Health resource costs were calculated using a bottom-up micro-costing approach. Human capital costs methods were used to estimate work productivity loss costs, particularly relating to work and early retirement. Regression analyses were used to account for age, gender, and disease activity. Results The total cost of AS in the UK is estimated at £19,016 per patient per year, calculated to include GP attendance, administration costs and hospital costs derived from routine data records, plus patient-reported non-NHS costs, out-of-pocket AS-related expenses, early retirement, absenteeism, presenteeism and unpaid assistance costs. The majority of the cost (>80%) was as a result of work-related costs. Conclusion The major cost of AS is as a result of loss of working hours, early retirement and unpaid carer’s time. Therefore, much of AS costs are hidden and not easy to quantify. Functional impairment is the main factor associated with increased cost of AS. Interventions which keep people in work to retirement age and reduce functional impairment would have the greatest impact on reducing costs of AS. The combination of patient-reported and linked routine data significantly enhanced the health economic analysis, and this methodology can be applied to other chronic conditions.
PMID:26185984
Space Biology Initiative. Trade Studies, volume 2
NASA Technical Reports Server (NTRS)
1989-01-01
The six studies which are the subjects of this report are entitled: Design Modularity and Commonality; Modification of Existing Hardware (COTS) vs. New Hardware Build Cost Analysis; Automation Cost vs. Crew Utilization; Hardware Miniaturization versus Cost; Space Station Freedom/Spacelab Modules Compatibility vs. Cost; and Prototype Utilization in the Development of Space Hardware. The product of these six studies was intended to provide a knowledge base and methodology that enables equipment produced for the Space Biology Initiative program to meet specific design and functional requirements in the most efficient and cost effective form consistent with overall mission integration parameters. Each study promulgates rules of thumb, formulas, and matrices that serve as a handbook for the use and guidance of designers and engineers in design, development, and procurement of Space Biology Initiative (SBI) hardware and software.
Space Biology Initiative. Trade Studies, volume 1
NASA Technical Reports Server (NTRS)
1989-01-01
The six studies which are addressed are entitled: Design Modularity and Commonality; Modification of Existing Hardware (COTS) vs. New Hardware Build Cost Analysis; Automation Cost vs. Crew Utilization; Hardware Miniaturization versus Cost; Space Station Freedom/Spacelab Modules Compatibility vs. Cost; and Prototype Utilization in the Development of Space Hardware. The product of these six studies was intended to provide a knowledge base and methodology that enables equipment produced for the Space Biology Initiative program to meet specific design and functional requirements in the most efficient and cost effective form consistent with overall mission integration parameters. Each study promulgates rules of thumb, formulas, and matrices that serve as a handbook for the use and guidance of designers and engineers in design, development, and procurement of Space Biology Initiative (SBI) hardware and software.
Thalanany, Mariamma M; Mugford, Miranda; Hibbert, Clare; Cooper, Nicola J; Truesdale, Ann; Robinson, Steven; Tiruvoipati, Ravindranath; Elbourne, Diana R; Peek, Giles J; Clemens, Felicity; Hardy, Polly; Wilson, Andrew
2008-01-01
Background Extracorporeal Membrane Oxygenation (ECMO) is a technology used in treatment of patients with severe but potentially reversible respiratory failure. A multi-centre randomised controlled trial (CESAR) was funded in the UK to compare care including ECMO with conventional intensive care management. The protocol and funding for the CESAR trial included plans for economic data collection and analysis. Given the high cost of treatment, ECMO is considered an expensive technology for many funding systems. However, conventional treatment for severe respiratory failure is also one of the more costly forms of care in any health system. Methods/Design The objectives of the economic evaluation are to compare the costs of a policy of referral for ECMO with those of conventional treatment; to assess cost-effectiveness and the cost-utility at 6 months follow-up; and to assess the cost-utility over a predicted lifetime. Resources used by patients in the trial are identified. Resource use data are collected from clinical report forms and through follow up interviews with patients. Unit costs of hospital intensive care resources are based on parallel research on cost functions in UK NHS intensive care units. Other unit costs are based on published NHS tariffs. Cost effectiveness analysis uses the outcome: survival without severe disability. Cost utility analysis is based on quality adjusted life years gained based on the Euroqol EQ-5D at 6 months. Sensitivity analysis is planned to vary assumptions about transport costs and method of costing intensive care. Uncertainty will also be expressed in analysis of individual patient data. Probabilities of cost effectiveness given different funding thresholds will be estimated. Discussion In our view it is important to record our methods in detail and present them before publication of the results of the trial so that a record of detail not normally found in the final trial reports can be made available in the public domain. 
Trial Registrations The CESAR trial registration number is ISRCTN47279827. PMID:18447931
Hubig, Michael; Suchandt, Steffen; Adam, Nico
2004-10-01
Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so-called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.
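The invariance behind node potentials can be checked numerically: shifting arc costs by a potential, c'(u, v) = c(u, v) + π(u) − π(v), changes every flow discharging the same residue by the same amount, so cost differences between such flows are preserved. A small sketch on a four-node network (graph and numbers illustrative):

```python
def flow_cost(costs, flow):
    """Total cost of an arc flow: sum over arcs of unit cost times flow."""
    return sum(costs[a] * f for a, f in flow.items())

def reduce_costs(costs, potential):
    """Apply node potentials: c'(u, v) = c(u, v) + pi(u) - pi(v)."""
    return {(u, v): c + potential[u] - potential[v]
            for (u, v), c in costs.items()}

# Two flows discharging the same residue (one unit from node 0 to node 3):
costs = {(0, 1): 2.0, (1, 3): 1.0, (0, 2): 1.0, (2, 3): 3.0}
flow_a = {(0, 1): 1, (1, 3): 1, (0, 2): 0, (2, 3): 0}
flow_b = {(0, 1): 0, (1, 3): 0, (0, 2): 1, (2, 3): 1}

pi = {0: 5.0, 1: -2.0, 2: 1.0, 3: 0.0}
shifted = reduce_costs(costs, pi)
diff_before = flow_cost(costs, flow_a) - flow_cost(costs, flow_b)
diff_after = flow_cost(shifted, flow_a) - flow_cost(shifted, flow_b)
```

The potential terms telescope along any path between the same endpoints, which is why node potentials form part of the strictly stable cost differences discussed above.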
Assembling and Using an LED-Based Detector to Monitor Absorbance Changes during Acid-Base Titrations
ERIC Educational Resources Information Center
Santos, Willy G.; Cavalheiro, Éder T. G.
2015-01-01
A simple photometric assembly based on an LED as a light source and a photodiode as a detector is proposed in order to follow the absorbance changes as a function of the titrant volume added during the course of acid-base titrations in the presence of a suitable visual indicator. The simplicity and low cost of the electronic device allow the…
IEEE 802.21 Assisted Seamless and Energy Efficient Handovers in Mixed Networks
NASA Astrophysics Data System (ADS)
Liu, Huaiyu; Maciocco, Christian; Kesavan, Vijay; Low, Andy L. Y.
Network selection is the decision process by which a mobile terminal hands off between homogeneous or heterogeneous networks. With multiple networks available, the selection process must weigh factors such as network services and conditions, monetary cost, system conditions, and user preferences. In this paper, we investigate network selection using a cost function and information provided by IEEE 802.21. The cost function provides flexibility to balance different factors in decision making, and our research focuses on improving both the seamlessness and the energy efficiency of handovers. Our solution is evaluated using real WiFi, WiMAX, and 3G signal-strength traces. The results show that appropriate networks were selected according to the selection policies, that handovers were triggered at optimal times to increase overall network connectivity compared with traditional triggering schemes, and that the energy consumption of multi-radio devices, both during ongoing operation and during handovers, was reduced.
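A weighted cost function of this kind can be sketched in a few lines. The factor names, weights, and normalized scores below are illustrative assumptions, not the paper's actual parameters:

```python
# Hedged sketch of a network-selection cost function: cost terms
# (price, power draw) enter positively, benefit terms (bandwidth,
# signal) negatively; the candidate with the lowest cost wins.
# All factor values are assumed normalized to [0, 1].

def selection_cost(net, weights):
    return (weights["price"] * net["price"]
            + weights["power"] * net["power_draw"]
            - weights["bandwidth"] * net["bandwidth"]
            - weights["signal"] * net["signal"])

weights = {"price": 0.4, "power": 0.2, "bandwidth": 0.3, "signal": 0.1}
candidates = {
    "wifi":  {"price": 0.0, "power_draw": 0.3, "bandwidth": 0.9, "signal": 0.7},
    "wimax": {"price": 0.5, "power_draw": 0.6, "bandwidth": 0.7, "signal": 0.8},
    "3g":    {"price": 0.8, "power_draw": 0.4, "bandwidth": 0.3, "signal": 0.9},
}

best = min(candidates, key=lambda n: selection_cost(candidates[n], weights))
print(best)
```

Changing the weights changes the policy: a battery-constrained device would raise the power weight, and a streaming session the bandwidth weight.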
Biological filters and their use in potable water filtration systems in spaceflight conditions.
Thornhill, Starla G; Kumar, Manish
2018-05-01
Providing drinking water to space missions such as the International Space Station (ISS) is a costly requirement for human habitation. To limit the costs of water transport, wastewater is collected and purified using a variety of physical and chemical means. To date, sand-based biofilters have been designed to function against gravity, and biofilms have been shown to form in microgravity conditions. Development of a universal silver-recycling biological filter system that is able to function in both microgravity and full gravity conditions would reduce the costs incurred in removing organic contaminants from wastewater by limiting the energy and chemical inputs required. This paper aims to propose the use of a sand-substrate biofilter to replace chemical means of water purification on manned spaceflights. Copyright © 2018 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Hugo, Cherie; Isenring, Elisabeth; Miller, Michelle; Marshall, Skye
2018-05-01
Observational studies have shown that nutritional strategies to manage malnutrition may be cost-effective in aged care, but more robust economic data are needed to support and encourage translation to practice. Therefore, the aim of this systematic review is to compare the cost-effectiveness of implementing nutrition interventions targeting malnutrition in residential aged care homes versus usual care. A systematic literature review of studies published between January 2000 and August 2017 was conducted across 10 electronic databases. The Cochrane Risk of Bias tool and GRADE were used to evaluate the quality of the studies. Eight included studies (of 3,098 studies initially screened) reported on 11 intervention groups, evaluating the effect of modifications to the dining environment (n = 1), supplements (n = 5) and food-based interventions (n = 5). Interventions had a low cost of implementation (<£2.30/resident/day) and provided clinical improvement for a range of outcomes including weight, nutritional status and dietary intake. Supplements and food-based interventions further demonstrated a low cost per quality-adjusted life year or unit of physical function improvement. GRADE assessment revealed that the quality of the body of evidence that introducing malnutrition interventions, whether environmental, supplement-based or food-based, is cost-effective in aged care homes was low. This review suggests that supplement and food-based nutrition interventions in the aged care setting are clinically effective, have a low cost of implementation and may be cost-effective at improving clinical outcomes associated with malnutrition. More studies using well-defined frameworks for economic analysis, stronger study designs with improved quality, and validated malnutrition measures are needed to confirm and increase confidence in these findings.
Woo, Jean; Lau, Edith; Lau, Chak Sing; Lee, Polly; Zhang, James; Kwok, Timothy; Chan, Cynthia; Chiu, P; Chan, Kai Ming; Chan, A; Lam, D
2003-08-15
To determine the direct and indirect cost of osteoarthritis (OA) according to disease severity, and to estimate the total cost of the disease in Hong Kong. This study is a retrospective, cross-sectional, nonrandom, cohort design, with subjects stratified according to disease severity based on functional limitation and the presence or absence of a joint prosthesis. Subjects were recruited from primary care, geriatric medicine, rheumatology, and orthopedic clinics. There were 219 patients in the mild disease category, 290 patients in the severe category, and 65 patients with joint replacement. A questionnaire gathered information on demographic and socioeconomic characteristics, functional limitation, use of health and social services, and effect on occupation and living arrangements over the previous 12 months. Costs were calculated as direct and indirect. Low education and socioeconomic class were associated with more severe disease. OA affected family or close relationships in 44%. The average cost incurred as a result of side effects of medication was similar to the average cost of the medication itself. Excluding joint replacement, the direct costs ranged from HK$11,690 to HK$40,180 per person per year, and indirect costs from HK$3,300 to HK$6,640. The direct costs are comparable to those reported in Western countries; however, the ratio of direct to indirect costs is much higher than 1, in contrast to the greater indirect versus direct costs reported in white populations. The total cost expressed as a percentage of gross national product is also much lower in Hong Kong. The socioeconomic impact of OA in the Hong Kong population is comparable to that in Western countries, but the economic burden falls largely on the government, with patients having relatively low out-of-pocket expenditures.
Voss, John D; Nadkarni, Mohan M; Schectman, Joel M
2005-02-01
Academic medical centers face barriers to training physicians in systems- and practice-based learning competencies needed to function in the changing health care environment. To address these problems, at the University of Virginia School of Medicine the authors developed the Clinical Health Economics System Simulation (CHESS), a computerized, team-based, quasi-competitive simulator to teach the principles and practical application of health economics. CHESS simulates treatment costs to patients and society as well as physician reimbursement. It is scenario based, with residents grouped into three teams, each playing CHESS under a different (fee-for-service or capitated) reimbursement model. Teams view scenarios and select from two or three treatment options that are medically justifiable yet have different potential cost implications. CHESS displays physician reimbursement and patient and societal costs for each scenario, as well as costs and income summarized across all scenarios and extrapolated to a physician's entire patient panel. The learners are asked to explain these findings and may change treatment options and other variables such as panel size and case mix to conduct sensitivity analyses in real time. Evaluations completed in 2003 by 68 (94%) resident and faculty CHESS participants at 19 U.S. residency programs showed a preference for CHESS over a traditional lecture-and-discussion format for learning about medical decision making, physician reimbursement, patient costs, and societal costs. Ninety-eight percent reported increased knowledge of health economics after viewing the simulation. CHESS demonstrates the potential of computer simulation to teach health economics and other key elements of practice- and systems-based competencies.
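The contrast between the two reimbursement models the teams play under reduces to how income responds to utilization. A minimal sketch (the visit counts, fees, and panel figures are invented for illustration, not CHESS's actual scenario data):

```python
# Minimal sketch of fee-for-service vs. capitated reimbursement, the
# two models contrasted in CHESS. All dollar figures are invented.

def fee_for_service(visits, fee_per_visit):
    # Income scales with utilization: more visits, more revenue.
    return visits * fee_per_visit

def capitated(panel_size, per_member_per_month, months=12):
    # Income is fixed per enrolled patient, independent of utilization.
    return panel_size * per_member_per_month * months

ffs_income = fee_for_service(visits=4000, fee_per_visit=60)
cap_income = capitated(panel_size=1500, per_member_per_month=15)
print(ffs_income, cap_income)
```

Under fee-for-service, choosing a costlier treatment option raises physician income; under capitation it does not, which is the incentive difference the simulator makes visible.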
NASA Technical Reports Server (NTRS)
Lee, Timothy J.; Arnold, James O. (Technical Monitor)
1994-01-01
A new spin orbital basis is employed in the development of efficient open-shell coupled-cluster and perturbation theories that are based on a restricted Hartree-Fock (RHF) reference function. The spin orbital basis differs from the standard one in the spin functions that are associated with the singly occupied spatial orbital. The occupied orbital (in the spin orbital basis) is assigned the spin function δ+ = (α + β)/√2, while the unoccupied orbital is assigned the spin function δ- = (α - β)/√2. The doubly occupied and unoccupied orbitals (in the reference function) are assigned the standard α and β spin functions. The coupled-cluster and perturbation theory wave functions based on this set of "symmetric spin orbitals" exhibit much more symmetry than those based on the standard spin orbital basis. This, together with interacting space arguments, leads to a dramatic reduction in the computational cost for both coupled-cluster and perturbation theory. Additionally, perturbation theory based on "symmetric spin orbitals" obeys Brillouin's theorem provided that spin and spatial excitations are both considered. Other properties of the coupled-cluster and perturbation theory wave functions and models will be discussed.
Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel
2017-01-01
System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.
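The difference between the two metrics can be made concrete with a toy gene-reaction mapping. The genes, reactions, and fitness weights below are invented for illustration; the paper works with genome-scale flux balance models:

```python
# Toy contrast between "gene-loss cost" (a reaction fails only when the
# deleted gene is its sole catalyst, i.e. isoenzymes give full backup)
# and "function-loss cost" (isoenzymes treated as non-redundant, so any
# reaction the gene touches counts as potentially impaired).

reactions = {              # reaction -> genes able to catalyze it
    "R1": {"g1"},
    "R2": {"g2", "g3"},    # g2 and g3 are isoenzymes for R2
    "R3": {"g3"},
}
weight = {"R1": 1.0, "R2": 0.5, "R3": 0.8}   # fitness contribution per reaction

def gene_loss_cost(gene):
    # Count a reaction only if the deleted gene was its sole catalyst.
    return sum(w for r, w in weight.items() if reactions[r] == {gene})

def function_loss_cost(gene):
    # Count every reaction associated with the gene as impaired.
    return sum(w for r, w in weight.items() if gene in reactions[r])

for g in ("g1", "g2", "g3"):
    print(g, gene_loss_cost(g), function_loss_cost(g))
```

Under gene-loss cost the isoenzyme g2 appears dispensable (cost 0), whereas function-loss cost charges it for R2; this is the shift in assumption the abstract describes.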
The Economics of Cognitive Impairment: Volunteering and Cognitive Function in the HILDA Survey.
Hosking, Diane E; Anstey, Kaarin J
2016-01-01
The economic impact of older-age cognitive impairment has been estimated primarily by the direct and indirect costs associated with dementia care. Other potential costs associated with milder cognitive impairment in the community have received little attention. The aim was to quantify the cost of nonclinical cognitive impairment in a large population-based sample in order to more fully inform cost-effectiveness evaluations of interventions to maintain cognitive health. Volunteering by seniors has economic value, but those with lower cognitive function may contribute fewer hours. Relations between hours volunteering and cognitive impairment were assessed using the Household, Income and Labour Dynamics in Australia (HILDA) survey data. These findings were extrapolated to the Australian population to estimate one potential cost attributable to nonclinical cognitive impairment. In those aged ≥60 years in HILDA (n = 3,127), conservatively defined cognitive impairment was present in 3.8% of the sample. Impairment was defined by performance ≥1 standard deviation below the age- and education-adjusted mean on both the Symbol Digit Modalities Test and Backwards Digit Span test. In fully adjusted binomial regression models, impairment was associated with the probability of undertaking 1 h 9 min less volunteering a week compared to being nonimpaired (β = -1.15, 95% confidence interval -1.82 to -0.47, p = 0.001). In the population, 3.8% impairment equated to a probable loss of AUD 302,307,969 per annum, estimated from hours of volunteering valued at replacement cost. Nonclinical cognitive impairment in older age impacts on the nonmonetary economy via the probable loss of volunteering contributions. Valuing loss of contribution provides additional information for cost-effectiveness evaluations of research and action directed toward maintaining older-age cognitive functioning. © 2016 S. Karger AG, Basel.
The Closure of the Cycle: Enzymatic Synthesis and Functionalization of Bio-Based Polyesters.
Pellis, Alessandro; Herrero Acero, Enrique; Ferrario, Valerio; Ribitsch, Doris; Guebitz, Georg M; Gardossi, Lucia
2016-04-01
The polymer industry is under pressure to mitigate the environmental cost of petrol-based plastics. Biotechnologies contribute to the gradual replacement of petrol-based chemistry and the development of new renewable products, leading to the closure of the carbon cycle. An array of bio-based building blocks is already available on an industrial scale and is boosting the development of new generations of sustainable and functionally competitive polymers, such as polylactic acid (PLA). Biocatalysts add higher value to bio-based polymers by catalyzing not only their selective modification, but also their synthesis under mild and controlled conditions. The ultimate aim is the introduction of chemical functionalities on the surface of the polymer while retaining its bulk properties, thus enlarging the spectrum of advanced applications. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Dimitropoulos, Dimitrios
Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary.
Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
Naveršnik, Klemen; Mrhar, Aleš
2014-02-27
A new health care technology must be cost-effective in order to be adopted. If evidence regarding cost-effectiveness is uncertain, then the decision maker faces two choices: (1) adopt the technology and run the risk that it is less effective in actual practice, or (2) reject the technology and risk that potential health is forgone. A new depression eHealth service was found to be cost-effective in a previously published study. The results, however, were unreliable because they were based on a pilot clinical trial. A conservative decision maker would normally require stronger evidence for the intervention to be implemented. Our objective was to evaluate how to facilitate service implementation by shifting the burden of risk due to uncertainty to the service provider and to ensure that the intervention remains cost-effective during routine use. We propose a risk-sharing scheme, where the service cost depends on the actual effectiveness of the service in a real-life setting. Routine efficacy data can be used as the input to the cost-effectiveness model, which employs a mapping function to translate a depression-specific score into quality-adjusted life-years. The latter is the denominator in the cost-effectiveness ratio calculation, required by the health care decision maker. The output of the model is a "value graph", showing intervention value as a function of its observed (future) efficacy, using the €30,000 per quality-adjusted life-year (QALY) threshold. We found that the eHealth service should improve the patient's outcome by at least 11.9 points on the Beck Depression Inventory scale in order for the cost-effectiveness ratio to remain below the €30,000/QALY threshold. The value of a single-point improvement was found to be between €200 and €700, depending on depression severity at treatment start. The value of the eHealth service, based on the current efficacy estimates, is €1900, which is significantly above its estimated cost (€200).
The eHealth depression service is particularly suited to routine monitoring, since data can be gathered through the Internet within the service communication channels. This enables real-time cost-effectiveness evaluation and allows a value-based price to be established. We propose a novel pricing scheme where the price is set to a point in the interval between cost and value, which provides an economic surplus to both the payer and the provider. Such a business model will assure that a portion of the surplus is retained by the payer and not completely appropriated by the private provider. If the eHealth service were to turn out less effective than originally anticipated, then the price would be lowered in order to achieve the cost-effectiveness threshold and this risk of financial loss would be borne by the provider.
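The value-graph and risk-sharing logic can be sketched as follows. The linear BDI-to-QALY slope and the midpoint pricing rule are invented placeholders; the study's actual mapping function and any negotiated split of the surplus would replace them:

```python
# Sketch of the "value graph" idea: intervention value as a function of
# observed efficacy (BDI improvement), under the 30,000 EUR/QALY
# threshold. Slope and pricing rule are illustrative assumptions.

THRESHOLD = 30_000            # EUR per QALY (willingness-to-pay threshold)
QALY_PER_BDI_POINT = 0.0067   # assumed slope: ~201 EUR of value per point

def intervention_value(bdi_improvement):
    # Highest price at which the service stays under the threshold.
    return bdi_improvement * QALY_PER_BDI_POINT * THRESHOLD

def risk_sharing_price(observed_bdi_improvement, provider_cost):
    # Set the price between cost and value so both payer and provider
    # keep part of the surplus; the provider bears the downside risk
    # if observed efficacy turns out low.
    value = intervention_value(observed_bdi_improvement)
    return max(provider_cost, (provider_cost + value) / 2.0)

print(intervention_value(11.9), risk_sharing_price(11.9, 200.0))
```

As routine efficacy data accumulate, `observed_bdi_improvement` is updated and the price moves with it, which is the real-time value-based pricing the abstract describes.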
Using stochastic dynamic programming to support catchment-scale water resources management in China
NASA Astrophysics Data System (ADS)
Davidsen, Claus; Pereira-Cardenal, Silvio Javier; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter
2013-04-01
A hydro-economic modelling approach is used to optimize reservoir management at the river basin level. We demonstrate the potential of this integrated approach on the Ziya River basin, a complex basin on the North China Plain south-east of Beijing. The area is subject to severe water scarcity due to low and extremely seasonal precipitation, and the intense agricultural production is highly dependent on irrigation. Large reservoirs provide water storage for dry months, while groundwater and the external South-to-North Water Transfer Project are alternative sources of water. An optimization model based on stochastic dynamic programming has been developed. The objective is to minimize the total cost of supplying water to the users, while satisfying minimum ecosystem flow constraints. Each user group (agriculture, domestic and industry) is characterized by fixed demands, fixed water allocation costs for the different water sources (surface water, groundwater and external water) and fixed costs of water supply curtailment. The multiple reservoirs in the basin are aggregated into a single reservoir to reduce the dimensionality of the decision problem. Water availability is estimated using a hydrological model. The hydrological model is based on the Budyko framework and is forced with 51 years of observed daily rainfall and temperature data. Twenty-three years of observed discharge from an in-situ station located downstream of a remote mountainous catchment are used for model calibration. Runoff serial correlation is described by a Markov chain that is used to generate monthly runoff scenarios for the reservoir. The optimal costs at a given reservoir state and stage were calculated as the minimum sum of immediate and future costs. Based on the total costs for all states and stages, water value tables were generated that contain the marginal value of stored water as a function of the month, the inflow state and the reservoir state.
The water value tables are used to guide allocation decisions in simulation mode. The performance of the operation rules based on water value tables was evaluated. The approach was used to assess the performance of alternative development scenarios and infrastructure projects successfully in the case study region.
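The backward recursion that produces such water value tables can be sketched at toy scale. The storage grid, two-state Markov inflow chain, demand, and curtailment cost below are all invented; the real model uses calibrated hydrology, several user groups, and several supply sources:

```python
import numpy as np

# Toy stochastic dynamic programming recursion: for each stage (month),
# storage state and inflow state, choose the release minimizing
# immediate curtailment cost plus expected future cost; the water value
# table is the marginal value of one more unit of stored water.

S = np.arange(5)                    # discrete storage states
P = np.array([[0.7, 0.3],           # inflow Markov chain: dry <-> wet
              [0.4, 0.6]])
inflow = np.array([0, 2])           # inflow per month in each climate state
demand, curtail_cost = 2, 10.0      # units/month, cost per unit unserved
T = 12                              # monthly stages

V = np.zeros((len(S), 2))           # terminal cost-to-go
for t in reversed(range(T)):
    V_new = np.full_like(V, np.inf)
    for s in S:
        for k in range(2):          # current inflow (climate) state
            best = np.inf
            for release in range(s + inflow[k] + 1):
                s_next = min(s + inflow[k] - release, S[-1])
                immediate = curtail_cost * max(demand - release, 0)
                future = P[k] @ V[s_next]   # expectation over next inflow
                best = min(best, immediate + future)
            V_new[s, k] = best
    V = V_new

# Water value table: marginal value of stored water, by storage state
# (rows) and inflow state (columns).
water_value = V[:-1] - V[1:]
print(water_value)
```

In simulation mode, a release is accepted whenever the immediate benefit of the water exceeds its tabulated marginal value, which is how the tables guide allocation decisions.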
Cost-benefit decision circuitry: proposed modulatory role for acetylcholine.
Fobbs, Wambura C; Mizumori, Sheri J Y
2014-01-01
In order to select which action should be taken, an animal must weigh the costs and benefits of possible outcomes associated with each action. Such decisions, called cost-benefit decisions, likely involve several cognitive processes (including memory) and a vast neural circuitry. Rodent models have allowed researchers to begin to probe the neural basis of three forms of cost-benefit decision making: effort-, delay-, and risk-based decision making. In this review, we detail the current understanding of the functional circuits that subserve each form of decision making. We highlight the extensive literature by detailing the ability of dopamine to influence decisions by modulating structures within these circuits. Since acetylcholine projects to all of the same important structures, we propose several ways in which the cholinergic system may play a local modulatory role that will allow it to shape these behaviors. A greater understanding of the contribution of the cholinergic system to cost-benefit decisions will permit us to better link the decision and memory processes, and this will help us to better understand and/or treat individuals with deficits in a number of higher cognitive functions including decision making, learning, memory, and language. © 2014 Elsevier Inc. All rights reserved.
Optimal feedback control of turbulent channel flow
NASA Technical Reports Server (NTRS)
Bewley, Thomas; Choi, Haecheon; Temam, Roger; Moin, Parviz
1993-01-01
Feedback control equations were developed and tested for computing wall-normal control velocities to control turbulent flow in a channel with the objective of reducing drag. The technique used is the minimization of a 'cost functional', which is constructed to represent some balance of the drag integrated over the wall and the net control effort. A distribution of wall velocities is found which minimizes this cost functional at some time shortly in the future, based on current observations of the flow near the wall. Preliminary direct numerical simulations of the scheme applied to turbulent channel flow indicate that it provides approximately 17 percent drag reduction. The mechanism apparent when the scheme is applied to a simplified flow situation is also discussed.
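The structure of the minimization can be illustrated with a toy quadratic analogue. The quadratic drag surrogate and all numbers below are invented; the actual scheme minimizes a wall-drag integral via direct numerical simulation:

```python
import numpy as np

# Toy analogue of the cost-functional idea: J(phi) = a quadratic "drag"
# surrogate plus a penalty on net control effort, minimized by steepest
# descent over a vector of wall control velocities phi.

rng = np.random.default_rng(1)
n = 16
G = rng.normal(size=(n, n))
A = G @ G.T + n * np.eye(n)        # SPD surrogate for the drag response
b = rng.normal(size=n)
ell2 = 0.5                         # weight on control effort

def J(phi):
    drag = 0.5 * phi @ A @ phi - b @ phi   # surrogate "drag integral"
    return drag + ell2 * phi @ phi         # plus net control effort

def grad_J(phi):
    return A @ phi - b + 2.0 * ell2 * phi

H = A + 2.0 * ell2 * np.eye(n)
lr = 1.0 / np.linalg.eigvalsh(H).max()     # stable steepest-descent step

phi = np.zeros(n)                          # start with no control
for _ in range(500):
    phi -= lr * grad_J(phi)

print(J(phi) < J(np.zeros(n)))             # control lowers the cost
```

Raising `ell2` shifts the optimum toward weaker control, mirroring the trade-off between drag reduction and control effort in the cost functional.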
Boareto, Marcelo; Cesar, Jonatas; Leite, Vitor B P; Caticha, Nestor
2015-01-01
We introduce Supervised Variational Relevance Learning (Suvrel), a variational method to determine metric tensors to define distance based similarity in pattern classification, inspired in relevance learning. The variational method is applied to a cost function that penalizes large intraclass distances and favors small interclass distances. We find analytically the metric tensor that minimizes the cost function. Preprocessing the patterns by doing linear transformations using the metric tensor yields a dataset which can be more efficiently classified. We test our methods using publicly available datasets, for some standard classifiers. Among these datasets, two were tested by the MAQC-II project and, even without the use of further preprocessing, our results improve on their performance.
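The idea of learning a metric that penalizes intraclass spread and rewards interclass separation can be sketched with a diagonal metric. This is an illustrative variant inspired by the abstract, not the paper's exact analytic solution:

```python
import numpy as np

# Diagonal-metric sketch: down-weight features with large intraclass
# spread, up-weight features that separate classes, then rescale the
# data linearly before handing it to a standard classifier.

def diagonal_metric(X, y):
    X, y = np.asarray(X, float), np.asarray(y)
    d2 = (X[:, None, :] - X[None, :, :]) ** 2   # pairwise squared diffs
    same = y[:, None] == y[None, :]
    iu = np.triu_indices(len(y), k=1)           # each unordered pair once
    within = d2[iu][same[iu]].mean(axis=0)      # per-feature intraclass
    between = d2[iu][~same[iu]].mean(axis=0)    # per-feature interclass
    w = np.clip(between - within, 0.0, None)    # keep separating features
    return np.sqrt(w)                           # linear rescaling factors

# Two classes separated along feature 0; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1, (50, 2)),
               rng.normal([4, 0], 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

scale = diagonal_metric(X, y)
print(scale)   # the informative feature gets the larger weight
```

Multiplying each column of `X` by its entry of `scale` is the linear preprocessing step: distances computed on the transformed data then reflect the learned metric.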
Prime focus architectures for large space telescopes: reduce surfaces to save cost
NASA Astrophysics Data System (ADS)
Breckinridge, J. B.; Lillie, C. F.
2016-07-01
Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems to operate in the UV, optical, and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M/reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using the Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition to co-locate the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime focus large-aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.
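The cost-per-reflection argument reduces to a simple scaling: each extra surface of reflectance r multiplies throughput by r, and collecting area (hence diameter squared) must grow by 1/r to recover the lost photons. A quick sketch (the 8 m aperture and 90% reflectance are assumed illustrative values, not figures from the paper):

```python
# Aperture diameter needed to compensate for extra reflections:
# throughput falls as reflectance**n, and collecting area scales as
# D**2, so D must grow by reflectance**(-n/2).

def compensating_diameter(d_meters, extra_surfaces, reflectance=0.90):
    throughput_loss = reflectance ** extra_surfaces
    return d_meters / throughput_loss ** 0.5

d0 = 8.0                                   # nominal aperture, meters
for n in (1, 2, 3):
    print(n, round(compensating_diameter(d0, n), 3))
```

Since primary-mirror cost grows steeply with diameter, even a few percent of absorption per surface translates into the large per-reflection costs quoted above.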
Self-balancing dynamic scheduling of electrical energy for energy-intensive enterprises
NASA Astrophysics Data System (ADS)
Gao, Yunlong; Gao, Feng; Zhai, Qiaozhu; Guan, Xiaohong
2013-06-01
Balancing production and consumption with self-generation capacity in energy-intensive enterprises has huge economic and environmental benefits. However, it is a challenging task, since energy production and consumption must be balanced in real time according to the criteria specified by the power grid. In this article, a mathematical model for minimising the production cost with an exactly realisable energy delivery schedule is formulated, and a dynamic programming (DP)-based self-balancing dynamic scheduling algorithm is developed to obtain the complete solution set for such a problem with multiple optimal solutions. For each stage, a set of conditions is established to determine whether a feasible control trajectory exists. The state space under these conditions is partitioned into subsets and each subset is viewed as an aggregate state; the cost-to-go function is then expressed as a function of the initial and terminal generation levels of each stage and is proved to be a staircase function with finitely many steps. This avoids calculating the cost-to-go of every state and thus resolves the issue of dimensionality in the DP algorithm. In the backward sweep of the algorithm, an optimal policy is determined to maximise the realisability of the energy delivery schedule across the entire time horizon. Then, in the forward sweep, the feasible region of the optimal policy with the initial and terminal state at each stage is identified. Different feasible control trajectories can be identified from this region; therefore, optimisation over feasible control trajectories is performed on the region, with economic and reliability objectives taken into account.
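The key representational trick, storing a stage's cost-to-go as a staircase function with finitely many steps rather than per state, can be made concrete in a few lines (the breakpoints and cost values are invented for illustration):

```python
import bisect

# A staircase (piecewise-constant) cost-to-go: storing only the finite
# set of steps means evaluation is a breakpoint lookup rather than a
# sweep over every generation level.

class Staircase:
    """Right-continuous piecewise-constant function of generation level."""
    def __init__(self, breakpoints, values):
        assert len(values) == len(breakpoints) + 1
        self.breakpoints, self.values = breakpoints, values
    def __call__(self, x):
        # Values at a breakpoint belong to the piece on its right.
        return self.values[bisect.bisect_right(self.breakpoints, x)]

# Cost-to-go vs. terminal generation level (MW) at some stage:
cost_to_go = Staircase(breakpoints=[100.0, 250.0, 400.0],
                       values=[5000.0, 3200.0, 2100.0, 2600.0])

for level in (50.0, 100.0, 300.0, 450.0):
    print(level, cost_to_go(level))
```

Because each stage's cost-to-go has only finitely many steps, the backward sweep composes staircases instead of enumerating states, which is what sidesteps the dimensionality issue.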
Oestergaard, Lisa G; Christensen, Finn B; Nielsen, Claus V; Bünger, Cody E; Fruensgaard, Soeren; Sogaard, Rikke
2013-11-01
Economic evaluation conducted alongside a randomized controlled trial with 1-year follow-up. To examine the cost-effectiveness of initiating rehabilitation 6 weeks after surgery as opposed to 12 weeks after surgery. In a previously reported randomized controlled trial, we assessed the impact of timing of rehabilitation after a lumbar spinal fusion and found that a fast-track strategy led to poorer functional ability. Before making recommendations, it seems relevant to address the societal perspective including return to work, quality of life, and costs. A cost-effectiveness analysis and a cost-utility analysis were conducted. Eighty-two patients undergoing instrumented lumbar spinal fusion due to degenerative disc disease or spondylolisthesis (grade I or II) were randomized to an identical protocol of 4 sessions of group-based rehabilitation and were instructed in home exercises focusing on active stability training. Outcome parameters included functional disability (Oswestry Disability Index) and quality-adjusted life years. Health care and productivity costs were estimated from national registries and reported in euros. Costs and effects were transformed into net benefit. Bootstrapping was used to estimate 95% confidence intervals (95% CI). The fast-track strategy tended to be costlier by €6869 (95% CI, -4640 to 18,378) while at the same time leading to significantly poorer outcomes of functional disability by -9 points (95% CI, -18 to -3) and a tendency for a reduced gain in quality-adjusted life years by -0.04 (95% CI, -0.13 to 0.01). The overall probability for the fast-track strategy being cost-effective does not reach 10% at conventional thresholds for cost-effectiveness. Initiating rehabilitation at 6 weeks as opposed to 12 weeks after surgery is on average more costly and less effective. 
The uncertainty of this result did not seem to be sensitive to methodological issues, and clinical management teams that have already adopted fast-track rehabilitation strategies have reason to reconsider their choice.
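The 95% confidence intervals above come from bootstrapping costs and effects; a generic percentile-bootstrap sketch over per-patient net benefits (not the study's exact procedure) looks like:

```python
import random

def bootstrap_ci(deltas, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the mean of per-patient net benefits
    (net benefit = willingness-to-pay * effect difference - cost difference)."""
    rng = random.Random(seed)
    n = len(deltas)
    means = sorted(
        sum(rng.choice(deltas) for _ in range(n)) / n for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The probability of cost-effectiveness at a given threshold is simply the fraction of bootstrap replicates with positive mean net benefit.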
Kauvar, Arielle N B; Cronin, Terrence; Roenigk, Randall; Hruza, George; Bennett, Richard
2015-05-01
Basal cell carcinoma (BCC) is the most common cancer in the US population affecting approximately 2.8 million people per year. Basal cell carcinomas are usually slow-growing and rarely metastasize, but they do cause localized tissue destruction, compromised function, and cosmetic disfigurement. To provide clinicians with guidelines for the management of BCC based on evidence from a comprehensive literature review, and consensus among the authors. An extensive review of the medical literature was conducted to evaluate the optimal treatment methods for cutaneous BCC, taking into consideration cure rates, recurrence rates, aesthetic and functional outcomes, and cost-effectiveness of the procedures. Surgical approaches provide the best outcomes for BCCs. Mohs micrographic surgery provides the highest cure rates while maximizing tissue preservation, maintenance of function, and cosmesis. Mohs micrographic surgery is an efficient and cost-effective procedure and remains the treatment of choice for high-risk BCCs and for those in cosmetically sensitive locations. Nonsurgical modalities may be used for low-risk BCCs when surgery is contraindicated or impractical, but the cure rates are lower.
Probabilistic distance-based quantizer design for distributed estimation
NASA Astrophysics Data System (ADS)
Kim, Yoon Hak
2016-12-01
We consider the iterative design of independently operating local quantizers at nodes that must cooperate without interaction to achieve application objectives in distributed estimation systems. As a new cost function, we suggest a probabilistic distance between the posterior distribution and its quantized counterpart, expressed as the Kullback-Leibler (KL) divergence. We first show that minimizing the KL divergence in the cyclic generalized Lloyd design framework is equivalent to maximizing the average logarithm of the quantized posterior distribution, which can be further simplified computationally in our iterative design. We propose an iterative design algorithm that seeks to maximize this simplified form of the quantized posterior distribution and argue that, owing to the convexity of the cost function, the algorithm converges to a global optimum and generates the most informative quantized measurements. We also provide an independent encoding technique that enables minimization of the cost function and can be efficiently simplified for practical use in power-constrained nodes. We finally demonstrate through extensive experiments a clear advantage in estimation performance over typical designs and previously published design techniques.
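The KL-divergence cost can be sketched for a discrete posterior quantized to a piecewise-constant distribution over partition cells; the partition and probabilities below are illustrative, not from the paper:

```python
import math

def quantized_kl(p, cells):
    """KL(p || q) where q is the piecewise-constant quantization of the
    discrete posterior p: q is constant on each cell and preserves the
    cell's total probability mass."""
    kl = 0.0
    for cell in cells:
        mass = sum(p[i] for i in cell)
        q = mass / len(cell)  # constant level on this cell
        for i in cell:
            if p[i] > 0:
                kl += p[i] * math.log(p[i] / q)
    return kl
```

Since the entropy term sum_i p_i log p_i is fixed by the posterior, minimizing this KL divergence is the same as maximizing the average log of the quantized posterior, which is the equivalence the design framework exploits.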
Digital robust active control law synthesis for large order systems using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
1987-01-01
This paper presents a direct digital control law synthesis procedure for a large-order, sampled-data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints with respect to the digital control law design variables are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables; hence a stable classical control law as well as an estimator-based full- or reduced-order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of being lumped into the cost function. This feature can be used to modify a control law to meet individual root-mean-square response limitations as well as minimum singular-value restrictions. Low-order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
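The role of the discrete Liapunov (Lyapunov) equation in evaluating a quadratic cost can be shown in the scalar case, where p = a p a + q has a closed form; this is a textbook identity, not the paper's full gradient computation:

```python
def discrete_lyapunov_scalar(a, q):
    """Solve p = a*p*a + q for the stable scalar closed loop x+ = a*x."""
    assert abs(a) < 1.0, "closed loop must be stable"
    return q / (1.0 - a * a)

def lq_cost(a, q, x0):
    """Quadratic cost sum_k q*x_k^2 along x_{k+1} = a*x_k equals p*x0^2,
    so the cost (and, by differentiating, its gradient) comes from the
    Lyapunov solution rather than from simulating the response."""
    return discrete_lyapunov_scalar(a, q) * x0 * x0
```

For a = 0.5, q = 1, x0 = 1 the Lyapunov value 4/3 matches the geometric series sum of the simulated squared states.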
Lung tumor segmentation in PET images using graph cuts.
Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan
2013-03-01
The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy c-means, region-based active contour, and tumor-customized downhill. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
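A toy version of such a segmentation energy on a 1-D line of voxels, with a squared-SUV data term and a label-change smoothness term (the paper's SUV cost and monotonic-downhill feature are more elaborate; all values here are invented):

```python
def segmentation_energy(suv, labels, tumor_mean, lam=1.0):
    """Toy graph-cut-style energy on a 1-D voxel line.
    Data term: squared SUV distance to the tumor mean for foreground
    voxels, and to a zero-uptake background otherwise.
    Smoothness term: a penalty lam for each label change between
    neighboring voxels."""
    data = sum(
        (s - tumor_mean) ** 2 if l == 1 else s ** 2
        for s, l in zip(suv, labels)
    )
    smooth = lam * sum(1 for a, b in zip(labels, labels[1:]) if a != b)
    return data + smooth
```

Graph cuts finds the labeling minimizing such an energy exactly; here the correct tumor/background split scores lower than labeling everything tumor.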
NASA Astrophysics Data System (ADS)
Yu, Wan-Ting; Yu, Hong-yi; Du, Jian-Ping; Wang, Ding
2018-04-01
The Direct Position Determination (DPD) algorithm has been demonstrated to achieve better accuracy when signal waveforms are known. However, the signal waveform is difficult to know completely in the actual positioning process. To solve this problem, we propose a DPD method for digital modulation signals based on an improved particle swarm optimization algorithm. First, a DPD model is established for known modulation signals and a cost function is obtained based on symbol estimation. Second, as the optimization of the cost function is a nonlinear integer optimization problem, an improved Particle Swarm Optimization (PSO) algorithm is used for the optimal symbol search. Simulations are carried out to show the higher positioning accuracy of the proposed DPD method and the convergence of the fitness function under different inertia weights and population sizes. On the one hand, the proposed algorithm can take full advantage of the signal feature to improve positioning accuracy. On the other hand, the improved PSO algorithm can improve the efficiency of the symbol search by nearly one hundred times to achieve a global optimal solution.
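A minimal rounded-position PSO over an integer symbol alphabet, in the spirit of the described symbol search (the inertia and acceleration constants are conventional defaults, not the paper's tuned values):

```python
import random

def integer_pso(cost, dim, symbols, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm over integer symbol vectors: positions move
    continuously but are rounded and clipped to the symbol alphabet
    before each cost evaluation."""
    rng = random.Random(seed)
    lo, hi = min(symbols), max(symbols)
    clip = lambda v: max(lo, min(hi, int(round(v))))
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pcost = [cost([clip(v) for v in p]) for p in pos]
    g = pbest[pcost.index(min(pcost))][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost([clip(v) for v in pos[i]])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
        g = pbest[pcost.index(min(pcost))][:]
    return [clip(v) for v in g], min(pcost)
```

In the DPD setting the cost function would score candidate symbol sequences against the received data; here any integer cost function can be plugged in.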
Approximate analytical relationships for linear optimal aeroelastic flight control laws
NASA Astrophysics Data System (ADS)
Kassem, Ayman Hamdy
1998-09-01
This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.
Medicare capitation model, functional status, and multiple comorbidities: model accuracy
Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena
2012-01-01
Objective: This study examined financial implications of the CMS Hierarchical Condition Categories (CMS-HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design: The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on prior evidence of possible synergy between these conditions and activities of daily living (ADL) deficiencies, and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, and CHF and dementia. Methods: For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual's annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results: The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure, and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less-than-optimal adjustment for these chronic conditions. Conclusions: Information about beneficiary functional status should be incorporated in reimbursement models, since underpaying providers for caring for populations with multiple comorbidities may create severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646
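The actual Medicare cost ratio described in the Methods is a simple normalization; a sketch with invented cost figures:

```python
def actual_cost_ratios(annual_costs):
    """Each beneficiary's annualized cost divided by the sample mean cost,
    as in the study's actual Medicare cost ratio."""
    mean = sum(annual_costs) / len(annual_costs)
    return [c / mean for c in annual_costs]

def payment_gap(actual_ratio, hcc_ratio):
    """Positive when the risk-adjustment model underpredicts the actual
    cost ratio, i.e. when capitation would underpay for this beneficiary."""
    return actual_ratio - hcc_ratio
```

A beneficiary whose actual ratio is 1.5 but whose model-predicted ratio is 1.2 would be underpaid by 30% of the mean annual cost.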
Cieza, Alarcos; Baldwin, David S.
2017-01-01
Development of payment systems for mental health services has been hindered by limited evidence for the utility of diagnosis or symptoms in predicting costs of care. We investigated the utility of functioning information in predicting costs for patients with mood and anxiety disorders. This was a prospective cohort study involving 102 adult patients attending a tertiary referral specialist clinic for mood and anxiety disorders. The main outcome was total costs, calculated by applying unit costs to healthcare use data. After adjusting for covariates, a significant total costs association was yielded for functioning (eβ=1.02; 95% confidence interval: 1.01–1.03), but not depressive symptom severity or anxiety symptom severity. When we accounted for the correlations between the main independent variables by constructing an abridged functioning metric, a significant total costs association was again yielded for functioning (eβ=1.04; 95% confidence interval: 1.01–1.09), but not symptom severity. The utility of functioning in predicting costs for patients with mood and anxiety disorders was supported. Functioning information could be useful within mental health payment systems. PMID:28383309
A space-based public service platform for terrestrial rescue operations
NASA Technical Reports Server (NTRS)
Fleisig, R.; Bernstein, J.; Cramblit, D. C.
1977-01-01
The space-based Public Service Platform (PSP) is a multibeam, high-gain communications relay satellite that can provide a variety of functions for a large number of people on earth equipped with extremely small, very low cost transceivers. This paper describes the PSP concept, the rationale used to derive the concept, the criteria for selecting specific communication functions to be performed, and the advantages of performing such functions via satellite. The discussion focuses on the benefits of using a PSP for natural disaster warning; control of attendant rescue/assistance operations; and rescue of people in downed aircraft, aboard sinking ships, lost or injured on land.
Crystal structure prediction supported by incomplete experimental data
NASA Astrophysics Data System (ADS)
Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji
2018-05-01
We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize against theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined from partial experimental data that would be wholly insufficient for conventional structure analysis. In particular, we define the cost function using a "crystallinity" figure of merit formulated from only the peak positions within a small range of the x-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently with very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
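The weighted-sum cost can be sketched with a crude peak-matching penalty standing in for the paper's crystallinity figure of merit (the weight, tolerance, and peak lists below are illustrative):

```python
def structure_cost(energy, peaks_calc, peaks_obs, weight=0.5, tol=0.5):
    """Weighted sum of a potential energy and a peak-position penalty.
    The penalty is the fraction of observed diffraction peaks with no
    computed peak within tol -- a crude stand-in for the paper's
    'crystallinity' figure of merit."""
    missed = sum(
        1 for p in peaks_obs
        if all(abs(p - q) > tol for q in peaks_calc)
    )
    penalty = missed / len(peaks_obs)
    return weight * energy + (1.0 - weight) * penalty
```

A candidate structure that reproduces every observed peak is scored on energy alone; each unexplained peak adds to the cost, steering the search toward structures consistent with the partial data.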
An Adaptive ANOVA-based PCKF for High-Dimensional Nonlinear Inverse Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
LI, Weixuan; Lin, Guang; Zhang, Dongxiao
2014-02-01
The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF to be a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos bases in the expansion helps to capture uncertainty more accurately but increases computational cost. Basis selection is particularly important for high-dimensional stochastic problems because the number of polynomial chaos bases required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE bases are pre-set based on users' experience. Also, for sequential data assimilation problems, the bases kept in the PCE expansion remain unchanged across Kalman filter loops, which can limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE bases for different problems and automatically adjusts the number of bases in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is better suited to solving inverse problems.
The new algorithm was tested on different examples and demonstrated great effectiveness in comparison with non-adaptive PCKF and EnKF algorithms.
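The core idea of approximating a high-dimensional function by a sum of low-dimensional ones can be illustrated with a first-order cut-HDMR/ANOVA expansion around an anchor point (a standard construction, not the authors' adaptive criterion):

```python
def anova_first_order(f, anchor, x):
    """First-order functional ANOVA (cut-HDMR) approximation of f at x:
    f(x) ~ f(c) + sum_i [ f(c with its i-th coordinate set to x_i) - f(c) ],
    where c is the anchor point. Each summand is a 1-D function, so the
    expensive expansion is done only on low-dimensional pieces."""
    f0 = f(anchor)
    total = f0
    for i, xi in enumerate(x):
        point = list(anchor)
        point[i] = xi  # vary one coordinate at a time
        total += f(point) - f0
    return total
```

The first-order expansion is exact for additive functions; an adaptive criterion decides which higher-order interaction terms are worth adding.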
Feasibility and Supply Analysis of U.S. Geothermal District Heating and Cooling System
NASA Astrophysics Data System (ADS)
He, Xiaoning
Geothermal energy is a globally distributed sustainable energy source with the advantages of stable base-load energy production with a high capacity factor and zero SOx, CO, and particulate emissions. It can provide a potential solution to the depletion of fossil fuels and air pollution problems. The geothermal district heating and cooling system is one of the most common applications of geothermal energy, and consists of geothermal wells to provide hot water from a fractured geothermal reservoir, a surface energy distribution system for hot water transmission, and heating/cooling facilities to provide water and space heating as well as air conditioning for residential and commercial buildings. To gain wider recognition for the geothermal district heating and cooling (GDHC) system, the potential to develop such a system was evaluated in the western United States and in the state of West Virginia. The geothermal resources were categorized into identified hydrothermal resources, undiscovered hydrothermal resources, near-hydrothermal enhanced geothermal system (EGS) resources, and deep EGS resources. Reservoir characteristics of the first three categories were estimated individually, and their thermal potential calculated. A cost model for such a system was developed for technical performance and economic analysis at each geothermally active location. A supply curve for the system was then developed, establishing the quantity and the cost of potential geothermal energy which can be used for the GDHC system. A West Virginia University (WVU) case study was performed to compare the competitiveness of a geothermal energy system to the current steam-based system. An Aspen Plus model was created to simulate the year-round campus heating and cooling scenario. Five cases of varying water flow rates and temperatures were simulated to find the lowest levelized cost of heat (LCOH) for the WVU case study.
The model was then used to derive the levelized cost of heat as a function of population density at a constant geothermal gradient. By applying such functions in West Virginia at the census-tract level, the most promising census tracts in WV for the development of geothermal district heating and cooling systems were mapped. This study is unique in that its purpose was to utilize supply analyses for GDHC systems and determine an appropriate economic assessment of the viability and sustainability of the systems. Sensitivity analysis found that market energy demand, production temperature, and project lifetime have negative effects on the levelized cost, while drilling cost, discount rate, and capital cost have positive effects. Moreover, increasing the energy demand is the most effective way to decrease the levelized cost. The derived levelized cost function shows that for EGS-based systems, population density has a strong negative effect on the LCOH at any geothermal gradient, while the gradient only has a negative effect on the LCOH at low population density.
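A standard levelized-cost-of-heat calculation consistent with the trends reported above (all input values are illustrative): discounted lifetime costs divided by discounted lifetime heat delivered.

```python
def lcoh(capital, annual_om, annual_heat, rate, years):
    """Levelized cost of heat: (capital + discounted O&M) divided by
    discounted heat delivered over the project lifetime."""
    disc = [(1.0 + rate) ** -t for t in range(1, years + 1)]
    cost = capital + annual_om * sum(disc)
    heat = annual_heat * sum(disc)
    return cost / heat
```

Raising the annual heat demand lowers the LCOH because the fixed capital is spread over more delivered energy, matching the finding that increasing energy demand is the most effective lever.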
Goh, Joshua O S; Su, Yu-Shiang; Tang, Yong-Jheng; McCarrey, Anna C; Tereshchenko, Alexander; Elkins, Wendy; Resnick, Susan M
2016-12-07
Aging compromises the frontal, striatal, and medial temporal areas of the reward system, impeding accurate value representation and feedback processing critical for decision making. However, substantial variability characterizes age-related effects on the brain so that some older individuals evince clear neurocognitive declines whereas others are spared. Moreover, the functional correlates of normative individual differences in older-adult value-based decision making remain unclear. We performed a functional magnetic resonance imaging study in 173 human older adults during a lottery choice task in which costly to more desirable stakes were depicted using low to high expected values (EVs) of points. Across trials that varied in EVs, participants decided to accept or decline the offered stakes to maximize total accumulated points. We found that greater age was associated with less optimal decisions, accepting stakes when losses were likely and declining stakes when gains were likely, and was associated with increased frontal activity for costlier stakes. Critically, risk preferences varied substantially across older adults and neural sensitivity to EVs in the frontal, striatal, and medial temporal areas dissociated risk-aversive from risk-taking individuals. Specifically, risk-averters increased neural responses to increasing EVs as stakes became more desirable, whereas risk-takers increased neural responses with decreasing EV as stakes became more costly. Risk preference also modulated striatal responses during feedback with risk-takers showing more positive responses to gains compared with risk-averters. Our findings highlight the frontal, striatal, and medial temporal areas as key neural loci in which individual differences differentially affect value-based decision-making ability in older adults. Frontal, striatal, and medial temporal functions implicated in value-based decision processing of rewards and costs undergo substantial age-related changes. 
However, age effects on brain function and cognition differ across individuals. How this normative variation relates to older-adult value-based decision making is unclear. We found that although the ability to make optimal decisions declines with age, there is still much individual variability in how this deterioration occurs. Critically, whereas risk-averters showed increased neural activity to increasingly valuable stakes in frontal, striatal, and medial temporal areas, risk-takers instead increased activity as stakes became more costly. Such distinct functional decision-making processing in these brain regions across normative older adults may reflect individual differences in susceptibility to age-related brain changes associated with incipient cognitive impairment. Copyright © 2016 the authors 0270-6474/16/3612498-12$15.00/0.
Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests
NASA Technical Reports Server (NTRS)
Douglas, Freddie; Bourgeois, Edit Kaminsky
2005-01-01
The Highly Accurate Cost Estimating Model (HACEM) is a software system for estimating the costs of testing rocket engines and components at Stennis Space Center. HACEM is built on a foundation of adaptive-network-based fuzzy inference systems (ANFIS), a hybrid software concept that combines the adaptive capabilities of neural networks with the ease of development and additional benefits of fuzzy-logic-based systems. In ANFIS, fuzzy inference systems are trained by use of neural networks. HACEM includes selectable subsystems that utilize various numbers and types of inputs, various numbers of fuzzy membership functions, and various input-preprocessing techniques. The inputs to HACEM are parameters of specific tests or series of tests. These parameters include test type (component or engine test), number and duration of tests, and thrust level(s) (in the case of engine tests). The ANFIS in HACEM are trained by use of sets of these parameters, along with costs of past tests. Thereafter, the user feeds HACEM a simple input text file that contains the parameters of a planned test or series of tests, the user selects the desired HACEM subsystem, and the subsystem processes the parameters into an estimate of cost(s).
18 CFR 11.12 - Determination of section 10(f) costs.
Code of Federal Regulations, 2010 CFR
2010-04-01
... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrison, T. J.
2014-02-01
The cost of nuclear power is a straightforward yet complicated topic. It is straightforward in that the cost of nuclear power is a function of the cost to build the nuclear power plant, the cost to operate and maintain it, and the cost to provide fuel for it. It is complicated in that some of those costs are not necessarily known, introducing uncertainty into the analysis. For large light water reactor (LWR)-based nuclear power plants, the uncertainty is mainly contained within the cost of construction. The typical costs of operations and maintenance (O&M), as well as fuel, are well known based on the current fleet of LWRs. However, the last currently operating reactor to come online was Watts Bar 1 in May 1996; thus, the expected construction costs for gigawatt (GW)-class reactors in the United States are based on information nearly two decades old. Extrapolating construction, O&M, and fuel costs from GW-class LWRs to LWR-based small modular reactors (SMRs) introduces even more complication. The per-installed-kilowatt construction costs for SMRs are likely to be higher than those for the GW-class reactors based on the property of the economy of scale. Generally speaking, the economy of scale is the tendency for overall costs to increase slower than the overall production capacity. For power plants, this means that doubling the power production capacity would be expected to cost less than twice as much. Applying this property in the opposite direction, halving the power production capacity would be expected to cost more than half as much. This can potentially make the SMRs less competitive in the electricity market against the GW-class reactors, as well as against other power sources such as natural gas and subsidized renewables. One factor that can potentially aid the SMRs in achieving economic competitiveness is an economy of numbers, as opposed to the economy of scale, associated with learning curves.
The basic concept of the learning curve is that the more a new process is repeated, the more efficient the process can be made. Assuming that efficiency directly relates to cost means that the more a new process is repeated successfully and efficiently, the less costly the process can be made. This factor ties directly into the factory fabrication and modularization aspect of the SMR paradigm: manufacturing serial, standardized, identical components for use in nuclear power plants can allow the SMR industry to use learning curves to predict and optimize deployment costs.
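Wright's classic learning-curve model captures this economy-of-numbers argument: each doubling of cumulative output multiplies unit cost by a fixed learning rate. A sketch with an illustrative 90% learning rate (the rate is an assumption, not a figure from the source):

```python
import math

def unit_cost(first_unit_cost, n, learning_rate=0.9):
    """Wright's learning curve: the n-th unit costs
    first_unit_cost * n ** log2(learning_rate), so each doubling of
    cumulative output multiplies unit cost by learning_rate
    (0.9 = a 10% saving per doubling)."""
    b = math.log(learning_rate, 2)
    return first_unit_cost * n ** b
```

Under a 90% curve, the second unit costs 90% of the first and the fourth costs 81%, which is the mechanism by which serial SMR fabrication could offset the diseconomy of smaller plant scale.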
Orbit Clustering Based on Transfer Cost
NASA Technical Reports Server (NTRS)
Gustafson, Eric D.; Arrieta-Camacho, Juan J.; Petropoulos, Anastassios E.
2013-01-01
We propose using cluster analysis to perform quick screening for combinatorial global optimization problems. The key missing component currently preventing the use of cluster analysis in this context is the lack of a usable metric function that defines the cost to transfer between two orbits. We study several proposed metrics and clustering algorithms, including k-means and the expectation-maximization algorithm. We also show that proven heuristic methods such as the Q-law can be modified to work with cluster analysis.
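A plain k-means sketch over vectors of scaled orbital elements; the Euclidean distance here is only a placeholder for the transfer-cost metric whose absence the abstract highlights (all points are invented):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on vectors (e.g. scaled orbital elements). The
    squared Euclidean distance stands in for a true orbit-transfer-cost
    metric; a non-Euclidean metric would require k-medoids instead."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, clusters
```

For screening, each cluster's representative orbit can then stand in for its members when enumerating candidate transfer sequences.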
DOD Competitive Sourcing: Questions About Goals, Pace, and Risks of Key Reform Initiative.
1999-02-01
internal audit, and the function being studied. To the extent existing in-house resources are limited, if resources need to be shifted to meet new...contractor support costs and $24 million for existing in-house staff—to compete 4,000 positions in a multifunction, multilocation study, at least $7,000...competitions and of personnel separation costs on their installations. Army officials, based on work by the Army Audit Agency, have expressed concern that
Courses of Action to Optimize Heavy Bearings Cages
NASA Astrophysics Data System (ADS)
Szekely, V. G.
2016-11-01
The global expansion of industry, economics, and technology drives the need to develop products, technologies, processes, and methods that ensure increased performance, lower manufacturing costs, and synchronization of the main costs with the elementary values corresponding to utilization. The development trend of the heavy bearing industry and the wide use of bearings make it necessary to choose the most appropriate material for a given application in order to meet the cumulative requirements of durability, reliability, strength, etc. Evaluating commonly known or new materials against cost, machinability, and the technological process is therefore a fundamental criterion. To ensure the most effective basis for the decision regarding the heavy bearing cage, the functions of the product are first established, and a comparative analysis of candidate materials is then made to identify those that best satisfy the product functions. The decision to select the most appropriate material rests largely on weighing material costs against the costs of the manufacturing process by which the half-finished material becomes a finished product. The study is oriented towards a creative approach, especially towards innovation and reengineering, using specific techniques and methods applied in inventics. The main target is to find new, efficient, and reliable constructive and/or technological solutions consistent with the concept of sustainable development.
Medical technology management: from planning to application.
David, Y; Jahnke, E
2005-01-01
Appropriate deployment of technological innovation contributes to improvement in the quality of healthcare delivered, the containment of cost, and access to the healthcare system. Hospitals have been allocating a significant portion of their resources to procuring and managing capital assets; they are continuously faced with demands for new medical equipment and are asked to manage existing inventory for which they are not well prepared. To objectively manage their investment, hospitals are developing medical technology management programs that need pertinent information and planning methodology for integrating new equipment into existing operations as well as for optimizing costs of ownership of all equipment. Clinical engineers can identify technological solutions based on the matching of new medical equipment with the hospital's objectives. They can review their institution's overall technological position, determine strengths and weaknesses, develop equipment-selection criteria, supervise installations, train users and monitor post-procurement performance to ensure that goals are met. This program, together with cost accounting analysis, will objectively guide the capital assets decision-making process. Cost accounting analysis is a multivariate function that includes determining the amount of funding to be allocated annually for medical equipment acquisition and replacement, based upon a strategic plan and financial resources. Often this function works closely with clinical engineering to establish equipment useful life and to prioritize acquisition, upgrade, and replacement of inventory within budget confines and without conducting time-consuming individual financial capital project evaluations.
The evolution of human phenotypic plasticity: age and nutritional status at maturity.
Gage, Timothy B
2003-08-01
Several evolutionary optimal models of human plasticity in age and nutritional status at reproductive maturation are proposed and their dynamics examined. These models differ from previously published models because fertility is not assumed to be a function of body size or nutritional status. Further, the models are based on explicitly human demographic patterns, that is, model human life-tables, model human fertility tables, and a nutrient flow-based model of maternal nutritional status. Infant survival (instead of fertility, as in previous models) is assumed to be a function of maternal nutritional status. Two basic models are examined. In the first, the cost of reproduction is assumed to be a constant proportion of total nutrient flow. In the second, the cost of reproduction is constant for each birth. The constant proportion model predicts a negative slope of age and nutritional status at maturation. The constant cost per birth model predicts a positive slope of age and nutritional status at maturation. Either model can account for the secular decline in menarche observed over the last several centuries in Europe. A search of the growth literature failed to find definitive empirical documentation of human phenotypic plasticity in age and nutritional status at maturation. Most research strategies confound genetics with phenotypic plasticity. The one study that reports secular trends suggests a marginally insignificant but positive slope. This tends to support the constant cost per birth model.
Hime, Neil J; Fitzgerald, Dominic; Robinson, Paul; Selvadurai, Hiran; Van Asperen, Peter; Jaffé, Adam; Zurynski, Yvonne
2014-03-19
Rare chronic diseases of childhood are often complex and associated with multiple health issues. Such conditions place significant demands on health services, but the degree of these demands is seldom reported. This study details the utilisation of hospital services and associated costs in a single case of surfactant protein C deficiency, an example of childhood interstitial lung disease. Hospital records and case notes for a single patient were reviewed. Costs associated with inpatient services were extracted from a paediatric hospital database. Actual costs were compared to cost estimates based on both disease/procedure-related cost averages for inpatient hospital episodes and a recently implemented Australian hospital funding algorithm (activity-based funding). Up to the age of 8 years and 10 months, the child was a hospital inpatient for 443 days over 32 admissions. A total of 298 days were spent in paediatric intensive care. Investigations included 58 chest x-rays, 9 bronchoscopies, 10 lung function tests and 11 sleep studies. Comprehensive disease management failed to prevent respiratory decline and a lung transplant was required. Costs of inpatient care at three tertiary hospitals totalled $966,531 (Australian dollars). Disease- and procedure-related cost averages underestimated the costs of paediatric inpatient services for this patient by 68%. An activity-based funding algorithm that is currently being adopted in Australia estimated the cost of hospital health service provision with more accuracy. Health service usage and inpatient costs for this case of rare chronic childhood respiratory disease were substantial. This case study demonstrates that disease- and procedure-related cost averages are insufficient to estimate costs associated with rare chronic diseases that require complex management. This indicates that health service use for similar episodes of hospital care is greater for children with rare diseases than for other children.
The impacts of rare chronic childhood diseases should be considered when planning resources for paediatric health services.
Pfalzer, Lucinda A.; Springer, Barbara; Levy, Ellen; McGarvey, Charles L.; Danoff, Jerome V.; Gerber, Lynn H.; Soballe, Peter W.
2012-01-01
Secondary prevention involves monitoring and screening to prevent negative sequelae from chronic diseases such as cancer. Breast cancer treatment sequelae, such as lymphedema, may occur early or late and often negatively affect function. Secondary prevention through prospective physical therapy surveillance aids in early identification and treatment of breast cancer–related lymphedema (BCRL). Early intervention may reduce the need for intensive rehabilitation and may be cost saving. This perspective article compares a prospective surveillance model with a traditional model of impairment-based care and examines direct treatment costs associated with each program. Intervention and supply costs were estimated based on the Medicare 2009 physician fee schedule for 2 groups: (1) a prospective surveillance model group (PSM group) and (2) a traditional model group (TM group). The PSM group comprised all women with breast cancer who were receiving interval prospective surveillance, assuming that one third would develop early-stage BCRL. The prospective surveillance model includes the cost of screening all women plus the cost of intervention for early-stage BCRL. The TM group comprised women referred for BCRL treatment using a traditional model of referral based on late-stage lymphedema. The traditional model cost includes the direct cost of treating patients with advanced-stage lymphedema. The cost to manage early-stage BCRL per patient per year using a prospective surveillance model is $636.19. The cost to manage late-stage BCRL per patient per year using a traditional model is $3,124.92. The prospective surveillance model is emerging as the standard of care in breast cancer treatment and is a potential cost-saving mechanism for BCRL treatment. Further analysis of indirect costs and utility is necessary to assess cost-effectiveness. A shift in the paradigm of physical therapy toward a prospective surveillance model is warranted. PMID:21921254
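The two per-patient figures reported in the abstract support a quick arithmetic comparison. The sketch below only contrasts the reported annual direct costs of the two models; the variable names are ours, and indirect costs (which the abstract notes remain to be analyzed) are not included.

```python
# Hedged sketch: per-patient annual direct-cost comparison using the two
# figures reported in the abstract. No indirect costs are modeled.
PSM_COST_PER_PATIENT = 636.19    # prospective surveillance model, early-stage BCRL
TM_COST_PER_PATIENT = 3124.92    # traditional model, late-stage BCRL

savings_per_patient = TM_COST_PER_PATIENT - PSM_COST_PER_PATIENT
ratio = TM_COST_PER_PATIENT / PSM_COST_PER_PATIENT
print(f"Savings per patient per year: ${savings_per_patient:.2f}")
print(f"Late-stage care costs {ratio:.1f}x early-stage surveillance care")
```

On these figures alone, late-stage treatment runs roughly five times the annual cost of surveillance-based early management.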
ERIC Educational Resources Information Center
Gendreau, Audrey
2014-01-01
Efficient self-organizing virtual clusterheads that supervise data collection based on their wireless connectivity, risk, and overhead costs, are an important element of Wireless Sensor Networks (WSNs). This function is especially critical during deployment when system resources are allocated to a subsequent application. In the presented research,…
Long-range strategy for remote sensing: an integrated supersystem
NASA Astrophysics Data System (ADS)
Glackin, David L.; Dodd, Joseph K.
1995-12-01
Present large space-based remote sensing systems, and those planned for the next two decades, remain dichotomous and custom-built. An integrated architecture might reduce total cost without limiting system performance. An example of such an architecture, developed at The Aerospace Corporation, explores the feasibility of reducing overall space systems costs by forming a 'super-system' which will provide environmental, earth resources and theater surveillance information to a variety of users. The concept involves integration of programs, sharing of common spacecraft bus designs and launch vehicles, use of modular components and subsystems, integration of command and control and data capture functions, and establishment of an integrated program office. Smart functional modules that are easily tested and replaced are used wherever possible in the space segment. Data is disseminated to systems such as NASA's EOSDIS, and data processing is performed at established centers of expertise. This concept is advanced for potential application as a follow-on to currently budgeted and planned space-based remote sensing systems. We hope that this work will serve to engender discussion that may be of assistance in leading to multinational remote sensing systems with greater cost effectiveness at no loss of utility to the end user.
An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform.
Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong
2016-07-15
The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard "chilled-mirror" dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional "chilled-mirror" hygrometers. Moreover, this DPS outperforms conventional "chilled-mirror" hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints.
An ultrahigh-accuracy Miniature Dew Point Sensor based on an Integrated Photonics Platform
NASA Astrophysics Data System (ADS)
Tao, Jifang; Luo, Yu; Wang, Li; Cai, Hong; Sun, Tao; Song, Junfeng; Liu, Hui; Gu, Yuandong
2016-07-01
The dew point is the temperature at which vapour begins to condense out of the gaseous phase. The deterministic relationship between the dew point and humidity is the basis for the industry-standard “chilled-mirror” dew point hygrometers used for highly accurate humidity measurements, which are essential for a broad range of industrial and metrological applications. However, these instruments have several limitations, such as high cost, large size and slow response. In this report, we demonstrate a compact, integrated photonic dew point sensor (DPS) that features high accuracy, a small footprint, and fast response. The fundamental component of this DPS is a partially exposed photonic micro-ring resonator, which serves two functions simultaneously: 1) sensing the condensed water droplets via evanescent fields and 2) functioning as a highly accurate, in situ temperature sensor based on the thermo-optic effect (TOE). This device virtually eliminates most of the temperature-related errors that affect conventional “chilled-mirror” hygrometers. Moreover, this DPS outperforms conventional “chilled-mirror” hygrometers with respect to size, cost and response time, paving the way for on-chip dew point detection and extension to applications for which the conventional technology is unsuitable because of size, cost, and other constraints.
Multi Objective Optimization of Yarn Quality and Fibre Quality Using Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Ghosh, Anindya; Das, Subhasis; Banerjee, Debamalya
2013-03-01
The quality and cost of the resulting yarn play a significant role in determining its end application. The challenging task of any spinner lies in producing a good quality yarn with added cost benefit. The present work performs a multi-objective optimization on two objectives, viz. maximization of cotton yarn strength and minimization of raw material quality. The first objective function has been formulated based on the artificial neural network input-output relation between cotton fibre properties and yarn strength. The second objective function is formulated with the well-known regression equation of the spinning consistency index. It is obvious that these two objectives are conflicting in nature, i.e. no single combination of cotton fibre parameters exists which produces maximum yarn strength and minimum cotton fibre quality simultaneously. The problem therefore has several optimal solutions, from which a trade-off is needed depending upon the requirement of the user. In this work, the optimal solutions are obtained with an elitist multi-objective evolutionary algorithm, the Non-dominated Sorting Genetic Algorithm II (NSGA-II). These optimum solutions may lead to the efficient exploitation of raw materials to produce better quality yarns at low costs.
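The core operation inside NSGA-II is non-dominated sorting: retaining only candidates for which no other candidate is at least as good in both objectives and strictly better in one. A minimal sketch of that step is below; the candidate points and objective values are hypothetical stand-ins, not the paper's ANN and spinning-consistency-index models.

```python
# Hedged sketch of Pareto-front extraction for two conflicting objectives:
# maximize yarn strength (first element), minimize a fibre-quality/cost
# index (second element). Candidate values are illustrative only.
def dominates(a, b):
    """True if a dominates b: no worse in both objectives, better in one."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

def pareto_front(points):
    """Keep every point not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(15.2, 140), (16.8, 155), (14.9, 132), (16.8, 150), (15.9, 138)]
print(pareto_front(candidates))
```

NSGA-II repeats this sorting over generations, together with crowding-distance selection, to spread solutions along the trade-off curve; the extracted front is the set from which a spinner would pick a strength/cost compromise.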
NASA Technical Reports Server (NTRS)
1971-01-01
Computational techniques were developed and assimilated for the design optimization. The resulting computer program was then used to perform initial optimization and sensitivity studies on a typical thermal protection system (TPS) to demonstrate its application to the space shuttle TPS design. The program was developed in Fortran IV for the CDC 6400 but was subsequently converted to the Fortran V language to be used on the Univac 1108. The program allows for improvement and update of the performance prediction techniques. The program logic involves subroutines which handle the following basic functions: (1) a driver which calls for input, output, and communication between program and user and between the subroutines themselves; (2) thermodynamic analysis; (3) thermal stress analysis; (4) acoustic fatigue analysis; and (5) weights/cost analysis. In addition, a system total cost is predicted based on system weight and historical cost data of similar systems. Two basic types of input are provided, both of which are based on trajectory data. These are vehicle attitude (altitude, velocity, and angles of attack and sideslip), for external heat and pressure loads calculation, and heating rates and pressure loads as a function of time.
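The weight-based cost prediction described above is the classic cost estimating relationship (CER) pattern: fit cost = a * weight^b to historical data of similar systems, then evaluate it for a new design. A minimal sketch under that assumption follows; the historical data points are hypothetical, not the report's.

```python
import math

# Hedged sketch of a weight-based cost estimating relationship (CER),
# cost = a * weight^b, fit by least squares in log-log space to
# hypothetical historical (weight in kg, cost in $M) data points.
history = [(100, 2.0), (200, 3.6), (400, 6.5), (800, 11.7)]

xs = [math.log(w) for w, _ in history]
ys = [math.log(c) for _, c in history]
n = len(xs)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = math.exp((sum(ys) - b * sum(xs)) / n)

def predicted_cost(weight):
    """Predict system cost from weight via the fitted power law."""
    return a * weight ** b

print(f"cost ~= {a:.3f} * weight^{b:.3f}")
```

An exponent below 1 (as in this toy fit) encodes the economy of scale typically seen in such historical data: doubling weight less than doubles cost.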
Effort-based cost-benefit valuation and the human brain
Croxson, Paula L; Walton, Mark E; O'Reilly, Jill X; Behrens, Timothy EJ; Rushworth, Matthew FS
2010-01-01
In both the wild and the laboratory, animals' preferences for one course of action over another reflect not just reward expectations but also the cost in terms of effort that must be invested in pursuing the course of action. The ventral striatum and dorsal anterior cingulate cortex (ACCd) are implicated in the making of cost-benefit decisions in the rat but there is little information about how effort costs are processed and influence calculations of expected net value in other mammals including the human. We carried out a functional magnetic resonance imaging (fMRI) study to determine whether and where activity in the human brain was available to guide effort-based cost-benefit valuation. Subjects were scanned while they performed a series of effortful actions to obtain secondary reinforcers. At the beginning of each trial, subjects were presented with one of eight different visual cues which they had learned indicated how much effort the course of action would entail and how much reward could be expected at its completion. Cue-locked activity in the ventral striatum and midbrain reflected the net value of the course of action, signaling the expected amount of reward discounted by the amount of effort to be invested. Activity in ACCd also reflected the interaction of both expected reward and effort costs. Posterior orbitofrontal and insular activity, however, only reflected the expected reward magnitude. The ventral striatum and anterior cingulate cortex may be the substrate of effort-based cost-benefit valuation in primates as well as in rats. PMID:19357278
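The net-value signal described ("expected reward discounted by the amount of effort to be invested") can be sketched with a simple discount function. A linear discount is one common modeling assumption, not the paper's fitted form, and the effort-sensitivity parameter k below is hypothetical.

```python
# Hedged sketch of effort-discounted net value: reward minus a linear
# effort cost. k is a hypothetical effort-sensitivity parameter.
def net_value(reward, effort, k=0.5):
    return reward - k * effort

# Eight cues spanning reward and effort levels, echoing the task design
# described in the abstract (values illustrative).
cues = [(r, e) for r in (1, 2, 3, 4) for e in (2, 6)]
for reward, effort in cues:
    print(f"reward={reward} effort={effort} -> net={net_value(reward, effort):+.1f}")
```

Under this sketch, a cue-locked signal tracking net value would rise with promised reward and fall with required effort, which is the pattern reported for the ventral striatum and midbrain.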
Cognitive cost as dynamic allocation of energetic resources
Christie, S. Thomas; Schrater, Paul
2015-01-01
While it is widely recognized that thinking is somehow costly, involving cognitive effort and producing mental fatigue, these costs have alternatively been assumed to exist, treated as the brain's assessment of lost opportunities, or suggested to be metabolic but with implausible biological bases. We present a model of cognitive cost based on the novel idea that the brain senses and plans for longer-term allocation of metabolic resources by purposively conserving brain activity. We identify several distinct ways the brain might control its metabolic output, and show how a control-theoretic model that models decision-making with an energy budget can explain cognitive effort avoidance in terms of an optimal allocation of limited energetic resources. The model accounts for both subject responsiveness to reward and the detrimental effects of hypoglycemia on cognitive function. A critical component of the model is using astrocytic glycogen as a plausible basis for limited energetic reserves. Glycogen acts as an energy buffer that can temporarily support high neural activity beyond the rate supported by blood glucose supply. The published dynamics of glycogen depletion and repletion are consonant with a broad array of phenomena associated with cognitive cost. Our model thus subsumes both the “cost/benefit” and “limited resource” models of cognitive cost while retaining valuable contributions of each. We discuss how the rational control of metabolic resources could underpin the control of attention, working memory, cognitive look ahead, and model-free vs. model-based policy learning. PMID:26379482
Cognitive cost as dynamic allocation of energetic resources.
Christie, S Thomas; Schrater, Paul
2015-01-01
While it is widely recognized that thinking is somehow costly, involving cognitive effort and producing mental fatigue, these costs have alternatively been assumed to exist, treated as the brain's assessment of lost opportunities, or suggested to be metabolic but with implausible biological bases. We present a model of cognitive cost based on the novel idea that the brain senses and plans for longer-term allocation of metabolic resources by purposively conserving brain activity. We identify several distinct ways the brain might control its metabolic output, and show how a control-theoretic model that models decision-making with an energy budget can explain cognitive effort avoidance in terms of an optimal allocation of limited energetic resources. The model accounts for both subject responsiveness to reward and the detrimental effects of hypoglycemia on cognitive function. A critical component of the model is using astrocytic glycogen as a plausible basis for limited energetic reserves. Glycogen acts as an energy buffer that can temporarily support high neural activity beyond the rate supported by blood glucose supply. The published dynamics of glycogen depletion and repletion are consonant with a broad array of phenomena associated with cognitive cost. Our model thus subsumes both the "cost/benefit" and "limited resource" models of cognitive cost while retaining valuable contributions of each. We discuss how the rational control of metabolic resources could underpin the control of attention, working memory, cognitive look ahead, and model-free vs. model-based policy learning.
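The buffer mechanism at the center of this model can be illustrated with a toy simulation: when neural demand exceeds the blood glucose supply rate, the deficit is drawn from a finite glycogen store; during rest the store slowly repletes. All units and rates below are arbitrary illustrations, not the published glycogen dynamics.

```python
# Hedged toy simulation of astrocytic glycogen as an energy buffer.
# Demand above the steady supply rate depletes the store; rest repletes it.
def simulate(demand_trace, supply=1.0, capacity=10.0, replete_rate=0.2):
    glycogen = capacity
    trace = []
    for demand in demand_trace:
        deficit = demand - supply
        if deficit > 0:
            glycogen -= min(deficit, glycogen)   # buffer covers excess demand
        else:
            glycogen = min(capacity, glycogen + replete_rate)  # slow repletion
        trace.append(glycogen)
    return trace

# Sustained high cognitive load for 20 steps, then 10 steps of rest.
trace = simulate([1.5] * 20 + [0.5] * 10)
print(f"after load: {trace[19]:.1f}, after rest: {trace[-1]:.1f}")
```

The asymmetry visible here, in which depletion under load is fast but repletion is slow, is the shape the authors argue matches mental-fatigue phenomena: effortful thinking can outrun supply only temporarily, and recovery lags.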
Min-Cut Based Segmentation of Airborne LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Ural, S.; Shan, J.
2012-07-01
Introducing an organization to the unstructured point cloud before extracting information from airborne lidar data is common in many applications. Aggregating the points with similar features into segments in 3-D which comply with the nature of actual objects is affected by the neighborhood, scale, features and noise among other aspects. In this study, we present a min-cut based method for segmenting the point cloud. We first assess the neighborhood of each point in 3-D by investigating the local geometric and statistical properties of the candidates. Neighborhood selection is essential since point features are calculated within their local neighborhood. Following neighborhood determination, we calculate point features and determine the clusters in the feature space. We adapt a graph representation from image processing which is especially used in pixel labeling problems and establish it for the unstructured 3-D point clouds. The edges of the graph that are connecting the points with each other and nodes representing feature clusters hold the smoothness costs in the spatial domain and data costs in the feature domain. Smoothness costs ensure spatial coherence, while data costs control the consistency with the representative feature clusters. This graph representation formalizes the segmentation task as an energy minimization problem. It allows the implementation of an approximate solution by min-cuts for a global minimum of this NP hard minimization problem in low order polynomial time. We test our method with airborne lidar point cloud acquired with maximum planned post spacing of 1.4 m and a vertical accuracy 10.5 cm as RMSE. We present the effects of neighborhood and feature determination in the segmentation results and assess the accuracy and efficiency of the implemented min-cut algorithm as well as its sensitivity to the parameters of the smoothness and data cost functions. 
We find that a smoothness cost that considers only a simple distance parameter does not strongly conform to the natural structure of the points. Including shape information in the energy function by assigning costs based on local properties may help achieve a better representation for segmentation.
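The energy that min-cut minimizes in such labeling problems has the two parts named above: a data term measuring disagreement between a point and its assigned feature cluster, and a smoothness term penalizing label changes between spatial neighbors. A minimal sketch of evaluating that energy for a candidate labeling follows; the point features, cluster centers, and distance weighting are toy values, and the actual graph-cut solver is not reproduced.

```python
# Hedged sketch of the segmentation energy: data cost (distance of each
# point's feature to its labeled cluster center) plus smoothness cost
# (inverse-distance penalty on neighboring points with different labels).
def energy(labels, points, cluster_centers, neighbors, lam=1.0):
    data = sum(abs(points[i] - cluster_centers[labels[i]])
               for i in range(len(points)))
    smooth = sum(lam / max(dist, 1e-6)
                 for (i, j, dist) in neighbors if labels[i] != labels[j])
    return data + smooth

points = [0.1, 0.2, 0.9, 1.0]             # 1-D stand-in for point features
centers = {0: 0.0, 1: 1.0}                # representative feature clusters
neighbors = [(0, 1, 0.5), (1, 2, 2.0), (2, 3, 0.5)]  # (i, j, spatial distance)

coherent = energy([0, 0, 1, 1], points, centers, neighbors)
noisy = energy([0, 1, 0, 1], points, centers, neighbors)
print(coherent, noisy)
```

A spatially coherent labeling scores a lower energy than a noisy one, which is exactly why minimizing this energy (via min-cut) yields segments that are consistent in both the feature and spatial domains.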
Group physical therapy for veterans with knee osteoarthritis: study design and methodology.
Allen, Kelli D; Bongiorni, Dennis; Walker, Tessa A; Bartle, John; Bosworth, Hayden B; Coffman, Cynthia J; Datta, Santanu K; Edelman, David; Hall, Katherine S; Hansen, Gloria; Jennings, Caroline; Lindquist, Jennifer H; Oddone, Eugene Z; Senick, Margaret J; Sizemore, John C; St John, Jamie; Hoenig, Helen
2013-03-01
Physical therapy (PT) is a key component of treatment for knee osteoarthritis (OA) and can decrease pain and improve function. Given the expected rise in prevalence of knee OA and the associated demand for treatment, there is a need for models of care that cost-effectively extend PT services for patients with this condition. This manuscript describes a randomized clinical trial of a group-based physical therapy program that can potentially extend services to more patients with knee OA, providing a greater number of sessions per patient, at lower staffing costs compared to traditional individual PT. Participants with symptomatic knee OA (n = 376) are randomized to either a 12-week group-based PT program (six 1 h sessions, eight patients per group, led by a physical therapist and physical therapist assistant) or usual PT care (two individual visits with a physical therapist). Participants in both PT arms receive instruction in an exercise program, information on joint care and protection, and individual consultations with a physical therapist to address specific functional and therapeutic needs. The primary outcome is the Western Ontario and McMasters Universities Osteoarthritis Index (self-reported pain, stiffness, and function), and the secondary outcome is the Short Physical Performance Test Protocol (objective physical function). Outcomes are assessed at baseline and 12-week follow-up, and the primary outcome is also assessed via telephone at 24-week follow-up to examine sustainability of effects. Linear mixed models will be used to compare outcomes for the two study arms. An economic cost analysis of the PT interventions will also be conducted. Published by Elsevier Inc.
NASA Technical Reports Server (NTRS)
Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash
2003-01-01
Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Average System Cost methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account...
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Average System Cost methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account...
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Average System Cost methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account...
18 CFR 301.7 - Average System Cost methodology functionalization.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Average System Cost methodology functionalization. 301.7 Section 301.7 Conservation of Power and Water Resources FEDERAL ENERGY... ACT § 301.7 Average System Cost methodology functionalization. (a) Functionalization of each Account...
NASA Astrophysics Data System (ADS)
Dharmaseelan, Anoop; Adistambha, Keyne D.
2015-05-01
Fuel cost accounts for 40 percent of the operating cost of an airline. Fuel cost can be minimized by planning flights on optimized routes. Routes can be optimized by searching for the best connections based on the cost function defined by the airline. The algorithm most commonly used to optimize route search is Dijkstra's. Dijkstra's algorithm produces a static result, and the time taken for the search is relatively long. This paper experiments with a new route-search optimization algorithm that combines the principles of simulated annealing and genetic algorithms. The experimental route-search results presented are shown to be computationally fast and accurate compared with timings from a genetic algorithm. The new algorithm is well suited to the random routing feature that is highly sought by many regional operators.
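The baseline method named above, Dijkstra's algorithm over flight connections weighted by an airline-defined cost function, can be sketched briefly. The route network and per-leg costs below are hypothetical; a real cost function would combine fuel burn, fees, and time.

```python
import heapq

# Hedged sketch of the Dijkstra baseline: shortest path over a connection
# graph whose edge weights come from an airline-defined cost function.
def dijkstra(graph, src, dst):
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:                    # walk predecessors back to source
        path.append(node)
        node = prev[node]
    path.append(src)
    return list(reversed(path)), dist[dst]

# Toy connection network with hypothetical per-leg fuel costs.
routes = {"SIN": [("KUL", 3.0), ("CGK", 4.5)],
          "KUL": [("BKK", 5.0), ("CGK", 2.0)],
          "CGK": [("BKK", 6.0)],
          "BKK": []}
print(dijkstra(routes, "SIN", "BKK"))
```

Dijkstra returns one fixed optimal path per query, which is the "static result" the paper contrasts with its annealing/genetic hybrid's ability to produce varied near-optimal routings.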
Implementation of a VLSI Level Zero Processing system utilizing the functional component approach
NASA Technical Reports Server (NTRS)
Shi, Jianfei; Horner, Ward P.; Grebowsky, Gerald J.; Chesney, James R.
1991-01-01
A high rate Level Zero Processing system is currently being prototyped at NASA/Goddard Space Flight Center (GSFC). Based on state-of-the-art VLSI technology and the functional component approach, the new system promises capabilities of handling multiple Virtual Channels and Applications with a combined data rate of up to 20 Megabits per second (Mbps) at low cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geller, Drew Adam; Backhaus, Scott N.
Control of consumer electrical devices for providing electrical grid services is expanding in both the scope and the diversity of loads that are engaged in control, but there are few experimentally-based models of these devices suitable for control designs and for assessing the cost of control. A laboratory-scale test system is developed to experimentally evaluate the use of a simple window-mount air conditioner for electrical grid regulation services. The experimental test bed is a single, isolated air conditioner embedded in a test system that both emulates the thermodynamics of an air conditioned room and also isolates the air conditioner from the real-world external environmental and human variables that perturb the careful measurements required to capture a model that fully characterizes both the control response functions and the cost of control. The control response functions and cost of control are measured using harmonic perturbation of the temperature set point and a test protocol that further isolates the air conditioner from low frequency environmental variability.
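The test-bed idea, a room thermodynamics emulator wrapped around a hysteretic (on/off) air conditioner whose set point is perturbed harmonically, can be sketched with a first-order thermal model. All parameters below (cooling rate, heat-leak coefficient, hysteresis band) are illustrative assumptions, not the paper's measured values.

```python
import math

# Hedged sketch: first-order room model with an on/off AC and a sinusoidal
# set-point perturbation, the probe signal used to measure control response.
def simulate(hours=6.0, dt=1 / 60, setpoint=24.0, amp=0.5, period_h=1.0):
    temp, ac_on, on_time = 26.0, False, 0.0
    for k in range(int(hours / dt)):
        sp = setpoint + amp * math.sin(2 * math.pi * k * dt / period_h)
        if temp > sp + 0.5:               # hysteresis band of +/- 0.5 degC
            ac_on = True
        elif temp < sp - 0.5:
            ac_on = False
        cooling = 6.0 if ac_on else 0.0   # degC/h removed by the AC
        leak = 0.5 * (30.0 - temp)        # degC/h heat gain from outside
        temp += (leak - cooling) * dt
        on_time += dt if ac_on else 0.0
    return temp, on_time / hours          # final temp, AC duty cycle

temp, duty = simulate()
print(f"final temp {temp:.1f} C, AC duty cycle {duty:.2f}")
```

In an experiment of this shape, the amplitude and phase of the duty-cycle response at the perturbation frequency characterize the control response function, and the extra cycling relative to a constant set point is one way to quantify the cost of control.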
Eek, Frida; Merlo, Juan; Gerdtham, Ulf; Lithman, Thor
2009-01-01
Environmentally intolerant persons report decreased self-rated health and daily functioning. However, it remains unclear whether this condition also results in increased health care costs. The aim of this study was to describe the health care consumption of, and attitudes towards health care among, subjects presenting subjective environmental annoyance, in relation to the general population as well as to a group with a well-known disorder, treated hypertension (HT). Methods. Postal questionnaire (n = 13 604) and record linkage with a population-based register on health care costs. Results. Despite significantly lower subjective well-being and health than both the general population and the HT group, the environmentally annoyed subjects had lower health care costs than the hypertension group. In contrast to the hypertension group, the environmentally annoyed subjects expressed more negative attitudes toward health care than the general population. Conclusions. Despite their impaired subjective health and functional capacity, health care utilisation costs were not much increased for the environmentally annoyed group. This may partly depend on negative attitudes towards health care in this group. PMID:19936124
NASA Astrophysics Data System (ADS)
Li, You-Rong; Du, Mei-Tang; Wang, Jian-Ning
2012-12-01
This paper focuses on an evaporator with a binary mixture of organic working fluids in the organic Rankine cycle. Exergoeconomic analysis and performance optimization were performed based on the first and second laws of thermodynamics and on exergoeconomic theory. The annual total cost per unit heat transfer rate was introduced as the objective function. In this model, the exergy loss cost caused by heat transfer irreversibility and the capital cost were taken into account; however, the exergy losses due to frictional pressure drops, heat dissipation to the surroundings, and flow imbalance were neglected. The variation of the annual total cost with respect to the number of transfer units and the temperature ratios was presented. Optimal design parameters that minimize the objective function were obtained, and the effects of some important dimensionless parameters on the optimal performance were also discussed for three types of evaporator flow arrangements. In addition, optimal design parameters of evaporators were compared with those of condensers.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
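The contrast between the two objectives described above can be written down directly. The following sketch (plain Python; the toy distributions and the β value are illustrative, not taken from the paper) computes the standard IB cost I(X;T) − βI(T;Y) and the DIB variant, which swaps the compression term I(X;T) for the representation entropy H(T):

```python
import math

def entropy(p):
    """Shannon entropy in bits of a distribution given as a list of probabilities."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def mutual_information(joint):
    """I(A;B) in bits from a joint distribution given as a nested list joint[a][b]."""
    pa = [sum(row) for row in joint]
    pb = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for a, row in enumerate(joint):
        for b, pab in enumerate(row):
            if pab > 0:
                mi += pab * math.log2(pab / (pa[a] * pb[b]))
    return mi

def ib_cost(joint_xt, joint_ty, beta):
    # IB objective: minimize I(X;T) - beta * I(T;Y)
    return mutual_information(joint_xt) - beta * mutual_information(joint_ty)

def dib_cost(joint_xt, joint_ty, beta):
    # DIB objective: replace the compression term I(X;T) with H(T),
    # penalizing the raw number of bits the representation T occupies.
    pt = [sum(col) for col in zip(*joint_xt)]
    return entropy(pt) - beta * mutual_information(joint_ty)
```

Minimizing H(T) rather than I(X;T) is what drives the optimal encoder from stochastic to deterministic: a hard assignment of x to t reduces H(T) without the "free" randomness that a soft encoder can hide in I(X;T).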
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
Rapid prototyping prosthetic hand acting by a low-cost shape-memory-alloy actuator.
Soriano-Heras, Enrique; Blaya-Haro, Fernando; Molino, Carlos; de Agustín Del Burgo, José María
2018-06-01
The purpose of this article is to develop a new concept of modular and operative prosthetic hand based on rapid prototyping and a novel shape-memory-alloy (SMA) actuator, thus minimizing the manufacturing costs. An underactuated mechanism was needed for the design of the prosthesis to use only one input source. Taking into account the state of the art, an underactuated mechanism prosthetic hand was chosen so as to implement the modifications required for including the external SMA actuator. A modular design of a new prosthesis was developed which incorporated a novel SMA actuator for the index finger movement. The primary objective of the prosthesis is achieved, obtaining a modular and functional low-cost prosthesis based on additive manufacturing executed by a novel SMA actuator. The external SMA actuator provides a modular system which allows implementing it in different systems. This paper combines rapid prototyping and a novel SMA actuator to develop a new concept of modular and operative low-cost prosthetic hand.
NASA Astrophysics Data System (ADS)
Latief, Yusuf; Berawi, Mohammed Ali; Basten, Van; Budiman, Rachmat; Riswanto
2017-06-01
Buildings have a large impact on environmental development. There are three general motives in building, namely economy, society, and environment. Total completed building construction in Indonesia increased by 116% from 2009 to 2011, raising energy consumption by 11% within the last three years. In fact, 70% of energy consumption is used to meet electricity needs in commercial buildings, which has increased greenhouse gas emissions by 25%. In Indonesia, green building is known for its high upfront cost over the building life cycle. The optimization in this research aims to improve building performance through several green-concept alternatives. The research methodology is a mixed qualitative and quantitative approach using questionnaire surveys and a case study. The success of the optimization functions in the existing green building is assessed in the operational and maintenance phase using the Life Cycle Assessment method. Optimization results were chosen based on the greatest building life-cycle efficiency and the most cost-effective payback.
Costing the supply chain for delivery of ACT and RDTs in the public sector in Benin and Kenya.
Shretta, Rima; Johnson, Brittany; Smith, Lisa; Doumbia, Seydou; de Savigny, Don; Anupindi, Ravi; Yadav, Prashant
2015-02-05
Studies have shown that supply chain costs are a significant proportion of total programme costs. Nevertheless, the costs of delivering specific products are poorly understood, and ballpark estimates are often used, resulting in inadequate planning for the budgetary implications of supply chain expenses. The purpose of this research was to estimate the country-level costs of the public sector supply chain for artemisinin-based combination therapy (ACT) and rapid diagnostic tests (RDTs) from the central to the peripheral levels in Benin and Kenya. A micro-costing approach was used, and primary data on the various cost components of the supply chain were collected at the central, intermediate, and facility levels between September and November 2013. Information sources included central warehouse databases, health facility records, transport schedules, and expenditure reports. Data from document reviews and semi-structured interviews were used to identify cost inputs and estimate actual costs. Sampling was purposive to isolate key variables of interest. Survey guides were developed and administered electronically. Data were extracted into Microsoft Excel, and the supply chain cost per unit of ACT and RDT distributed was calculated by function and level of the system. In Benin, supply chain costs added USD 0.2011 to the initial acquisition cost of ACT and USD 0.3375 to RDTs (normalized to USD 1). In Kenya, they added USD 0.2443 to the acquisition cost of ACT and USD 0.1895 to RDTs (normalized to USD 1). Total supply chain costs accounted for more than 30% of the initial acquisition cost of the products in some cases, and these costs were highly sensitive to product volumes. The major cost drivers were found to be labour, transport, and utilities, with health facilities carrying the majority of the cost per unit of product. Accurate cost estimates are needed to ensure adequate resources are available for supply chain activities.
Supply chain functions should be costed on the basis of product volumes rather than dollar value. Further work is needed to develop extrapolative costing models that can be applied at the country level without extensive micro-costing exercises. This will allow other countries to generate more accurate estimates in the future.
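The per-unit figures reported in this abstract can be turned into a tiny delivered-cost calculation. The sketch below uses only the mark-ups stated above (normalized to an acquisition cost of USD 1); the function name and the proportional-scaling assumption are illustrative, not from the study:

```python
# Per-unit supply chain mark-ups reported in the abstract, normalized to an
# acquisition cost of USD 1.00 per unit.
supply_chain_add = {
    ("Benin", "ACT"): 0.2011,
    ("Benin", "RDT"): 0.3375,
    ("Kenya", "ACT"): 0.2443,
    ("Kenya", "RDT"): 0.1895,
}

def delivered_cost(country, product, acquisition_cost):
    """Acquisition cost plus the proportional supply chain cost for one unit.
    Assumes the mark-up scales linearly with acquisition cost, which is a
    simplification: the study stresses sensitivity to product volumes."""
    return acquisition_cost * (1 + supply_chain_add[(country, product)])
```

For example, an RDT procured in Benin at USD 1.00 arrives at the facility at an effective cost of about USD 1.34, illustrating the "more than 30%" cases noted above.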
Which factors affect software projects maintenance cost more?
Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nafiseh
2013-03-01
The software industry has made significant progress in recent years. The life of software comprises two phases: production and maintenance. Software maintenance costs are growing steadily, and estimates indicate that about 90% of software lifetime cost relates to the maintenance phase. Identifying and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors. In this study, the factors affecting software maintenance cost were determined and then ranked by priority, after which effective ways to reduce maintenance costs were presented. This is a research study. Fifteen software systems related to health care centre information systems at Isfahan University of Medical Sciences and its hospitals were studied over the years 2010 to 2011. Forty members of the medical software maintenance teams were selected as the sample. After interviews with experts in this field, the factors affecting maintenance cost were determined. To prioritize the derived factors using the Analytic Hierarchy Process (AHP), the measurement criteria (the identified factors) were first rated by members of the maintenance team and then prioritized with the help of EC software. Based on the results of this study, 32 factors were obtained and classified into six groups. "Project" was ranked as the feature with the greatest effect on maintenance cost. By taking into account major elements such as careful feasibility studies of IT projects, full documentation, and involving the designers in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.
International casemix and funding models: lessons for rehabilitation.
Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert; Eagar, Kathy
2012-03-01
This series of articles for rehabilitation in practice aims to cover a knowledge element of the rehabilitation medicine curriculum. Nevertheless they are intended to be of interest to a multidisciplinary audience. The competency addressed in this article is 'An understanding of the different international models for funding of health care services and casemix systems, as exemplified by those in the US, Australia and the UK.' Payment for treatment in healthcare systems around the world is increasingly based on fixed tariff models to drive up efficiency and contain costs. Casemix classifications, however, must account adequately for the resource implications of varying case complexity. Rehabilitation poses some particular challenges for casemix development. The objectives of this educational narrative review are (a) to provide an overview of the development of casemix in rehabilitation, (b) to describe key characteristics of some well-established casemix and payment models in operation around the world and (c) to explore opportunities for future development arising from the lessons learned. Diagnosis alone does not adequately describe cost variation in rehabilitation. Functional dependency is considered a better cost indicator, and casemix classifications for inpatient rehabilitation in the United States and Australia rely on the Functional Independence Measure (FIM). Fixed episode-based prospective payment systems are shown to contain costs, but at the expense of poorer functional outcomes. More sophisticated models incorporating a mixture of episode and weighted per diem rates may offer greater flexibility to optimize outcome, while still providing incentive for throughput. The development of casemix in rehabilitation poses similar challenges for healthcare systems all around the world. Well-established casemix systems in the United States and Australia have afforded valuable lessons for other countries to learn from, but have not provided all the answers. 
A range of casemix and payment models is required to cater for different healthcare cultures, and casemix tools must capture all the key cost-determinants of treatment for patients with complex needs.
Costs and quality of life of patients with ankylosing spondylitis in Hong Kong.
Zhu, T Y; Tam, L-S; Lee, V W-Y; Hwang, W W; Li, T K; Lee, K K; Li, E K
2008-09-01
To assess the annual direct, indirect and total societal costs and quality of life (QoL) of AS in a Chinese population in Hong Kong, and to determine the cost determinants. A retrospective, non-randomized, cross-sectional study was performed in a cohort of 145 patients with AS in Hong Kong. Participants completed questionnaires on sociodemographics, work status and out-of-pocket expenses. Health resources consumption was recorded by chart review. Functional impairment and disease activity were measured using the Bath Ankylosing Spondylitis Functional Index (BASFI) and the Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), respectively. Patients' QoL was assessed using the Short Form-36 (SF-36). The mean age of the patients was 40 yrs, with a mean disease duration of 10 yrs. The mean BASDAI score was 4.7 and the mean BASFI score was 3.3. Annual total costs averaged USD 9120. Direct costs accounted for 38% of the total costs, while indirect costs accounted for 62%. Costs of technical examinations represented the largest proportion of total costs. Patients with AS reported significantly impaired QoL. Functional impairment was the major driver of direct costs and total costs. There is a substantial societal cost related to the treatment of AS in Hong Kong. Functional impairment is the most important cost driver. Treatments that reduce functional impairment may be effective in decreasing the costs of AS, improving patients' QoL, and easing the pressure on the healthcare system.
Information Systems: Fact or Fiction.
ERIC Educational Resources Information Center
Bearley, William
Rising costs of programming and program maintenance have caused discussion concerning the need for generalized information systems. These would provide data base functions plus complete report writing and file maintenance capabilities. All administrative applications, including online registration, student records, and financial applications are…
Recycled-tire pyrolytic carbon made functional: A high-arsenite [As(III)] uptake material PyrC350®.
Mouzourakis, E; Georgiou, Y; Louloudi, M; Konstantinou, I; Deligiannakis, Y
2017-03-15
A novel material, PyrC350®, has been developed from pyrolytic-tire char (PyrC) as an efficient low-cost arsenite [As(III)] adsorbent from water. PyrC350® achieves 31 mg g⁻¹ As(III) uptake, which remains unaltered at pH 4-8.5. A theoretical surface complexation model has been developed that explains the adsorption mechanism, showing that in situ formed Fe3C and ZnS particles act cooperatively with the carbon matrix for As(III) adsorption. Addressing the key issue of cost-effectiveness, we provide a comparison of As(III)-uptake effectiveness in conjunction with a cost analysis, showing that PyrC350® ranks at the top in [effectiveness/cost] among existing carbon-based, low-cost materials. Copyright © 2016 Elsevier B.V. All rights reserved.
Nousiainen, Markku T; McQueen, Sydney A; Ferguson, Peter; Alman, Benjamin; Kraemer, William; Safir, Oleg; Reznick, Richard; Sonnadara, Ranil
2016-04-01
Although simulation-based training is becoming widespread in surgical education and research supports its use, one major limitation is cost. Until now, little has been published on the costs of simulation in residency training. At the University of Toronto, a novel competency-based curriculum in orthopaedic surgery has been implemented for training selected residents, which makes extensive use of simulation. Despite the benefits of this intensive approach to simulation, there is a need to consider its financial implications and demands on faculty time. This study presents a cost and faculty work-hours analysis of implementing simulation as a teaching and evaluation tool in the University of Toronto's novel competency-based curriculum program compared with the historic costs of using simulation in the residency training program. All invoices for simulation training were reviewed to determine the financial costs before and after implementation of the competency-based curriculum. Invoice items included costs for cadavers, artificial models, skills laboratory labor, associated materials, and standardized patients. Costs related to the surgical skills laboratory rental fees and orthopaedic implants were waived as a result of special arrangements with the skills laboratory and implant vendors. Although faculty time was not reimbursed, faculty hours dedicated to simulation were also evaluated. The academic year of 2008 to 2009 was chosen to represent an academic year that preceded the introduction of the competency-based curriculum. During this year, 12 residents used simulation for teaching. The academic year of 2010 to 2011 was chosen to represent an academic year when the competency-based curriculum training program was running in parallel with, but separate from, the regular stream of training. In this year, six residents used simulation for teaching and assessment. 
The academic year of 2012 to 2013 was chosen to represent an academic year when simulation was used equally among the competency-based curriculum and regular stream residents for teaching (60 residents) and among 14 competency-based curriculum residents and 21 regular stream residents for assessment. The total costs of using simulation to teach and assess all residents in the competency-based curriculum and regular stream programs (academic year 2012-2013) (CDN 155,750, USD 158,050) were approximately 15 times higher than the cost of using simulation to teach residents before the implementation of the competency-based curriculum (academic year 2008-2009) (CDN 10,090, USD 11,140). The number of hours spent teaching and assessing trainees increased from 96 to 317 hours during this period, representing a threefold increase. Although the financial costs and time demands on faculty in running the simulation program in the new competency-based curriculum at the University of Toronto have been substantial, augmented learner and trainer satisfaction has been accompanied by direct evidence of improved and more efficient learning outcomes. The higher costs and demands on faculty time associated with implementing simulation for teaching and assessment must be considered when it is used to enhance surgical training.
Flexible plastic, paper and textile lab-on-a chip platforms for electrochemical biosensing.
Economou, Anastasios; Kokkinos, Christos; Prodromidis, Mamas
2018-06-26
Flexible biosensors represent an increasingly important and rapidly developing field of research. Flexible materials offer several advantages as supports of biosensing platforms in terms of flexibility, weight, conformability, portability, cost, disposability and scope for integration. On the other hand, electrochemical detection is perfectly suited to flexible biosensing devices. The present paper reviews the field of integrated electrochemical biosensors fabricated on flexible materials (plastic, paper and textiles), which are used as functional base substrates. The vast majority of electrochemical flexible lab-on-a-chip (LOC) biosensing devices are based on plastic supports in a single or layered configuration. Among these, wearable devices are perhaps the ones that most vividly demonstrate the utility of the concept of flexible biosensors, while diagnostic cards represent the state of the art in terms of integration and functionality. Another important type of flexible biosensor utilizes paper as a functional support material, enabling the fabrication of low-cost and disposable paper-based devices operating on the lateral flow, drop-casting or folding (origami) principles. Finally, textile-based biosensors are beginning to emerge, enabling real-time measurements in the working environment or in wound care applications. This review is timely given the significant advances that have taken place over the last few years in the area of LOC biosensors, and it aims to direct readers to emerging trends in this field.
NASA Astrophysics Data System (ADS)
Rahman, R.; Nemmang, M. S.; Hazurina, Nor; Shahidan, S.; Khairul Tajuddin Jemain, Raden; Abdullah, M. E.; Hassan, M. F.
2017-11-01
The main issue addressed in this research is the feasibility of natural rubber SMR 20 in the manufacture of cement mortar for sub-base layer construction. Sub-base layers must fulfil certain functions to ensure strong pavement performance with adequate permeability. In a pavement structure, the sub-base lies below the base and serves as the foundation for the overall pavement structure, transmitting traffic loads to the sub-grade and providing drainage. In this research, natural rubber SMR 20 replaced sand at percentages of 0%, 5%, 10% and 15% in the manufacture of the cement mortar. The properties and material costs of natural rubber and sand in cement mortar manufacturing were examined through laboratory testing. The effects of the natural rubber replacement on the mechanical properties of the mortar were investigated through laboratory tests of compressive strength and density. The study found that mortar with 5% of the sand replaced by natural rubber achieved the strength of normal mortar after 7 days and 28 days; the strength of cement mortar depends on its density. Comparing the cost of the two materials, sand is the cheaper material for cement mortar manufacturing. Thus, conventional cement mortar using sand costs less than the modified rubber cement mortar and is the more economical option for industry. In conclusion, cement mortar with 5% natural rubber matches normal cement mortar in strength; however, its production cost is higher than that of normal cement mortar, so this modified cement mortar is not economical for road sub-base construction.
Integrated Glass Coating Manufacturing Line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brophy, Brenor
2015-09-30
This project aims to enable US module manufacturers to coat glass with Enki’s state of the art tunable functionalized AR coatings at the lowest possible cost and highest possible performance by encapsulating Enki’s coating process in an integrated tool that facilitates effective process improvement through metrology and data analysis for greater quality and performance while reducing footprint, operating and capital costs. The Phase 1 objective was a fully designed manufacturing line, including fully specified equipment ready for issue of purchase requisitions; a detailed economic justification based on market prices at the end of Phase 1 and projected manufacturing costs; and a detailed deployment plan for the equipment.
Kenya geothermal private power project: A prefeasibility study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1992-10-01
Twenty-eight geothermal areas in Kenya were evaluated and prioritized for development. The prioritization was based on the potential size, resource temperature, level of exploration risk, location, and exploration/development costs for each geothermal area. Suswa, Eburru and Arus are found to offer the best short-term prospects for successful private power development. It was found that costs per kW developed are significantly lower for the larger (50 MW) than for smaller-sized (10 or 20 MW) projects. In addition to plant size, the cost per kW developed is seen to be a function of resource temperature, generation mode (binary or flash cycle) and transmission distance.
Contracting for intensive care services.
Dorman, S
1996-01-01
Purchasers will increasingly expect clinical services in the NHS internal market to provide objective measures of their benefits and cost effectiveness in order to maintain or develop current funding levels. There is limited scientific evidence to demonstrate the clinical effectiveness of intensive care services in terms of mortality/morbidity. Intensive care is a high-cost service and studies of cost-effectiveness need to take account of case-mix variations, differences in admission and discharge policies, and other differences between units. Decisions over development or rationalisation of intensive care services should be based on proper outcome studies of well defined patient groups. The purchasing function itself requires development in order to support effective contracting.
Metal oxide resistive random access memory based synaptic devices for brain-inspired computing
NASA Astrophysics Data System (ADS)
Gao, Bin; Kang, Jinfeng; Zhou, Zheng; Chen, Zhe; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan
2016-04-01
The traditional Boolean computing paradigm based on the von Neumann architecture is facing great challenges for future information technology applications such as big data, the Internet of Things (IoT), and wearable devices, due to the limited processing capability issues such as binary data storage and computing, non-parallel data processing, and the buses requirement between memory units and logic units. The brain-inspired neuromorphic computing paradigm is believed to be one of the promising solutions for realizing more complex functions with a lower cost. To perform such brain-inspired computing with a low cost and low power consumption, novel devices for use as electronic synapses are needed. Metal oxide resistive random access memory (ReRAM) devices have emerged as the leading candidate for electronic synapses. This paper comprehensively addresses the recent work on the design and optimization of metal oxide ReRAM-based synaptic devices. A performance enhancement methodology and optimized operation scheme to achieve analog resistive switching and low-energy training behavior are provided. A three-dimensional vertical synapse network architecture is proposed for high-density integration and low-cost fabrication. The impacts of the ReRAM synaptic device features on the performances of neuromorphic systems are also discussed on the basis of a constructed neuromorphic visual system with a pattern recognition function. Possible solutions to achieve the high recognition accuracy and efficiency of neuromorphic systems are presented.
KOSMOS: a universal morph server for nucleic acids, proteins and their complexes.
Seo, Sangjae; Kim, Moon Ki
2012-07-01
KOSMOS is the first online morph server to be able to address the structural dynamics of DNA/RNA, proteins and even their complexes, such as ribosomes. The key functions of KOSMOS are the harmonic and anharmonic analyses of macromolecules. In the harmonic analysis, normal mode analysis (NMA) based on an elastic network model (ENM) is performed, yielding vibrational modes and B-factor calculations, which provide insight into the potential biological functions of macromolecules based on their structural features. Anharmonic analysis involving elastic network interpolation (ENI) is used to generate plausible transition pathways between two given conformations by optimizing a topology-oriented cost function that guarantees a smooth transition without steric clashes. The quality of the computed pathways is evaluated based on their various facets, including topology, energy cost and compatibility with the NMA results. There are also two unique features of KOSMOS that distinguish it from other morph servers: (i) the versatility in the coarse-graining methods and (ii) the various connection rules in the ENM. The models enable us to analyze macromolecular dynamics with the maximum degrees of freedom by combining a variety of ENMs from full-atom to coarse-grained, backbone and hybrid models with one connection rule, such as distance-cutoff, number-cutoff or chemical-cutoff. KOSMOS is available at http://bioengineering.skku.ac.kr/kosmos.
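The "distance-cutoff" connection rule mentioned above is the simplest way an ENM decides which residues are joined by springs. As a hedged sketch (not KOSMOS code; the cutoff value and function name are illustrative), the Kirchhoff connectivity matrix of a coarse-grained Gaussian network model can be built like this, with normal modes then obtained by eigendecomposition of the matrix:

```python
import math

def kirchhoff_matrix(coords, cutoff=7.0):
    """Gaussian-network-model Kirchhoff (connectivity) matrix.

    Nodes i and j are joined by a unit spring whenever their distance is
    within `cutoff` (the distance-cutoff connection rule). Off-diagonal
    entries are -1 for connected pairs; each diagonal entry is the node's
    contact count, so every row sums to zero."""
    n = len(coords)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(coords[i], coords[j]) <= cutoff:
                K[i][j] = K[j][i] = -1.0
    for i in range(n):
        K[i][i] = -sum(K[i][j] for j in range(n) if j != i)
    return K
```

Swapping the distance test for "j within k sequence neighbors" or "chemically bonded pairs" would give number-cutoff or chemical-cutoff variants of the same construction.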
Implementing a Mobility Program to Minimize Post-Intensive Care Syndrome.
Hopkins, Ramona O; Mitchell, Lorie; Thomsen, George E; Schafer, Michele; Link, Maggie; Brown, Samuel M
2016-01-01
Immobility in the intensive care unit (ICU) is associated with neuromuscular weakness, post-intensive care syndrome, functional limitations, and high costs. Early mobility-based rehabilitation in the ICU is feasible and safe. Mobility-based rehabilitation varied widely across 5 ICUs in 1 health care system, suggesting a need for continuous training and evaluation to maintain a strong mobility-based rehabilitation program. Early mobility-based rehabilitation shortens ICU and hospital stays, reduces delirium, and increases muscle strength and the ability to ambulate. Long-term effects include increased ability for self-care, faster return to independent functioning, improved physical function, and reduced hospital readmission and death. Factors that influence early mobility-based rehabilitation include having an interdisciplinary team; strong unit leadership; access to physical, occupational, and respiratory therapists; a culture focused on patient safety and quality improvement; a champion of early mobility; and a focus on measuring performance and outcomes.
Highlights of recent balance of system research and evaluation
NASA Astrophysics Data System (ADS)
Thomas, M. G.; Stevens, J. W.
The cost of most photovoltaic (PV) systems is more a function of the balance of system (BOS) components than the collectors. The exception to this rule is the grid-tied system whose cost is related more directly to the collectors, and secondarily to the inverter/controls. In fact, recent procurements throughout the country document that collector costs for roof-mounted, utility-tied systems (Russell, PV Systems Workshop, 7/94) represent 60% to 70% of the system cost. This contrasts with the current market for packaged stand-alone all PV or PV-hybrid systems where collectors represent only 25% to 35% of the total. Not only are the BOS components the cost drivers in the current cost-effective PV system market place, they are also the least reliable components. This paper discusses the impact that BOS issues have on component performance, system performance, and system cost and reliability. We will also look at recent recommended changes in system design based upon performance evaluations of fielded PV systems.
Fairness in optimizing bus-crew scheduling process.
Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei
2017-01-01
This work proposes a model that considers fairness in the crew scheduling problem for bus drivers (CSP-BD), together with a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are the following: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a gamma heuristic function and selection rules. The relationships among the cost components are examined using ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common-cost coefficient. Furthermore, the longest computation time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for the CSP-BD, especially for large-scale problems.
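The abstract does not spell out its fairness terms, so the following is only a loose sketch of how a fairness penalty over working and idle time might enter a crew-scheduling cost function (the spread measure, weights and function name are all assumptions, not taken from the paper):

```python
def fairness_cost(work_hours, idle_hours, w_work=1.0, w_idle=1.0):
    """Illustrative fairness penalty for a set of driver duties.

    Penalizes the spread (max - min) of working time and of idle time
    across drivers, so perfectly balanced duties incur zero penalty.
    The paper's actual cost structure may differ."""
    return (w_work * (max(work_hours) - min(work_hours))
            + w_idle * (max(idle_hours) - min(idle_hours)))
```

In a metaheuristic such as HACO, a term like this would simply be added to the common, fixed and extra cost components when scoring a candidate schedule.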
Designing informed game-based rehabilitation tasks leveraging advances in virtual reality.
Lange, Belinda; Koenig, Sebastian; Chang, Chien-Yen; McConnell, Eric; Suma, Evan; Bolas, Mark; Rizzo, Albert
2012-01-01
This paper details a brief history and rationale for the use of virtual reality (VR) technology for clinical research and intervention, and then focuses on game-based VR applications in the area of rehabilitation. An analysis of the match between rehabilitation task requirements and the assets available with VR technology is presented. Low-cost camera-based systems capable of tracking user behavior at sufficient levels for game-based virtual rehabilitation activities are currently available for in-home use. Authoring software is now being developed that aims to provide clinicians with a usable toolkit for leveraging this technology. This will facilitate informed professional input on software design, development and application to ensure safe and effective use in the rehabilitation context. The field of rehabilitation generally stands to benefit from the continual advances in VR technology, concomitant system cost reductions and an expanding clinical research literature and knowledge base. Home-based activity within VR systems that are low-cost, easy to deploy and maintain, and meet the requirements for "good" interactive rehabilitation tasks could radically improve users' access to care, adherence to prescribed training and subsequently enhance functional activity in everyday life in clinical populations.
PID Tuning Using Extremum Seeking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killingsworth, N; Krstic, M
2005-11-15
Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT).
IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system, see [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
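The discrete ES loop described above can be sketched end to end: each PID gain is perturbed with a sinusoidal dither, the measured step-response cost is high-pass filtered and demodulated against the dither to estimate the gradient, and the gains are updated by descent. The first-order plant, dither amplitudes, frequencies, and step sizes below are all invented; this is a sketch of the general technique, not the article's tuner.

```python
import math

def step_cost(kp, ki, kd, n_steps=200, dt=0.05):
    """Integrated squared error of the closed-loop unit-step response of a
    toy first-order plant tau*y' = -y + u under PID control."""
    tau, y, integ, prev_e, cost = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(n_steps):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        y += dt * (-y + u) / tau
        cost += e * e * dt
    return cost

def tune_pid_es(theta, iters=300, a=0.05, gamma=0.4):
    """Discrete extremum seeking: dither each gain sinusoidally, demodulate
    the washed-out cost to estimate the gradient, descend, and return the
    best gains seen (ES parameters here are ad hoc)."""
    freqs = [1.1, 1.3, 1.7]                # distinct dither frequencies
    best, best_j = list(theta), step_cost(*theta)
    j_avg = best_j
    for k in range(iters):
        dither = [a * math.cos(w * k) for w in freqs]
        j = step_cost(*[t + d for t, d in zip(theta, dither)])
        j_hp = j - j_avg                   # high-pass: remove the cost's DC part
        j_avg += 0.1 * (j - j_avg)
        theta = [max(0.0, t - gamma * d * j_hp)  # demodulate and descend
                 for t, d in zip(theta, dither)]
        j_now = step_cost(*theta)
        if j_now < best_j:
            best_j, best = j_now, list(theta)
    return best

theta0 = [0.5, 0.1, 0.0]                   # initial [kp, ki, kd]
theta_star = tune_pid_es(theta0)
```

No plant model is used anywhere: the tuner only queries the closed-loop cost, which is the property that makes ES attractive when opening the loop is undesirable.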
Patient time and out-of-pocket costs for long-term prostate cancer survivors in Ontario, Canada.
de Oliveira, Claire; Bremner, Karen E; Ni, Andy; Alibhai, Shabbir M H; Laporte, Audrey; Krahn, Murray D
2014-03-01
Time and out-of-pocket (OOP) costs can represent a substantial burden for cancer patients but have not been described for long-term cancer survivors. We estimated these costs, their predictors, and their relationship to financial income, among a cohort of long-term prostate cancer (PC) survivors. A population-based, community-dwelling, geographically diverse sample of long-term (2-13 years) PC survivors in Ontario, Canada, was identified from the Ontario Cancer Registry and contacted through their referring physicians. We obtained data on demographics, health care resource use, and OOP costs through mailed questionnaires and conducted chart reviews to obtain clinical data. We compared mean annual time and OOP costs (2006 Canadian dollars) across clinical and sociodemographic characteristics and examined the association between costs and four groups of predictors (patient, disease, system, symptom) using two-part regression models. Patients' (N = 585) mean age was 73 years; 77 % were retired, and 42 % reported total annual incomes less than $40,000. Overall, mean time costs were $838/year and mean OOP costs were $200/year. Although generally low, total costs represented approximately 10 % of income for lower income patients. No demographic variables were associated with costs. Radical prostatectomy, younger age, poor urinary function, current androgen deprivation therapy, and recent diagnosis were significantly associated with increased likelihood of incurring any costs, but only urinary function significantly affected total amount. Time and OOP costs are modest for most long-term PC survivors but can represent a substantial burden for lower income patients. Even several years after diagnosis, PC-specific treatments and treatment-related dysfunction are associated with increased costs. Time and out-of-pocket costs are generally manageable for long-term PC survivors but can be a significant burden mainly for lower income patients. 
The effects of PC-specific, treatment-related dysfunctions on quality of life can also represent sources of expense for patients.
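The two-part model mentioned above first estimates the probability of incurring any cost, then the mean cost among those with positive costs; expected cost is the product of the two parts. A minimal sketch with invented out-of-pocket figures, not the study's data:

```python
# Two-part cost model in its simplest plug-in form (hypothetical data).
def two_part_expected_cost(costs):
    """Part 1: P(cost > 0). Part 2: E[cost | cost > 0].
    Expected cost is the product of the two parts."""
    n = len(costs)
    positive = [c for c in costs if c > 0]
    p_any = len(positive) / n
    mean_pos = sum(positive) / len(positive)
    return p_any, mean_pos, p_any * mean_pos

oop = [0, 0, 120, 300, 0, 80, 500, 0, 0, 200]   # annual OOP costs ($), invented
p_any, mean_pos, expected = two_part_expected_cost(oop)
```

In the study's regression form, each part is modeled as a function of covariates (e.g., logistic for part 1, linear for part 2), which is how a predictor such as urinary function can affect the likelihood of any cost and the total amount separately.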
Multi-objective possibilistic model for portfolio selection with transaction cost
NASA Astrophysics Data System (ADS)
Jana, P.; Roy, T. K.; Mazumder, S. K.
2009-06-01
In this paper, we introduce the possibilistic mean value and variance of continuous possibility distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
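One common definition of the possibilistic mean and variance works from the gamma-level sets of a fuzzy number. For a trapezoidal number with core [a, b] and support [a - alpha, b + beta], both quantities can be computed by integrating over the levels; a numerical sketch with illustrative numbers (not the paper's model):

```python
def possibilistic_mean_var(a, b, alpha, beta, n=20000):
    """Possibilistic mean and variance of the trapezoidal fuzzy number with
    core [a, b] and support [a - alpha, b + beta], via midpoint integration
    of the gamma-level sets [a - (1-g)*alpha, b + (1-g)*beta]."""
    mean = var = 0.0
    dg = 1.0 / n
    for i in range(n):
        g = (i + 0.5) * dg
        lo = a - (1.0 - g) * alpha
        hi = b + (1.0 - g) * beta
        mean += g * (lo + hi) * dg          # M(A) = ∫ g*(lo+hi) dg
        var += 0.5 * g * (hi - lo) ** 2 * dg  # Var(A) = 0.5 ∫ g*(hi-lo)^2 dg
    return mean, var

# Symmetric triangular return (core a = b): mean is the center,
# variance is alpha^2 / 6 under this definition.
m, v = possibilistic_mean_var(10.0, 10.0, 3.0, 3.0)
```

A portfolio's possibilistic return is then a weighted combination of the assets' possibilistic means, with the variance and an entropy term entering as the other objectives.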
TH-A-9A-04: Incorporating Liver Functionality in Radiation Therapy Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, V; Epelman, M; Feng, M
2014-06-15
Purpose: Liver SBRT patients have both variable pretreatment liver function (e.g., due to degree of cirrhosis and/or prior treatments) and sensitivity to radiation, leading to high variability in potential liver toxicity with similar doses. This work aims to explicitly incorporate liver perfusion into treatment planning to redistribute dose to preserve well-functioning areas without compromising target coverage. Methods: Voxel-based liver perfusion, a measure of functionality, was computed from dynamic contrast-enhanced MRI. Two optimization models with different cost functions subject to the same dose constraints (e.g., minimum target EUD and maximum critical structure EUDs) were compared. The cost functions minimized were EUD (standard model) and functionality-weighted EUD (functional model) to the liver. The resulting treatment plans delivering the same target EUD were compared with respect to their DVHs, their dose wash difference, the average dose delivered to voxels of a particular perfusion level, and change in number of high-/low-functioning voxels receiving a particular dose. Two-dimensional synthetic and three-dimensional clinical examples were studied. Results: The DVHs of all structures of plans from each model were comparable. In contrast, in plans obtained with the functional model, the average dose delivered to high-/low-functioning voxels was lower/higher than in plans obtained with its standard counterpart. The number of high-/low-functioning voxels receiving high/low dose was lower in the plans that considered perfusion in the cost function than in the plans that did not. Redistribution of dose can be observed in the dose wash differences. Conclusion: Liver perfusion can be used during treatment planning potentially to minimize the risk of toxicity during liver SBRT, resulting in better global liver function.
The functional model redistributes dose in the standard model from higher to lower functioning voxels, while achieving the same target EUD and satisfying dose limits to critical structures. This project is funded by MCubed and grant R01-CA132834.
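The contrast between the two cost functions can be sketched with a generalized EUD in which each voxel's contribution is optionally scaled by its perfusion value. The exponent, doses, and perfusion values below are invented; the paper's exact functionality weighting is not reproduced.

```python
# Generalized EUD vs. a functionality-weighted variant (illustrative numbers).
def gEUD(doses, weights, a=2.0):
    """Weighted generalized EUD: (sum w_i d_i^a / sum w_i)^(1/a)."""
    s = sum(w * d ** a for d, w in zip(doses, weights)) / sum(weights)
    return s ** (1.0 / a)

doses = [20.0, 20.0, 5.0, 5.0]          # Gy, per liver voxel
uniform_w = [1.0, 1.0, 1.0, 1.0]        # standard model: all voxels equal
perfusion = [0.9, 0.8, 0.1, 0.2]        # functional model: perfusion weights

standard = gEUD(doses, uniform_w)
functional = gEUD(doses, perfusion)
```

Here the high dose lands on the well-perfused voxels, so the functionality-weighted EUD is larger than the standard one; an optimizer minimizing it is therefore pushed to move dose toward the poorly functioning voxels, which is the redistribution effect described above.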
Al-Shaikhli, Saif Dawood Salman; Yang, Michael Ying; Rosenhahn, Bodo
2016-12-01
This paper presents a novel method for Alzheimer's disease classification via an automatic 3D caudate nucleus segmentation. The proposed method consists of segmentation and classification steps. In the segmentation step, we propose a novel level set cost function. The proposed cost function is constrained by a sparse representation of local image features using a dictionary learning method. We present coupled dictionaries: a feature dictionary of a grayscale brain image and a label dictionary of a caudate nucleus label image. Using online dictionary learning, the coupled dictionaries are learned from the training data. The learned coupled dictionaries are embedded into a level set function. In the classification step, a region-based feature dictionary is built. The region-based feature dictionary is learned from shape features of the caudate nucleus in the training data. The classification is based on the measure of the similarity between the sparse representation of region-based shape features of the segmented caudate in the test image and the region-based feature dictionary. The experimental results demonstrate the superiority of our method over state-of-the-art methods, achieving high segmentation (91.5%) and classification (92.5%) accuracies. We find that studying caudate nucleus atrophy offers an advantage over studying whole-brain atrophy for detecting Alzheimer's disease.
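The classification step, assigning a test shape to the class whose dictionary reconstructs it best, can be sketched with a 1-sparse code: for each class, find the single atom that best reconstructs the feature vector and compare residuals. The paper uses learned coupled dictionaries and richer sparse codes; the atoms and features below are entirely invented.

```python
# 1-sparse illustration of dictionary-based classification by residual.
def residual(x, atom):
    """Squared residual after reconstructing x with the optimally scaled atom."""
    dd = sum(a * a for a in atom)
    coef = sum(a * b for a, b in zip(x, atom)) / dd   # optimal 1-atom coefficient
    return sum((xi - coef * ai) ** 2 for xi, ai in zip(x, atom))

def classify(x, dictionaries):
    """Assign x to the class whose best atom leaves the smallest residual."""
    scores = {label: min(residual(x, atom) for atom in atoms)
              for label, atoms in dictionaries.items()}
    return min(scores, key=scores.get)

dicts = {                                   # made-up shape-feature atoms
    "AD": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    "healthy": [[0.2, 1.0, 0.9], [0.1, 0.9, 1.0]],
}
label = classify([0.95, 0.25, 0.15], dicts)
```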
Cost analysis of incidental durotomy in spine surgery.
Nandyala, Sreeharsha V; Elboghdady, Islam M; Marquez-Lara, Alejandro; Noureldin, Mohamed N B; Sankaranarayanan, Sriram; Singh, Kern
2014-08-01
Retrospective database analysis. To characterize the consequences of an incidental durotomy with regard to perioperative complications and total hospital costs. There is a paucity of data regarding how an incidental durotomy and its associated complications may relate to total hospital costs. The Nationwide Inpatient Sample database was queried from 2008 to 2011. Patients who underwent cervical or lumbar decompression and/or fusion procedures were identified, stratified by approach, and separated into cohorts based on a documented intraoperative incidental durotomy. Patient demographics, comorbidities (Charlson Comorbidity Index), length of hospital stay, perioperative outcomes, and costs were assessed. Analysis of covariance and multivariate linear regression were used to assess the adjusted mean costs of hospitalization as a function of durotomy. The incidental durotomy rate in cervical and lumbar spine surgery is 0.4% and 2.9%, respectively. Patients with an incidental durotomy incurred a longer hospitalization and a greater incidence of perioperative complications including hematoma and neurological injury (P < 0.001). Regression analysis demonstrated that a cervical durotomy and its postoperative sequelae contributed an additional adjusted $7638 (95% confidence interval, 6489-8787; P < 0.001) to the total hospital costs. Similarly, lumbar durotomy contributed an additional adjusted $2412 (95% confidence interval, 1920-2902; P < 0.001) to the total hospital costs. The approach-specific procedural groups demonstrated similar discrepancies in the mean total hospital costs as a function of durotomy. This analysis of the Nationwide Inpatient Sample database demonstrates that incidental durotomies increase hospital resource utilization and costs. In addition, it seems that a cervical durotomy and its associated complications carry a greater financial burden than a lumbar durotomy. 
Further studies are warranted to investigate the long-term financial implications of incidental durotomies in spine surgery and to reduce the costs associated with this complication. Level of Evidence: 3.
Parameter Optimization for Turbulent Reacting Flows Using Adjoints
NASA Astrophysics Data System (ADS)
Lapointe, Caelan; Hamlington, Peter E.
2017-11-01
The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
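The key property cited above, that the adjoint method's cost scales with the number of objective functions rather than the number of design parameters, can be seen on a toy linear system A(p)u = b with objective J = cᵀu: one adjoint solve Aᵀλ = c yields dJ/dp = -λᵀ(dA/dp)u for every parameter. The 2x2 system below stands in for the flow solver; all values are invented.

```python
# Adjoint gradient on a toy linear model, checked against finite differences.
def solve2(A, b):
    """Direct solve of a 2x2 linear system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (-A[1][0] * b[0] + A[0][0] * b[1]) / det]

def assemble(p):
    """Design parameter p enters the (toy) system matrix."""
    return [[2.0 + p, 1.0], [1.0, 3.0]]

b, c, p = [1.0, 2.0], [1.0, 1.0], 0.5
A = assemble(p)
u = solve2(A, b)                        # forward solve: A u = b
J = c[0] * u[0] + c[1] * u[1]           # objective J = c^T u

lam = solve2([[A[0][0], A[1][0]],       # adjoint solve: A^T lam = c
              [A[0][1], A[1][1]]], c)
dA_dp = [[1.0, 0.0], [0.0, 0.0]]        # dA/dp for this parameterization
dJ_dp = -sum(lam[i] * sum(dA_dp[i][j] * u[j] for j in range(2))
             for i in range(2))         # dJ/dp = -lam^T (dA/dp) u

eps = 1e-6                              # finite-difference check
u2 = solve2(assemble(p + eps), b)
fd = ((c[0] * u2[0] + c[1] * u2[1]) - J) / eps
```

Adding more design parameters only adds cheap products like `dJ_dp`; no extra solves are needed, which is exactly why the approach scales to large design spaces.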
Effects of the Finnish Alzheimer disease exercise trial (FINALEX): a randomized controlled trial.
Pitkälä, Kaisu H; Pöysti, Minna M; Laakkonen, Marja-Liisa; Tilvis, Reijo S; Savikko, Niina; Kautiainen, Hannu; Strandberg, Timo E
2013-05-27
Few rigorous clinical trials have investigated the effectiveness of exercise on the physical functioning of patients with Alzheimer disease (AD). To investigate the effects of intense and long-term exercise on the physical functioning and mobility of home-dwelling patients with AD and to explore its effects on the use and costs of health and social services. A randomized controlled trial. A total of 210 home-dwelling patients with AD living with their spousal caregiver. The 3 trial arms included (1) group-based exercise (GE; 4-hour sessions with approximately 1-hour training) and (2) tailored home-based exercise (HE; 1-hour training), both twice a week for 1 year, and (3) a control group (CG) receiving the usual community care. The Functional Independence Measure (FIM), the Short Physical Performance Battery, and information on the use and costs of social and health care services. All groups deteriorated in functioning during the year after randomization, but deterioration was significantly faster in the CG than in the HE or GE group at 6 (P = .003) and 12 (P = .015) months. The FIM changes at 12 months were -7.1 (95% CI, -3.7 to -10.5), -10.3 (95% CI, -6.7 to -13.9), and -14.4 (95% CI, -10.9 to -18.0) in the HE group, GE group, and CG, respectively. The HE and GE groups had significantly fewer falls than the CG during the follow-up year. The total costs of health and social services for the HE patient-caregiver dyads (in US dollars per dyad per year) were $25,112 (95% CI, $17,642 to $32,581) (P = .13 for comparison with the CG), $22,066 in the GE group ($15,931 to $28,199; P = .03 vs CG), and $34,121 ($24,559 to $43,681) in the CG. An intensive and long-term exercise program had beneficial effects on the physical functioning of patients with AD without increasing the total costs of health and social services or causing any significant adverse effects. anzctr.org.au Identifier: ACTRN12608000037303.
Maciejewski, Matthew L.; Liu, Chuan-Fen; Fihn, Stephan D.
2009-01-01
OBJECTIVE—To compare the ability of generic comorbidity and risk adjustment measures, a diabetes-specific measure, and a self-reported functional status measure to explain variation in health care expenditures for individuals with diabetes. RESEARCH DESIGN AND METHODS—This study included a retrospective cohort of 3,092 diabetic veterans participating in a multisite trial. Two comorbidity measures, four risk adjusters, a functional status measure, a diabetes complication count, and baseline expenditures were constructed from administrative and survey data. Outpatient, inpatient, and total expenditure models were estimated using ordinary least squares regression. Adjusted R2 statistics and predictive ratios were compared across measures to assess overall explanatory power and explanatory power of low- and high-cost subgroups. RESULTS—Administrative data–based risk adjusters performed better than the comorbidity, functional status, and diabetes-specific measures in all expenditure models. The diagnostic cost groups (DCGs) measure had the greatest predictive power overall and for the low- and high-cost subgroups, while the diabetes-specific measure had the lowest predictive power. A model with DCGs and the diabetes-specific measure modestly improved predictive power. CONCLUSIONS—Existing generic measures can be useful for diabetes-specific research and policy applications, but more predictive diabetes-specific measures are needed. PMID:18945927
Schawo, Saskia J; van Eeren, Hester; Soeteman, Djira I; van der Veldt, Marie-Christine; Noom, Marc J; Brouwer, Werner; Busschbach, Jan J V; Hakkaart, Leona
2012-12-01
Many interventions initiated within and financed from the health care sector are not necessarily primarily aimed at improving health. This poses important questions regarding the operationalisation of economic evaluations in such contexts. We investigated whether assessing cost-effectiveness using state-of-the-art methods commonly applied in health care evaluations is feasible and meaningful when evaluating interventions aimed at reducing youth delinquency. A probabilistic Markov model was constructed to create a framework for assessing the cost-effectiveness of systemic interventions in delinquent youth. For illustrative purposes, Functional Family Therapy (FFT), a systemic intervention aimed at improving family functioning and, primarily, reducing delinquent activity in youths, was compared with Treatment as Usual (TAU). "Criminal activity free years" (CAFYs) were introduced as the central outcome measure. Criminal activity may, for example, be based on police contacts or committed crimes. In the absence of extensive data, and for illustrative purposes, the current study based criminal activity on the available literature on recidivism. Furthermore, a literature search was performed to deduce the model's structure and parameters. Common cost-effectiveness methodology could be applied to interventions for youth delinquency. Model characteristics and parameters were derived from the literature and ongoing trial data. The model resulted in an estimate of incremental costs per CAFY and included long-term effects. Illustrative model results point towards dominance of FFT compared with TAU. Using a probabilistic model and the CAFY outcome measure to assess the cost-effectiveness of systemic interventions aimed at reducing delinquency is feasible. However, the model structure is limited to three states, and the CAFY measure was defined rather crudely. Moreover, as the model parameters are retrieved from the literature, the model results are illustrative in the absence of empirical data.
The current model provides a framework to assess the cost-effectiveness of systemic interventions, while taking into account parameter uncertainty and long-term effectiveness. The framework of the model could be used to assess the cost-effectiveness of systemic interventions alongside (clinical) trial data. Consequently, it is suitable to inform reimbursement decisions, since the value for money of systemic interventions can be demonstrated using a decision analytic model. Future research could be focussed on testing the current model based on extensive empirical data, improving the outcome measure and finding appropriate values for that outcome.
Kim, H; Rajagopalan, M S; Beriwal, S; Smith, K J
2017-10-01
Stereotactic radiosurgery (SRS) alone or upfront whole brain radiation therapy (WBRT) plus SRS are the most commonly used treatment options for one to three brain oligometastases. The most recent randomised clinical trial comparing SRS alone with upfront WBRT plus SRS (NCCTG N0574) favoured SRS alone for neurocognitive function, whereas treatment options remain controversial in terms of cognitive decline and local control. The aim of this study was to conduct a cost-effectiveness analysis of these two competing treatments. A Markov model was constructed for patients treated with SRS alone or SRS plus upfront WBRT, based largely on randomised clinical trial data. Costs were based on 2016 Medicare reimbursement. Strategies were compared using the incremental cost-effectiveness ratio (ICER), and effectiveness was measured in quality-adjusted life years (QALYs). One-way and probabilistic sensitivity analyses were carried out. Strategies were evaluated from the healthcare payer's perspective with a willingness-to-pay threshold of $100 000 per QALY gained. In the base case analysis, the median survival was 9 months for both arms. SRS alone resulted in an ICER of $9917 per QALY gained. In one-way sensitivity analyses, results were most sensitive to variation in cognitive decline rates for both groups and median survival rates, but SRS alone remained cost-effective for most parameter ranges. Based on the current available evidence, SRS alone was found to be cost-effective for patients with one to three brain metastases compared with upfront WBRT plus SRS.
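The ICER mechanics behind such a Markov analysis can be sketched with a two-arm cohort run over monthly cycles. Every number below (costs, utilities, monthly death probability) is invented for illustration and is not taken from the study's Medicare inputs.

```python
# Toy cohort model and ICER calculation (all inputs hypothetical).
def cohort(cost_upfront, monthly_cost, utility, p_death, cycles=36):
    """Expected cost and QALYs for a cohort over monthly cycles."""
    alive, qalys, cost = 1.0, 0.0, float(cost_upfront)
    for _ in range(cycles):
        qalys += alive * utility / 12.0   # utility accrued per monthly cycle
        cost += alive * monthly_cost
        alive *= 1.0 - p_death
    return cost, qalys

# p_death = 0.074/month gives a median survival near 9 months (ln 2 / 0.074).
# SRS alone: assumed higher follow-up cost (e.g., salvage) but higher utility;
# WBRT + SRS: assumed higher upfront cost and lower utility (cognitive decline).
c_srs, q_srs = cohort(21000.0, 1000.0, 0.80, 0.074)
c_wbrt, q_wbrt = cohort(26000.0, 500.0, 0.70, 0.074)
icer = (c_srs - c_wbrt) / (q_srs - q_wbrt)   # $ per QALY gained by SRS alone
```

With these toy inputs SRS alone costs slightly more but gains QALYs, so the ICER is positive and falls well under a $100 000/QALY threshold, the same qualitative conclusion the study reports.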
NASA Astrophysics Data System (ADS)
Tombak, Ali
The recent advancement in wireless communications demands an ever increasing improvement in the system performance and functionality with a reduced size and cost. This thesis demonstrates novel RF and microwave components based on ferroelectric and solid-state based tunable capacitor (varactor) technologies for the design of low-cost, small-size and multi-functional wireless communication systems. These include tunable lumped element VHF filters based on ferroelectric varactors, a beam-steering technique which, unlike conventional systems, does not require separate power divider and phase shifters, and a predistortion linearization technique that uses a varactor based tunable R-L-C resonator. Among various ferroelectric materials, Barium Strontium Titanate (BST) is actively being studied for the fabrication of high performance varactors at RF and microwave frequencies. BST based tunable capacitors are presented with typical tunabilities of 4.2:1 with the application of 5 to 10 V DC bias voltages and typical loss tangents in the range of 0.003--0.009 at VHF frequencies. Tunable lumped element lowpass and bandpass VHF filters based on BST varactors are also demonstrated with tunabilities of 40% and 57%, respectively. A new beam-steering technique is developed based on the extended resonance power dividing technique. Phased arrays based on this technique do not require separate power divider and phase shifters. Instead, the power division and phase shifting circuits are combined into a single circuit, which utilizes tunable capacitors. This results in a substantial reduction in the circuit complexity and cost. Phased arrays based on this technique can be employed in mobile multimedia services and automotive collision avoidance radars. A 2-GHz 4-antenna and a 10-GHz 8-antenna extended resonance phased arrays are demonstrated with scan ranges of 20 degrees and 18 degrees, respectively. 
A new predistortion linearization technique for the linearization of RF/microwave power amplifiers is also presented. This technique utilizes a varactor based tunable R-L-C resonator in shunt configuration. Due to the small number of circuit elements required, linearizers based on this technique offer low-cost and simple circuitry, hence can be utilized in handheld and cellular applications. A 1.8 GHz power amplifier with 9 dB gain is linearized using this technique. The linearizer improves the output 1-dB compression point of the power amplifier from 21 to 22.8 dBm. Adjacent channel power ratio (ACPR) is improved approximately 11 dB at an output RF power level of 17.5 dBm. The thesis is concluded by summarizing the main achievements and discussing the future work directions.
ERIC Educational Resources Information Center
Waldecker, Mark
2005-01-01
Education administrators make buying decisions for furniture based on many factors. Cost, durability, functionality, safety and aesthetics represent just a few. Those issues always will be important, but gaining greater recognition in recent years has been the role furniture plays in creating positive, high-performance learning environments. The…
The designer of the 90's: A live demonstration
NASA Technical Reports Server (NTRS)
Green, Tommy L.; Jordan, Basil M., Jr.; Oglesby, Timothy L.
1989-01-01
A survey of design tools to be used by the aircraft designer is given. Structural reliability, maintainability, cost and predictability, and acoustics expert systems are discussed, as well as scheduling, drawing, engineering systems, sizing functions, and standard parts and materials data bases.
Code of Federal Regulations, 2010 CFR
2010-07-01
... overhaul; and (2) An analysis of the cost to implement the overhaul within a year versus a proposed... be based on a formal comprehensive appraisal or a series of formal appraisals of the functional...
ERIC Educational Resources Information Center
Holman, Gareth; Kohlenberg, Robert J.; Tsai, Mavis; Haworth, Kevin; Jacobson, Emily; Liu, Sarah
2012-01-01
Depression and cigarette smoking are recurrent, interacting problems that co-occur at high rates and--especially when depression is chronic--are difficult to treat and associated with costly health consequences. In this paper we present an integrative therapeutic framework for concurrent treatment of these problems based on evidence-based…
Optimal consensus algorithm integrated with obstacle avoidance
NASA Astrophysics Data System (ADS)
Wang, Jianan; Xin, Ming
2013-01-01
This article proposes a new consensus algorithm for the networked single-integrator systems in an obstacle-laden environment. A novel optimal control approach is utilised to achieve not only multi-agent consensus but also obstacle avoidance capability with minimised control efforts. Three cost functional components are defined to fulfil the respective tasks. In particular, an innovative nonquadratic obstacle avoidance cost function is constructed from an inverse optimal control perspective. The other two components are designed to ensure consensus and constrain the control effort. The asymptotic stability and optimality are proven. In addition, the distributed and analytical optimal control law only requires local information based on the communication topology to guarantee the proposed behaviours, rather than all agents' information. The consensus and obstacle avoidance are validated through simulations.
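A minimal sketch of the behavior described above: single-integrator agents run a standard consensus law over a path graph, with a repulsive potential-field term that activates only near an obstacle. The repulsion here is a plain heuristic stand-in for the paper's inverse-optimal avoidance cost, and all gains and positions are invented.

```python
# Consensus + obstacle repulsion for 1-D single integrators (illustrative).
def step(positions, adjacency, obstacle, dt=0.05, k_c=1.0, k_o=0.5, r=1.0):
    """One Euler step: consensus over neighbors plus a repulsion term that is
    active only within radius r of the obstacle."""
    new = []
    for i, xi in enumerate(positions):
        u = 0.0
        for j, xj in enumerate(positions):
            if adjacency[i][j]:
                u += k_c * (xj - xi)          # local consensus term
        d = xi - obstacle
        if abs(d) < r:
            u += k_o * d / (d * d + 1e-9)     # push away from the obstacle
        new.append(xi + dt * u)
    return new

# Consensus run: obstacle far away, agents agree on their average (3.0).
x = [0.0, 2.0, 4.0, 6.0]
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # path graph
for _ in range(2000):
    x = step(x, A, obstacle=-10.0)

# Avoidance run: a lone agent starting inside the radius is pushed outside it.
y = [-1.6]
for _ in range(200):
    y = step(y, [[0]], obstacle=-2.0)
```

Note that only neighbor states appear in each agent's update, matching the article's requirement that the control law be distributed over the communication topology.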
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Simonetto, Andrea
This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
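The benefit of adding a prediction step can be seen on the simplest time-varying problem, tracking the minimizer of a scalar quadratic f(x; t) = 0.5(x - b(t))² with b(t) = sin(t). This unconstrained scalar sketch uses invented step sizes and does not reproduce the paper's constrained first-order scheme.

```python
import math

def run(predict, T=2000, h=0.01, alpha=0.5):
    """Track argmin_x 0.5*(x - b(t))^2, b(t) = sin(t), over T sampled times.
    predict=True adds a first-order prediction step using the Hessian
    (H = 1 for this scalar quadratic); predict=False is correction-only."""
    x, err, b_prev = 0.0, 0.0, 0.0
    for k in range(1, T + 1):
        b = math.sin(k * h)
        if predict:
            # prediction solves H * dx = -grad_tx f; here grad_tx f is
            # approximately -(b - b_prev)/h, so dx = b - b_prev
            x += b - b_prev
        x -= alpha * (x - b)              # correction: one gradient step
        b_prev = b
        if k > 100:                       # record steady-state tracking error
            err = max(err, abs(x - b))
    return err

err_pc = run(predict=True)
err_corr = run(predict=False)
```

Correction-only tracking lags by an error proportional to the drift per sampling interval, while the prediction step removes that lag, which is the tracking-error improvement the paper quantifies analytically.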
GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation
Wang, Fei; Li, Hong; Lu, Mingquan
2017-01-01
Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318
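The GLRT idea above, detect an unknown-amplitude component by comparing its matched-filter energy against the noise level, can be sketched on a toy chip-like template. The signal model, amplitudes, and noise level are all invented; this is not the paper's full MLE cost-function decomposition.

```python
import random

# Toy unknown-amplitude GLRT: T = (s.x)^2 / (sigma^2 * s.s), large T suggests
# the template s is present in the data x (here, an injected spoofing replica).
random.seed(0)

def glrt_stat(x, s, sigma2=1.0):
    sx = sum(a * b for a, b in zip(s, x))
    ss = sum(a * a for a in s)
    return sx * sx / (sigma2 * ss)

n = 256
template = [1.0 if (i // 4) % 2 == 0 else -1.0 for i in range(n)]  # chip-like
noise = [random.gauss(0.0, 1.0) for _ in range(n)]
clean = list(noise)                                     # authentic-only case
spoofed = [v + 0.5 * t for v, t in zip(noise, template)]  # injected replica

t_clean = glrt_stat(clean, template)
t_spoof = glrt_stat(spoofed, template)
```

Under H0 the statistic is roughly chi-squared with one degree of freedom, so a threshold set from that distribution gives the detector its false-alarm rate; the injected component inflates the statistic far beyond it.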
Dishonest Academic Conduct: From the Perspective of the Utility Function.
Sun, Ying; Tian, Rui
Dishonest academic conduct has attracted extensive attention in academic circles. To explore how scholars make decisions according to the principle of maximal utility, the authors constructed a general utility function based on expected utility theory. Concrete utility functions were then deduced for three types of scholars: risk-neutral, risk-averse, and risk-preferring. Following this, the assignment method was adopted to analyze and compare scholars' utilities of academic conduct. It was concluded that changing the values of risk costs, internal condemnation costs, academic benefits, and the subjective estimation of penalties following dishonest academic conduct leads to changes in the utility of academic dishonesty. The results suggest that measures to prevent and govern dishonest academic conduct in scientific research should be formulated according to the distinct effects of these four variables.
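The decision logic can be sketched with a toy expected-utility function for a risk-neutral scholar; the functional form and all numbers are illustrative assumptions, not the paper's model.

```python
def expected_utility(benefit, risk_cost, condemnation_cost, penalty, p_caught):
    """Hypothetical expected utility of dishonest conduct for a
    risk-neutral scholar: academic benefit net of risk and internal
    condemnation costs, minus the subjectively estimated penalty
    weighted by the perceived probability of detection."""
    return benefit - risk_cost - condemnation_cost - p_caught * penalty

# Raising the estimated penalty or detection probability drives the
# utility of dishonesty negative (illustrative numbers only):
low_deterrence = expected_utility(10, 1, 2, 5, p_caught=0.1)
high_deterrence = expected_utility(10, 1, 2, 20, p_caught=0.5)
```

Under this sketch, governance measures work by shifting whichever of the four variables moves the expected utility below zero.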
Scalable maskless patterning of nanostructures using high-speed scanning probe arrays
NASA Astrophysics Data System (ADS)
Chen, Chen; Akella, Meghana; Du, Zhidong; Pan, Liang
2017-08-01
Nanoscale patterning is the key process in manufacturing important products such as semiconductor microprocessors and data storage devices. Many studies have shown that it has the potential to revolutionize the functions of a broad range of products for a wide variety of applications in energy, healthcare, civil, defense and security. However, tools for mass production of these devices usually cost tens of millions of dollars each and are only affordable to the established semiconductor industry. A new method, nominally known as "pattern-on-the-fly", involves scanning an array of optical or electrical probes at high speed to form nanostructures, and offers a new low-cost approach for nanoscale additive patterning. In this paper, we report progress on using this method to pattern self-assembled monolayers (SAMs) on silicon substrates. We also functionalize the substrate with gold nanoparticles via the SAM to show the feasibility of preparing amphiphilic and multi-functional surfaces.
The Effect of Publicized Quality Information on Home Health Agency Choice
Jung, Jeah Kyoungrae; Wu, Bingxiao; Kim, Hyunjee; Polsky, Daniel
2016-01-01
We examine consumers’ use of publicized quality information in Medicare home health care markets, where consumer cost sharing and travel costs are absent. We report two findings. First, agencies with high quality scores are more likely to be preferred by consumers after the introduction of a public reporting program than before. Second, consumers’ use of publicized quality information differs by patient group. Community-based patients have slightly larger responses to public reporting than hospital-discharged patients. Patients with functional limitations at the start of their care, at least among hospital-discharged patients, have a larger response to the reported functional outcome measure than those without functional limitations. In all cases of significant marginal effects, magnitudes are small. We conclude that the current public reporting approach is unlikely to have critical impacts on home health agency choice. Identifying and releasing quality information that is meaningful to consumers may help increase consumers’ use of public reports. PMID:26719047
Functional Neuroimaging in Psychopathy.
Del Casale, Antonio; Kotzalidis, Georgios D; Rapinesi, Chiara; Di Pietro, Simone; Alessi, Maria Chiara; Di Cesare, Gianluigi; Criscuolo, Silvia; De Rossi, Pietro; Tatarelli, Roberto; Girardi, Paolo; Ferracuti, Stefano
2015-01-01
Psychopathy is associated with cognitive and affective deficits causing disruptive, harmful and selfish behaviour. These have considerable societal costs due to recurrent crime and property damage. A better understanding of the neurobiological bases of psychopathy could improve therapeutic interventions, reducing the related social costs. To analyse the major functional neural correlates of psychopathy, we reviewed functional neuroimaging studies conducted on persons with this condition. We searched the PubMed database for papers dealing with functional neuroimaging and psychopathy, with a specific focus on how neural functional changes may correlate with task performances and human behaviour. Psychopathy-related behavioural disorders consistently correlated with dysfunctions in brain areas of the orbitofrontal-limbic (emotional processing and somatic reaction to emotions; behavioural planning and responsibility taking), anterior cingulate-orbitofrontal (correct assignment of emotional valence to social stimuli; violent/aggressive behaviour and challenging attitude) and prefrontal-temporal-limbic (emotional stimuli processing/response) networks. Dysfunctional areas more consistently included the inferior frontal, orbitofrontal, dorsolateral prefrontal, ventromedial prefrontal, temporal (mainly the superior temporal sulcus) and cingulate cortices, the insula, amygdala, ventral striatum and other basal ganglia. Emotional processing and learning, and several social and affective decision-making functions are impaired in psychopathy, which correlates with specific changes in neural functions. © 2015 S. Karger AG, Basel.
Striegel Weissman, Ruth; Rosselli, Francine
2017-01-01
Eating disorders are serious mental disorders as reflected in significant impairments in health and psychosocial functioning and excess mortality. Despite the clear evidence of clinical significance and despite availability of evidence-based, effective treatments, research has shown a paradox of elevated health services use and, yet, infrequent treatment specifically targeting the eating disorder (i.e., high unmet treatment need). This review paper summarizes key studies conducted in collaboration with G. Terence Wilson and offers an update of the research literature published since 2011 in three research areas that undergirded our collaborative research project: unmet treatment needs, cost of illness, and cost-effectiveness of treatments. With regard to unmet treatment needs, epidemiological studies find that the number of individuals with an eating disorder who do not receive disorder-specific treatment continues to remain high. Cost-of-illness studies show that eating disorders are associated with substantial financial burdens for individuals, their families, and society, yet comprehensive examination of costs across public sectors is lacking. Cost measures vary widely, making it difficult to draw firm conclusions. Hospitalization is a major driver of medical costs incurred by individuals with an eating disorder. Only a handful of cost-effectiveness studies have been conducted, leaving policy makers with little information on which to base decisions about allocation of resources to help reduce the burden of suffering attributable to eating disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.
Systematic review of the incremental costs of interventions that increase immunization coverage.
Ozawa, Sachiko; Yemeke, Tatenda T; Thompson, Kimberly M
2018-05-10
Achieving and maintaining high vaccination coverage requires investments, but the costs and effectiveness of interventions to increase coverage remain poorly characterized. We conducted a systematic review of the literature to identify peer-reviewed studies published in English that reported interventions aimed at increasing immunization coverage and the associated costs and effectiveness of the interventions. We found limited information in the literature, with many studies reporting effectiveness estimates, but not providing cost information. Using the available data, we developed a cost function to support future programmatic decisions about investments in interventions to increase immunization coverage for relatively low and high-income countries. The cost function estimates the non-vaccine cost per dose of interventions to increase absolute immunization coverage by one percent, through either campaigns or routine immunization. The cost per dose per percent increase in absolute coverage increased with higher baseline coverage, demonstrating increasing incremental costs required to reach higher coverage levels. Future studies should evaluate the performance of the cost function and add to the database of available evidence to better characterize heterogeneity in costs and generalizability of the cost function. Copyright © 2018. Published by Elsevier Ltd.
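The qualitative shape of such a cost function, with the incremental cost per dose rising as baseline coverage approaches 100%, can be sketched as follows. The functional form and constants are assumptions for illustration only, not the paper's fitted model.

```python
def incremental_cost_per_dose(baseline_coverage, base_cost=0.50):
    """Illustrative non-vaccine cost (per dose) of raising absolute
    immunization coverage by one percentage point; costs grow without
    bound as the remaining unvaccinated share shrinks."""
    if not 0.0 <= baseline_coverage < 1.0:
        raise ValueError("coverage must be in [0, 1)")
    return base_cost / (1.0 - baseline_coverage)

# Reaching the last few percent is the most expensive:
costs = [incremental_cost_per_dose(c) for c in (0.5, 0.8, 0.95)]
```

Any convex form with this monotonicity captures the paper's central finding of increasing incremental costs at higher coverage levels.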
Lin, Xiaodong; Liu, Yaqing; Deng, Jiankang; Lyu, Yanlong; Qian, Pengcheng; Li, Yunfei; Wang, Shuo
2018-02-21
The integration of multiple DNA logic gates on a universal platform to implement advance logic functions is a critical challenge for DNA computing. Herein, a straightforward and powerful strategy in which a guanine-rich DNA sequence lighting up a silver nanocluster and fluorophore was developed to construct a library of logic gates on a simple DNA-templated silver nanoclusters (DNA-AgNCs) platform. This library included basic logic gates, YES, AND, OR, INHIBIT, and XOR, which were further integrated into complex logic circuits to implement diverse advanced arithmetic/non-arithmetic functions including half-adder, half-subtractor, multiplexer, and demultiplexer. Under UV irradiation, all the logic functions could be instantly visualized, confirming an excellent repeatability. The logic operations were entirely based on DNA hybridization in an enzyme-free and label-free condition, avoiding waste accumulation and reducing cost consumption. Interestingly, a DNA-AgNCs-based multiplexer was, for the first time, used as an intelligent biosensor to identify pathogenic genes, E. coli and S. aureus genes, with a high sensitivity. The investigation provides a prototype for the wireless integration of multiple devices on even the simplest single-strand DNA platform to perform diverse complex functions in a straightforward and cost-effective way.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NASA Astrophysics Data System (ADS)
Almansouri, Hani; Johnson, Christi R; Clayton, Dwight A; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector
2017-02-01
All commercial nuclear power plants (NPPs) in the United States contain concrete structures. These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and the degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Concrete structures in NPPs are often inaccessible and contain large volumes of massively thick concrete. While acoustic imaging using the synthetic aperture focusing technique (SAFT) works adequately well for thin specimens of concrete such as concrete transportation structures, enhancements are needed for heavily reinforced, thick concrete. We argue that image reconstruction quality for acoustic imaging in thick concrete could be improved with Model-Based Iterative Reconstruction (MBIR) techniques. MBIR works by designing a probabilistic model for the measurements (forward model) and a probabilistic model for the object (prior model). Both models are used to formulate an objective function (cost function). The final step in MBIR is to optimize the cost function. Previously, we have demonstrated a first implementation of MBIR for an ultrasonic transducer array system. The original forward model has been upgraded to account for direct arrival signal. Updates to the forward model will be documented and the new algorithm will be assessed with synthetic and empirical samples.
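The MBIR objective described above, a data-fit term from the forward model plus a regularizing prior, can be sketched generically. The linear forward model and quadratic smoothness prior below are simplifying assumptions; the paper's ultrasonic forward model is far richer.

```python
import numpy as np

def mbir_cost(x, A, y, beta):
    """MBIR-style objective: negative log-likelihood of the measurements
    y under a linear Gaussian forward model A, plus a quadratic
    smoothness prior on neighbouring image values, weighted by beta."""
    data_fit = 0.5 * np.sum((A @ x - y) ** 2)
    prior = 0.5 * beta * np.sum(np.diff(x) ** 2)
    return data_fit + prior

# The reconstruction is the minimizer of this cost; here we just check
# that the ground truth scores better than a perturbed image.
A = np.eye(3)
x_true = np.array([1.0, 2.0, 3.0])
y = A @ x_true
```

In practice the optimization step uses iterative solvers (e.g. coordinate descent or gradient-based methods) because the image volume makes the cost function high-dimensional.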
Analysis and optimization of hybrid electric vehicle thermal management systems
NASA Astrophysics Data System (ADS)
Hamut, H. S.; Dincer, I.; Naterer, G. F.
2014-02-01
In this study, the thermal management system of a hybrid electric vehicle is optimized using single and multi-objective evolutionary algorithms in order to maximize the exergy efficiency and minimize the cost and environmental impact of the system. The objective functions are defined and decision variables, along with their respective system constraints, are selected for the analysis. In the multi-objective optimization, a Pareto frontier is obtained and a single desirable optimal solution is selected based on the LINMAP decision-making process. The corresponding solutions are compared against the exergetic, exergoeconomic and exergoenvironmental single objective optimization results. The results show that the exergy efficiency, total cost rate and environmental impact rate for the baseline system are determined to be 0.29, ¢28 h-1 and 77.3 mPts h-1, respectively. Moreover, based on the exergoeconomic optimization, 14% higher exergy efficiency and 5% lower cost can be achieved, compared to baseline parameters, at the expense of a 14% increase in the environmental impact. Based on the exergoenvironmental optimization, a 13% higher exergy efficiency and 5% lower environmental impact can be achieved at the expense of a 27% increase in the total cost.
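The multi-objective step can be illustrated with a generic Pareto-frontier filter (minimization in every objective); selecting a single point from the frontier would then use a decision rule such as LINMAP. The objective pairs below are made-up illustrations, not the study's values.

```python
def pareto_front(points):
    """Return the points not dominated by any other point, where q
    dominates p if q is no worse in every objective and differs in at
    least one (all objectives minimized)."""
    def dominates(q, p):
        return q != p and all(qi <= pi for qi, pi in zip(q, p))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cost rate, environmental impact rate) pairs;
# (4, 4) is dominated by (2, 2) and drops out of the frontier.
front = pareto_front([(1, 5), (2, 2), (5, 1), (4, 4)])
```

The trade-offs reported in the abstract (lower cost at higher environmental impact, and vice versa) are exactly moves along such a frontier.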
Evaluation of Life Cycle Assessment (LCA) for Roadway Drainage Systems.
Byrne, Diana M; Grabowski, Marta K; Benitez, Amy C B; Schmidt, Arthur R; Guest, Jeremy S
2017-08-15
Roadway drainage design has traditionally focused on cost-effectively managing water quantity; however, runoff carries pollutants, posing risks to the local environment and public health. Additionally, construction and maintenance incur costs and contribute to global environmental impacts. While life cycle assessment (LCA) can potentially capture local and global environmental impacts of roadway drainage and other stormwater systems, LCA methodology must be evaluated because stormwater systems differ from wastewater and drinking water systems to which LCA is more frequently applied. To this end, this research developed a comprehensive model linking roadway drainage design parameters to LCA and life cycle costing (LCC) under uncertainty. This framework was applied to 10 highway drainage projects to evaluate LCA methodological choices by characterizing environmental and economic impacts of drainage projects and individual components (basin, bioswale, culvert, grass swale, storm sewer, and pipe underdrain). The relative impacts of drainage components varied based on functional unit choice. LCA inventory cutoff criteria evaluation showed the potential for cost-based criteria, which performed better than mass-based criteria. Finally, the local aquatic benefits of grass swales and bioswales offset global environmental impacts for four impact categories, highlighting the need to explicitly consider local impacts (i.e., direct emissions) when evaluating drainage technologies.
Wildfire Suppression Costs for Canada under a Changing Climate
Stocks, Brian J.; Gauthier, Sylvie
2016-01-01
Climate-influenced changes in fire regimes in northern temperate and boreal regions will have both ecological and economic ramifications. We examine possible future wildfire area burned and suppression costs using a recently compiled historical (i.e., 1980–2009) fire management cost database for Canada and several Intergovernmental Panel on Climate Change (IPCC) climate projections. Area burned was modelled as a function of a climate moisture index (CMI), and fire suppression costs then estimated as a function of area burned. Future estimates of area burned were generated from projections of the CMI under two emissions pathways for four General Circulation Models (GCMs); these estimates were constrained to ecologically reasonable values by incorporating a minimum fire return interval of 20 years. Total average annual national fire management costs are projected to increase to just under $1 billion (a 60% real increase from the 1980–2009 period) under the low greenhouse gas emissions pathway and $1.4 billion (119% real increase from the base period) under the high emissions pathway by the end of the century. For many provinces, annual costs that are currently considered extreme (i.e., occur once every ten years) are projected to become commonplace (i.e., occur once every two years or more often) as the century progresses. It is highly likely that evaluations of current wildland fire management paradigms will be necessary to avoid drastic and untenable cost increases as the century progresses. PMID:27513660
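The two-stage model, area burned as a function of the climate moisture index (CMI) and suppression cost as a function of area burned, chains together as in this sketch. The link functions and all coefficients are placeholders, not the paper's fitted values.

```python
import math

def area_burned(cmi, a=5.0, b=-0.8):
    """Illustrative log-linear response: drier conditions (lower CMI)
    mean exponentially more area burned. Coefficients are made up."""
    return math.exp(a + b * cmi)

def suppression_cost(area, cost_per_unit=2000.0):
    """Suppression cost modelled as a function of area burned
    (a linear placeholder for the paper's fitted relationship)."""
    return cost_per_unit * area

# A drier projection (lower CMI) raises the expected annual cost:
wet = suppression_cost(area_burned(2.0))
dry = suppression_cost(area_burned(0.0))
```

The paper additionally caps projected area burned using a minimum 20-year fire return interval, which a full implementation would apply before the cost stage.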
Telemedicine and the National Information Infrastructure
Jones, Mary Gardiner
1997-01-01
Health care is shifting from a focus on hospital-based acute care toward prevention, promotion of wellness, and maintenance of function in community and home-based facilities. Telemedicine can facilitate this shifted focus, but the bulk of the current projects emphasize academic medical center consultations to rural hospitals. Home-based projects encounter barriers of cost and inadequate infrastructure. The 1996 Telecommunications Act as implemented by the Federal Communications Commission holds out significant promise to overcome these barriers, although it has serious limitations in its application to health care providers. Health care advocates must work actively on the federal, state, and local public and private sector levels to address these shortcomings and develop cost-effective partnerships with other community-based organizations to build network links to facilitate telemedicine-generated services to the home, where the majority of health care decisions are made. PMID:9391928
Gundugurti, Prasad Rao; Nagpal, Rajesh; Sheth, Ashit; Narang, Prashant; Gawande, Sonal; Singh, Vikram
2017-12-01
Schizophrenia is associated with functional challenges for patients; relapses in schizophrenia may lead to increased treatment costs and poor quality of life. This SUSTAIN-I study was conducted to establish psychiatrists' perspective on the impact of long-acting injectable (LAI) antipsychotics on the socio-economic and functional burden of schizophrenia. This cross-sectional, survey-based study was conducted in 5 cities in India. Psychiatrists (≥5 years of experience) working in clinics, psychiatric and government hospitals, and rehabilitation centers were included and administered a specially designed questionnaire to elicit information on their clinical practice and prescription patterns. Perceived treatment costs for LAIs versus oral antipsychotic treatments (OATs) and relapse rates were assessed. Descriptive statistics were used to summarize results. A total of 31 physicians completed this survey. In the acute phase, OAT prescription was higher, whereas chronic patients were treated with either OATs or LAIs. Treatment with LAIs was the preferred treatment in 9% of chronic cases. Reduced relapse rates were observed with LAI treatment: 12% of patients on LAIs relapsed as compared with 60% of patients on OATs. Monthly medication cost for oral medications was lower ($8-$17) than for short-acting injectables ($22-$50). For chronic cases, atypical antipsychotic costs (oral: $11.7-25, LAI: $150-167) were higher than typical antipsychotic costs (oral: $4-5, LAI: $5-25). Of the total expenses incurred, cost for hospital admissions was the largest component (78%). Despite enhanced treatment adherence and potential to lower risk of rehospitalizations from relapse, LAIs are not the preferred treatment choice for patients with schizophrenia in India, owing to their perceived high costs. Copyright © 2017 Elsevier B.V. All rights reserved.
Lung vessel segmentation in CT images using graph-cuts
NASA Astrophysics Data System (ADS)
Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.
2016-03-01
Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the amount of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure is very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 score 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
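The combined cost can be sketched as a per-voxel unary term in the spirit of the paper's formulation: appearance from CT intensity (rescaled to [0, 1]) and shape from Hessian vesselness. The mixing weight and threshold below are guesses, not the paper's optimized parameters.

```python
import numpy as np

def foreground_cost(intensity, vesselness, w=0.5):
    """Sketch of a per-voxel cost for labelling a voxel as vessel:
    bright (high intensity) and tubular (high vesselness) voxels are
    cheap to label foreground; w balances appearance against shape."""
    return w * (1.0 - intensity) + (1.0 - w) * (1.0 - vesselness)

# Pruning clearly-background voxels before building the graph, as the
# paper does, via a threshold on the vesselness map:
vesselness = np.array([0.01, 0.40, 0.90])
intensity = np.array([0.10, 0.60, 0.80])
nodes = vesselness > 0.05          # only these voxels become graph nodes
costs = foreground_cost(intensity, vesselness)
```

Graph-cuts then minimizes the sum of these unary costs plus pairwise smoothness terms over neighbouring retained voxels, which is what makes the segmentation robust at bifurcations and boundaries.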
A method for spectral DNS of low Rm channel flows based on the least dissipative modes
NASA Astrophysics Data System (ADS)
Kornet, Kacper; Pothérat, Alban
2015-10-01
We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow, when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis with elements that already include these fine structures so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, and in particular, non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code and it is found to provide accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.
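The core numerical step for a non-orthogonal functional basis is projecting a field onto the basis via the Gram matrix rather than by simple inner products. This is a generic discrete sketch of that step, not the authors' MHD code; the basis vectors are hypothetical.

```python
import numpy as np

def project_nonorthogonal(f, basis):
    """Least-squares coefficients of f in a possibly non-orthogonal
    basis: solve the Gram system G c = b, with G_ij = <phi_i, phi_j>
    and b_i = <phi_i, f>. For an orthogonal basis G is diagonal and
    this reduces to the usual inner-product projection."""
    B = np.column_stack(basis)
    G = B.T @ B          # Gram matrix of the basis
    b = B.T @ f          # inner products of the basis with f
    return np.linalg.solve(G, b)

# Hypothetical non-orthogonal basis sampled on 4 grid points:
phi1 = np.array([1.0, 1.0, 1.0, 1.0])
phi2 = np.array([0.0, 1.0, 2.0, 3.0])
f = 2.0 * phi1 + 0.5 * phi2
c = project_nonorthogonal(f, [phi1, phi2])
```

The extra cost of solving the Gram system is the price of a basis whose elements already contain the thin boundary layers, which is what lets the method use far fewer modes overall.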
Fatoye, Francis; Haigh, Carol
2016-05-01
To examine the cost-effectiveness of a semi-rigid ankle brace to facilitate return to work following first-time acute ankle sprains. Economic evaluation based on cost-utility analysis. Ankle sprains are a source of morbidity and absenteeism from work, accounting for 15-20% of all sports injuries. Semi-rigid ankle bracing and taping are functional treatment interventions used by Musculoskeletal Physiotherapists and Nurses to facilitate return to work following acute ankle sprains. A decision model analysis, based on cost-utility analysis from the perspective of the National Health Service, was used. The primary outcome measure was the incremental cost-effectiveness ratio, based on quality-adjusted life years. Costs and quality of life data were derived from published literature, while model clinical probabilities were sourced from Musculoskeletal Physiotherapists. The cost and quality-adjusted life years gained using the semi-rigid ankle brace were £184 and 0.72, respectively, whereas the cost and quality-adjusted life years gained following taping were £155 and 0.61, respectively. The incremental cost-effectiveness ratio for the semi-rigid brace was £263 per quality-adjusted life year. Probabilistic sensitivity analysis showed that the ankle brace provided the highest net benefit, hence the preferred option. Taping is a cheaper intervention compared with the ankle brace to facilitate return to work following first-time ankle sprains. However, the incremental cost-effectiveness ratio observed for the ankle brace was less than the National Institute for Health and Care Excellence threshold and the intervention had a higher net benefit, suggesting that it is a cost-effective intervention. Decision-makers may be willing to pay £263 for an additional quality-adjusted life year.
The findings of this economic evaluation provide justification for the use of semi-rigid ankle brace by Musculoskeletal Physiotherapists and Nurses to facilitate return to work in individuals with first-time ankle sprains. © 2016 John Wiley & Sons Ltd.
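The reported ICER follows directly from the stated costs and QALYs; with the rounded figures given in the abstract the ratio comes out near the quoted £263 per QALY (the small difference presumably reflects unrounded inputs in the original analysis).

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra
    quality-adjusted life year of the new intervention over the old."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Brace: £184 and 0.72 QALYs; taping: £155 and 0.61 QALYs (from the
# abstract). Ratio from these rounded inputs is about £264/QALY.
ratio = icer(184, 0.72, 155, 0.61)
```

An intervention is deemed cost-effective when this ratio falls below the decision-maker's willingness-to-pay threshold, as the abstract argues against the NICE threshold.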
Paper-based piezoelectric touch pads with hydrothermally grown zinc oxide nanowires.
Li, Xiao; Wang, Yu-Hsuan; Zhao, Chen; Liu, Xinyu
2014-12-24
This paper describes a new type of paper-based piezoelectric touch pad integrating zinc oxide nanowires (ZnO NWs), which can serve as user interfaces in paper-based electronics. The sensing functionality of these touch pads is enabled by the piezoelectric property of ZnO NWs grown on paper using a simple, cost-efficient hydrothermal method. A piece of ZnO-NW paper with two screen-printed silver electrodes forms a touch button, and touch-induced electric charges from the button are converted into a voltage output using a charge amplifier circuit. A touch pad consisting of an array of buttons can be readily integrated into paper-based electronic devices, allowing user input of information for various purposes such as programming, identification checking, and gaming. This novel design features ease of fabrication, low cost, ultrathin structure, and good compatibility with techniques in printed electronics, and further enriches the available technologies of paper-based electronics.
Specialist home-based nursing services for children with acute and chronic illnesses.
Parab, Chitra S; Cooper, Carolyn; Woolfenden, Susan; Piper, Susan M
2013-06-15
Specialist paediatric home-based nursing services have been proposed as a cost-effective means of reducing distress resulting from hospital admissions, while enhancing primary care and reducing length of hospital stay. This review is an update of our original review, which was published in 2006. To evaluate specialist home-based nursing services for children with acute and chronic illnesses. We searched the following databases in February 2012: the Cochrane Central Register of Controlled Trials (CENTRAL) in The Cochrane Library 2012 Issue 2, Ovid MEDLINE, EMBASE, PsycINFO, CINAHL and Sociological Abstracts. We also searched ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform. No language restrictions were applied. Randomised controlled trials (RCTs) of children from birth to age 18 years with acute or chronic illnesses allocated to specialist home-based nursing services compared with conventional health care. Outcomes included utilisation of health care, physical and mental health, satisfaction, adverse health outcomes and costs. Two review authors extracted data from the studies independently and resolved any discrepancies by recourse to a third author. Meta-analysis was not appropriate because of the clinical diversity of the studies and the lack of common outcome measures. We screened 4226 titles to yield seven RCTs with a total of 840 participants. Participants, interventions and outcomes were diverse. No significant differences were reported in health outcomes; two studies reported a reduction in the hospital stay with no difference in the hospital readmission rates. Three studies reported a reduction in parental anxiety and improvement in child behaviours was reported in three studies. Overall increased parental satisfaction was reported in three studies. Also, better parental coping and family functioning was reported in one study. 
By contrast, one study each reported no impact on parental burden of care or on functional status of children. Home care was reported as more costly for service providers with substantial cost savings for the family in two studies, while one study revealed no significant cost benefits for the family. Current research does not provide supporting evidence for a reduction in access to hospital services or a reduction in hospital readmission rate for children with acute and chronic illnesses using specialist home-based nursing services; however, the only summary finding across a few studies was that there is a significant decrease in length of hospitalisation. The preliminary results show no adverse impact on physical health outcomes and a number of papers reported improved satisfaction with home-based care. Further trials are required, measuring health, satisfaction, service utilisation and long-term costs.
GPU-accelerated adjoint algorithmic differentiation
NASA Astrophysics Data System (ADS)
Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe
2016-03-01
Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the ;tape;. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
GPU-Accelerated Adjoint Algorithmic Differentiation.
Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe
2016-03-01
Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
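The taping and backpropagation scheme described in the abstract can be illustrated with a minimal reverse-mode AD sketch. This is not the authors' AAD software, just the underlying idea: each operation records its inputs and local partial derivatives on a tape, and the reverse pass walks the tape backwards accumulating adjoints.

```python
class Var:
    """A scalar that records every operation on a shared tape."""

    def __init__(self, value, tape=None):
        self.value = value
        self.adj = 0.0  # adjoint, filled in during the reverse pass
        self.tape = tape if tape is not None else []

    def _record(self, value, partials):
        # partials: list of (input Var, d(out)/d(input)) pairs
        out = Var(value, self.tape)
        self.tape.append((out, partials))
        return out

    def __mul__(self, other):
        return self._record(self.value * other.value,
                            [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return self._record(self.value + other.value,
                            [(self, 1.0), (other, 1.0)])


def backprop(output):
    """Reverse pass: propagate adjoints from the output back to the inputs."""
    output.adj = 1.0
    for out, partials in reversed(output.tape):
        for var, d in partials:
            var.adj += out.adj * d


# f(x, y) = x*y + x  =>  df/dx = y + 1,  df/dy = x
tape = []
x, y = Var(3.0, tape), Var(4.0, tape)
f = x * y + x
backprop(f)
print(x.adj, y.adj)  # 5.0 3.0
```

The memory pressure the abstract describes is visible here: the tape holds one entry per elementary operation, which is exactly what the vector/matrix intrinsics are meant to collapse.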
PMID:26941443
Cost Efficiency in Public Higher Education.
ERIC Educational Resources Information Center
Robst, John
This study used the frontier cost function framework to examine cost efficiency in public higher education. The frontier cost function estimates the minimum predicted cost for producing a given amount of output. Data from the annual Almanac issues of the "Chronicle of Higher Education" were used to calculate state level enrollments at two-year and…
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. Estimation of the conductivity field yields low-resolution images compared with other technologies and carries a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. This work addresses the estimation of that low-dimensional information, proposing optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. To illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. 
The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
Design of Water Temperature Control System Based on Single Chip Microcomputer
NASA Astrophysics Data System (ADS)
Tan, Hanhong; Yan, Qiyan
2017-12-01
In this paper, we introduce a multi-function water temperature controller designed around a 51-series single-chip microcomputer. The controller provides automatic and manual water filling, water temperature setting, real-time display of water level and temperature, and an alarm function, while offering a simple structure, high reliability and low cost. Water temperature controllers currently on the market are mostly based on bimetallic strips, which offer low temperature-control accuracy, poor reliability and a single function. With the development of microelectronics, single-chip microcomputers have become increasingly capable and inexpensive, and are widely used in many applications. Applying a single-chip microcomputer to a water temperature controller yields a simple design, high reliability and easy functional expansion. Against this background, this paper focuses on intelligent control in the temperature controller.
Metabolic costs of daily activity in older adults (Chores XL) study: design and methods.
Corbett, Duane B; Wanigatunga, Amal A; Valiani, Vincenzo; Handberg, Eileen M; Buford, Thomas W; Brumback, Babette; Casanova, Ramon; Janelle, Christopher M; Manini, Todd M
2017-06-01
For over 20 years, normative data have guided the prescription of physical activity. These data have since been applied to research and used to plan interventions. While they seemingly provide accurate estimates of the metabolic cost of daily activities in young adults, their accuracy for older adults is less clear. As such, a thorough evaluation of the metabolic cost of daily activities in community-dwelling adults across the lifespan is needed. The Metabolic Costs of Daily Activity in Older Adults Study is a cross-sectional study designed to compare the metabolic cost of daily activities in 250 community-dwelling adults across the lifespan. Participants (20+ years) performed 38 common daily activities while expiratory gases were measured using a portable indirect calorimeter (Cosmed K4b2). The metabolic cost was examined as a metabolic equivalent value (O2 uptake relative to 3.5 mL·min⁻¹·kg⁻¹), as a function of work rate (metabolic economy), and as a value relative to resting and peak oxygen uptake. The primary objective is to determine age-related differences in the metabolic cost of common lifestyle and exercise activities. Secondary objectives include (a) investigating the effect of functional impairment on the metabolic cost of daily activities, (b) evaluating the validity of perception-based measurement of exertion across the lifespan, and (c) validating activity sensors for estimating the type and intensity of physical activity. Results of this study are expected to improve the effectiveness by which physical activity and nutrition are recommended for adults across the lifespan.
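The metabolic-equivalent convention the abstract uses (one MET corresponds to a resting O2 uptake of 3.5 mL·min⁻¹·kg⁻¹) is simple arithmetic; a minimal sketch, with an illustrative measurement value not taken from the study:

```python
def mets(vo2_ml_min_kg: float) -> float:
    """Convert measured oxygen uptake (mL·min⁻¹·kg⁻¹) to metabolic
    equivalents, using the resting convention of 3.5 mL·min⁻¹·kg⁻¹ per MET."""
    return vo2_ml_min_kg / 3.5


# e.g. an activity measured at 12.25 mL·min⁻¹·kg⁻¹ (hypothetical value)
print(mets(12.25))  # 3.5
```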
A multi-pad electrode based functional electrical stimulation system for restoration of grasp
2012-01-01
Background Functional electrical stimulation (FES) applied via transcutaneous electrodes is a common rehabilitation technique for assisting grasp in patients with central nervous system lesions. To improve the stimulation effectiveness of conventional FES, we introduce multi-pad electrodes and a new stimulation paradigm. Methods The new FES system comprises an electrode composed of small pads that can be activated individually. This electrode allows the targeting of motoneurons that activate synergistic muscles and produce a functional movement. The new stimulation paradigm allows asynchronous activation of motoneurons and provides controlled spatial distribution of the electrical charge that is delivered to the motoneurons. We developed an automated technique for the determination of the preferred electrode based on a cost function that considers the required movement of the fingers and the stabilization of the wrist joint. The data used within the cost function come from a sensorized garment that is easy to implement and does not require calibration. The design of the system also includes the possibility for fine-tuning and adaptation with a manually controllable interface. Results The device was tested on three stroke patients. The results show that the multi-pad electrodes provide the desired level of selectivity and can be used for generating a functional grasp. The results also show that the procedure, when performed on a specific user, yields the preferred electrode configuration for that patient. The findings from this study are of importance for the application of transcutaneous stimulation in the clinical and home environments. PMID:23009589
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be among the most efficient algorithms for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the most effective shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. CSA gives better results than a genetic algorithm (GA) both without and with the SVC.
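Cuckoo Search itself is a standard metaheuristic; the following is a generic, self-contained sketch on a toy quadratic cost, not the paper's OPF formulation (a real OPF objective would replace the toy cost and add power-flow constraints). Lévy-flight steps propose new solutions and a fraction of the worst nests is abandoned each generation:

```python
import numpy as np
from math import gamma, sin, pi

def cuckoo_search(cost, lb, ub, n_nests=15, pa=0.25, n_iter=200, seed=0):
    """Minimise `cost` over the box [lb, ub] with a basic Cuckoo Search."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    beta = 1.5  # Lévy exponent; sigma via Mantegna's algorithm
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = rng.uniform(lb, ub, (n_nests, dim))
    fitness = np.array([cost(x) for x in nests])
    best_x, best_f = nests[fitness.argmin()].copy(), fitness.min()
    for _ in range(n_iter):
        # Lévy-flight moves around the current nests, biased toward the best
        u = rng.normal(0.0, sigma, (n_nests, dim))
        v = rng.normal(0.0, 1.0, (n_nests, dim))
        step = u / np.abs(v) ** (1 / beta)
        trial = np.clip(nests + 0.01 * step * (nests - best_x), lb, ub)
        trial_fit = np.array([cost(x) for x in trial])
        better = trial_fit < fitness
        nests[better], fitness[better] = trial[better], trial_fit[better]
        # abandon a fraction pa of the nests and rebuild them at random
        drop = rng.random(n_nests) < pa
        nests[drop] = rng.uniform(lb, ub, (drop.sum(), dim))
        fitness[drop] = [cost(x) for x in nests[drop]]
        if fitness.min() < best_f:
            best_x, best_f = nests[fitness.argmin()].copy(), fitness.min()
    return best_x, best_f


# toy quadratic "generation cost" with minimum at (2, -1)
x_best, f_best = cuckoo_search(lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2,
                               np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
print(x_best, f_best)
```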
Self-organizing maps for learning the edit costs in graph matching.
Neuhaus, Michel; Bunke, Horst
2005-06-01
Although graph matching and graph edit distance computation have become areas of intensive research recently, the automatic inference of the cost of edit operations has remained an open problem. In the present paper, we address the issue of learning graph edit distance cost functions for numerically labeled graphs from a corpus of sample graphs. We propose a system of self-organizing maps (SOMs) that represent the distance measuring spaces of node and edge labels. Our learning process is based on the concept of self-organization. It adapts the edit costs in such a way that the similarity of graphs from the same class is increased, whereas the similarity of graphs from different classes decreases. The learning procedure is demonstrated on two different applications involving line drawing graphs and graphs representing diatoms, respectively.
Lublóy, Ágnes; Keresztúri, Judit Lilla; Benedek, Gábor
2017-10-01
Improving patient care coordination is critical for achieving better health outcome measures at reduced cost. However, system-level assessment of the results of patient care coordination is lacking. In this report, based on administrative healthcare data, a provider-level care coordination measure is developed to assess the function of primary care at the system level. In a sample of 31 070 patients with diabetes we find that the type of collaborative relationship general practitioners build up with specialists is associated with prescription drug costs. Regulating access to secondary care might result in cost savings through improved care coordination. © The Author 2017. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
NASA Astrophysics Data System (ADS)
Dražević, Emil; Andersen, Anders Søndergaard; Wedege, Kristina; Henriksen, Martin Lahn; Hinge, Mogens; Bentien, Anders
2018-03-01
The transition to renewable energy sources has created a need for stationary, low-cost electrical energy storage. A possible technology to address both cost and environmental concerns is batteries based on organic materials. The use of oligoanthraquinones as a replacement for metal hydrides or cadmium in nickel hydroxide rechargeable batteries is investigated in detail regarding polymer composition, electrochemical reversibility and electroactive species cost. Two different oligoanthraquinones are paired with a nickel hydroxide cathode and demonstrate cycling stability dependent on parameters such as supporting electrolyte strength, C-rate, and anode swelling. The energy efficiencies are up to 75% and the cell potential up to 1.13 V. Simple functionalization of the basic structure increases the cell potential by 100 mV.
NASA Astrophysics Data System (ADS)
Ghaffari Razin, Mir Reza; Voosoghi, Behzad
2017-04-01
Ionospheric tomography is a very cost-effective method that is used frequently for modeling electron density distributions. In this paper, a residual minimization training neural network (RMTNN) is used in voxel-based ionospheric tomography. Because a wavelet neural network (WNN) with the back-propagation (BP) algorithm is used within the RMTNN method, the new method is named modified RMTNN (MRMTNN). To train the WNN with the BP algorithm, two cost functions are defined: a total and a vertical cost function. By minimizing these cost functions, temporal and spatial ionospheric variations are studied. GPS measurements of the international GNSS service (IGS) in central Europe have been used for constructing a 3-D image of the electron density. Three days (2009.04.15, 2011.07.20 and 2013.06.01) with different solar activity indices are used for the processing. To validate and better assess the reliability of the proposed method, 4 ionosonde and 3 testing stations have been used. The results of MRMTNN have also been compared with those of the RMTNN method, the international reference ionosphere model 2012 (IRI-2012) and the spherical cap harmonic (SCH) method as a local ionospheric model. The comparison shows that the root mean square error (RMSE) and standard deviation of the proposed approach are superior to those of the traditional methods.
McBain, Ryan K; Salhi, Carmel; Hann, Katrina; Salomon, Joshua A; Kim, Jane J; Betancourt, Theresa S
2016-01-01
Background: One billion children live in war-affected regions of the world. We conducted the first cost-effectiveness analysis of an intervention for war-affected youth in sub-Saharan Africa, as well as a broader cost analysis. Methods: The Youth Readiness Intervention (YRI) is a behavioural treatment for reducing functional impairment associated with psychological distress among war-affected young persons. A randomized controlled trial was conducted in Freetown, Sierra Leone, from July 2012 to July 2013. Participants (n = 436, aged 15–24) were randomized to YRI (n = 222) or care as usual (n = 214). Functional impairment was indexed by the World Health Organization Disability Assessment Scale; scores were converted to quality-adjusted life years (QALYs). An ‘ingredients approach’ estimated financial and economic costs, assuming a societal perspective. Incremental cost-effectiveness ratios (ICERs) were also expressed in terms of gains across dimensions of mental health and schooling. Secondary analyses explored whether intervention effects were largest among those worst-off (upper quartile) at baseline. Results: Retention at 6-month follow-up was 85% (n = 371). The estimated economic cost of the intervention was $104 per participant. Functional impairment was lower among YRI recipients, compared with controls, following the intervention but not at 6-month follow-up, and yielded an ICER of $7260 per QALY gained. At 8-month follow-up, teachers’ interviews indicated that YRI recipients observed higher school enrolment [P < 0.001, odds ratio (OR) 8.9], denoting a cost of $431 per additional school year gained, as well as better school attendance (P = 0.007, OR 34.9) and performance (P = 0.03, effect size = −1.31). Secondary analyses indicated that the intervention was cost-effective among those worst-off at baseline, yielding an ICER of $3564 per QALY gained. 
Conclusions: The YRI is not cost-effective at a willingness-to-pay threshold of three times average gross domestic product per capita. However, results indicate that the YRI translated into a range of benefits, such as improved school enrolment, not captured by cost-effectiveness analysis. We also outline areas for modification to improve cost-effectiveness in future trials. Trial Registration: clinicaltrials.gov Identifier: RPCGA-YRI-21003 PMID:26345320
Gradient corrections to the exchange-correlation free energy
Sjostrom, Travis; Daligault, Jerome
2014-10-07
We develop the first-order gradient correction to the exchange-correlation free energy of the homogeneous electron gas for use in finite-temperature density functional calculations. Based on this, we propose and implement a simple temperature-dependent extension for functionals beyond the local density approximation. These finite-temperature functionals show improvement over zero-temperature functionals, as compared to path-integral Monte Carlo calculations for deuterium equations of state, and perform without computational cost increase compared to zero-temperature functionals and so should be used for finite-temperature calculations. Furthermore, while the present functionals are valid at all temperatures including zero, non-negligible difference with zero-temperature functionals begins at temperatures above 10 000 K.
A general framework for regularized, similarity-based image restoration.
Kheradmand, Amin; Milanfar, Peyman
2014-12-01
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to render the spectral analysis for the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
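A minimal sketch of the balancing idea, under the assumption that a symmetric Sinkhorn-style scaling is an acceptable stand-in for the paper's fast matrix-balancing routine: scaling a strictly positive similarity kernel K to unit row sums makes L = I − W symmetric, positive semidefinite, and zero on constant vectors, which are exactly the spectral properties listed above.

```python
import numpy as np

def balanced_laplacian(K, n_iter=200):
    """Symmetric Sinkhorn-style balancing: find a positive scaling c such
    that W = diag(c) @ K @ diag(c) has unit row sums.  Then L = I - W is
    symmetric, positive semidefinite, and maps constant vectors to zero."""
    c = np.ones(len(K))
    for _ in range(n_iter):
        c = np.sqrt(c / (K @ c))  # damped fixed-point iteration
    W = K * np.outer(c, c)
    return np.eye(len(K)) - W


# toy 4-pixel similarity kernel (symmetric, strictly positive)
K = np.array([[1.0, 0.8, 0.2, 0.1],
              [0.8, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.7],
              [0.1, 0.2, 0.7, 1.0]])
L = balanced_laplacian(K)
print(np.allclose(L, L.T))                  # symmetric by construction
print(float(np.abs(L @ np.ones(4)).max()))  # near zero: constant vector annihilated
```

Positive semidefiniteness follows because W is symmetric and doubly stochastic, so its eigenvalues lie in [−1, 1] and those of I − W in [0, 2].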
Janssen, Lotte; Kan, Cornelis C; Carpentier, Pieter J; Sizoo, Bram; Hepark, Sevket; Grutters, Janneke; Donders, Rogier; Buitelaar, Jan K; Speckens, Anne E M
2015-09-15
Adults with attention deficit hyperactivity disorder (ADHD) often present with a lifelong pattern of core symptoms that is associated with impairments of functioning in daily life. This has a substantial personal and economic impact. In clinical practice there is a high need for additional or alternative interventions for existing treatments, usually consisting of pharmacotherapy and/or psycho-education. Although previous studies show preliminary evidence for the effectiveness of mindfulness-based interventions in reducing ADHD symptoms and improving executive functioning, these studies have methodological limitations. This study will take account of these limitations and will examine the effectiveness of Mindfulness Based Cognitive Therapy (MBCT) in further detail. A multi-centre, parallel-group, randomised controlled trial will be conducted in N = 120 adults with ADHD. Patients will be randomised to MBCT in addition to treatment as usual (TAU) or TAU alone. Assessments will take place at baseline and at three, six and nine months after baseline. Primary outcome measure will be severity of ADHD symptoms rated by a blinded clinician. Secondary outcome measures will be self-reported ADHD symptoms, executive functioning, mindfulness skills, self-compassion, positive mental health and general functioning. In addition, a cost-effectiveness analysis will be conducted. This trial will offer valuable information about the clinical and cost-effectiveness of MBCT in addition to TAU compared to TAU alone in adults with ADHD. ClinicalTrials.gov NCT02463396. Registered 8 June 2015.
Chassin, David P.; Behboodi, Sahand; Djilali, Ned
2018-01-28
This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach, to a strategy using simultaneous price signals for energy, power and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
Kapinos, Kandice A; Caloyeras, John P; Liu, Hangsheng; Mattke, Soeren
2015-12-01
This article aims to test whether a workplace wellness program reduces health care cost for higher risk employees or employees with greater participation. The program effect on costs was estimated using a generalized linear model with a log-link function using a difference-in-difference framework with a propensity score matched sample of employees using claims and program data from a large US firm from 2003 to 2011. The program targeting higher risk employees did not yield cost savings. Employees participating in five or more sessions aimed at encouraging more healthful living had about $20 lower per member per month costs relative to matched comparisons (P = 0.002). Our results add to the growing evidence base that workplace wellness programs aimed at primary prevention do not reduce health care cost, with the exception of those employees who choose to participate more actively.
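With a log-link model, the difference-in-difference contrast exponentiates to a ratio of cost ratios. A toy sketch of that comparison on synthetic cost data (hypothetical numbers, not the study's claims data):

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic per-member-per-month costs for a matched design
pre_treat  = rng.gamma(2.0, 100.0, 500)  # program participants, pre-period
post_treat = rng.gamma(2.0, 95.0, 500)   # participants, post-period (slightly cheaper)
pre_ctrl   = rng.gamma(2.0, 100.0, 500)  # matched comparisons, pre-period
post_ctrl  = rng.gamma(2.0, 100.0, 500)  # matched comparisons, post-period

# Under a log link, the difference-in-difference effect exponentiates to
# (post/pre ratio for participants) / (post/pre ratio for controls).
ratio = (post_treat.mean() / pre_treat.mean()) / (post_ctrl.mean() / pre_ctrl.mean())
print(f"relative cost change attributable to the program: {ratio:.3f}")
```

A ratio below 1 would indicate lower cost growth among participants than among matched comparisons; the full analysis would fit the GLM on member-level claims with covariates rather than group means.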
Techno-economic analysis of fuel cell auxiliary power units as alternative to idling
NASA Astrophysics Data System (ADS)
Jain, Semant; Chen, Hsieh-Yeh; Schwank, Johannes
This paper presents a techno-economic analysis of fuel-cell-based auxiliary power units (APUs), with emphasis on applications in the trucking industry and the military. The APU system is intended to reduce the need for discretionary idling of diesel engines or gas turbines. The analysis considers the options for on-board fuel processing of diesel and compares the two leading fuel cell contenders for automotive APU applications: proton exchange membrane fuel cell and solid oxide fuel cell. As options for on-board diesel reforming, partial oxidation and auto-thermal reforming are considered. Finally, using estimated and projected efficiency data, fuel consumption patterns, capital investment, and operating costs of fuel-cell APUs, an economic evaluation of diesel-based APUs is presented, with emphasis on break-even periods as a function of fuel cost, investment cost, idling time, and idling efficiency. The analysis shows that within the range of parameters studied, there are many conditions where deployment of an SOFC-based APU is economically viable. Our analysis indicates that at an APU system cost of 100 kW⁻¹, the economic break-even period is within 1 year for almost the entire range of conditions. At 500 kW⁻¹ investment cost, a 2-year break-even period is possible except for the lowest end of the fuel consumption range considered. However, if the APU investment cost is 3000 kW⁻¹, break-even would only be possible at the highest fuel consumption scenarios. For Abrams tanks, even at typical land delivered fuel costs, a 2-year break-even period is possible for APU investment costs as high as 1100 kW⁻¹.
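The break-even logic reduces to up-front investment divided by annual fuel savings; a sketch with hypothetical figures, not values from the paper:

```python
def break_even_years(apu_cost_per_kw, apu_kw, fuel_price,
                     idle_gal_per_hr, apu_gal_per_hr, idle_hr_per_year):
    """Years for APU fuel savings to repay the up-front investment.
    All arguments are illustrative placeholders, not values from the paper."""
    annual_savings = (idle_gal_per_hr - apu_gal_per_hr) * idle_hr_per_year * fuel_price
    return apu_cost_per_kw * apu_kw / annual_savings


# 5 kW APU at 500 kW^-1 investment cost, truck idling 1800 h per year,
# fuel at 3.0 per gallon, idling burns 1.0 gal/h vs 0.2 gal/h for the APU
print(round(break_even_years(500, 5, 3.0, 1.0, 0.2, 1800), 2))  # 0.58
```

Varying the fuel price, idle hours, and per-kW cost reproduces the kind of break-even sensitivity analysis the abstract describes.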
Web-Enabled Systems for Student Access.
ERIC Educational Resources Information Center
Harris, Chad S.; Herring, Tom
1999-01-01
California State University, Fullerton is developing a suite of server-based, Web-enabled applications that distribute the functionality of its student information system software to external customers without modifying the mainframe applications or databases. The cost-effective, secure, and rapidly deployable business solution involves using the…
coordinates research in support of the PEER mission in performance-based earthquake engineering: broad system dynamic response; assessment of the performance of structural and nonstructural systems; consequences in terms of casualties, capital costs, and post-earthquake functionality; and decision-making.
Field-based phenomics for plant genetics research
USDA-ARS?s Scientific Manuscript database
Perhaps the greatest challenge for crop research in the 21st century is how to predict crop performance as a function of genetic architecture and climate change. Advances in “next generation” DNA sequencing have greatly reduced genotyping costs. Methods for characterization of plant traits (phenotyp...
Pathways over Time: Functional Genomics Research in an Introductory Laboratory Course
ERIC Educational Resources Information Center
Reeves, Todd D.; Warner, Douglas M.; Ludlow, Larry H.; O'Connor, Clare M.
2018-01-01
National reports have called for the introduction of research experiences throughout the undergraduate curriculum, but practical implementation at many institutions faces challenges associated with sustainability, cost, and large student populations. We describe a novel course-based undergraduate research experience (CURE) that introduces…
Functionalized multi-walled carbon nanotube based sensors for distributed methane leak detection
This paper presents a highly sensitive, energy efficient and low-cost distributed methane (CH4) sensor system (DMSS) for continuous monitoring, detection and localization of CH4 leaks in natural gas infrastructure such as transmission and distribution pipelines, wells, and produc...
Drukker, M; van Os, J; Sytema, S; Driessen, G; Visser, E; Delespaul, P
2011-09-01
Previous work suggests that the Dutch variant of assertive community treatment (ACT), known as Function ACT (FACT), may be effective in increasing symptomatic remission rates when replacing a system of hospital-based care and separate community-based facilities. FACT guidelines propose a different pattern of psychiatric service consumption compared to traditional services, which should result in different costing parameters than care as usual (CAU). South-Limburg FACT patients, identified through the local psychiatric case register, were matched with patients from a non-FACT control region in the North of the Netherlands (NN). Matching was accomplished using propensity scoring including, among others, total and outpatient care consumption. Assessment, as an important ingredient of FACT, was the point of departure of the present analysis. FACT patients, compared to CAU, had five more outpatient contacts after the index date. Cost-effectiveness was difficult to assess. Implementation of FACT results in measurable changes in mental health care use.
Yebo, Nebiyu A; Sree, Sreeprasanth Pulinthanathu; Levrau, Elisabeth; Detavernier, Christophe; Hens, Zeger; Martens, Johan A; Baets, Roel
2012-05-21
Portable, low-cost and real-time gas sensors have considerable potential in various biomedical and industrial applications. For such applications, nano-photonic gas sensors based on standard silicon fabrication technology offer attractive opportunities. Deposition of high-surface-area nano-porous coatings on silicon photonic sensors is a means to achieve selective, highly sensitive and multiplexed gas detection on an optical chip. Here we demonstrate selective and reversible ammonia gas detection with functionalized silicon-on-insulator optical micro-ring resonators. The micro-ring resonators are coated with acidic nano-porous aluminosilicate films for specific ammonia sensing, which results in a reversible response to NH(3) with selectivity relative to CO(2). The ammonia detection limit is estimated at about 5 ppm. The detectors reach a steady response to NH(3) within 30 seconds and return to their base level within 60 to 90 seconds. The work opens perspectives on development of nano-photonic sensors for real-time, non-invasive, low-cost and lightweight biomedical and industrial sensing applications.
Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.
Erdem, Hamit
2010-10-01
Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
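One of the standard software linearization techniques such a comparison typically covers is a piecewise-linear lookup table, which trades memory for accuracy and runs cheaply on an integer-class microcontroller. The sketch below illustrates the idea for a distance sensor; the calibration pairs are invented for illustration, not taken from the paper.

```python
# Hedged sketch: piecewise-linear lookup-table (LUT) linearization of a
# nonlinear distance sensor. The (ADC, distance) calibration points below
# are hypothetical; a real table comes from calibrating the actual sensor.
import bisect

# (raw ADC reading, true distance in cm), sorted by ADC value.
CAL = [(80, 150.0), (120, 100.0), (200, 60.0), (350, 30.0), (520, 15.0)]
ADC = [p[0] for p in CAL]

def linearize(raw):
    """Interpolate distance from a raw ADC value via the calibration LUT."""
    if raw <= ADC[0]:
        return CAL[0][1]          # clamp below the calibrated range
    if raw >= ADC[-1]:
        return CAL[-1][1]         # clamp above the calibrated range
    i = bisect.bisect_right(ADC, raw) - 1
    (x0, y0), (x1, y1) = CAL[i], CAL[i + 1]
    return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)

print(linearize(160))  # halfway between (120, 100) and (200, 60) → 80.0
```

The trade-offs the paper measures show up directly here: more table entries cost memory but improve accuracy, while the interpolation itself is a few fixed-cost operations per reading.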
Saleh, M; Karfoul, A; Kachenoura, A; Senhadji, L; Albera, L
2016-08-01
This paper investigates improving the execution time and numerical complexity of the well-known kurtosis-based maximization method, RobustICA. A Newton-based scheme is proposed and compared to the conventional RobustICA method. A new implementation based on the nonlinear conjugate gradient method is also investigated. For the Newton approach, an exact computation of the Hessian of the considered cost function is provided. The proposed approaches and implementations inherit the global plane search of the original RobustICA method, which still guarantees improved convergence speed along a given search direction. Numerical results on Magnetic Resonance Spectroscopy (MRS) source separation show the efficiency of the proposed approaches, notably the quasi-Newton one using the BFGS method.
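For context, the kurtosis contrast these methods optimize can be maximized with a simple fixed-point iteration on whitened data (the FastICA-style update that RobustICA-type schemes refine with exact line/plane searches or Newton steps). The sketch below uses synthetic sources and is a generic illustration of the kurtosis contrast, not the paper's algorithm.

```python
# Hedged sketch: kurtosis-based fixed-point ICA update on whitened data.
# Synthetic two-source example; not the RobustICA/Newton scheme itself.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two independent non-Gaussian sources, mixed linearly.
S = np.vstack([rng.uniform(-1, 1, n), np.sign(rng.standard_normal(n))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

# Whiten the mixtures (zero mean, identity covariance).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Fixed-point iteration maximizing |kurtosis| of y = w^T z.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    y = w @ Z
    w_new = (Z * y ** 3).mean(axis=1) - 3 * w   # derivative of E[y^4] - 3
    w_new /= np.linalg.norm(w_new)
    if abs(abs(w_new @ w) - 1) < 1e-9:          # converged up to sign
        w = w_new
        break
    w = w_new

y = w @ Z
print(abs(np.mean(y ** 4) - 3))  # |kurtosis| of the recovered source
```

The paper's contribution sits on top of this kind of iteration: replacing the plain update with Newton or nonlinear conjugate gradient steps (with an exact Hessian) while keeping the global plane search.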
Many-body calculations with deuteron based single-particle bases and their associated natural orbits
NASA Astrophysics Data System (ADS)
Puddu, G.
2018-06-01
We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for ¹⁰B, ¹⁶O and ²⁴Mg employing the bare NNLOopt nucleon–nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.
Offshore Wind Plant Balance-of-Station Cost Drivers and Sensitivities (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saur, G.; Maples, B.; Meadows, B.
2012-09-01
With Balance of System (BOS) costs contributing up to 70% of the installed capital cost, it is fundamental to understand the BOS costs for offshore wind projects as well as potential cost trends for larger offshore turbines. NREL developed a BOS model using project cost estimates developed by GL Garrad Hassan. Aspects of BOS covered include engineering and permitting, ports and staging, transportation and installation, vessels, foundations, and electrical. The data introduce new scaling relationships for each BOS component to estimate cost as a function of turbine parameters and size, project parameters and size, and soil type. Based on the new BOS model, an analysis to understand the non-turbine costs associated with offshore turbine sizes ranging from 3 MW to 6 MW and offshore wind plant sizes ranging from 100 MW to 1000 MW has been conducted. This analysis establishes a more robust baseline cost estimate, identifies the largest cost components of offshore wind project BOS, and explores the sensitivity of the levelized cost of energy to permutations in each BOS cost element. This presentation shows results from the model that illustrate the potential impact of turbine size and project size on the cost of energy from US offshore wind plants.
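The "scaling relationships" mentioned above are typically power-law fits that estimate a component cost from turbine rating and plant size. A minimal sketch of that form follows; the coefficients and exponents are invented placeholders, not the NREL/GL Garrad Hassan values.

```python
# Hedged sketch: power-law BOS component scaling of the kind such a model
# might use. base_cost is $/kW at a 3 MW turbine in a 100 MW plant; the
# exponents below are illustrative assumptions, not fitted values.

def bos_component_cost(base_cost, turbine_mw, plant_mw,
                       turbine_exp=0.8, plant_exp=-0.1):
    """$/kW for one BOS component, scaled from a reference project."""
    return (base_cost
            * (turbine_mw / 3.0) ** turbine_exp   # larger turbines cost more
            * (plant_mw / 100.0) ** plant_exp)    # larger plants spread fixed costs

# Example: a $400/kW component re-scaled to a 6 MW turbine, 500 MW plant.
print(round(bos_component_cost(400.0, 6.0, 500.0), 1))
```

Summing such terms over components (foundations, vessels, electrical, permitting) and perturbing each one is exactly the sensitivity exercise the analysis describes for levelized cost of energy.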
The economic impact of chronic prostatitis.
Calhoun, Elizabeth A; McNaughton Collins, Mary; Pontari, Michel A; O'Leary, Michael; Leiby, Benjamin E; Landis, J Richard; Kusek, John W; Litwin, Mark S
2004-06-14
Little information exists on the economic impact of chronic prostatitis. The objective of this study was to determine the direct and indirect costs associated with chronic prostatitis. Outcomes were assessed using a questionnaire designed to capture health care resource utilization. Resource estimates were converted into unit costs with direct medical cost estimates based on hospital cost-accounting data and indirect costs based on modified labor force, employment, and earnings data from the US Census Bureau. The total direct costs for the 3 months prior to entry into the cohort, excluding hospitalization, were $126,915 for the 167 study participants, for an average of $954 per person among the 133 consumers. Of the men, 26% reported work loss valued at an average of $551. The average total costs (direct and indirect) for the 3 months were $1099 per person for those 137 men who had resource consumption, with an expected annual total cost per person of $4397. For those study participants with any incurred costs, tests for association revealed that the National Institutes of Health Chronic Prostatitis Symptom Index (P < .001) and each of the 3 subcategories of pain (P = .003), urinary function (P = .03), and quality of life (P = .002) were significantly associated with resource use, although the quality-of-life subscale score from the National Institutes of Health Chronic Prostatitis Symptom Index was the only predictor of resource consumption. Chronic prostatitis is associated with substantial costs and lower quality-of-life scores, which predicted resource consumption. The economic impact of chronic prostatitis warrants increased medical attention and resources to identify and test effective treatment strategies.