Sample records for cost function derived

  1. Cost function approach for estimating derived demand for composite wood products

    Treesearch

    T. C. Marcin

    1991-01-01

    A cost function approach was examined, using the concept of duality between production and input factor demands. A translog cost function was used to represent residential construction costs and to derive conditional factor demand equations. Alternative models were obtained from the translog cost function by imposing parameter restrictions.
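
    For context, the translog cost function used in such studies typically takes the following form (a generic sketch with standard notation; the article's exact specification and variable set are not reproduced here):

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln w_i
      + \tfrac{1}{2} \sum_i \sum_j \gamma_{ij} \ln w_i \ln w_j
      + \beta_y \ln y ,
\qquad \gamma_{ij} = \gamma_{ji} ,
```

    where $C$ is cost, $w_i$ are input prices, and $y$ is output. By Shephard's lemma the conditional factor demands follow as cost shares, $S_i = \partial \ln C / \partial \ln w_i = \alpha_i + \sum_j \gamma_{ij} \ln w_j$. Setting all $\gamma_{ij} = 0$ recovers the Cobb-Douglas case, one example of the parameter restrictions mentioned above.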

  2. Admitting the Inadmissible: Adjoint Formulation for Incomplete Cost Functionals in Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Salas, Manuel D.

    1997-01-01

    We derive the adjoint equations for problems in aerodynamic optimization which are improperly considered as "inadmissible." For example, a cost functional which depends on the density, rather than on the pressure, is considered "inadmissible" for an optimization problem governed by the Euler equations. We show that for such problems additional terms should be included in the Lagrangian functional when deriving the adjoint equations. These terms are obtained from the restriction of the interior PDE to the control surface. Demonstrations of the explicit derivation of the adjoint equations for "inadmissible" cost functionals are given for the potential, Euler, and Navier-Stokes equations.

  3. 18 CFR 11.12 - Determination of section 10(f) costs.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...

  4. 18 CFR 11.12 - Determination of section 10(f) costs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...

  5. 18 CFR 11.12 - Determination of section 10(f) costs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...

  6. 18 CFR 11.12 - Determination of section 10(f) costs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...

  7. 18 CFR 11.12 - Determination of section 10(f) costs.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... costs of the project. (2) If power is not an authorized function of the headwater project, the section... costs designated as the joint-use power cost, derived by deeming a power function at the project. The value of the benefits assigned to the deemed power function, for purposes of determining the value of...

  8. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived, and its application to linear programming is presented.

  9. Estimating the Deep Space Network modification costs to prepare for future space missions by using major cost drivers

    NASA Technical Reports Server (NTRS)

    Remer, Donald S.; Sherif, Josef; Buchanan, Harry R.

    1993-01-01

    This paper develops a cost model for long-range planning cost estimates of Deep Space Network (DSN) support of future space missions. The paper focuses on the costs required to modify and/or enhance the DSN to prepare for future space missions. The model is a function of eight major mission cost drivers and estimates both the total cost and the annual costs of a similar future space mission. The model is derived from actual cost data from three space missions: Voyager (Uranus), Voyager (Neptune), and Magellan. Estimates derived from the model are tested against actual cost data for two independent missions, Viking and Mariner Jupiter/Saturn (MJS).

  10. Carbon-Based Functional Materials Derived from Waste for Water Remediation and Energy Storage.

    PubMed

    Ma, Qinglang; Yu, Yifu; Sindoro, Melinda; Fane, Anthony G; Wang, Rong; Zhang, Hua

    2017-04-01

    Carbon-based functional materials hold the key for solving global challenges in the areas of water scarcity and the energy crisis. Although carbon nanotubes (CNTs) and graphene have shown promising results in various fields of application, their high preparation cost and low production yield still dramatically hinder their wide practical applications. Therefore, there is an urgent call for preparing carbon-based functional materials from low-cost, abundant, and sustainable sources. Recent innovative strategies have been developed to convert various waste materials into valuable carbon-based functional materials. These waste-derived carbon-based functional materials have shown great potential in many applications, especially as sorbents for water remediation and electrodes for energy storage. Here, the research progress in the preparation of waste-derived carbon-based functional materials is summarized, along with their applications in water remediation and energy storage; challenges and future research directions in this emerging research field are also discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
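
    The distinction between the 0-1 (string error) and metric (symbol error) cost rules analyzed above can be illustrated with a toy posterior; the labels and probabilities below are invented for illustration:

```python
def levenshtein(a, b):
    """Edit distance between strings a and b (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Toy posterior over candidate labels (illustrative values only).
posterior = {"zz": 0.34, "aa": 0.23, "ab": 0.22, "ac": 0.21}

# 0-1 cost: the Bayes decision is simply the MAP label.
map_decision = max(posterior, key=posterior.get)    # -> "zz"

# Metric (edit-distance) cost: minimize the posterior expected cost.
def expected_cost(c):
    return sum(p * levenshtein(c, truth) for truth, p in posterior.items())

metric_decision = min(posterior, key=expected_cost)  # -> "aa"

# The two rules disagree: "zz" is the most probable single label, but
# "aa" is "central" under the edit-distance cost because most posterior
# mass lies on labels within distance 1 of it.
```

    This is exactly the inconsistency the paper studies: the conditions it derives characterize when such disagreements cannot occur, or when the 0-1 decision is guaranteed to have limited metric cost.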

  12. Validation of Resource Utilization Groups version III for Home Care (RUG-III/HC): evidence from a Canadian home care jurisdiction.

    PubMed

    Poss, Jeffrey W; Hirdes, John P; Fries, Brant E; McKillop, Ian; Chase, Mary

    2008-04-01

    The case-mix system Resource Utilization Groups version III for Home Care (RUG-III/HC) was derived using a modest data sample from Michigan, but to date no comprehensive large scale validation has been done. This work examines the performance of the RUG-III/HC classification using a large sample from Ontario, Canada. Cost episodes over a 13-week period were aggregated from individual level client billing records and matched to assessment information collected using the Resident Assessment Instrument for Home Care, from which classification rules for RUG-III/HC are drawn. The dependent variable, service cost, was constructed using formal services plus informal care valued at approximately one-half that of a replacement worker. An analytic dataset of 29,921 episodes showed a skewed distribution with over 56% of cases falling into the lowest hierarchical level, reduced physical functions. Case-mix index values for formal and informal cost showed very close similarities to those found in the Michigan derivation. Explained variance for a function of combined formal and informal cost was 37.3% (20.5% for formal cost alone), with personal support services as well as informal care showing the strongest fit to the RUG-III/HC classification. RUG-III/HC validates well compared with the Michigan derivation work. Potential enhancements to the present classification should consider the large numbers of undifferentiated cases in the reduced physical function group, and the low explained variance for professional disciplines.

  13. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
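
    The complex-variable technique the title refers to (the complex-step derivative) avoids the step-size dilemma of finite differences. A minimal sketch, with a test function of my own choosing rather than a CFD cost function:

```python
import cmath
import math

def complex_step(f, x, h=1e-20):
    """Complex-step derivative: f'(x) ~= Im f(x + ih) / h.

    Unlike finite differences, no subtraction of nearly equal quantities
    occurs, so h can be made tiny without cancellation error.
    """
    return f(x + 1j * h).imag / h

# Example function and its analytic derivative (for comparison only).
f = lambda z: z**3 * cmath.sin(z)
df = lambda x: 3 * x**2 * math.sin(x) + x**3 * math.cos(x)

x0 = 0.7
cs = complex_step(f, x0)     # accurate to machine precision

# Central difference for contrast: its accuracy is limited by the
# competing truncation and round-off errors as h shrinks.
h = 1e-6
fd = (f(x0 + h).real - f(x0 - h).real) / (2 * h)
```

    The price, as in the paper's setting, is that the analysis code must accept complex arguments throughout, roughly doubling storage and arithmetic per evaluation.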

  14. Losses from effluent taxes and quotas under uncertainty

    USGS Publications Warehouse

    Watson, W.D.; Ridker, R.G.

    1984-01-01

    Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.

  15. The Role of Inflation and Price Escalation Adjustments in Properly Estimating Program Costs: F-35 Case Study

    DTIC Science & Technology

    2016-04-30

    costs of new defense systems. An inappropriate price index can introduce errors in both development of cost estimating relationships (CERs) and in...indexes derived from CERs. These indexes isolate changes in price due to factors other than changes in quality over time. We develop a “Baseline” CER...The hedonic index application has commonalities with cost estimating relationships (CERs), which also model system costs as a function of quality

  16. Suppression cost forecasts in advance of wildfire seasons

    Treesearch

    Jeffrey P. Prestemon; Karen Abt; Krista Gebert

    2008-01-01

    Approaches for forecasting wildfire suppression costs in advance of a wildfire season are demonstrated for two lead times: fall and spring of the current fiscal year (Oct. 1–Sept. 30). Model functional forms are derived from aggregate expressions of a least cost plus net value change model. Empirical estimates of these models are used to generate advance-of-season...

  17. A class of solution-invariant transformations of cost functions for minimum cost flow phase unwrapping.

    PubMed

    Hubig, Michael; Suchandt, Steffen; Adam, Nico

    2004-10-01

    Phase unwrapping (PU) represents an important step in synthetic aperture radar interferometry (InSAR) and other interferometric applications. Among the different PU methods, the so-called branch-cut approaches play an important role. In 1996 M. Costantini [Proceedings of the Fringe '96 Workshop ERS SAR Interferometry (European Space Agency, Munich, 1996), pp. 261-272] proposed to transform the problem of correctly placing branch cuts into a minimum cost flow (MCF) problem. The crucial point of this new approach is to generate cost functions that represent the a priori knowledge necessary for PU. Since cost functions are derived from measured data, they are random variables. This leads to the question of MCF solution stability: How much can the cost functions be varied without changing the cheapest flow that represents the correct branch cuts? This question is partially answered: The existence of a whole linear subspace in the space of cost functions is shown; this subspace contains all cost differences by which a cost function can be changed without changing the cost difference between any two flows that are discharging any residue configuration. These cost differences are called strictly stable cost differences. For quadrangular nonclosed networks (the most important type of MCF networks for interferometric purposes) a complete classification of strictly stable cost differences is presented. Further, the role of the well-known class of node potentials in the framework of strictly stable cost differences is investigated, and information on the vector-space structure representing the MCF environment is provided.

  18. Novel model of direct and indirect cost-benefit analysis of mechanical embolectomy over IV tPA for large vessel occlusions: a real-world dollar analysis based on improvements in mRS.

    PubMed

    Mangla, Sundeep; O'Connell, Keara; Kumari, Divya; Shahrzad, Maryam

    2016-01-20

    Ischemic strokes result in significant healthcare expenditures (direct costs) and loss of quality-adjusted life years (QALYs) (indirect costs). Interventional therapy has demonstrated improved functional outcomes in patients with large vessel occlusions (LVOs), which are likely to reduce the economic burden of strokes. To develop a novel real-world dollar model to assess the direct and indirect cost-benefit of mechanical embolectomy compared with medical treatment with intravenous tissue plasminogen activator (IV tPA) based on shifts in modified Rankin scores (mRS). A cost model was developed including multiple parameters to account for both direct and indirect stroke costs. These were adjusted based upon functional outcome (mRS). The model compared IV tPA with mechanical embolectomy to assess the costs and benefits of both therapies. Direct stroke-related costs included hospitalization, inpatient and outpatient rehabilitation, home care, skilled nursing facilities, and long-term care facility costs. Indirect costs included years of life expectancy lost and lost QALYs. Values for the model cost parameters were derived from numerous resources and functional outcomes were derived from the MR CLEAN study as a reflective sample of LVOs. Direct and indirect costs and benefits for the two treatments were assessed using Microsoft Excel 2013. This cost-benefit model found a cost-benefit of mechanical embolectomy over IV tPA of $163 624.27 per patient and the cost benefit for 50 000 patients on an annual basis is $8 181 213 653.77. If applied widely within the USA, mechanical embolectomy will significantly reduce the direct and indirect financial burden of stroke ($8 billion/50 000 patients). Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. Synthesis of Renewable meta-Xylylenediamine from Biomass-Derived Furfural.

    PubMed

    Scodeller, Ivan; Mansouri, Samir; Morvan, Didier; Muller, Eric; de Oliveira Vigier, Karine; Wischert, Raphael; Jérôme, François

    2018-04-30

    We report the synthesis of biomass-derived functionalized aromatic chemicals from furfural, a building block nowadays available in large scale from low-cost biomass. The scientific strategy relies on a Diels-Alder/aromatization sequence. By controlling the rate of each step, it was possible to produce exclusively the meta aromatic isomer. In particular, through this route, we describe the synthesis of renewably sourced meta-xylylenediamine (MXD). Transposition of this work to other furfural-derived chemicals is also discussed and reveals that functionalized biomass-derived aromatics (benzaldehyde, benzylamine, etc.) can be potentially produced, according to this route. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Basic Economic Principles

    NASA Technical Reports Server (NTRS)

    Tideman, T. N.

    1972-01-01

    An economic approach to design efficient transportation systems involves maximizing an objective function that reflects both goals and costs. A demand curve can be derived by finding the quantities of a good that solve the maximization problem as one varies the price of that commodity, holding income and the prices of all other goods constant. A supply curve is derived by applying the idea of profit maximization of firms. The production function determines the relationship between input and output.

  1. Fitting of full Cobb-Douglas and full VRTS cost frontiers by solving goal programming problem

    NASA Astrophysics Data System (ADS)

    Venkateswarlu, B.; Mahaboob, B.; Subbarami Reddy, C.; Madhusudhana Rao, B.

    2017-11-01

    The present research article first defines two popular production functions, viz., the Cobb-Douglas and VRTS production frontiers and their dual cost functions, and then derives their cost-limited maximal outputs. It is shown that the cost-limited maximal output is cost efficient. A one-sided goal programming problem is proposed by which the full Cobb-Douglas cost frontier and the full VRTS frontier can be fitted. The paper also frames goal programming problems by which stochastic cost frontiers and stochastic VRTS frontiers are fitted. Hasan et al. [1] used a parametric Stochastic Frontier Approach (SFA) to examine the technical efficiency of the Malaysian domestic banks listed on the Kuala Lumpur Stock Exchange (KLSE) over the period 2005-2010. Ashkan Hassani [2] exposed applications of Cobb-Douglas production functions in construction schedule crashing and project risk analysis related to the duration of construction projects. Nan Jiang [3] applied Stochastic Frontier Analysis to a panel of New Zealand dairy farms over 1998/99-2006/07.
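
    As background for the duality invoked above, the Cobb-Douglas production frontier and its dual cost function have standard closed forms (generic notation, not the article's):

```latex
y = A \prod_i x_i^{a_i},
\qquad
C(w, y) = r \left(\frac{y}{A}\right)^{1/r}
          \prod_i \left(\frac{w_i}{a_i}\right)^{a_i / r},
\qquad r = \sum_i a_i ,
```

    obtained by minimizing $\sum_i w_i x_i$ subject to the production constraint. Here $r < 1$, $r = 1$, and $r > 1$ correspond to decreasing, constant, and increasing returns to scale; the VRTS ("variable returns to scale") frontier relaxes the fixed-$r$ restriction.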

  2. Optimum swimming pathways of fish spawning migrations in rivers

    USGS Publications Warehouse

    McElroy, Brandon; DeLonay, Aaron; Jacobson, Robert

    2012-01-01

    Fishes that swim upstream in rivers to spawn must navigate complex fluvial velocity fields to arrive at their ultimate locations. One hypothesis with substantial implications is that fish traverse pathways that minimize their energy expenditure during migration. Here we present the methodological and theoretical developments necessary to test this and similar hypotheses. First, a cost function is derived for upstream migration that relates work done by a fish to swimming drag. The energetic cost scales with the cube of a fish's relative velocity integrated along its path. By normalizing to the energy requirements of holding a position in the slowest waters at the path's origin, a cost function is derived that depends only on the physical environment and not on specifics of individual fish. Then, as an example, we demonstrate the analysis of a migration pathway of a telemetrically tracked pallid sturgeon (Scaphirhynchus albus) in the Missouri River (USA). The actual pathway cost is lower than 105 random paths through the surveyed reach and is consistent with the optimization hypothesis. The implication—subject to more extensive validation—is that reproductive success in managed rivers could be increased through manipulation of reservoir releases or channel morphology to increase abundance of lower-cost migration pathways.
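
    The path-cost idea described above (drag work scaling with the cube of the fish's velocity relative to the water, integrated along the path) can be sketched in one dimension. The segment lengths and velocities below are invented for illustration, and constant drag coefficients and the origin-holding-cost normalization are dropped since they cancel when comparing paths:

```python
def migration_cost(segments, v_ground):
    """Relative drag work for an upstream path (1-D sketch).

    segments : list of (length_m, opposing_water_speed) pairs
    v_ground : fish speed over ground, assumed constant

    Drag power scales with the cube of the swimming speed relative to
    the water (v_ground + v_water for an opposing current); integrating
    over each segment's traversal time L / v_ground gives the cost.
    """
    return sum((v_ground + v_water) ** 3 * (length / v_ground)
               for length, v_water in segments)

# A short path through fast water vs. a longer path through slow water.
thalweg_path = [(100.0, 1.5)]   # 100 m of 1.5 m/s opposing current
margin_path  = [(120.0, 0.4)]   # 120 m of 0.4 m/s opposing current

cost_thalweg = migration_cost(thalweg_path, v_ground=1.0)   # 1562.5
cost_margin  = migration_cost(margin_path,  v_ground=1.0)   # ~329.3
# The longer, slower-water path is far cheaper, consistent with the
# energy-minimization hypothesis tested against the tracked sturgeon.
```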

  3. School Cost Functions: A Meta-Regression Analysis

    ERIC Educational Resources Information Center

    Colegrave, Andrew D.; Giles, Margaret J.

    2008-01-01

    The education cost literature includes econometric studies attempting to determine economies of scale, or estimate an optimal school or district size. Not only do their results differ, but the studies use dissimilar data, techniques, and models. To derive value from these studies requires that the estimates be made comparable. One method to do…

  4. Automated Surgical Approach Planning for Complex Skull Base Targets: Development and Validation of a Cost Function and Semantic Atlas.

    PubMed

    Aghdasi, Nava; Whipple, Mark; Humphreys, Ian M; Moe, Kris S; Hannaford, Blake; Bly, Randall A

    2018-06-01

    Successful multidisciplinary treatment of skull base pathology requires precise preoperative planning. Current surgical approach (pathway) selection for these complex procedures depends on an individual surgeon's experiences and background training. Because of anatomical variation in both normal tissue and pathology (eg, tumor), a successful surgical pathway used on one patient is not necessarily the best approach on another patient. How, then, can optimized patient-specific surgical approach pathways be defined and obtained? In this article, we demonstrate that the surgeon's knowledge and decision making in preoperative planning can be modeled by a multiobjective cost function in a retrospective analysis of actual complex skull base cases. Two different approaches, a weighted-sum approach and Pareto optimality, were used with a defined cost function to derive optimized surgical pathways based on preoperative computed tomography (CT) scans and manually designated pathology. With the first method, the surgeon's preferences were input as a set of weights for each objective before the search. In the second approach, the surgeon's preferences were used to select a surgical pathway from the computed Pareto optimal set. Using preoperative CT and magnetic resonance imaging, the patient-specific surgical pathways derived by these methods were similar (85% agreement) to the actual approaches performed on patients. In one case where the actual surgical approach was different, revision surgery was required and was performed utilizing the computationally derived approach pathway.
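
    The two selection schemes compared above can be sketched abstractly. The candidate pathways and their two-objective cost values below are invented; the paper's actual cost function has more objectives derived from the CT-based semantic atlas:

```python
# Hypothetical candidate pathways scored on two cost objectives
# (lower is better on both); values are purely illustrative.
candidates = {"A": (3.0, 9.0), "B": (5.0, 4.0), "C": (8.0, 2.0), "D": (6.0, 6.0)}

def dominates(p, q):
    """p dominates q: no worse on every objective, better on at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

# Pareto-optimal set: candidates not dominated by any other candidate.
# The surgeon then picks from this set ("second approach").
pareto = {name for name, cost in candidates.items()
          if not any(dominates(other, cost)
                     for o_name, other in candidates.items() if o_name != name)}
# -> {"A", "B", "C"}; "D" is dominated by "B"

# Weighted-sum approach: preferences enter as weights before the search.
weights = (0.5, 0.5)
best = min(candidates,
           key=lambda n: sum(w * c for w, c in zip(weights, candidates[n])))
# -> "B"
```

    With positive weights the weighted-sum optimum always lies on the Pareto front, which is why the two schemes produced similar pathways in the study; the Pareto formulation simply defers the preference choice until after the search.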

  5. Interrogation of electrical connector faults using miniaturized UWB sources

    NASA Astrophysics Data System (ADS)

    Tokgöz, Çağatay; Dardona, Sameh

    2017-01-01

    A diagnostic method for the detection, identification, and characterization of precursors of faults due to partial insertion of pin-socket contacts within electrical connectors commonly used in avionics systems is presented. It is demonstrated that a miniaturized ultrawideband (UWB) source and a minispectrum analyzer can be employed to measure resonant frequency shifts in connector S parameters as a small and low-cost alternative to a large and expensive network analyzer. The transfer function of an electrical connector is represented as a ratio of the spectra measured using the spectrum analyzer with and without the connector. Alternatively, the transfer function is derived in terms of the connector S parameters and the reflection coefficients at both ports of the connector. The transfer function data obtained using this derivation agreed well with its representation as a measured spectral ratio. The derivation enabled the extraction of the connector S parameters from the measured transfer function data as a function of the insertion depth of a pin-socket contact within the connector. In comparison with the S parameters measured directly using a network analyzer at multiple insertion depths, the S parameters extracted from the measured transfer function showed consistent and reliable representation of the electrical connector fault. The results demonstrate the potential of integrating a low-cost miniaturized UWB device into a connector harness for real-time detection of precursors to partially inserted connector faults.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granderson, G.D.

    The purpose of the dissertation is to examine the impact of rate-of-return regulation on the cost of transporting natural gas in interstate commerce. Of particular interest is the effect of the regulation on the input choices of a firm: does regulation induce a regulated firm to produce its selected level of output at greater than minimum cost? The theoretical model is based on the work of Rolf Färe and James Logan, who investigate the duality relationship between the cost and production functions of a rate-of-return regulated firm. Färe and Logan derive the cost function for a regulated firm as the minimum cost of producing the firm's selected level of output, subject to the regulatory constraint. The regulated cost function is used to recover the unregulated cost function. A firm's unregulated cost function is the minimum cost of producing its selected level of output. Characteristics of the production technology are obtained from duality between the production and unregulated cost functions. Using data on 20 pipeline companies from 1977 to 1987, the author estimates a random effects model that consists of a regulated cost function and its associated input share equations. The model is estimated as a set of seemingly unrelated regressions. The empirical results are used to test the Färe and Logan theory and the traditional Averch-Johnson hypothesis of overcapitalization. Parameter estimates are used to recover the unregulated cost function and to calculate the amount by which transportation costs are increased by the regulation of the industry. Empirical results show that a firm's transportation cost decreases as the allowed rate of return increases and the regulatory constraint becomes less tight. Elimination of the regulatory constraint would lead to a reduction in costs on average of 5.278%. There is evidence that firms overcapitalize on pipeline capital. There is inconclusive evidence on whether firms overcapitalized on compressor station capital.

  7. Challenges and complexity of functionality evaluation of flavan-3-ol derivatives.

    PubMed

    Saito, Akiko

    2017-06-01

    Flavan-3-ol derivatives are common plant-derived bioactive compounds. In particular, (-)-epigallocatechin-3-O-gallate shows various moderate biological activities without severe toxicity, and its health-promoting effects have been widely studied because it is a main ingredient in green tea and is commercially available at low cost. Although various biologically active flavan-3-ol derivatives are present as minor constituents in plants as well as in green tea, their biological activities have yet to be revealed, mainly due to their relative unavailability. Here, I outline the major factors contributing to the complexity of functionality studies of flavan-3-ol derivatives, including proanthocyanidins and oligomeric flavan-3-ols. I emphasize the importance of conducting structure-activity relationship studies using synthesized flavan-3-ol derivatives that are difficult to obtain from plant extracts in pure form to overcome this challenge. Further discovery of these minor constituents showing strong biological activities is expected to produce useful information for the development of functional health foods.

  8. Optimal control of switching time in switched stochastic systems with multi-switching times and different costs

    NASA Astrophysics Data System (ADS)

    Liu, Xiaomei; Li, Shengtao; Zhang, Kanjian

    2017-08-01

    In this paper, we solve an optimal control problem for a class of time-invariant switched stochastic systems with multi-switching times, where the objective is to minimise a cost functional with different costs defined on the states. In particular, we focus on problems in which a pre-specified sequence of active subsystems is given and the switching times are the only control variables. Based on the calculus of variations, we derive the gradient of the cost functional with respect to the switching times in an especially simple form, which can be directly used in gradient descent algorithms to locate the optimal switching instants. Finally, a numerical example is given, highlighting the validity of the proposed methodology.
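
    A minimal deterministic instance of this setup, with an invented scalar system and a numerical gradient standing in for the variational one derived in the paper: one switch at time tau between two linear modes, with a quadratic state cost.

```python
import math

def cost(tau, T=2.0, x0=1.0, a1=1.0, a2=-2.0):
    """J(tau) = integral of x(t)^2 over [0, T] for the switched system
    dx/dt = a1*x on [0, tau), dx/dt = a2*x on [tau, T], in closed form
    (x is piecewise exponential, so the integral is elementary)."""
    x_tau_sq = x0**2 * math.exp(2 * a1 * tau)
    J1 = x0**2 * (math.exp(2 * a1 * tau) - 1) / (2 * a1)
    J2 = x_tau_sq * (math.exp(2 * a2 * (T - tau)) - 1) / (2 * a2)
    return J1 + J2

def grad(tau, h=1e-6):
    """Central-difference stand-in for the analytic gradient dJ/dtau."""
    return (cost(tau + h) - cost(tau - h)) / (2 * h)

# Gradient descent on the switching instant, projected onto [0, T].
tau = 1.0
for _ in range(100):
    tau = min(max(tau - 0.05 * grad(tau), 0.0), 2.0)

# Mode 1 is unstable and mode 2 stable, so the descent drives the
# switch as early as possible: tau -> 0, with J(tau) < J(1.0).
```

    The paper's contribution is precisely to replace the finite-difference `grad` above with an exact expression from the calculus of variations, which costs no extra simulations per design variable.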

  9. An Optimization Principle for Deriving Nonequilibrium Statistical Models of Hamiltonian Dynamics

    NASA Astrophysics Data System (ADS)

    Turkington, Bruce

    2013-08-01

    A general method for deriving closed reduced models of Hamiltonian dynamical systems is developed using techniques from optimization and statistical estimation. Given a vector of resolved variables, selected to describe the macroscopic state of the system, a family of quasi-equilibrium probability densities on phase space corresponding to the resolved variables is employed as a statistical model, and the evolution of the mean resolved vector is estimated by optimizing over paths of these densities. Specifically, a cost function is constructed to quantify the lack-of-fit to the microscopic dynamics of any feasible path of densities from the statistical model; it is an ensemble-averaged, weighted, squared-norm of the residual that results from submitting the path of densities to the Liouville equation. The path that minimizes the time integral of the cost function determines the best-fit evolution of the mean resolved vector. The closed reduced equations satisfied by the optimal path are derived by Hamilton-Jacobi theory. When expressed in terms of the macroscopic variables, these equations have the generic structure of governing equations for nonequilibrium thermodynamics. In particular, the value function for the optimization principle coincides with the dissipation potential that defines the relation between thermodynamic forces and fluxes. The adjustable closure parameters in the best-fit reduced equations depend explicitly on the arbitrary weights that enter into the lack-of-fit cost function. Two particular model reductions are outlined to illustrate the general method. In each example the set of weights in the optimization principle contracts into a single effective closure parameter.

  10. Porous graphitic carbon nitride synthesized via direct polymerization of urea for efficient sunlight-driven photocatalytic hydrogen production

    NASA Astrophysics Data System (ADS)

    Zhang, Yuewei; Liu, Jinghai; Wu, Guan; Chen, Wei

    2012-08-01

    Energy captured directly from sunlight provides an attractive approach towards fulfilling the need for green energy resources on the terawatt scale with minimal environmental impact. Collecting and storing solar energy into fuel through photocatalyzed water splitting to generate hydrogen in a cost-effective way is desirable. To achieve this goal, low-cost and environmentally benign urea was used to synthesize the metal-free photocatalyst graphitic carbon nitride (g-C3N4). A porous structure is achieved via one-step polymerization of the single precursor. The porous structure, with increased BET surface area and pore volume, shows a much higher hydrogen production rate under simulated sunlight irradiation than thiourea-derived and dicyanamide-derived g-C3N4. The presence of an oxygen atom is presumed to play a key role in adjusting the textural properties. Further improvement of the photocatalytic function can be expected with after-treatment due to its rich chemistry in functionalization. Electronic supplementary information (ESI) available: methods for preparing and characterizing the UCN, TCN and DCN samples; methods for examining the photocatalytic hydrogen production; FTIR, XPS, and digital photos of the three products (Fig. S1-6). See DOI: 10.1039/c2nr30948c

  11. Nonlocal kinetic energy functionals by functional integration.

    PubMed

    Mi, Wenhui; Genova, Alessandro; Pavanello, Michele

    2018-05-14

    Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, T_s[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δT_s[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for T_s[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "Kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.
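
    Condition (3) can be written compactly. A sketch under standard assumptions (functional line integration along the scaling path ρ_t = tρ; the actual MGP kernel is specified in the paper):

```latex
% Potential from functional integration of the second functional derivative:
v_s[\rho](\mathbf{r})
  \equiv \frac{\delta T_s[\rho]}{\delta \rho(\mathbf{r})}
  = \int_0^1 dt \int d\mathbf{r}'\,
    \left.\frac{\delta^2 T_s}
               {\delta \rho(\mathbf{r})\,\delta \rho(\mathbf{r}')}\right|_{\rho_t}
    \rho(\mathbf{r}'),
\qquad \rho_t = t\,\rho .
```

    For a density-independent kernel, as in MGP, the t-integration is immediate, and in the homogeneous limit the second derivative reduces to the inverse Lindhard function (condition (2)).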

  12. Nonlocal kinetic energy functionals by functional integration

    NASA Astrophysics Data System (ADS)

    Mi, Wenhui; Genova, Alessandro; Pavanello, Michele

    2018-05-01

    Since the seminal studies of Thomas and Fermi, researchers in the Density-Functional Theory (DFT) community have been searching for accurate electron density functionals. Arguably, the toughest functional to approximate is the noninteracting kinetic energy, Ts[ρ], the subject of this work. The typical paradigm is to first approximate the energy functional and then take its functional derivative, δTs[ρ]/δρ(r), yielding a potential that can be used in orbital-free DFT or subsystem DFT simulations. Here, this paradigm is challenged by constructing the potential from the second functional derivative via functional integration. A new nonlocal functional for Ts[ρ] is prescribed [which we dub Mi-Genova-Pavanello (MGP)] having a density-independent kernel. MGP is constructed to satisfy three exact conditions: (1) a nonzero "Kinetic electron" arising from a nonzero exchange hole; (2) the second functional derivative must reduce to the inverse Lindhard function in the limit of homogeneous densities; (3) the potential is derived from functional integration of the second functional derivative. Pilot calculations show that MGP is capable of reproducing accurate equilibrium volumes, bulk moduli, total energies, and electron densities for metallic (body-centered cubic, face-centered cubic) and semiconducting (crystal diamond) phases of silicon as well as of III-V semiconductors. The MGP functional is found to be numerically stable, typically reaching self-consistency within 12 iterations of a truncated Newton minimization algorithm. MGP's computational cost and memory requirements are low and comparable to the Wang-Teter nonlocal functional or any generalized gradient approximation functional.

  13. Optimality of cycle time and inventory decisions in a two echelon inventory system with exponential price dependent demand under credit period

    NASA Astrophysics Data System (ADS)

    Krugon, Seelam; Nagaraju, Dega

    2017-05-01

    This work describes and proposes a two-echelon inventory system in a supply chain, where the manufacturer offers a credit period to the retailer under exponential price-dependent demand. Demand is expressed as an exponential function of the retailer's unit selling price. A mathematical model is formulated to establish the optimality of the cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain. The main objective of the paper is to incorporate the trade-credit concept from the manufacturer to the retailer under exponential price-dependent demand; the retailer prefers to delay payments to the manufacturer. In the first stage, the retailer's and manufacturer's cost expressions are formulated as functions of ordering cost, carrying cost, and transportation cost. In the second stage, the manufacturer's and retailer's expressions are combined. A MATLAB program is written to derive the optimal cycle time, retailer replenishment quantity, number of shipments, and total relevant cost of the supply chain, from which managerial insights can be drawn. The findings show that the total cost of the supply chain decreases as the credit period increases under exponential price-dependent demand. To analyse the influence of the model parameters, a parametric analysis is also carried out with the help of a numerical example.
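
    As a minimal sketch of how such an optimality condition is evaluated numerically, consider a single-stage analogue with exponential price-dependent demand (the parameter values and the simple EOQ-style cost are assumptions; the paper's two-echelon model with a credit period is richer):

```python
import math

# Assumed parameters for the sketch.
a, b = 5000.0, 0.05     # demand scale and price sensitivity: D(p) = a*exp(-b*p)
K, h = 100.0, 2.0       # ordering cost per order, holding cost per unit-year

def demand(p):
    return a * math.exp(-b * p)        # exponential price-dependent demand

def total_cost(Q, p):
    D = demand(p)
    return D / Q * K + Q / 2.0 * h     # ordering + carrying cost per year

def optimal_Q(p):
    return math.sqrt(2.0 * K * demand(p) / h)   # closed-form minimizer of TC

p = 20.0
Q_star = optimal_Q(p)
cycle_time = Q_star / demand(p)        # optimal replenishment cycle (years)
```

    In the paper, the optimization is carried out jointly over cycle time, replenishment quantity, and the number of shipments, with the credit period shifting the cost expressions.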

  14. Annual Costs of Care for Pediatric Irritable Bowel Syndrome, Functional Abdominal Pain, and Functional Abdominal Pain Syndrome.

    PubMed

    Hoekman, Daniël R; Rutten, Juliette M T M; Vlieger, Arine M; Benninga, Marc A; Dijkgraaf, Marcel G W

    2015-11-01

    To estimate annual medical and nonmedical costs of care for children diagnosed with irritable bowel syndrome (IBS) or functional abdominal pain (syndrome; FAP/FAPS). Baseline data from children with IBS or FAP/FAPS who were included in a multicenter trial (NTR2725) in The Netherlands were analyzed. Patients' parents completed a questionnaire concerning usage of healthcare resources, travel costs, out-of-pocket expenses, productivity loss of parents, and supportive measures at school. Use of abdominal pain related prescription medication was derived from case reports forms. Total annual costs per patient were calculated as the sum of direct and indirect medical and nonmedical costs. Costs of initial diagnostic investigations were not included. A total of 258 children, mean age 13.4 years (±5.5), were included, and 183 (70.9%) were female. Total annual costs per patient were estimated to be €2512.31. Inpatient and outpatient healthcare use were major cost drivers, accounting for 22.5% and 35.2% of total annual costs, respectively. Parental productivity loss accounted for 22.2% of total annual costs. No difference was found in total costs between children with IBS or FAP/FAPS. Pediatric abdominal pain related functional gastrointestinal disorders impose a large economic burden on patients' families and healthcare systems. More than one-half of total annual costs of IBS and FAP/FAPS consist of inpatient and outpatient healthcare use. Netherlands Trial Registry: NTR2725. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. 3D CSEM data inversion using Newton and Halley class methods

    NASA Astrophysics Data System (ADS)

    Amaya, M.; Hansen, K. R.; Morten, J. P.

    2016-05-01

    For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost of the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is either similar or slightly superior to that of the GN scheme close to the minimum of the cost function.
Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those applied in this paper.
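
    The distinction between Gauss-Newton and the higher-order schemes can be seen in a one-parameter toy least-squares problem (a hypothetical exponential forward model, not the CSEM operator): Gauss-Newton keeps only the J^T J term, while full Newton adds the residual-curvature term built from second derivatives of the residuals.

```python
import numpy as np

# Toy residual: r_i(m) = exp(m * t_i) - d_i, with synthetic data at m = 0.7.
t = np.linspace(0.0, 1.0, 20)
d = np.exp(0.7 * t)

def step(m, full_newton):
    r = np.exp(m * t) - d
    J = t * np.exp(m * t)                  # first derivatives dr_i/dm
    H = J @ J                              # Gauss-Newton curvature J^T J
    if full_newton:
        H += r @ (t * t * np.exp(m * t))   # + sum_i r_i * d2r_i/dm2
    return m - (J @ r) / H

m_gn = m_newton = 2.0
for _ in range(50):
    m_gn = step(m_gn, full_newton=False)
    m_newton = step(m_newton, full_newton=True)
```

    Both iterations reach the same minimum by different paths, echoing the behaviour reported above.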

  16. Generalized Variance Function Applications in Forestry

    Treesearch

    James Alegria; Charles T. Scott; Charles T. Scott

    1991-01-01

    Adequately predicting the sampling errors of tabular data can reduce printing costs by eliminating the need to publish separate sampling error tables. Two generalized variance functions (GVFs) found in the literature and three GVFs derived for this study were evaluated for their ability to predict the sampling error of tabular forestry estimates. The recommended GVFs...

  17. An improved 3D MoF method based on analytical partial derivatives

    NASA Astrophysics Data System (ADS)

    Chen, Xiang; Zhang, Xiong

    2016-12-01

    The MoF (Moment of Fluid) method is one of the most accurate approaches among various surface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate surface. Therefore, the partial derivatives of the objective function have to be involved during the iteration for efficiency and accuracy. However, to the best of our knowledge, the derivatives are currently estimated numerically by finite difference approximation, because it is very difficult to obtain the analytical derivatives of the objective function for an implicit optimization problem. Employing numerical derivatives in an iteration not only increases the computational cost, but also deteriorates the convergence rate and robustness of the iteration due to their numerical error. In this paper, the analytical first-order partial derivatives of the objective function are deduced for 3D problems. The analytical derivatives can be calculated accurately, so they are incorporated into the MoF method to improve its accuracy, efficiency and robustness. Numerical studies show that by using the analytical derivatives the iterations converge in all mixed cells, with an efficiency improvement of 3 to 4 times.
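
    The accuracy argument in miniature (a hypothetical smooth function, not the MoF objective): an analytic derivative is exact to machine precision, while a central-difference estimate carries truncation and round-off error that can slow or stall an iteration.

```python
import math

# Hypothetical smooth objective-function ingredient f and its exact derivative.
f = lambda x: math.sin(x) * math.exp(x)
df_exact = lambda x: (math.cos(x) + math.sin(x)) * math.exp(x)

def df_central(x, h=1e-6):
    # second-order finite difference: O(h^2) truncation + O(eps/h) round-off
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0 = 0.8
fd_error = abs(df_central(x0) - df_exact(x0))   # small, but nonzero in general
```

    Each such evaluation also costs two extra function calls per derivative, which is the efficiency gap the analytical derivatives close.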

  18. PACE 2: Pricing and Cost Estimating Handbook

    NASA Technical Reports Server (NTRS)

    Stewart, R. D.; Shepherd, T.

    1977-01-01

    An automatic data processing system to be used for the preparation of industrial engineering type manhour and material cost estimates has been established. This computer system has evolved into a highly versatile and highly flexible tool which significantly reduces computation time, eliminates computational errors, and reduces typing and reproduction time for estimators and pricers since all mathematical and clerical functions are automatic once basic inputs are derived.

  19. GPU-accelerated adjoint algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.
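
    The taping idea can be sketched in a few lines (a minimal scalar reverse-mode example with assumed names; real AAD tools, including the software described here, automate this for whole programs and, in this work, for GPU vector and matrix intrinsics):

```python
# Every operation appends its output and local partials to a tape during the
# forward pass; the reverse pass replays the tape backwards, accumulating
# adjoints. Memory grows with tape length, which is the bottleneck discussed.
tape = []   # entries: (output_node, [(input_node, local_partial), ...])

class Var:
    def __init__(self, value):
        self.value, self.adjoint = value, 0.0

def add(a, b):
    out = Var(a.value + b.value)
    tape.append((out, [(a, 1.0), (b, 1.0)]))
    return out

def mul(a, b):
    out = Var(a.value * b.value)
    tape.append((out, [(a, b.value), (b, a.value)]))
    return out

def backward(out):
    out.adjoint = 1.0
    for node, inputs in reversed(tape):      # strict reverse replay of the tape
        for inp, local in inputs:
            inp.adjoint += node.adjoint * local

x, y = Var(3.0), Var(4.0)
z = add(mul(x, y), x)    # z = x*y + x  =>  dz/dx = y + 1,  dz/dy = x
backward(z)
```

    Expressing the cost in terms of vector and matrix intrinsics, as the paper does, shortens the tape to a handful of coarse-grained entries, which is what saves memory on the GPU.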

  20. GPU-Accelerated Adjoint Algorithmic Differentiation.

    PubMed

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2016-03-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the "tape". Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography.

  1. GPU-Accelerated Adjoint Algorithmic Differentiation

    PubMed Central

    Gremse, Felix; Höfter, Andreas; Razik, Lukas; Kiessling, Fabian; Naumann, Uwe

    2015-01-01

    Many scientific problems such as classifier training or medical image reconstruction can be expressed as minimization of differentiable real-valued cost functions and solved with iterative gradient-based methods. Adjoint algorithmic differentiation (AAD) enables automated computation of gradients of such cost functions implemented as computer programs. To backpropagate adjoint derivatives, excessive memory is potentially required to store the intermediate partial derivatives on a dedicated data structure, referred to as the “tape”. Parallelization is difficult because threads need to synchronize their accesses during taping and backpropagation. This situation is aggravated for many-core architectures, such as Graphics Processing Units (GPUs), because of the large number of light-weight threads and the limited memory size in general as well as per thread. We show how these limitations can be mediated if the cost function is expressed using GPU-accelerated vector and matrix operations which are recognized as intrinsic functions by our AAD software. We compare this approach with naive and vectorized implementations for CPUs. We use four increasingly complex cost functions to evaluate the performance with respect to memory consumption and gradient computation times. Using vectorization, CPU and GPU memory consumption could be substantially reduced compared to the naive reference implementation, in some cases even by an order of complexity. The vectorization allowed usage of optimized parallel libraries during forward and reverse passes which resulted in high speedups for the vectorized CPU version compared to the naive reference implementation. The GPU version achieved an additional speedup of 7.5 ± 4.4, showing that the processing power of GPUs can be utilized for AAD using this concept. Furthermore, we show how this software can be systematically extended for more complex problems such as nonlinear absorption reconstruction for fluorescence-mediated tomography. 
PMID:26941443

  2. Outcomes and costs of incorporating a multibiomarker disease activity test in the management of patients with rheumatoid arthritis.

    PubMed

    Michaud, Kaleb; Strand, Vibeke; Shadick, Nancy A; Degtiar, Irina; Ford, Kerri; Michalopoulos, Steven N; Hornberger, John

    2015-09-01

    The multibiomarker disease activity (MBDA) blood test has been clinically validated as a measure of disease activity in patients with RA. We aimed to estimate the effect of the MBDA test on physical function for patients with RA (based on HAQ), quality-adjusted life years and costs over 10 years. A decision analysis was conducted to quantify the effect of using the MBDA test on RA-related outcomes and costs to private payers and employers. Results of a clinical management study reporting changes to anti-rheumatic drug recommendations after use of the MBDA test informed clinical utility. The effect of treatment changes on HAQ was derived from 5 tight-control and 13 treatment-switch trials. Baseline HAQ scores and the HAQ score relationship with medical costs and quality of life were derived from published National Data Bank for Rheumatic Diseases data. Use of the MBDA test is projected to improve HAQ scores by 0.09 units in year 1, declining to 0.02 units after 10 years. Over the 10 year time horizon, quality-adjusted life years increased by 0.08 years and costs decreased by US$457 (cost savings in disability-related medical costs, US$659; in productivity costs, US$2137). The most influential variable in the analysis was the effect of the MBDA test on clinician treatment recommendations and subsequent HAQ changes. The MBDA test aids in the assessment of disease activity in patients with RA by changing treatment decisions, improving patients' functional status, and reducing costs. Further validation is ongoing and future longitudinal studies are warranted. © The Author 2015. Published by Oxford University Press on behalf of the British Society for Rheumatology.

  3. Replica Approach for Minimal Investment Risk with Cost

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-06-01

    In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
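
    For a finite number of assets the same objective can be minimized directly, which is a useful numerical check on the replica predictions. A sketch with assumed parameter values, using the budget constraint 1^T w = N common in this literature:

```python
import numpy as np

# Assumed small instance: empirical covariance from T random return scenarios,
# linear cost coefficients c, and the budget constraint sum(w) = N. The
# constrained minimizer of H(w) = 0.5 * w^T C w + c^T w follows from one KKT
# linear system; replica analysis predicts its typical value for large N.
rng = np.random.default_rng(0)
N, T = 10, 50
X = rng.standard_normal((T, N))
C = X.T @ X / T                       # empirical (Wishart-type) covariance
c = 0.1 * np.ones(N)                  # per-asset cost coefficients (assumed)

# KKT system:  [C   1] [w  ]   [-c]
#              [1^T 0] [lam] = [ N]
A = np.block([[C, np.ones((N, 1))], [np.ones((1, N)), np.zeros((1, 1))]])
sol = np.linalg.solve(A, np.concatenate([-c, [float(N)]]))
w, lam = sol[:N], sol[N]              # optimal portfolio and multiplier
objective = 0.5 * w @ C @ w + c @ w   # minimal investment risk with cost
```

    The investment concentration studied in the paper is then q = (1/N) Σ w_i², computable directly from w.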

  4. Model reduction by weighted Component Cost Analysis

    NASA Technical Reports Server (NTRS)

    Kim, Jae H.; Skelton, Robert E.

    1990-01-01

    Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful for computing the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
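
    A hedged two-state sketch of the component-cost computation (the matrices below are assumptions): for a stable system x' = Ax + Dw driven by white noise w, the steady-state covariance X solves the Lyapunov equation AX + XA^T + DD^T = 0, and the quadratic cost V = E[x^T Q x] = trace(QX) splits into per-component costs (QX)_ii, the deletion criterion described above.

```python
import numpy as np

# Assumed system data for the sketch.
A = np.array([[-1.0, 0.2], [0.0, -3.0]])   # stable state matrix
D = np.eye(2)                              # white-noise input matrix
Q = np.eye(2)                              # weight of the quadratic cost

# Steady-state covariance from A X + X A^T + D D^T = 0, solved here by
# vectorization: (I kron A + A kron I) vec(X) = -vec(D D^T).
n = A.shape[0]
L = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
Xcov = np.linalg.solve(L, -(D @ D.T).reshape(-1)).reshape(n, n)

component_costs = np.diag(Q @ Xcov)        # contribution of each state to V
total_cost_V = component_costs.sum()       # V = trace(Q Xcov)
# Model reduction would delete the components with the smallest costs.
```

    Here the fast, heavily damped second state carries the smaller cost and would be the first candidate for deletion.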

  5. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage parameter variations.

  6. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.
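
    The incremental rerouting idea can be illustrated with a toy two-path example (the convex congestion cost c(l) = l/(cap - l) is an assumption; the paper derives its link cost from the probability of establishing a cut-through route):

```python
# Two candidate paths with a convex per-link cost; flows are moved one unit at
# a time while the total cost strictly decreases, mimicking the incremental
# cost-reduction step of the routing algorithm described above.
cap = 10.0

def link_cost(load):
    return load / (cap - load)      # assumed convex congestion cost

def total_cost(loads):
    return sum(link_cost(l) for l in loads)

flows = [8.0, 1.0]                  # initial unbalanced assignment
improved = True
while improved:
    improved = False
    for src, dst in ((0, 1), (1, 0)):
        if flows[src] >= 1.0:
            trial = flows[:]
            trial[src] -= 1.0
            trial[dst] += 1.0
            if total_cost(trial) < total_cost(flows):
                flows = trial
                improved = True
```

    With a convex cost the procedure balances the load across the two paths, stopping when no single-flow move reduces the total cost.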

  7. A unified convergence theory of a numerical method, and applications to the replenishment policies.

    PubMed

    Mi, Xiang-jiang; Wang, Xing-hua

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers advocated that the iterative method of Newton could be applied to the derivative of the total cost function in order to get the optimal solution. But this approach requires calculation of the second derivative of the function. To avoid this complex computation, we use another iterative method, presented by the second author. One of the goals of this paper is to present a unified convergence theory of this method. Then we give a numerical example to show the application of our theory.
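
    The Newton-on-the-derivative approach being avoided can be sketched as follows (a hypothetical EOQ-style cost, not the paper's model); note that it indeed requires the second derivative of the total cost function:

```python
# Hypothetical total cost TC(Q) = D*K/Q + h*Q/2. Newton's method is applied to
# TC'(Q), which requires TC''(Q) as well.
D, K, h = 1200.0, 50.0, 4.0    # assumed demand rate, ordering and holding costs

def d_tc(Q):                   # TC'(Q)
    return -D * K / Q**2 + h / 2.0

def d2_tc(Q):                  # TC''(Q): the extra derivative that the
    return 2.0 * D * K / Q**3  # alternative iterative method avoids

Q = 50.0
for _ in range(30):
    Q -= d_tc(Q) / d2_tc(Q)    # Newton iteration on the derivative

Q_closed_form = (2.0 * D * K / h) ** 0.5   # classic EOQ optimum for comparison
```

    The paper's alternative replaces d2_tc with a derivative-free iteration while retaining a provable convergence theory.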

  8. Is Surgery for Displaced, Midshaft Clavicle Fractures in Adults Cost-Effective? Results Based on a Multicenter Randomized Controlled Trial

    PubMed Central

    2010-01-01

    Objectives To determine the cost-effectiveness of open reduction internal fixation (ORIF) of displaced, midshaft clavicle fractures in adults. Design Formal cost-effectiveness analysis based on a prospective, randomized controlled trial. Setting Eight hospitals in Canada (seven university affiliated and one community hospital) Patients/Participants 132 adults with acute, completely displaced, midshaft clavicle fractures Intervention Clavicle ORIF versus nonoperative treatment Main Outcome Measurements Utilities derived from SF-6D Results The base-case cost per quality adjusted life year (QALY) gained for ORIF was $65,000. Cost-effectiveness improved to $28,150/QALY gained when the functional benefit from ORIF was assumed to be permanent, with cost per QALY gained falling below $50,000 when the functional advantage persisted for 9.3 years or more. In other sensitivity analyses, the cost per QALY gained for ORIF fell below $50,000 when ORIF cost less than $10,465 (base case cost $13,668) or the long-term utility difference between nonoperative treatment and ORIF was greater than 0.034 (base-case difference 0.014). Short-term disutility associated with fracture healing also affected cost-effectiveness, with the cost per QALY gained for ORIF falling below $50,000 when the utility of a fracture treated nonoperatively prior to union was less than 0.617 (base-case utility 0.706) or when nonoperative treatment increased the time to union by 20 weeks (base-case difference 12 weeks). Conclusions The cost-effectiveness of ORIF after acute clavicle fracture depended on the durability of functional advantage for ORIF compared to nonoperative treatment. When functional benefits persisted for more than 9 years, ORIF had favorable value compared with many accepted health interventions. PMID:20577073

  9. Assessing administrative costs of mental health and substance abuse services.

    PubMed

    Broyles, Robert W; Narine, Lutchmie; Robertson, Madeline J

    2004-05-01

    Increasing competition in the market for mental health and substance abuse MHSA services and the potential to realize significant administrative savings have created an imperative to monitor, evaluate, and control spending on administrative functions. This paper develops a generic model that evaluates spending on administrative personnel by a group of providers. The precision of the model is demonstrated by examining a set of data assembled from five MHSA service providers. The model examines a differential cost construction derived from inter-facility comparisons of administrative expenses. After controlling for the scale of operations, the results enable MHSA programs to control the efficiency of administrative personnel and related rates of compensation. The results indicate that the efficiency of using the administrative complement and the scale of operations represent the lion's share of the total differential cost. The analysis also indicates that a modest improvement in the use of administrative personnel results in substantial cost savings, an increase in the net cash flow derived from operations, an improvement in the fiscal performance of the provider, and a decline in opportunity costs that assume the form of foregone direct patient care.

  10. A new analytical method for estimating lumped parameter constants of linear viscoelastic models from strain rate tests

    NASA Astrophysics Data System (ADS)

    Mattei, G.; Ahluwalia, A.

    2018-04-01

    We introduce a new function, the apparent elastic modulus strain-rate spectrum E_app(ε̇), for the derivation of lumped parameter constants for Generalized Maxwell (GM) linear viscoelastic models from stress-strain data obtained at various compressive strain rates ε̇. The E_app(ε̇) function was derived using the tangent modulus function obtained from the GM model stress-strain response to a constant-ε̇ input. Material viscoelastic parameters can be rapidly derived by fitting experimental E_app data obtained at different strain rates to the E_app(ε̇) function. This single-curve fitting returns similar viscoelastic constants as the original epsilon-dot method based on a multi-curve global fitting procedure with shared parameters. Its low computational cost permits quick and robust identification of viscoelastic constants even when a large number of strain rates or replicates per strain rate are considered. This method is particularly suited for the analysis of bulk compression and nano-indentation data of soft (bio)materials.
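
    A sketch of the single-curve fitting idea for one Maxwell arm plus an equilibrium spring (the functional form and all parameter values are assumptions; the paper treats the generalized model): under a constant strain rate, the stress at strain ε is σ = E∞·ε + E1·τ·ε̇·(1 - exp(-ε/(τ·ε̇))), so E_app = σ/ε is a function of the rate alone.

```python
import numpy as np

# Assumed model: one Maxwell arm (E1, tau) in parallel with equilibrium spring
# E_inf; the apparent modulus at strain eps under constant strain rate is
#   E_app(rate) = E_inf + E1*tau*rate/eps * (1 - exp(-eps/(tau*rate))).
E_inf, eps = 10.0, 0.1          # known equilibrium modulus and test strain
E1_true, tau_true = 40.0, 0.5   # "true" constants used for synthetic data

def e_app(rate, e1, t):
    return E_inf + e1 * t * rate / eps * (1.0 - np.exp(-eps / (t * rate)))

rates = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])   # tested strain rates
data = e_app(rates, E1_true, tau_true)               # synthetic E_app spectrum

# Single-curve fit of (E1, tau) to the spectrum; a coarse grid search stands in
# for a nonlinear least-squares routine to keep the sketch dependency-free.
grid_e1 = np.linspace(20.0, 60.0, 81)
grid_tau = np.linspace(0.1, 1.0, 91)
sse, e1_fit, tau_fit = min(
    (float(np.sum((e_app(rates, a, b) - data) ** 2)), a, b)
    for a in grid_e1 for b in grid_tau
)
```

    One fit over the rate spectrum recovers both constants, which is the computational advantage over the multi-curve global fit.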

  11. Three-dimensional habitat structure and landscape genetics: a step forward in estimating functional connectivity.

    PubMed

    Milanesi, P; Holderegger, R; Bollmann, K; Gugerli, F; Zellweger, F

    2017-02-01

    Estimating connectivity among fragmented habitat patches is crucial for evaluating the functionality of ecological networks. However, current estimates of landscape resistance to animal movement and dispersal lack landscape-level data on local habitat structure. Here, we used a landscape genetics approach to show that high-fidelity habitat structure maps derived from Light Detection and Ranging (LiDAR) data critically improve functional connectivity estimates compared to conventional land cover data. We related pairwise genetic distances of 128 Capercaillie (Tetrao urogallus) genotypes to least-cost path distances at multiple scales derived from land cover data. Resulting β values of linear mixed effects models ranged from 0.372 to 0.495, while those derived from LiDAR ranged from 0.558 to 0.758. The identification and conservation of functional ecological networks suffering from habitat fragmentation and homogenization will thus benefit from the growing availability of detailed and contiguous data on three-dimensional habitat structure and associated habitat quality. © 2016 by the Ecological Society of America.

  12. High-Order Automatic Differentiation of Unmodified Linear Algebra Routines via Nilpotent Matrices

    NASA Astrophysics Data System (ADS)

    Dunham, Benjamin Z.

    This work presents a new automatic differentiation method, Nilpotent Matrix Differentiation (NMD), capable of propagating any order of mixed or univariate derivative through common linear algebra functions--most notably third-party sparse solvers and decomposition routines, in addition to basic matrix arithmetic operations and power series--without changing data types or modifying code line by line; this allows differentiation across sequences of arbitrarily many such functions with minimal implementation effort. NMD works by enlarging the matrices and vectors passed to the routines, replacing each original scalar with a matrix block augmented by derivative data; these blocks are constructed with special sparsity structures, termed "stencils," each designed to be isomorphic to a particular multidimensional hypercomplex algebra. The algebras are in turn designed such that Taylor expansions of hypercomplex function evaluations are finite in length and thus exactly track derivatives without approximation error. Although this use of the method in the "forward mode" is unique in its own right, it is also possible to apply it to existing implementations of the (first-order) discrete adjoint method to find high-order derivatives with lowered cost complexity; for example, for a problem with N inputs and an adjoint solver whose cost is independent of N--i.e., O(1)--the N x N Hessian can be found in O(N) time, which is comparable to existing second-order adjoint methods that require far more problem-specific implementation effort. Higher derivatives are likewise less expensive--e.g., an N x N x N rank-three tensor can be found in O(N^2). Alternatively, a Hessian-vector product can be found in O(1) time, which may open up many matrix-based simulations to a range of existing optimization or surrogate modeling approaches.
As a final corollary in parallel to the NMD-adjoint hybrid method, the existing complex-step differentiation (CD) technique is also shown to be capable of finding the Hessian-vector product. All variants are implemented on a stochastic diffusion problem and compared in-depth with various cost and accuracy metrics.

  13. Riemannian geometric approach to human arm dynamics, movement optimization, and invariance

    NASA Astrophysics Data System (ADS)

    Biess, Armin; Flash, Tamar; Liebermann, Dario G.

    2011-03-01

    We present a generally covariant formulation of human arm dynamics and optimization principles in Riemannian configuration space. We extend the one-parameter family of mean-squared-derivative (MSD) cost functionals from Euclidean to Riemannian space, and we show that they are mathematically identical to the corresponding dynamic costs when formulated in a Riemannian space equipped with the kinetic energy metric. In particular, we derive the equivalence of the minimum-jerk and minimum-torque change models in this metric space. Solutions of the one-parameter family of MSD variational problems in Riemannian space are given by (reparametrized) geodesic paths, which correspond to movements with least muscular effort. Finally, movement invariants are derived from symmetries of the Riemannian manifold. We argue that the geometrical structure imposed on the arm’s configuration space may provide insights into the emerging properties of the movements generated by the motor system.

  14. Estimating economic thresholds for pest control: an alternative procedure.

    PubMed

    Ramirez, O A; Saunders, J L

    1999-04-01

    An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between gross income and pest control cost functions.
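
    The trade-off the abstract describes can be made concrete with a toy profit maximization; all functional forms and coefficients below are invented for illustration, not taken from the paper:

```python
# Profit(T) = gross income(T) - control cost(T), maximized over the
# economic threshold T (e.g., pests per plant). Stricter (lower) thresholds
# preserve yield but cost more to maintain.
def gross_income(T, price=2.0):
    yield_t = 100.0 - 4.0 * T        # yield declines as the threshold is relaxed
    return price * max(yield_t, 0.0)

def control_cost(T):
    return 60.0 / (1.0 + T)          # stricter thresholds are costlier to hold

thresholds = [t / 10.0 for t in range(0, 101)]   # grid on T in [0, 10]
best_T = max(thresholds, key=lambda T: gross_income(T) - control_cost(T))
```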

  15. Models for forecasting energy use in the US farm sector

    NASA Astrophysics Data System (ADS)

    Christensen, L. R.

    1981-07-01

    Econometric models were developed and estimated for the purpose of forecasting electricity and petroleum demand in US agriculture. A structural approach is pursued which takes account of the fact that the quantity demanded of any one input is a decision made in conjunction with other input decisions. Three different functional forms of varying degrees of complexity are specified for the structural cost function, which describes the cost of production as a function of the level of output and factor prices. Demand for materials (all purchased inputs) is derived from these models. A separate model, which breaks this demand up into demand for the four components of materials, is used to produce forecasts of electricity and petroleum in a stepwise manner.
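
    The structural approach rests on Shephard's lemma: conditional factor demand is the price derivative of the cost function. A simple Cobb-Douglas cost function stands in below for the study's more flexible forms (illustrative only, not the paper's specification):

```python
# Cost function C(y, p1, p2) = A * y * p1^a * p2^(1-a); by Shephard's lemma
# the conditional demand for input 1 is x1 = dC/dp1.
A, a = 1.5, 0.4

def cost(y, p1, p2):
    return A * y * p1**a * p2**(1 - a)

def demand_x1(y, p1, p2):
    # analytic price derivative dC/dp1
    return A * y * a * p1**(a - 1) * p2**(1 - a)

# finite-difference check of the lemma at one point
y, p1, p2, h = 10.0, 2.0, 3.0, 1e-6
fd = (cost(y, p1 + h, p2) - cost(y, p1 - h, p2)) / (2 * h)
```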

  16. Space processing applications payload equipment study. Volume 2E: Commercial equipment utility

    NASA Technical Reports Server (NTRS)

    Smith, A. G. (Editor)

    1974-01-01

    Examination of commercial equipment technologies revealed that the functional performance requirements of space processing equipment could generally be met by state-of-the-art design practices. Thus, an apparatus could be evolved from a standard item or derived by custom design using present technologies. About 15 percent of the equipment needed has no analogous commercial base of derivation and requires special development. This equipment is involved primarily with contactless heating and position control. The derivation of payloads using commercial equipment sources provides a broad and potentially cost-effective base upon which to draw. The derivation of payload equipment from commercial technologies poses other issues beyond that of the identifiable functional performance, but preliminary results on testing of selected equipment appear quite favorable. During this phase of the SPA study, several aspects of commercial equipment utility were assessed and considered. These included safety, packaging and structural, power conditioning (electrical/electronic), thermal and materials of construction.

  17. A Case Study on the Application of a Structured Experimental Method for Optimal Parameter Design of a Complex Control System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2015-01-01

    This report documents a case study on the application of Reliability Engineering techniques to achieve an optimal balance between performance and robustness by tuning the functional parameters of a complex non-linear control system. For complex systems with intricate and non-linear patterns of interaction between system components, analytical derivation of a mathematical model of system performance and robustness in terms of functional parameters may not be feasible or cost-effective. The demonstrated approach is simple, structured, effective, repeatable, and cost and time efficient. This general approach is suitable for a wide range of systems.

  18. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.

  19. Alternative communication network designs for an operational Plato 4 CAI system

    NASA Technical Reports Server (NTRS)

    Mobley, R. E., Jr.; Eastwood, L. F., Jr.

    1975-01-01

    The cost of alternative communications networks for the dissemination of PLATO IV computer-aided instruction (CAI) was studied. Four communication techniques are compared: leased telephone lines, satellite communication, UHF TV, and low-power microwave radio. For each network design, costs per student contact hour are computed. These costs are derived as functions of student population density, a parameter which can be calculated from census data for one potential market for CAI, the public primary and secondary schools. Calculating costs in this way allows one to determine which of the four communications alternatives can serve this market least expensively for any given area in the U.S. The analysis indicates that radio distribution techniques are cost-optimal over a wide range of conditions.

  20. Network formation: neighborhood structures, establishment costs, and distributed learning.

    PubMed

    Chasparis, Georgios C; Shamma, Jeff S

    2013-12-01

    We consider the problem of network formation in a distributed fashion. Network formation is modeled as a strategic-form game, where agents represent nodes that form and sever unidirectional links with other nodes and derive utilities from these links. Furthermore, agents can form links only with a limited set of neighbors. Agents trade off the benefit from links, which is determined by a distance-dependent reward function, and the cost of maintaining links. When each agent acts independently, trying to maximize its own utility function, we can characterize “stable” networks through the notion of Nash equilibrium. In fact, the introduced reward and cost functions lead to Nash equilibria (networks), which exhibit several desirable properties such as connectivity, bounded-hop diameter, and efficiency (i.e., minimum number of links). Since Nash networks may not necessarily be efficient, we also explore the possibility of “shaping” the set of Nash networks through the introduction of state-based utility functions. Such utility functions may represent dynamic phenomena such as establishment costs (either positive or negative). Finally, we show how Nash networks can be the outcome of a distributed learning process. In particular, we extend previous learning processes to so-called “state-based” weakly acyclic games, and we show that the proposed network formation games belong to this class of games.

  1. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
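
    The abstract's prescription, treating the degree of the power-law polynomial as a free parameter rather than a constant, can be sketched as a three-parameter fit. The model form M(U) = SMR + b*U^c and all values below are a synthetic illustration, not data from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def metab(U, smr, b, c):
    # activity metabolism vs swimming speed U: standard metabolic rate smr,
    # drag power index b, and power-law degree c fitted as a parameter
    return smr + b * U**c

speeds = np.linspace(0.5, 3.0, 12)         # swimming speeds (synthetic)
true = (50.0, 8.0, 2.4)                    # smr, b, and a degree c > 1
rates = metab(speeds, *true)               # noise-free synthetic data

popt, _ = curve_fit(metab, speeds, rates, p0=(40.0, 5.0, 2.0))
```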

  2. Organizing Equity Exchanges

    NASA Astrophysics Data System (ADS)

    Schaper, Torsten

    In the last years equity exchanges have diversified their operations into business areas such as derivatives trading, post-trading services, and software sales. Securities trading and post-trading are subject to economies of scale and scope. The integration of these functions into one institution ensures efficiency by economizing on transactions costs.

  3. Medial compartment knee osteoarthritis: age-stratified cost-effectiveness of total knee arthroplasty, unicompartmental knee arthroplasty, and high tibial osteotomy.

    PubMed

    Smith, William B; Steinberg, Joni; Scholtes, Stefan; Mcnamara, Iain R

    2017-03-01

    To compare the age-based cost-effectiveness of total knee arthroplasty (TKA), unicompartmental knee arthroplasty (UKA), and high tibial osteotomy (HTO) for the treatment of medial compartment knee osteoarthritis (MCOA). A Markov model was used to simulate theoretical cohorts of patients 40, 50, 60, and 70 years of age undergoing primary TKA, UKA, or HTO. Costs and outcomes associated with initial and subsequent interventions were estimated by following these virtual cohorts over a 10-year period. Revision and mortality rates, costs, and functional outcome data were estimated from a systematic review of the literature. Probabilistic analysis was conducted to accommodate these parameters' inherent uncertainty, and both discrete and probabilistic sensitivity analyses were utilized to assess the robustness of the model's outputs to changes in key variables. HTO was most likely to be cost-effective in cohorts under 60, and UKA most likely in those 60 and over. Probabilistic results did not indicate one intervention to be significantly more cost-effective than another. The model was exquisitely sensitive to changes in utility (functional outcome), somewhat sensitive to changes in cost, and least sensitive to changes in 10-year revision risk. HTO may be the most cost-effective option when treating MCOA in younger patients, while UKA may be preferred in older patients. Functional utility is the primary driver of the cost-effectiveness of these interventions. For the clinician, this study supports HTO as a competitive treatment option in young patient populations. It also validates each one of the three interventions considered as potentially optimal, depending heavily on patient preferences and functional utility derived over time.
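
    A skeletal version of the kind of Markov cohort model the study uses can be sketched as follows; the states, transition probabilities, costs, utilities, and discount rate below are invented placeholders, not values from the paper:

```python
import numpy as np

# states: well after surgery / revision / dead; 10 annual cycles, discounted
P = np.array([[0.97, 0.02, 0.01],    # well -> well / revision / dead
              [0.00, 0.98, 0.02],    # revision -> revision / dead
              [0.00, 0.00, 1.00]])   # dead is absorbing
state_cost = np.array([200.0, 1500.0, 0.0])   # annual cost per state
state_qaly = np.array([0.80, 0.60, 0.0])      # annual utility per state

cohort = np.array([1.0, 0.0, 0.0])            # everyone starts in "well"
disc, total_cost, total_qaly = 0.035, 0.0, 0.0
for year in range(10):
    d = 1.0 / (1.0 + disc) ** year
    total_cost += d * cohort @ state_cost     # discounted expected cost
    total_qaly += d * cohort @ state_qaly     # discounted expected QALYs
    cohort = cohort @ P                       # advance one cycle
```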

  4. Defining landscape resistance values in least-cost connectivity models for the invasive grey squirrel: a comparison of approaches using expert-opinion and habitat suitability modelling.

    PubMed

    Stevenson-Holt, Claire D; Watts, Kevin; Bellamy, Chloe C; Nevin, Owen T; Ramsey, Andrew D

    2014-01-01

    Least-cost models are widely used to study the functional connectivity of habitat within a varied landscape matrix. A critical step in the process is identifying resistance values for each land cover based upon the facilitating or impeding impact on species movement. Ideally resistance values would be parameterised with empirical data, but due to a shortage of such information, expert-opinion is often used. However, the use of expert-opinion is seen as subjective, human-centric and unreliable. This study derived resistance values from grey squirrel habitat suitability models (HSM) in order to compare the utility and validity of this approach with more traditional, expert-led methods. Models were built and tested with MaxEnt, using squirrel presence records and a categorical land cover map for Cumbria, UK. Predictions on the likelihood of squirrel occurrence within each land cover type were inverted, providing resistance values which were used to parameterise a least-cost model. The resulting habitat networks were measured and compared to those derived from a least-cost model built with previously collated information from experts. The expert-derived and HSM-inferred least-cost networks differ in precision. The HSM-informed networks were smaller and more fragmented because of the higher resistance values attributed to most habitats. These results are discussed in relation to the applicability of both approaches for conservation and management objectives, providing guidance to researchers and practitioners attempting to apply and interpret a least-cost approach to mapping ecological networks.
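
    The suitability-to-resistance inversion plus a least-cost path can be sketched on a toy grid. Note that "resistance = 1 - suitability, floored at a small positive value" is one common inversion rule and may differ from the paper's exact transformation:

```python
import heapq

suitability = [          # e.g., MaxEnt occurrence likelihood per cell
    [0.9, 0.8, 0.1, 0.9],
    [0.9, 0.2, 0.1, 0.9],
    [0.9, 0.9, 0.9, 0.9],
]
resist = [[max(1.0 - s, 0.01) for s in row] for row in suitability]

def least_cost(grid, start, goal):
    # Dijkstra over 4-connected cells; accumulated cost includes both endpoints
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

# the cheap route detours around the low-suitability (high-resistance) band
cost = least_cost(resist, (0, 0), (0, 3))
```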

  5. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.

  6. Derivation of spatial patterns of soil hydraulic properties based on pedotransfer functions

    USDA-ARS?s Scientific Manuscript database

    Spatial patterns in soil hydrology are the product of the spatial distribution of soil hydraulic properties. These properties are notorious for the difficulties and high labor costs involved in measuring them. Often, there is a need to resort to estimating these parameters from other, more readily a...

  7. Solid rocket motor cost model

    NASA Technical Reports Server (NTRS)

    Harney, A. G.; Raphael, L.; Warren, S.; Yakura, J. K.

    1972-01-01

    A systematic and standardized procedure is presented for estimating life cycle costs of solid rocket motor booster configurations. The model consists of clearly defined cost categories and appropriate cost equations in which cost is related to program and hardware parameters. Cost estimating relationships are generally based on analogous experience. In this model the experience drawn on is from estimates prepared by the study contractors. Contractors' estimates are derived by means of engineering estimates for some predetermined level of detail of the SRM hardware and program functions of the system life cycle. This method is frequently referred to as bottom-up. A parametric cost analysis is a useful technique when rapid estimates are required. This is particularly true during the planning stages of a system when hardware designs and program definition are conceptual and constantly changing as the selection process, which includes cost comparisons or trade-offs, is performed. The use of cost estimating relationships also facilitates the performance of cost sensitivity studies in which relative and comparable cost comparisons are significant.
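
    A parametric cost estimating relationship (CER) of the kind described can be sketched as a power law, cost = a * weight^b, fit in log space; the motor weights and costs below are made-up sample points, not data from the report:

```python
import numpy as np

weights = np.array([500.0, 1000.0, 2000.0, 4000.0])   # motor inert weight, kg
costs   = np.array([1.2, 1.9, 3.1, 5.0])              # unit cost, $M (toy)

# log-linear least squares: ln(cost) = ln(a) + b * ln(weight)
b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
a = np.exp(log_a)

def predict(w):
    return a * w**b
```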

  8. Life-history evolution when Lestes damselflies invaded vernal ponds.

    PubMed

    De Block, Marjan; McPeek, Mark A; Stoks, Robby

    2008-02-01

    We know little about the macroevolution of life-history traits along environmental gradients, especially with regard to the directionality compared to the ancestral states and the associated costs to other functions. Here we examine how age and size at maturity evolved when Lestes damselflies shifted from their ancestral temporary pond habitat (i.e., ponds that may dry once every decade or so) to extremely ephemeral vernal ponds (ponds that routinely dry completely each year). Larvae of three species were reared from eggs until emergence under different levels of photoperiod and transient starvation stress. Compared to the two temporary-pond Lestes, the phylogenetically derived vernal-pond Lestes dryas developed more rapidly across photoperiod treatments until the final instar, and only expressed plasticity in development time in the final instar under photoperiod levels that simulated a later hatching date. The documented change in development rate can be considered adaptive and underlies the success of the derived species in vernal ponds. Results suggest associated costs of faster development are lower mass at maturity and lower immune function after transient starvation stress. These costs may not only have impeded further evolution of the routine development rate to what is physiologically maximal, but also maintained some degree of plasticity to time constraints when the habitat shift occurred.

  9. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  10. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which Bellman's equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
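
    The exponential transformation at the heart of the LMDP framework can be sketched on a toy chain: with desirability z = exp(-v), the Bellman equation becomes linear, z = exp(-q) * (P z), under the passive dynamics P. The 5-state first-exit problem below, with an absorbing goal state, is purely illustrative:

```python
import numpy as np

n = 5
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])   # state costs; the goal is free
P = np.zeros((n, n))                       # passive dynamics: random walk
for s in range(n - 1):
    P[s, max(s - 1, 0)] += 0.5
    P[s, s + 1] += 0.5
P[n - 1, n - 1] = 1.0                      # goal state absorbs

# iterate the linearized Bellman equation z = exp(-q) * (P @ z)
z = np.ones(n)
for _ in range(200):
    z = np.exp(-q) * (P @ z)
    z[n - 1] = 1.0                         # boundary condition: v(goal) = 0
v = -np.log(z)                             # optimal cost-to-go
```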

  11. Quaternion Averaging

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov

    2007-01-01

    Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method. Focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, other work provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending these results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem, stated herein, to maximum likelihood estimation, are shown.
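
    For the scalar-weighted case, the optimal average quaternion is the eigenvector, associated with the largest eigenvalue, of the weighted outer-product matrix M = sum_i w_i * q_i q_i^T. A minimal sketch (the two example quaternions are arbitrary):

```python
import numpy as np

def average_quaternions(quats, weights):
    # build M = sum of w * q q^T and return its principal eigenvector
    M = sum(w * np.outer(q, q) for q, w in zip(quats, weights))
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, -1]        # eigh returns eigenvalues in ascending order

q1 = np.array([1.0, 0.0, 0.0, 0.0])                    # identity rotation
q2 = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])    # small x-axis rotation
q_avg = average_quaternions([q1, q2], [1.0, 1.0])
```

Because q and -q represent the same attitude, the result is defined only up to sign; for equal weights the average here is the half-angle rotation [cos(0.05), sin(0.05), 0, 0].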

  12. Dispersion correction derived from first principles for density functional theory and Hartree-Fock theory.

    PubMed

    Guidez, Emilie B; Gordon, Mark S

    2015-03-12

    The modeling of dispersion interactions in density functional theory (DFT) is commonly performed using an energy correction that involves empirically fitted parameters for all atom pairs of the system investigated. In this study, the first-principles-derived dispersion energy from the effective fragment potential (EFP) method is implemented for the density functional theory (DFT-D(EFP)) and Hartree-Fock (HF-D(EFP)) energies. Overall, DFT-D(EFP) performs similarly to the semiempirical DFT-D corrections for the test cases investigated in this work. HF-D(EFP) tends to underestimate binding energies and overestimate intermolecular equilibrium distances, relative to coupled cluster theory, most likely due to incomplete accounting for electron correlation. Overall, this first-principles dispersion correction yields results that are in good agreement with coupled-cluster calculations at a low computational cost.

  13. Risk aversion and uncertainty in cost-effectiveness analysis: the expected-utility, moment-generating function approach.

    PubMed

    Elbasha, Elamin H

    2005-05-01

    The availability of patient-level data from clinical trials has spurred a lot of interest in developing methods for quantifying and presenting uncertainty in cost-effectiveness analysis (CEA). Although the majority has focused on developing methods for using sample data to estimate a confidence interval for an incremental cost-effectiveness ratio (ICER), a small strand of the literature has emphasized the importance of incorporating risk preferences and the trade-off between the mean and the variance of returns to investment in health and medicine (mean-variance analysis). This paper shows how the exponential utility-moment-generating function approach is a natural extension to this branch of the literature for modelling choices from healthcare interventions with uncertain costs and effects. The paper assumes an exponential utility function, which implies constant absolute risk aversion, and is based on the fact that the expected value of this function results in a convenient expression that depends only on the moment-generating function of the random variables. The mean-variance approach is shown to be a special case of this more general framework. The paper characterizes the solution to the resource allocation problem using standard optimization techniques and derives the summary measure researchers need to estimate for each programme, when the assumption of risk neutrality does not hold, and compares it to the standard incremental cost-effectiveness ratio. The importance of choosing the correct distribution of costs and effects and the issues related to estimation of the parameters of the distribution are also discussed. An empirical example to illustrate the methods and concepts is provided. Copyright 2004 John Wiley & Sons, Ltd
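
    The MGF device the abstract describes can be made concrete for a normally distributed net benefit: with U(x) = -exp(-r x) and X ~ N(mu, sigma^2), the MGF gives E[U(X)] = -exp(-r mu + r^2 sigma^2 / 2), so the certainty equivalent is mu - r sigma^2 / 2. The values below are toy numbers; a Monte Carlo check confirms the closed form:

```python
import math
import random

r, mu, sigma = 0.5, 10.0, 2.0              # risk aversion, mean, std (toy)
ce_closed = mu - r * sigma**2 / 2.0        # certainty equivalent via the MGF

random.seed(0)
draws = [random.gauss(mu, sigma) for _ in range(200_000)]
eu = sum(-math.exp(-r * x) for x in draws) / len(draws)   # E[U(X)] estimate
ce_mc = -math.log(-eu) / r                 # invert the utility function
```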

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiao, Hongzhu; Rao, N.S.V.; Protopopescu, V.

    Regression or function classes of Euclidean type with compact support and certain smoothness properties are shown to be PAC learnable by the Nadaraya-Watson estimator based on complete orthonormal systems. While requiring more smoothness properties than typical PAC formulations, this estimator is computationally efficient, easy to implement, and known to perform well in a number of practical applications. The sample sizes necessary for PAC learning of regressions or functions under sup norm cost are derived for a general orthonormal system. The result covers the widely used estimators based on Haar wavelets, trigonometric functions, and Daubechies wavelets.
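
    For orientation, a minimal Nadaraya-Watson estimator is sketched below; the abstract's version is built on complete orthonormal systems, while the classical Gaussian-kernel form here is shown purely for illustration:

```python
import math

def nadaraya_watson(x_train, y_train, x, h=0.2):
    # kernel-weighted average of the training responses at query point x
    w = [math.exp(-((x - xi) / h) ** 2 / 2.0) for xi in x_train]
    return sum(wi * yi for wi, yi in zip(w, y_train)) / sum(w)

xs = [i / 20.0 for i in range(21)]     # design points on [0, 1]
ys = [x * x for x in xs]               # noise-free target f(x) = x^2
est = nadaraya_watson(xs, ys, 0.5)     # estimate f(0.5); bias shrinks with h
```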

  15. Gain optimization with non-linear controls

    NASA Technical Reports Server (NTRS)

    Slater, G. L.; Kandadai, R. D.

    1984-01-01

    An algorithm has been developed for the analysis and design of controls for non-linear systems. The technical approach is to use statistical linearization to model the non-linear dynamics of a system by a quasi-Gaussian model. A covariance analysis is performed to determine the behavior of the dynamical system and a quadratic cost function. Expressions for the cost function and its derivatives are determined so that numerical optimization techniques can be applied to determine optimal feedback laws. The primary application for this paper is centered on the design of controls for nominally linear systems in which the controls are saturated or limited by fixed constraints. The analysis is general, however, and numerical computation requires only that the specific non-linearity be considered in the analysis.
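Statistical linearization replaces a memoryless non-linearity with an equivalent gain chosen to minimize mean-square error for a Gaussian input. For the saturation non-linearity this gain has a closed form, which the sketch below checks by Monte Carlo (the unit-slope saturation and the parameter values are illustrative assumptions, not the paper's system):

```python
import numpy as np
from math import erf, sqrt

def equivalent_gain(limit, sigma):
    """Statistical-linearization gain of a unit-slope saturation with limits
    +/-limit driven by zero-mean Gaussian noise of standard deviation sigma:
        K = E[x * sat(x)] / E[x^2] = erf(limit / (sqrt(2) * sigma))."""
    return erf(limit / (sqrt(2.0) * sigma))

# Monte Carlo verification of the quasi-Gaussian equivalent gain.
rng = np.random.default_rng(1)
limit, sigma = 1.0, 2.0
x = rng.normal(0.0, sigma, 1_000_000)
k_mc = np.mean(x * np.clip(x, -limit, limit)) / np.mean(x**2)
print(equivalent_gain(limit, sigma), k_mc)  # both near 0.38
```

In a covariance analysis the saturation would be replaced by this gain, making the closed loop effectively linear for the purpose of propagating second moments.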

  16. Manual of phosphoric acid fuel cell power plant optimization model and computer program

    NASA Technical Reports Server (NTRS)

    Lu, C. Y.; Alkasab, K. A.

    1984-01-01

    An optimized cost and performance model for a phosphoric acid fuel cell power plant system was derived and developed into a modular FORTRAN computer code. Cost, energy, mass, and electrochemical analyses were combined to develop a mathematical model for optimizing the steam-to-methane ratio in the reformer, the hydrogen utilization in the PAFC, and the plates per stack. The nonlinear programming code, COMPUTE, was used to solve this model, in which the method of mixed penalty function combined with Hooke and Jeeves pattern search was chosen to evaluate this specific optimization problem.
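As a rough illustration of the solution approach, a quadratic exterior penalty can be combined with a simplified Hooke and Jeeves pattern search (the toy objective, constraint, and penalty weight below are assumptions for the demo, not the fuel cell model):

```python
import numpy as np

def explore(f, base, fbase, step):
    """Exploratory move: try +/- step along each coordinate, keep improvements."""
    x, fx = base.copy(), fbase
    for i in range(len(x)):
        for d in (step, -step):
            trial = x.copy()
            trial[i] += d
            ft = f(trial)
            if ft < fx:
                x, fx = trial, ft
                break
    return x, fx

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_rounds=5000):
    """Simplified Hooke-Jeeves: exploratory moves plus one pattern move per round."""
    base = np.asarray(x0, dtype=float)
    fbase = f(base)
    for _ in range(max_rounds):
        if step <= tol:
            break
        x, fx = explore(f, base, fbase, step)
        if fx < fbase:
            pattern = x + (x - base)           # pattern move through the improvement
            base, fbase = x, fx
            xp, fp = explore(f, pattern, f(pattern), step)
            if fp < fbase:
                base, fbase = xp, fp
        else:
            step *= shrink                     # no improvement: refine the mesh
    return base, fbase

# Exterior quadratic penalty for: min (x-2)^2 + (y-1)^2  s.t.  x + y <= 2.
mu = 100.0
penalized = lambda z: (z[0]-2.0)**2 + (z[1]-1.0)**2 + mu*max(0.0, z[0]+z[1]-2.0)**2
x_opt, f_opt = hooke_jeeves(penalized, [0.0, 0.0])
print(x_opt)  # near the constrained optimum (1.5, 0.5)
```

The pattern search needs no gradients, which is why it pairs naturally with penalty formulations of engineering models that are only available as black-box codes.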

  17. A Carrier Estimation Method Based on MLE and KF for Weak GNSS Signals.

    PubMed

    Zhang, Hongyang; Xu, Luping; Yan, Bo; Zhang, Hua; Luo, Liyan

    2017-06-22

    Maximum likelihood estimation (MLE) has been researched for some acquisition and tracking applications of global navigation satellite system (GNSS) receivers and shows high performance. However, all current methods are derived and operated based on the sampling data, which results in a large computation burden. This paper proposes a low-complexity MLE carrier tracking loop for weak GNSS signals which processes the coherent integration results instead of the sampling data. First, the cost function of the MLE of signal parameters such as signal amplitude, carrier phase, and Doppler frequency is used to derive an MLE discriminator function. The optimal value of the cost function is searched iteratively by an efficient Levenberg-Marquardt (LM) method. Its performance, including the Cramér-Rao bound (CRB), dynamic characteristics, and computation burden, is analyzed by numerical techniques. Second, an adaptive Kalman filter is designed for the MLE discriminator to obtain smooth estimates of carrier phase and frequency. The performance of the proposed loop, in terms of sensitivity, accuracy and bit error rate, is compared with conventional methods by Monte Carlo (MC) simulations in both pedestrian-level and vehicle-level dynamic circumstances. Finally, an optimal loop which combines the proposed method and the conventional method is designed to achieve optimal performance in both weak and strong signal circumstances.

  18. Plantibodies in human and animal health: a review.

    PubMed

    Oluwayelu, Daniel O; Adebiyi, Adebowale I

    2016-06-01

    Antibodies are an essential part of vertebrates' adaptive immune system; they can now be produced by transforming plants with antibody-coding genes from mammals/humans. Although plants do not naturally make antibodies, plant-derived antibodies (plantibodies) have been shown to function in the same way as mammalian antibodies. PubMed and Google search engines were used to download relevant publications on plantibodies in medical and veterinary fields; the papers were reviewed and findings qualitatively described. The process of bioproduction of plantibodies offers several advantages over the conventional method of antibody production in mammalian cells, with the cost of antibody production in plants being substantially lower. Contrary to what is possible with animal-derived antibodies, the process of making plantibodies almost entirely precludes transfer of pathogens to the end product. Additionally, plants not only produce a relatively high yield of antibodies in a comparatively shorter time, they also serve as cost-effective bioreactors to produce antibodies of diverse specificities. Plantibodies are safe, cost-effective and offer more advantages than animal-derived antibodies. Methods of producing them are described with a view to inspiring African scientists on the need to embrace and harness this rapidly evolving biotechnology in solving human and animal health challenges on the continent, where the climate supports growth of diverse plants.

  19. Cost-utility analysis of percutaneous mitral valve repair in inoperable patients with functional mitral regurgitation in German settings.

    PubMed

    Borisenko, Oleg; Haude, Michael; Hoppe, Uta C; Siminiak, Tomasz; Lipiecki, Janusz; Goldberg, Steve L; Mehta, Nawzer; Bouknight, Omari V; Bjessmo, Staffan; Reuter, David G

    2015-05-14

    To determine the cost-effectiveness of percutaneous mitral valve repair (PMVR) using the Carillon® Mitral Contour System® (Cardiac Dimensions Inc., Kirkland, WA, USA) in patients with congestive heart failure accompanied by moderate to severe functional mitral regurgitation (FMR), compared to the prolongation of optimal medical treatment (OMT). A cost-utility analysis using a combination of a decision tree and a Markov process was performed. The clinical effectiveness was determined based on the results of the Transcatheter Implantation of Carillon Mitral Annuloplasty Device (TITAN) trial. The mean age of the target population was 62 years, 77% of the patients were male, 64% had severe FMR, and all patients had New York Heart Association functional class III. The epidemiological, cost and utility data were derived from the literature. The analysis was performed from the German statutory health insurance perspective over a 10-year time horizon. Over 10 years, the total cost was €36,785 in the PMVR arm and €18,944 in the OMT arm. However, PMVR provided additional benefits to patients, with 1.15 incremental quality-adjusted life years (QALYs) and 1.41 incremental life years. The percutaneous procedure was cost-effective in comparison to OMT, with an incremental cost-effectiveness ratio of €15,533/QALY. Results were robust in the deterministic sensitivity analysis. In the probabilistic sensitivity analysis with a willingness-to-pay threshold of €35,000/QALY, PMVR had an 84% probability of being cost-effective. Percutaneous mitral valve repair may be cost-effective in inoperable patients with FMR due to heart failure.
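The headline ratio follows directly from the reported totals; a minimal check (the inputs are the rounded figures quoted above, so the result differs slightly from the published €15,533/QALY, which was computed from unrounded values):

```python
def icer(cost_new, cost_comparator, incremental_qalys):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_comparator) / incremental_qalys

# Figures quoted in the abstract (rounded).
value = icer(36785, 18944, 1.15)
print(round(value))  # 15514
```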

  20. Artificial neural networks using complex numbers and phase encoded weights.

    PubMed

    Michel, Howard E; Awwal, Abdul Ahad S

    2010-04-01

    The model of a simple perceptron using phase-encoded inputs and complex-valued weights is proposed. The aggregation function, activation function, and learning rule for the proposed neuron are derived and applied to Boolean logic functions and simple computer vision tasks. The complex-valued neuron (CVN) is shown to be superior to traditional perceptrons. An improvement of 135% over the theoretical maximum of 104 linearly separable problems (of three variables) solvable by conventional perceptrons is achieved without additional logic, neuron stages, or higher order terms such as those required in polynomial logic gates. The application of CVN in distortion invariant character recognition and image segmentation is demonstrated. Implementation details are discussed, and the CVN is shown to be very attractive for optical implementation since optical computations are naturally complex. The cost of the CVN is less in all cases than the traditional neuron when implemented optically. Therefore, all the benefits of the CVN can be obtained without additional cost. However, on those implementations dependent on standard serial computers, CVN will be more cost effective only in those applications where its increased power can offset the requirement for additional neurons.
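The phase-encoding idea can be illustrated on XOR, which a single real-valued perceptron cannot solve: with inputs encoded as unit phasors, equal inputs reinforce while unequal inputs cancel, so a magnitude threshold on the aggregate separates the classes (the unit weights and threshold below are assumptions for this demo, not the paper's learned CVN):

```python
import numpy as np

def phase_encode(bit):
    """Phase-encode a Boolean input as a unit phasor: 0 -> e^{i*0}, 1 -> e^{i*pi}."""
    return np.exp(1j * np.pi * bit)

def cvn_xor(b1, b2):
    """Single complex-valued neuron computing XOR: with unit complex weights,
    equal inputs give |z| = 2 and unequal inputs cancel to |z| = 0, so a
    magnitude threshold separates the classes."""
    w = np.array([1.0 + 0.0j, 1.0 + 0.0j])   # assumed weights for the demo
    z = w @ np.array([phase_encode(b1), phase_encode(b2)])
    return int(abs(z) < 1.0)                 # small magnitude -> inputs differ

for b1 in (0, 1):
    for b2 in (0, 1):
        print(b1, b2, cvn_xor(b1, b2))
```

A problem that is not linearly separable in the real domain becomes separable in one complex-valued unit, which is the effect behind the improvement the abstract reports.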

  1. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for the gradients of the cost function and constraints, with respect to the digital control law design variables, are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of being lumped into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.
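The discrete Liapunov (Lyapunov) machinery behind such cost evaluations can be sketched for a small sampled-data system: the steady-state state covariance gives the mean quadratic cost, and the adjoint Lyapunov solution yields the same value, the identity exploited when forming analytic gradients (the matrices below are assumptions for the demo):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Stable sampled-data system x_{k+1} = A x_k + w_k, w_k ~ N(0, W).
A = np.array([[0.9, 0.2],
              [0.0, 0.7]])
W = np.eye(2) * 0.1          # process-noise covariance (assumed)
Q = np.diag([1.0, 2.0])      # quadratic state-cost weighting (assumed)

# Steady-state state covariance P solves  A P A' - P + W = 0.
P = solve_discrete_lyapunov(A, W)

# Mean quadratic cost per step: E[x'Qx] = trace(Q P). The adjoint
# (observability-type) Lyapunov solution S, with A' S A - S + Q = 0,
# gives the same value via trace(S W).
S = solve_discrete_lyapunov(A.T, Q)
print(np.trace(Q @ P), np.trace(S @ W))  # the two traces agree
```

Differentiating either Lyapunov equation with respect to a gain parameter yields another Lyapunov equation, which is why the paper's gradients reduce to solving a set of discrete Liapunov equations.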

  2. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
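The dualization step can be sketched on a one-stage toy problem: minimize a Lagrangian of cost plus lambda times failure probability, with the multiplier found by bisection since the achieved risk is monotone in lambda (the cost and risk functions below are assumptions; the paper's root-finder converges exponentially, bisection is a simple stand-in):

```python
import numpy as np

# Toy one-stage version of the dualized chance constraint: choose a control u
# minimizing effort cost c(u) = u^2 subject to failure probability
# p(u) = exp(-u) <= Delta (both functions are illustrative assumptions).
u_grid = np.linspace(0.0, 5.0, 5001)
cost = u_grid**2
p_fail = np.exp(-u_grid)
Delta = 0.1

def risk_at(lam):
    """Failure probability achieved by the minimizer of the dualized objective."""
    return p_fail[np.argmin(cost + lam * p_fail)]

# The achieved risk is nonincreasing in the dual variable, so bisection
# locates the multiplier that makes the chance constraint tight.
lo, hi = 0.0, 1e6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if risk_at(mid) > Delta:
        lo = mid
    else:
        hi = mid
u_opt = u_grid[np.argmin(cost + hi * p_fail)]
print(u_opt)  # close to -ln(0.1), about 2.303
```

In the full algorithm the inner minimization is a standard dynamic program over all stages rather than a one-dimensional grid search.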

  3. Prime focus architectures for large space telescopes: reduce surfaces to save cost

    NASA Astrophysics Data System (ADS)

    Breckinridge, J. B.; Lillie, C. F.

    2016-07-01

    Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems to operate in the UV, optical, and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using the Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition to co-locate the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime focus large aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization, and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canavan, G.H.

    Optimal offensive missile allocations for moderate offensive and defensive forces are derived and used to study their sensitivity to force structure parameter levels. It is shown that the first strike cost is a product of the number of missiles and a function of the optimum allocation. Thus, the conditions under which the number of missiles should increase or decrease in time are also determined by this allocation.

  5. Teradata University Network: A No Cost Web-Portal for Teaching Database, Data Warehousing, and Data-Related Subjects

    ERIC Educational Resources Information Center

    Jukic, Nenad; Gray, Paul

    2008-01-01

    This paper describes the value that information systems faculty and students in classes dealing with database management, data warehousing, decision support systems, and related topics, could derive from the use of the Teradata University Network (TUN), a free comprehensive web-portal. A detailed overview of TUN functionalities and content is…

  6. A variational data assimilation system for the range dependent acoustic model using the representer method: Theoretical derivations.

    PubMed

    Ngodock, Hans; Carrier, Matthew; Fabre, Josette; Zingarelli, Robert; Souopgui, Innocent

    2017-07-01

    This study presents the theoretical framework for variational data assimilation of acoustic pressure observations into an acoustic propagation model, namely, the range dependent acoustic model (RAM). RAM uses the split-step Padé algorithm to solve the parabolic equation. The assimilation consists of minimizing a weighted least squares cost function that includes discrepancies between the model solution and the observations. The minimization process, which uses the calculus of variations, requires the derivation of the tangent linear and adjoint models of the RAM. The mathematical derivations are presented here; for the sake of brevity, a companion study presents the numerical implementation and results from the assimilation of simulated acoustic pressure observations.
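The tangent-linear/adjoint pattern can be sketched on a scalar surrogate model: an adjoint backward sweep produces the gradient of the least-squares cost with respect to the initial condition, verifiable against a finite difference (the scalar propagator below stands in for RAM's split-step Padé solver; all values are assumptions):

```python
import numpy as np

m = 0.95                        # scalar model propagator (assumed)
K = 20
y = 0.8 * m**np.arange(K)       # synthetic observations from a "true" initial state 0.8

def cost(u0):
    """Least-squares misfit of the forward sweep u_k = m^k * u0 to the observations."""
    u = u0 * m**np.arange(K)
    return 0.5 * np.sum((u - y)**2)

def gradient(u0):
    """Adjoint (backward) sweep: lam_k = m*lam_{k+1} + (u_k - y_k); dJ/du0 = lam_0."""
    u = u0 * m**np.arange(K)
    lam = 0.0
    for k in reversed(range(K)):
        lam = m * lam + (u[k] - y[k])
    return lam

u0 = 1.0
fd = (cost(u0 + 1e-6) - cost(u0 - 1e-6)) / 2e-6   # finite-difference check
print(gradient(u0), fd)  # the adjoint gradient matches the finite difference
```

The same gradient-check discipline is what validates a tangent linear and adjoint pair for a full propagation model before it is used in minimization.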

  7. Implementation of "Marginalism" in Day to Day Life.

    DTIC Science & Technology

    1998-06-01

    [Reference and text fragments from the report: Golden, B.L., E.A. Wasil and P.T. Harker, The Analytic Hierarchy Process, Springer-Verlag, Berlin, Heidelberg, 1989; Agor, Weston H.; A. H. Maslow was a psychologist whose work on human motivation has been influential in fields such as organization development and industry.] The marginal benefit from X_i is the partial derivative of the objective function, ∂O/∂X_i. Consider a constraint function H(X_1, X_2, ..., X_n) = 0. The marginal cost of

  8. Near-Optimal Operation of Dual-Fuel Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Chou, H. C.; Bowles, J. V.

    1996-01-01

    A near-optimal guidance law for the ascent trajectory from the earth surface to earth orbit of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. Of interest are both the optimal operation of the propulsion system and the optimal flight path. A methodology is developed to investigate the optimal throttle switching of dual-fuel engines. The method is based on selecting propulsion system modes and parameters that maximize a certain performance function. This function is derived from consideration of the energy-state model of the aircraft equations of motion. Because the density of liquid hydrogen is relatively low, the sensitivity to perturbations in volume needs to be taken into consideration as well as weight sensitivity. The cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize vehicle empty weight for a given payload mass and volume in orbit.

  9. On defense strategies for system of systems using aggregated correlations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Imam, Neena; Ma, Chris Y. T.

    2017-04-01

    We consider a System of Systems (SoS) wherein each system S_i, i = 1, 2, ..., N, is composed of discrete cyber and physical components which can be attacked and reinforced. We characterize the disruptions using aggregate failure correlation functions given by the conditional failure probability of the SoS given the failure of an individual system. We formulate the problem of ensuring the survival of the SoS as a game between an attacker and a provider, each with a utility function composed of a survival probability term and a cost term, both expressed in terms of the number of components attacked and reinforced. The survival probabilities of systems satisfy simple product-form, first-order differential conditions, which simplify the Nash Equilibrium (NE) conditions. We derive the sensitivity functions that highlight the dependence of the SoS survival probability at NE on cost terms, correlation functions, and individual system survival probabilities. We apply these results to a simplified model of distributed cloud computing infrastructure.

  10. Optimal subhourly electricity resource dispatch under multiple price signals with high renewable generation availability

    DOE PAGES

    Chassin, David P.; Behboodi, Sahand; Djilali, Ned

    2018-01-28

    This article proposes a system-wide optimal resource dispatch strategy that enables a shift from a primarily energy cost-based approach to a strategy using simultaneous price signals for energy, power, and ramping behavior. A formal method to compute the optimal sub-hourly power trajectory is derived for a system when the price of energy and ramping are both significant. Optimal control functions are obtained in both time and frequency domains, and a discrete-time solution suitable for periodic feedback control systems is presented. The method is applied to the North America Western Interconnection for the planning year 2024, and it is shown that an optimal dispatch strategy that simultaneously considers both the cost of energy and the cost of ramping leads to significant cost savings in systems with high levels of renewable generation: the savings exceed 25% of the total system operating cost for a 50% renewables scenario.
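The trade-off between tracking a net-load trajectory and paying for ramping can be sketched as a discrete-time quadratic program with a first-difference penalty (the load profile and single penalty weight below are assumptions; the paper's model prices energy, power, and ramping separately):

```python
import numpy as np

# Illustrative sub-hourly dispatch: choose a generation trajectory g that
# tracks a net-load profile d while penalizing ramping,
#     min_g ||g - d||^2 + alpha * ||D g||^2,
# where D is the first-difference operator and alpha plays the role of the
# ramping price relative to the energy-imbalance price.
T = 288                               # 5-minute steps over one day
t = np.arange(T)
d = 1.0 + 0.5*np.sin(2*np.pi*t/T) + 0.1*np.sin(2*np.pi*12*t/T)

alpha = 50.0
D = np.diff(np.eye(T), axis=0)        # (T-1) x T first-difference operator
g = np.linalg.solve(np.eye(T) + alpha * D.T @ D, d)

# The optimal trajectory follows the load but with smaller ramps.
print(np.max(np.abs(np.diff(g))) < np.max(np.abs(np.diff(d))))  # True
```

Because the constant vector is in the null space of D, the solution delivers the same average energy as the load while smoothing the fast components, which is the frequency-domain shaping behavior described in the abstract.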

  12. Sneak Analysis Application Guidelines

    DTIC Science & Technology

    1982-06-01

    [List-of-figures fragments: Hardware Program Change Cost Trend, Airborne Environment; Relative Software Program Change Costs; Derived Software Program Change Cost by Phase, Airborne Environment; Derived Software Program Change Cost by Phase, Ground/Water Environment; Total Software Program Change Costs; Sneak Analysis]

  13. Dense image registration through MRFs and efficient linear programming.

    PubMed

    Glocker, Ben; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir; Paragios, Nikos

    2008-12-01

    In this paper, we introduce a novel and efficient approach to dense image registration which does not require a derivative of the employed cost function. In this context, the registration problem is formulated using a discrete Markov random field objective function. First, to reduce the dimensionality of the problem, we assume that the dense deformation field can be expressed using a small number of control points (registration grid) and an interpolation strategy. The registration cost is then expressed as a discrete sum over image costs (using an arbitrary similarity measure) projected on the control points, plus a smoothness term that penalizes local deviations of the deformation field according to a neighborhood system on the grid. The search space is quantized, resulting in a fully discrete model. In order to account for large deformations and produce results at a high resolution level, a multi-scale incremental approach is considered in which the optimal solution is iteratively updated. This is done through successive morphings of the source towards the target image. Efficient linear programming using primal-dual principles is used to recover the lowest potential of the cost function. Very promising results using synthetic data with known deformations and real data demonstrate the potential of our approach.

  14. A binary linear programming formulation of the graph edit distance.

    PubMed

    Justice, Derek; Hero, Alfred

    2006-08-01

    A binary linear programming formulation of the graph edit distance for unweighted, undirected graphs with vertex attributes is derived and applied to a graph recognition problem. A general formulation for editing graphs is used to derive a graph edit distance that is proven to be a metric, provided the cost function for individual edit operations is a metric. Then, a binary linear program is developed for computing this graph edit distance, and polynomial time methods for determining upper and lower bounds on the solution of the binary program are derived by applying solution methods for standard linear programming and the assignment problem. A recognition problem of comparing a sample input graph to a database of known prototype graphs in the context of a chemical information system is presented as an application of the new method. The costs associated with various edit operations are chosen by using a minimum normalized variance criterion applied to pairwise distances between nearest neighbors in the database of prototypes. The new metric is shown to perform quite well in comparison to existing metrics when applied to a database of chemical graphs.
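A cheap bound in the spirit of the paper's polynomial-time relaxations: pick a vertex mapping by solving an assignment problem on degree-difference costs, then count the edge edits that mapping implies (same-size graphs and unit edit costs are simplifying assumptions here; the paper's binary linear program is exact and more general):

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linear_sum_assignment

def ged_upper_bound(A1, A2):
    """Upper bound on the edit distance between two same-size unweighted,
    undirected graphs (adjacency matrices): choose a vertex mapping via an
    assignment problem on degree-difference costs, then count the edge
    insertions and deletions implied by that particular mapping."""
    deg1, deg2 = A1.sum(axis=1), A2.sum(axis=1)
    C = np.abs(deg1[:, None] - deg2[None, :])
    rows, cols = linear_sum_assignment(C)
    perm = cols                               # rows == 0..n-1 for a square cost matrix
    A2p = A2[np.ix_(perm, perm)]              # relabel G2's vertices by the mapping
    return int(np.abs(A1 - A2p).sum()) // 2   # each undirected edge counted twice

# Two 4-vertex graphs differing by one edge.
A1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]])
A2 = np.array([[0,1,0,0],[1,0,1,1],[0,1,0,1],[0,1,1,0]])

# Brute-force exact substitution-only edit distance for comparison.
exact = min(int(np.abs(A1 - A2[np.ix_(p, p)]).sum()) // 2
            for p in permutations(range(4)))
print(ged_upper_bound(A1, A2), exact)
```

Any fixed vertex mapping yields a feasible edit sequence, so its cost can never fall below the true edit distance; the quality of the bound depends on how well the assignment costs reflect structure.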

  15. The potential impact of ozone on materials in the U.K.

    NASA Astrophysics Data System (ADS)

    Lee, David S.; Holland, Michael R.; Falla, Norman

    Recent reports have highlighted the potential damage caused to a range of media, including materials, by ozone (O3). The limited data available indicate significant damage to rubber products and surface coatings but either insignificant or unquantifiable damage to textiles and other polymeric materials at the range of atmospheric concentrations encountered in the U.K. Materials in the indoor environment have been excluded from economic analyses. Legislation was put in place in 1993 in the U.K. in order to reduce NOx (NOx = NO + NO2) and VOC (volatile organic compounds) emissions from motor vehicles, which is likely to result in reduced peak O3 episodes but increased average levels of O3 in urban areas, which may in turn result in increased damage to materials. A detailed assessment of the costs of O3 damage to materials is not currently possible because of insufficient information on relevant dose-response functions and the stock at risk. Alternative methods were thus adopted to determine the potential scale of the problem. Scaling of U.S. estimates made in the late 1960s provides a range for the U.K. of £170 million-£345 million yr⁻¹ in current terms. This includes damage to surface coatings and elastomers, and the cost of antiozonant protection applied to rubber goods. Independent estimates were made of the costs of protecting rubber goods in the U.K. These were based on the size of the antiozonant market, and provide cost ranges of £25 million-£63 million yr⁻¹ to manufacturers and £25 million-£189 million yr⁻¹ to consumers. The only rubber goods for which a damage estimate (not including protection costs) could be made were tyres, using data from the U.S.A. and information on annual tyre sales in the U.K. A range of £0-£4 million yr⁻¹ was estimated. The cost of damage to other rubber goods could not be quantified because of a lack of data on both the stock at risk and exposure-response functions.
The effect of O3 on the costs of repainting was estimated under scenarios of increased urban concentrations of O3 using damage functions derived from the literature. The cost was estimated to be in the range of £0-£60 million yr⁻¹ for a change from 15 to 20 ppb O3, and £0-£182 million yr⁻¹ for a change from 15 to 30 ppb O3. The wide ranges derived for effects on surface coatings reflect the uncertainty associated with the dose-response functions used.

  16. Dense velocity reconstruction from tomographic PTV with material derivatives

    NASA Astrophysics Data System (ADS)

    Schneiders, Jan F. G.; Scarano, Fulvio

    2016-09-01

    A method is proposed to reconstruct the instantaneous velocity field from time-resolved volumetric particle tracking velocimetry (PTV, e.g., 3D-PTV, tomographic PTV and Shake-the-Box), employing both the instantaneous velocity and the velocity material derivative of the sparse tracer particles. The constraint to the measured temporal derivative of the PTV particle tracks improves the consistency of the reconstructed velocity field. The method is christened pouring time into space, as it leverages temporal information to increase the spatial resolution of volumetric PTV measurements. This approach becomes relevant in cases where the spatial resolution is limited by the seeding concentration. The method solves an optimization problem to find the vorticity and velocity fields that minimize a cost function, which includes, next to the instantaneous velocity, also the velocity material derivative. The velocity and its material derivative are related through the vorticity transport equation, and the cost function is minimized using the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm. The procedure is assessed numerically with a simulated PTV experiment in a turbulent boundary layer from a direct numerical simulation (DNS). The experimental validation considers a tomographic particle image velocimetry (PIV) experiment in a similar turbulent boundary layer and the additional case of a jet flow. The proposed technique ('vortex-in-cell plus', VIC+) is compared to tomographic PIV analysis (3D iterative cross-correlation), PTV interpolation methods (linear and adaptive Gaussian windowing) and to vortex-in-cell (VIC) interpolation without the material derivative. A visible increase in resolved details in the turbulent structures is obtained with the VIC+ approach, both in numerical simulations and experiments. This results in a more accurate determination of the turbulent stress distribution in turbulent boundary layer investigations. Data from a jet experiment, where the vortex topology is retrieved with a small number of tracers, indicate the potential utilization of VIC+ in low-concentration experiments as occurring, for instance, in large-scale volumetric PTV measurements.

  17. A multipurpose fusion tag derived from an unstructured and hyperacidic region of the amyloid precursor protein

    PubMed Central

    Sangawa, Takeshi; Tabata, Sanae; Suzuki, Kei; Saheki, Yasushi; Tanaka, Keiji; Takagi, Junichi

    2013-01-01

    Expression and purification of aggregation-prone and disulfide-containing proteins in Escherichia coli remains a major hurdle for structural and functional analyses of high-value target proteins. Here, we present a novel gene-fusion strategy that greatly simplifies the purification and refolding procedure at very low cost using a unique hyperacidic module derived from the human amyloid precursor protein. Fusion with this polypeptide (dubbed FATT for Flag-Acidic-Target Tag) results in near-complete soluble expression of a variety of extracellular proteins, which can be directly refolded in the crude bacterial lysate and purified in one step by anion exchange chromatography. Application of this system enabled preparation of functionally active extracellular enzymes and antibody fragments without the need for condition optimization. PMID:23526492

  18. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for a large amount of data fitting. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
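For discrete data, least-squares fitting in the Legendre basis is available directly in NumPy; a short sketch (the cubic test function and noise level are assumptions for the demo):

```python
import numpy as np
from numpy.polynomial import legendre

# Fit noisy samples of a cubic on [-1, 1] with a degree-3 Legendre expansion;
# legfit solves the least-squares problem directly in the Legendre basis.
rng = np.random.default_rng(3)
x = np.linspace(-1.0, 1.0, 201)
y_true = x**3 - 0.5*x
y = y_true + 0.01 * rng.standard_normal(x.size)

coef = legendre.legfit(x, y, deg=3)
y_hat = legendre.legval(x, coef)
print(np.max(np.abs(y_hat - y_true)))  # small reconstruction error
```

Because the Legendre polynomials are orthogonal on [-1, 1], the low-order coefficients also summarize the global trend of a fluctuating signal, the use mentioned at the end of the abstract.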

  19. A kernel adaptive algorithm for quaternion-valued inputs.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.
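The KLMS update, f_n = f_{n-1} + eta * e_n * k(x_n, .), can be sketched with a real-valued Gaussian kernel as a stand-in for the quaternion RKHS version (the target function, kernel width, and step size below are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)
eta, sigma = 0.2, 0.5                    # step size and kernel width (assumed)
centers, alphas = [], []

def kernel(X, x):
    """Gaussian kernel between stored centers X (n,1) and a new input x (1,)."""
    return np.exp(-np.sum((X - x)**2, axis=-1) / (2.0 * sigma**2))

def predict(x):
    if not centers:
        return 0.0
    return float(np.dot(alphas, kernel(np.array(centers), x)))

errors = []
for _ in range(500):
    x = rng.uniform(-2.0, 2.0, size=(1,))
    d = np.sin(2.0 * x[0])               # nonlinear target to be learned online
    e = d - predict(x)                   # a-priori error
    centers.append(x)                    # KLMS: allocate a new kernel unit...
    alphas.append(eta * e)               # ...weighted by the LMS update eta*e
    errors.append(e**2)

print(np.mean(errors[:50]), np.mean(errors[-50:]))  # error decreases
```

The quaternion version replaces the real kernel evaluations and the LMS gradient with their quaternion counterparts via the modified HR calculus, but the growing-expansion structure is the same.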

  20. Algal autolysate medium to label proteins for NMR in mammalian cells.

    PubMed

    Fuccio, Carmelo; Luchinat, Enrico; Barbieri, Letizia; Neri, Sara; Fragai, Marco

    2016-04-01

    In-cell NMR provides structural and functional information on proteins directly inside living cells. At present, the high costs of the labeled media for mammalian cells represent a limiting factor for the development of this methodology. Here we report a protocol to prepare a homemade growth medium from Spirulina platensis autolysate, suitable to express uniformly labeled proteins inside mammalian cells at a reduced cost-per-sample. The human proteins SOD1 and Mia40 were overexpressed in human cells grown in (15)N-enriched S. platensis algal-derived medium, and high quality in-cell NMR spectra were obtained.

  1. A decision model for planetary missions

    NASA Technical Reports Server (NTRS)

    Hazelrigg, G. A., Jr.; Brigadier, W. L.

    1976-01-01

    Many techniques developed for the solution of problems in economics and operations research are directly applicable to problems involving engineering trade-offs. This paper investigates the use of utility theory for decision making in planetary exploration space missions. A decision model is derived that accounts for the objectives of the mission (science), the cost of flying the mission, and the risk of mission failure. A simulation methodology for obtaining the probability distribution of science value and cost as a function of spacecraft and mission design is presented, and an example application of the decision methodology is given for various potential alternatives in a comet Encke mission.

  2. Infrared spectroscopic monitoring of urea addition to oriented strandboard resins

    Treesearch

    Chi-Leung So; Thomas L. Eberhardt; Ernest Hsu; Brian K. Via; Chung Y. Hse

    2007-01-01

    One of the variables in phenol formaldehyde adhesive resin formulation is the addition of urea, which allows the resin manufacturer to manipulate both product functionality and cost. Nitrogen content can be used as a measure of the level of urea addition because most of the nitrogen present is derived from urea added at the end of the preparation process. Nitrogen...

  3. Extended screened exchange functional derived from transcorrelated density functional theory.

    PubMed

    Umezawa, Naoto

    2017-09-14

    We propose a new formulation of the correlation energy functional derived from the transcorrelated method in use in density functional theory (TC-DFT). An effective Hamiltonian, H_TC, is introduced by a similarity transformation of a many-body Hamiltonian, H, with respect to a complex function F: H_TC = (1/F)HF. It is proved that an expectation value of H_TC for a normalized single Slater determinant, D_n, corresponds to the total energy: E[n] = ⟨Ψ_n|H|Ψ_n⟩/⟨Ψ_n|Ψ_n⟩ = ⟨D_n|H_TC|D_n⟩ under two assumptions: (1) the electron density n(r) associated with a trial wave function Ψ_n = D_nF is v-representable and (2) Ψ_n and D_n give rise to the same electron density n(r). This formulation therefore provides an alternative expression of the total energy that is useful for the development of novel correlation energy functionals. By substituting a specific function for F, we successfully derived a model correlation energy functional, which resembles the functional form of the screened exchange method. The proposed functional, named the extended screened exchange (ESX) functional, is described within two-body integrals and is parametrized for a numerically exact correlation energy of the homogeneous electron gas. The ESX functional does not contain any ingredients of (semi-)local functionals and thus is totally free from self-interactions. The computational cost for solving the self-consistent-field equation is comparable to that of the Hartree-Fock method. We apply the ESX functional to electronic structure calculations for solid silicon, the H⁻ ion, and small atoms. The results demonstrate that the TC-DFT formulation is promising for the systematic improvement of the correlation energy functional.

  4. Autonomous underwater vehicle adaptive path planning for target classification

    NASA Astrophysics Data System (ADS)

    Edwards, Joseph R.; Schmidt, Henrik

    2002-11-01

    Autonomous underwater vehicles (AUVs) are being rapidly developed to carry sensors into the sea in ways that have previously not been possible. The full use of the vehicles, however, is still not near realization due to the lack of the true vehicle autonomy promised in the label (AUV). AUVs today primarily attempt to follow as closely as possible a preplanned trajectory. The key to increasing the autonomy of the AUV is to provide the vehicle with a means to make decisions based on its sensor receptions. The current work examines the use of active sonar returns from mine-like objects (MLOs) as a basis for sensor-based adaptive path planning, where the path planning objective is to discriminate between real mines and rocks. Once a target is detected in the mine hunting phase, the mine classification phase is initialized with a derivative cost function to emphasize signal differences and enhance classification capability. The AUV moves adaptively to minimize the cost function. The algorithm is verified using at-sea data derived from the joint MIT/SACLANTCEN GOATS experiments and advanced acoustic simulation using SEALAB. The mission oriented operating system (MOOS) real-time simulator is then used to test the onboard implementation of the algorithm.

  5. Sustainability of a public system for plasma collection, contract fractionation and plasma-derived medicinal product manufacturing

    PubMed Central

    Grazzini, Giuliano; Ceccarelli, Anna; Calteri, Deanna; Catalano, Liviana; Calizzani, Gabriele; Cicchetti, Americo

    2013-01-01

    Background In Italy, the financial reimbursement for labile blood components exchanged between Regions is regulated by national tariffs defined in 1991 and updated in 1993–2003. Over the last five years, the need for establishing standard costs of healthcare services has arisen critically. In this perspective, the present study is aimed at defining both the costs of production of blood components and the related prices, as well as the prices of plasma-derived medicinal products obtained from national plasma, to be used for interregional financial reimbursement. Materials and methods In order to analyse the costs of production of blood components, 12 out of 318 blood establishments were selected in 8 Italian Regions. For each step of the production process, cost drivers were identified and production costs were determined. To define the costs of plasma-derived medicinal products obtained from national plasma, the industrial costs currently sustained by the National Health Service for contract fractionation were taken into account. Results The production costs of plasma-derived medicinal products obtained from national plasma showed a huge variability among blood establishments, which was much lower after standardization. The new suggested plasma tariffs were quite similar to those currently in force. Comparing the overall costs theoretically sustained by the National Health Service for plasma-derived medicinal products obtained from national plasma to current commercial costs demonstrates that the national blood system could achieve a 10% cost saving if it were able to produce plasma for fractionation within the standard costs defined in this study. Discussion Achieving national self-sufficiency through the production of plasma-derived medicinal products from national plasma is a strategic goal of the National Health Service, which must comply not only with quality, safety and availability requirements but also with the increasingly pressing need for economic sustainability. PMID:24333307

  6. Sustainability of a public system for plasma collection, contract fractionation and plasma-derived medicinal product manufacturing.

    PubMed

    Grazzini, Giuliano; Ceccarelli, Anna; Calteri, Deanna; Catalano, Liviana; Calizzani, Gabriele; Cicchetti, Americo

    2013-09-01

    In Italy, the financial reimbursement for labile blood components exchanged between Regions is regulated by national tariffs defined in 1991 and updated in 1993-2003. Over the last five years, the need for establishing standard costs of healthcare services has arisen critically. In this perspective, the present study is aimed at defining both the costs of production of blood components and the related prices, as well as the prices of plasma-derived medicinal products obtained from national plasma, to be used for interregional financial reimbursement. In order to analyse the costs of production of blood components, 12 out of 318 blood establishments were selected in 8 Italian Regions. For each step of the production process, cost drivers were identified and production costs were determined. To define the costs of plasma-derived medicinal products obtained from national plasma, the industrial costs currently sustained by the National Health Service for contract fractionation were taken into account. The production costs of plasma-derived medicinal products obtained from national plasma showed a huge variability among blood establishments, which was much lower after standardization. The new suggested plasma tariffs were quite similar to those currently in force. Comparing the overall costs theoretically sustained by the National Health Service for plasma-derived medicinal products obtained from national plasma to current commercial costs demonstrates that the national blood system could achieve a 10% cost saving if it were able to produce plasma for fractionation within the standard costs defined in this study. Achieving national self-sufficiency through the production of plasma-derived medicinal products from national plasma is a strategic goal of the National Health Service, which must comply not only with quality, safety and availability requirements but also with the increasingly pressing need for economic sustainability.

  7. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.

  8. Life Cycle Cost Analysis of Shuttle-Derived Launch Vehicles, Volume 1

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The design, performance, and programmatic definition of shuttle derived launch vehicles (SDLV) established by two different contractors were assessed and the relative life cycle costs of space transportation systems using the shuttle alone were compared with costs for a mix of shuttles and SDLV's. The ground rules and assumptions used in the evaluation are summarized and the work breakdown structure is included. Approaches used in deriving SDLV costs, including calibration factors and historical data are described. Both SDLV cost estimates and SDLV/STS cost comparisons are summarized. Standard formats are used to report comprehensive SDLV life cycle estimates. Hardware cost estimates (below subsystem level) obtained using the RCA PRICE 84 cost model are included along with other supporting data.

  9. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash Equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.

  10. Procedure for minimizing the cost per watt of photovoltaic systems

    NASA Technical Reports Server (NTRS)

    Redfield, D.

    1977-01-01

    A general analytic procedure is developed that provides a quantitative method for optimizing any element or process in the fabrication of a photovoltaic energy conversion system by minimizing its impact on the cost per watt of the complete system. By determining the effective value of any power loss associated with each element of the system, this procedure furnishes the design specifications that optimize the cost-performance tradeoffs for each element. A general equation is derived that optimizes the properties of any part of the system in terms of appropriate cost and performance functions, although the power-handling components are found to have a different character from the cell and array steps. Another principal result is that a fractional performance loss occurring at any cell- or array-fabrication step produces that same fractional increase in the cost per watt of the complete array. It also follows that no element or process step can be optimized correctly by considering only its own cost and performance.
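The abstract's central result, that a fractional performance loss at any cell- or array-fabrication step produces the same fractional increase in cost per watt, follows directly from the definition of cost per watt; the dollar and watt figures below are hypothetical.

```python
# Cost per watt = total cost / delivered power, so a fractional power
# loss f raises $/W by f/(1 - f), i.e. approximately f for small f.
# All dollar and watt figures are illustrative.
cost = 1000.0   # total array cost, $
power = 500.0   # delivered power without the loss, W
f = 0.04        # 4% performance loss at some fabrication step

base = cost / power                   # 2.0 $/W
with_loss = cost / (power * (1 - f))  # power shrinks, $/W grows
increase = with_loss / base - 1.0     # equals f / (1 - f)
print(base, round(with_loss, 4), round(increase, 4))
```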

  11. Adjoint-Based Methodology for Time-Dependent Optimization

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2008-01-01

    This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.

  12. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

    This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a set of samples is needed for implementing the proposed importance analysis by the proposed estimators, thus the computational cost is free of input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.
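A naive way to see what a variance ratio function measures is to re-sample the model at two input variances and compare the output variances; the paper's contribution is an unbiased estimator that needs only a single sample set, which the brute-force sketch below (using an illustrative linear model) does not attempt to reproduce.

```python
import numpy as np

# Test model Y = X1 + 2*X2 with independent normal inputs, so
# Var(Y) = v1 + 4 analytically. Halving v1 from 1.0 to 0.5 should
# give a variance ratio of (0.5 + 4) / (1 + 4) = 0.9.
def output_var(v1, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    x1 = rng.normal(0.0, np.sqrt(v1), n)
    x2 = rng.normal(0.0, 1.0, n)
    return np.var(x1 + 2.0 * x2)

ratio = output_var(0.5) / output_var(1.0)
print(ratio)  # close to 0.9
```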

  13. Integrated Photogrammetric Survey and Bim Modelling for the Protection of School Heritage, Applications on a Case Study

    NASA Astrophysics Data System (ADS)

    Palestini, C.; Basso, A.; Graziani, L.

    2018-05-01

    Considering the use of low-cost photogrammetric survey methodologies and of Historical BIM (HBIM) asset management, this contribution addresses the knowledge and seismic-safety adaptation of school buildings, a topic brought to attention by the many situations of seismic risk affecting the central Apennines in Italy. The investigation concerns the Abruzzo region, hit by the earthquakes of 2009 and 2016, which highlighted the vulnerability of the building structures within a large seismic crater covering wide areas of the territory. Hence the need to assess in advance the performance standards of building components, especially given the strategic nature of the functions they house. School buildings emerged as a type deserving prompt attention, considering the functions performed within them and the potential fragility of such constructions, which are often dated, enlarged, or readjusted without appropriate seismic adaptation plans. From this derives the purpose of the research: a systematic survey of the school heritage, based on objective, rapid, low-cost surveys that take into account the as-built state and the different formal and structural aspects defining the architectural organisms, to be analysed and managed through three-dimensional models that can be interrogated using HBIM connected to databases of structural and functional information. In summary, by implementing this information in the BIM model, it will be possible to query and obtain in real time all the data needed to optimize efficiency, costs, and future maintenance operations.

  14. Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures

    NASA Astrophysics Data System (ADS)

    Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav

    2017-07-01

    The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model frequently used in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10% mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
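The FLF-IT parameters are not given in the abstract, so the depth-damage curve below is purely hypothetical; it only illustrates the general shape such flood loss functions take, with the loss ratio rising with water depth and saturating at total loss.

```python
import math

def loss_ratio(depth_m, k=0.8):
    """Hypothetical depth-damage curve: 0 at zero depth, saturating at 1.

    k is an illustrative shape parameter, not an FLF-IT coefficient.
    """
    return 1.0 - math.exp(-k * max(depth_m, 0.0))

for d in (0.0, 0.5, 1.0, 2.0):
    print(d, round(loss_ratio(d), 3))
```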

  15. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    PubMed

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.

  16. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    PubMed Central

    Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.

    2018-01-01

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588

  17. Capturing Micro-topography of an Arctic Tundra Landscape through Digital Elevation Models (DEMs) Acquired from Various Remote Sensing Platforms

    NASA Astrophysics Data System (ADS)

    Vargas, S. A., Jr.; Tweedie, C. E.; Oberbauer, S. F.

    2013-12-01

    The need to improve the spatial and temporal scaling and extrapolation of plot level measurements of ecosystem structure and function to the landscape level has been identified as a persistent research challenge in the arctic terrestrial sciences. Although there has been a range of advances in remote sensing capabilities on satellite, fixed wing, helicopter and unmanned aerial vehicle platforms over the past decade, these remain costly, logistically challenging (especially in the Arctic), and technically demanding solutions for applications in an arctic environment. Here, we present a relatively low cost alternative to these platforms that uses kite aerial photography (KAP). Specifically, we demonstrate how digital elevation models (DEMs) were derived from this system for a coastal arctic landscape near Barrow, Alaska. DEMs of this area acquired from other remote sensing platforms such as Terrestrial Laser Scanning (TLS), Airborne Laser Scanning, and satellite imagery were also used in this study to determine accuracy and validity of results. DEMs interpolated using the KAP system were comparable to DEMs derived from the other platforms. For remotely sensing areas of interest ranging from an acre to a square kilometer, KAP has proven to be a low cost solution from which derived products that interface ground and satellite platforms can be developed by users with access to low-tech solutions and a limited knowledge of remote sensing.

  18. Elderly Taiwanese who spend more on fruits and vegetables and less on animal-derived foods use less medical services and incur lower medical costs.

    PubMed

    Lo, Yuan-Ting C; Wahlqvist, Mark L; Huang, Yi-Chen; Lee, Meei-Shyuan

    2016-03-14

    A higher intake of fruits and vegetables (F&V) compared with animal-derived foods is associated with lower risks of all-cause-, cancer- and CVD-related mortalities. However, the association between consumption patterns and medical costs remains unclear. The effects of various food group costs on medical service utilisation and costs were investigated. The study cohort was recruited through the Elderly Nutrition and Health Survey in Taiwan between 1999 and 2000 and followed-up for 8 years until 2006. It comprised free-living elderly participants who provided a 24-h dietary recall. Daily energy-adjusted food group costs were estimated. Annual medical service utilisation and costs for 1445 participants aged 65-79 years were calculated from the National Health Insurance claim data. Generalised linear models were used to appraise the associations between the food group costs and medical service utilisation and costs. Older adults in the highest F&V cost tertile had significantly fewer hospital days (30%) and lower total medical costs (19%), whereas those in the highest animal-derived cost group had a higher number of hospital days (28%) and higher costs (83%), as well as higher total medical costs (38%). Participants in the high F&V and low animal-derived cost groups had the shortest annual hospitalisation stays (5·78 d) and lowest costs (NT$38,600), as well as the lowest total medical costs (NT$75,800), a mean annual saving of NT$45,200/person. Older adults who spend more on F&V and less on animal-derived foods place a reduced burden on the medical-care system. This provides opportunities for nutritionally related healthcare system investment strategies.

  19. Prospects of banana waste utilization in wastewater treatment: A review.

    PubMed

    Ahmad, Tanweer; Danish, Mohammed

    2018-01-15

    This review article explores utilization of banana waste (fruit peels, pseudo-stem, trunks, and leaves) as precursor material to produce an adsorbent, and its application against environmental pollutants such as heavy metals, dyes, organic pollutants, pesticides, and various gaseous pollutants. In the recent past, a considerable number of research articles have been published on the utilization of low-cost adsorbents derived from biomass wastes. The literature survey on banana-waste-derived adsorbents showed that, given the abundance of banana waste worldwide, it is also considered a low-cost adsorbent with promising future applications against various environmental pollutants. Furthermore, raw banana biomass can be chemically modified to prepare an efficient adsorbent as required; modification of surface functional groups may extend the adsorbent's uses to industrial standards. It was evident from the literature survey that banana-waste-derived adsorbents have significant removal efficiency against various pollutants. Most of the published articles on banana-waste-derived adsorbents are discussed critically, and conclusions are drawn based on the reported results. Some poorly performed experiments are also discussed, and their deficiencies in reporting pointed out. Based on this survey, banana waste holds significant promise for future research strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Upon Accounting for the Impact of Isoenzyme Loss, Gene Deletion Costs Anticorrelate with Their Evolutionary Rates.

    PubMed

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; Segrè, Daniel

    2017-01-01

    System-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  1. Estimating pharmacy level prescription drug acquisition costs for third-party reimbursement.

    PubMed

    Kreling, D H; Kirk, K W

    1986-07-01

    Accurate payment for the acquisition costs of drug products dispensed is an important consideration in a third-party prescription drug program. Two alternative methods of estimating these costs among pharmacies were derived and compared. First, pharmacists were surveyed to determine the purchase discounts offered to them by wholesalers. A 10.00% modal and 11.35% mean discount resulted for 73 responding pharmacists. Second, cost-plus percents derived from gross profit margins of wholesalers were calculated and applied to wholesaler product costs to estimate pharmacy level acquisition costs. Cost-plus percents derived from National Median and Southwestern Region wholesaler figures were 9.27% and 10.10%, respectively. A comparison showed the two methods of estimating acquisition costs would result in similar acquisition cost estimates. Adopting a cost-plus estimating approach is recommended because it avoids potential pricing manipulations by wholesalers and manufacturers that would negate improvements in drug product reimbursement accuracy.
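    The cost-plus arithmetic can be sketched as follows; the conversion from a wholesaler gross profit margin to a cost-plus percent is our assumption, and all figures except the 9.27% National Median value are hypothetical:

```python
# Illustration of the cost-plus estimating method (hypothetical inputs).
def cost_plus_percent(gross_margin):
    """Markup over wholesaler product cost implied by a gross profit
    margin expressed as a fraction of the selling price."""
    return gross_margin / (1.0 - gross_margin)

def pharmacy_acquisition_cost(wholesaler_product_cost, gross_margin):
    return wholesaler_product_cost * (1.0 + cost_plus_percent(gross_margin))

markup = cost_plus_percent(0.0848)  # a ~8.48% margin implies a ~9.27% cost-plus
estimate = pharmacy_acquisition_cost(100.0, 0.0848)
print(round(markup * 100, 2), round(estimate, 2))  # -> 9.27 109.27
```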

  2. EPQ model with learning consideration, imperfect production and partial backlogging in fuzzy random environment

    NASA Astrophysics Data System (ADS)

    Shankar Kumar, Ravi; Goswami, A.

    2015-06-01

    The article scrutinises the learning effect of the unit production time on the optimal lot size for an uncertain and imprecise imperfect production process in which shortages are permissible and partially backlogged. We model the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state, together with a re-work facility for items of imperfect quality. The elapsed time until the process shifts is treated as a fuzzy random variable, and consequently a fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with a numerical example. Finally, a sensitivity analysis of the optimal solution with respect to the major parameters is carried out.

  3. Resting-State Functional Connectivity Underlying Costly Punishment: A Machine-Learning Approach.

    PubMed

    Feng, Chunliang; Zhu, Zhiyuan; Gu, Ruolei; Wu, Xia; Luo, Yue-Jia; Krueger, Frank

    2018-06-08

    A large number of studies have demonstrated costly punishment of unfair behavior across human societies. However, individuals exhibit large heterogeneity in costly punishment decisions, and the neuropsychological substrates underlying this heterogeneity remain poorly understood. Here, we addressed this issue by applying a multivariate machine-learning approach to compare topological properties of resting-state brain networks as a potential neuromarker between individuals exhibiting different punishment propensities. A linear support vector machine classifier obtained an accuracy of 74.19% employing the features derived from resting-state brain networks to distinguish two groups of individuals with different punishment tendencies. Importantly, the most discriminative features that contributed to the classification were those regions frequently implicated in costly punishment decisions, including dorsal anterior cingulate cortex (dACC) and putamen (salience network), dorsomedial prefrontal cortex (dmPFC) and temporoparietal junction (mentalizing network), and lateral prefrontal cortex (central-executive network). These networks have previously been implicated in encoding norm violations and the intentions of others and in integrating this information for punishment decisions. Our findings thus demonstrate that resting-state functional connectivity (RSFC) provides a promising neuromarker of social preferences, and bolster the assertion that human costly punishment behaviors emerge from interactions among multiple neural systems. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
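    The classification setup can be sketched with synthetic features standing in for the real network-topology metrics; sklearn's LinearSVC stands in for the linear support vector machine, and all numbers are illustrative:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for topological features of resting-state networks
# (e.g., one metric per brain region); labels encode high vs. low
# punishment propensity.
rng = np.random.default_rng(0)
n_subjects, n_features = 62, 90
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)
X[y == 1, :5] += 1.5  # make a few "regions" discriminative between groups

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

    In practice the discriminative weight on each feature (brain region) is what identifies which networks drive the classification.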

  4. The Cost of Ankylosing Spondylitis in the UK Using Linked Routine and Patient-Reported Survey Data

    PubMed Central

    Cooksey, Roxanne; Husain, Muhammad J.; Brophy, Sinead; Davies, Helen; Rahman, Muhammad A.; Atkinson, Mark D.; Phillips, Ceri J.; Siebert, Stefan

    2015-01-01

    Background Ankylosing spondylitis (AS) is a chronic inflammatory arthritis which typically begins in early adulthood and impacts on healthcare resource utilisation and the ability to work. Previous studies examining the cost of AS have relied on patient-reported questionnaires based on recall. This study uses a combination of patient-reported and linked routine data to examine the cost of AS in Wales, UK. Methods Participants in an existing AS cohort study (n = 570) completed questionnaires regarding work status, out-of-pocket expenses, visits to health professionals and disease severity. Participants gave consent for their data to be linked to routine primary and secondary care clinical datasets. Health resource costs were calculated using a bottom-up micro-costing approach. Human capital cost methods were used to estimate work productivity loss costs, particularly relating to work and early retirement. Regression analyses were used to account for age, gender and disease activity. Results The total cost of AS in the UK is estimated at £19,016 per patient per year, calculated to include GP attendance, administration costs and hospital costs derived from routine data records, plus patient-reported non-NHS costs, out-of-pocket AS-related expenses, early retirement, absenteeism, presenteeism and unpaid assistance costs. The majority of the cost (>80%) resulted from work-related costs. Conclusion The major cost of AS results from loss of working hours, early retirement and unpaid carers' time. Therefore, much of the cost of AS is hidden and not easy to quantify. Functional impairment is the main factor associated with increased cost of AS. Interventions which keep people in work to retirement age and reduce functional impairment would have the greatest impact on reducing the costs of AS. The combination of patient-reported and linked routine data significantly enhanced the health economic analysis, and this methodology can be applied to other chronic conditions.
PMID:26185984

  5. The Cost of Ankylosing Spondylitis in the UK Using Linked Routine and Patient-Reported Survey Data.

    PubMed

    Cooksey, Roxanne; Husain, Muhammad J; Brophy, Sinead; Davies, Helen; Rahman, Muhammad A; Atkinson, Mark D; Phillips, Ceri J; Siebert, Stefan

    2015-01-01

    Ankylosing spondylitis (AS) is a chronic inflammatory arthritis which typically begins in early adulthood and impacts on healthcare resource utilisation and the ability to work. Previous studies examining the cost of AS have relied on patient-reported questionnaires based on recall. This study uses a combination of patient-reported and linked routine data to examine the cost of AS in Wales, UK. Participants in an existing AS cohort study (n = 570) completed questionnaires regarding work status, out-of-pocket expenses, visits to health professionals and disease severity. Participants gave consent for their data to be linked to routine primary and secondary care clinical datasets. Health resource costs were calculated using a bottom-up micro-costing approach. Human capital cost methods were used to estimate work productivity loss costs, particularly relating to work and early retirement. Regression analyses were used to account for age, gender and disease activity. The total cost of AS in the UK is estimated at £19,016 per patient per year, calculated to include GP attendance, administration costs and hospital costs derived from routine data records, plus patient-reported non-NHS costs, out-of-pocket AS-related expenses, early retirement, absenteeism, presenteeism and unpaid assistance costs. The majority of the cost (>80%) resulted from work-related costs. The major cost of AS results from loss of working hours, early retirement and unpaid carer's time. Therefore, much of the cost of AS is hidden and not easy to quantify. Functional impairment is the main factor associated with increased cost of AS. Interventions which keep people in work to retirement age and reduce functional impairment would have the greatest impact on reducing the costs of AS. The combination of patient-reported and linked routine data significantly enhanced the health economic analysis, and this methodology can be applied to other chronic conditions.

  6. Assessment of risk to Boeing commercial transport aircraft from carbon fibers [fiber release from graphite/epoxy materials]

    NASA Technical Reports Server (NTRS)

    Clarke, C. A.; Brown, E. L.

    1980-01-01

    The possible effects of free carbon fibers on aircraft avionic equipment operation, removal costs, and safety were investigated. Possible carbon fiber flow paths, flow rates, and transfer functions into the Boeing 707, 727, 737, 747 aircraft and potentially vulnerable equipment were identified. Probabilities of equipment removal and probabilities of aircraft exposure to carbon fiber were derived.

  7. Cyber-Physical Correlations for Infrastructure Resilience: A Game-Theoretic Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; He, Fei; Ma, Chris Y. T.

    In several critical infrastructures, the cyber and physical parts are correlated so that disruptions to one affect the other and hence the whole system. These correlations may be exploited to strategically launch component attacks, and hence must be accounted for in ensuring infrastructure resilience, specified by its survival probability. We characterize the cyber-physical interactions at two levels: (i) the failure correlation function specifies the conditional survival probability of the cyber sub-infrastructure given the physical sub-infrastructure, as a function of their marginal probabilities, and (ii) the individual survival probabilities of both sub-infrastructures are characterized by first-order differential conditions. We formulate a resilience problem for infrastructures composed of discrete components as a game between the provider and attacker, wherein their utility functions consist of an infrastructure survival probability term and a cost term expressed in terms of the number of components attacked and reinforced. We derive Nash Equilibrium conditions and sensitivity functions that highlight the dependence of infrastructure resilience on the cost term, correlation function and sub-infrastructure survival probabilities. These results generalize earlier ones based on linear failure correlation functions and independent component failures. We apply the results to models of cloud computing infrastructures and energy grids.

  8. On the Convergence Analysis of the Optimized Gradient Method.

    PubMed

    Kim, Donghwan; Fessler, Jeffrey A

    2017-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is half that of Nesterov's fast gradient method, yet has a similarly efficient practical implementation. Drori recently showed that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization.
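    The optimized gradient method can be sketched on a smooth convex quadratic; the update coefficients below are recalled from Kim and Fessler's papers and should be treated as an assumption to verify against the originals:

```python
import numpy as np

# Smooth convex test problem: f(x) = 0.5 x^T A x - b^T x, with gradient
# A x - b and Lipschitz constant L = largest eigenvalue of A.
rng = np.random.default_rng(1)
M = rng.normal(size=(20, 20))
A = M.T @ M + np.eye(20)
b = rng.normal(size=20)
L = np.linalg.eigvalsh(A).max()
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
f_star = f(np.linalg.solve(A, b))

def ogm(x0, n_iter):
    # Optimized gradient method as recalled: like Nesterov's fast gradient
    # method, but with an extra (y_{k+1} - x_k) momentum term and a larger
    # theta on the final iteration.
    x, y, theta = x0.copy(), x0.copy(), 1.0
    for k in range(n_iter):
        y_next = x - grad(x) / L
        if k < n_iter - 1:
            theta_next = (1 + np.sqrt(1 + 4 * theta ** 2)) / 2
        else:
            theta_next = (1 + np.sqrt(1 + 8 * theta ** 2)) / 2
        x = (y_next
             + ((theta - 1) / theta_next) * (y_next - y)
             + (theta / theta_next) * (y_next - x))
        y, theta = y_next, theta_next
    return y

x_end = ogm(np.zeros(20), 200)
print(f(x_end) - f_star)  # cost gap; should be near zero
```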

  9. On the Convergence Analysis of the Optimized Gradient Method

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2016-01-01

    This paper considers the problem of unconstrained minimization of smooth convex functions having Lipschitz continuous gradients with known Lipschitz constant. We recently proposed the optimized gradient method for this problem and showed that it has a worst-case convergence bound for the cost function decrease that is half that of Nesterov’s fast gradient method, yet has a similarly efficient practical implementation. Drori recently showed that the optimized gradient method has optimal complexity for the cost function decrease over the general class of first-order methods. This optimality makes it important to study fully the convergence properties of the optimized gradient method. The previous worst-case convergence bound for the optimized gradient method was derived for only the last iterate of a secondary sequence. This paper provides an analytic convergence bound for the primary sequence generated by the optimized gradient method. We then discuss additional convergence properties of the optimized gradient method, including the interesting fact that the optimized gradient method has two types of worst-case functions: a piecewise affine-quadratic function and a quadratic function. These results help complete the theory of an optimal first-order method for smooth convex minimization. PMID:28461707

  10. [Recent advances of synthetic biology for production of functional ingredients in Chinese materia medica].

    PubMed

    Su, Xin-Yao; Xue, Jian-Ping; Wang, Cai-Xia

    2016-11-01

    The functional ingredients in Chinese materia medica are the main active substances in traditional Chinese medicine, and most of them are secondary metabolite derivatives. Until now, the main method to obtain these functional ingredients has been direct extraction from the Chinese materia medica. However, yields are low because of high extraction costs and dwindling medicinal plant resources. Synthetic biology, as a new microbial approach, can carry out large-scale production of functional ingredients and greatly ease the shortage of traditional Chinese medicine ingredients. This review mainly focuses on the recent advances in synthetic biology for functional ingredient production. Copyright© by the Chinese Pharmaceutical Association.

  11. A space-based public service platform for terrestrial rescue operations

    NASA Technical Reports Server (NTRS)

    Fleisig, R.; Bernstein, J.; Cramblit, D. C.

    1977-01-01

    The space-based Public Service Platform (PSP) is a multibeam, high-gain communications relay satellite that can provide a variety of functions for a large number of people on earth equipped with extremely small, very low cost transceivers. This paper describes the PSP concept, the rationale used to derive the concept, the criteria for selecting specific communication functions to be performed, and the advantages of performing such functions via satellite. The discussion focuses on the benefits of using a PSP for natural disaster warning; control of attendant rescue/assistance operations; and rescue of people in downed aircraft, aboard sinking ships, lost or injured on land.

  12. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  13. Cost benefit analysis of the transfer of NASA remote sensing technology to the state of Georgia

    NASA Technical Reports Server (NTRS)

    Zimmer, R. P. (Principal Investigator); Wilkins, R. D.; Kelly, D. L.; Brown, D. M.

    1977-01-01

    The author has identified the following significant results. First-order benefits can generally be quantified, thus allowing quantitative comparisons of candidate land cover data systems. A meaningful dollar evaluation of LANDSAT can be made by a cost comparison with equally effective data systems. Users of LANDSAT data can be usefully categorized as performing three general functions: planning, permitting, and enforcing. The value of LANDSAT data to the State of Georgia is most sensitive to the parameters: discount rate, digitization cost, and photo acquisition cost. Under a constrained budget, LANDSAT could provide digitized land cover information roughly seven times more frequently than could otherwise be obtained. Thus, the services derived from LANDSAT data have a positive net present value in comparison to the baseline system, and under a constrained budget the LANDSAT system could provide more frequent information than could otherwise be obtained.

  14. Development of a funding, cost, and spending model for satellite projects

    NASA Technical Reports Server (NTRS)

    Johnson, Jesse P.

    1989-01-01

    The need for a predictive budget/funding model is obvious. The current models used by the Resource Analysis Office (RAO) are used to predict the total costs of satellite projects. An effort was conducted to extend the modeling capabilities from total budget analysis to analysis of total budget and budget outlays over time. A statistics-based, data-driven methodology was used to derive and develop the model. The budget data for the last 18 GSFC-sponsored satellite projects were analyzed and used to build a funding model that describes the historical spending patterns. The raw data consisted of dollars spent in each specific year and their 1989-dollar equivalents. These data were converted to the standard format used by the RAO group and placed in a database. A simple statistical analysis was performed to calculate the gross statistics associated with project length and project cost and the conditional statistics on project length and project cost. The modeling approach used is derived from the theory of embedded statistics, which states that properly analyzed data will produce the underlying generating function. The process of funding large-scale projects over extended periods of time is described by Life Cycle Cost Models (LCCM). The data were analyzed to find a model in the generic form of an LCCM. The model developed is based on a Weibull function whose parameters are found by both nonlinear optimization and nonlinear regression. In order to use this model it is necessary to transform the problem from a dollar/time space to a percentage-of-total-budget/time space. This transformation is equivalent to moving to a probability space. By using the basic rules of probability, the validity of both the optimization and the regression steps is ensured. This statistically significant model is then integrated and inverted. The resulting output represents a project schedule that relates the amount of money spent to the percentage of project completion.
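    A Weibull-based spending profile of this kind can be sketched as follows; the data here are synthetic stand-ins for the 18-project GSFC history, and scipy's curve_fit replaces the paper's combination of nonlinear optimization and regression:

```python
import numpy as np
from scipy.optimize import curve_fit

# Cumulative spending fraction as a Weibull function of normalized project
# time; lam and k are the parameters to estimate.
def weibull_cdf(t, lam, k):
    return 1.0 - np.exp(-(t / lam) ** k)

# Synthetic spending history (true lam=0.6, k=2.0, plus noise).
rng = np.random.default_rng(7)
t = np.linspace(0.05, 1.0, 20)
spent = weibull_cdf(t, 0.6, 2.0) + rng.normal(0.0, 0.01, t.size)

(lam_hat, k_hat), _ = curve_fit(weibull_cdf, t, spent, p0=[0.5, 1.5])

# Inverting the fitted curve turns it into a schedule: the time by which a
# given fraction of the total budget should have been spent.
def time_at_fraction(p, lam, k):
    return lam * (-np.log(1.0 - p)) ** (1.0 / k)

print(round(lam_hat, 2), round(k_hat, 2),
      round(time_at_fraction(0.5, lam_hat, k_hat), 2))
```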

  15. Performances of One-Round Walks in Linear Congestion Games

    NASA Astrophysics Data System (ADS)

    Bilò, Vittorio; Fanelli, Angelo; Flammini, Michele; Moscardelli, Luca

    We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players’ costs, and Max, defined as the maximum cost per player, as measures of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of 2+√5 ≈ 4.24 given in [8] and the lower bound of 4 derived in [4] by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of Θ(n^(3/4)) (resp. Θ(n^(3/2))), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.

  16. Defense strategies for asymmetric networked systems under composite utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term; the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.

  17. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr; Picarelli, Athena, E-mail: athena.picarelli@inria.fr; Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied, leading to the characterization of the value function as the unique viscosity solution of a second-order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.

  18. Civil Uses of Remotely Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Aderhold, J. R.; Gordon, G.; Scott, G. W.

    1976-01-01

    The economic, technical, and environmental implications of remotely piloted vehicles (RPVs) are examined. The time frame is 1980-85. Representative uses are selected; detailed functional and performance requirements are derived for RPV systems; and conceptual system designs are devised. Total system cost comparisons are made with non-RPV alternatives. The potential market demand for RPV systems is estimated. Environmental and safety requirements are examined, and legal and regulatory concerns are identified. A potential demand for 2,000-11,000 RPV systems is estimated. Typical cost savings of 25 to 35% compared to non-RPV alternatives are determined. There appear to be no environmental problems, and the safety issue appears manageable.

  19. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  20. Is higher nursing home quality more costly?

    PubMed

    Giorgio, L Di; Filippini, M; Masiero, G

    2016-11-01

    Widespread issues regarding quality in nursing homes call for an improved understanding of the relationship with costs. This relationship may differ in European countries, where care is mainly delivered by nonprofit providers. In accordance with the economic theory of production, we estimate a total cost function for nursing home services using data from 45 nursing homes in Switzerland between 2006 and 2010. Quality is measured by means of clinical indicators regarding process and outcome derived from the minimum data set. We consider both composite and single quality indicators. Contrary to most previous studies, we use panel data and control for omitted variables bias. This allows us to capture features specific to nursing homes that may explain differences in structural quality or cost levels. Additional analysis is provided to address simultaneity bias using an instrumental variable approach. We find evidence that poor levels of quality regarding outcome, as measured by the prevalence of severe pain and weight loss, lead to higher costs. This may have important implications for the design of payment schemes for nursing homes.

  1. The evolution of sexes: A specific test of the disruptive selection theory.

    PubMed

    da Silva, Jack

    2018-01-01

    The disruptive selection theory of the evolution of anisogamy posits that the evolution of a larger body or greater organismal complexity selects for a larger zygote, which in turn selects for larger gametes. This may provide the opportunity for one mating type to produce more numerous, small gametes, forcing the other mating type to produce fewer, large gametes. Predictions common to this and related theories have been partially upheld. Here, a prediction specific to the disruptive selection theory is derived from a previously published game-theoretic model that represents the most complete description of the theory. The prediction, that the ratio of macrogamete to microgamete size should be above three for anisogamous species, is supported for the volvocine algae. A fully population genetic implementation of the model, involving mutation, genetic drift, and selection, is used to verify the game-theoretic approach and accurately simulates the evolution of gamete sizes in anisogamous species. This model was extended to include a locus for gamete motility and shows that oogamy should evolve whenever there is costly motility. The classic twofold cost of sex may be derived from the fitness functions of these models, showing that this cost is ultimately due to genetic conflict.

  2. Optimal Mortgage Refinancing: A Closed Form Solution.

    PubMed

    Agarwal, Sumit; Driscoll, John C; Laibson, David I

    2013-06-01

    We derive the first closed-form optimal refinancing rule: refinance when the current mortgage interest rate falls below the original rate by at least [Formula: see text]. In this formula, W(.) is the Lambert W-function, [Formula: see text], ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost to the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods.
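    A sketch of the rule using scipy's Lambert W implementation; the parameterization of ψ and φ below is recalled from the published paper and should be verified against it, and all numerical inputs are purely illustrative:

```python
import numpy as np
from scipy.special import lambertw

# Refinancing threshold in rate units (recalled parameterization; verify
# psi and phi against the published formula before relying on this).
def refi_threshold(rho, lam, sigma, kappa_over_M, tau):
    psi = np.sqrt(2.0 * (rho + lam)) / sigma
    phi = 1.0 + psi * (rho + lam) * kappa_over_M / (1.0 - tau)
    # principal branch; the argument lies in (-1/e, 0), so W is real
    return (phi + lambertw(-np.exp(-phi)).real) / psi

# Illustrative inputs: 5% real discount rate, 10% expected repayment rate,
# 1.09% rate volatility, 1% cost ratio, 28% marginal tax rate.
x_star = refi_threshold(rho=0.05, lam=0.10, sigma=0.0109,
                        kappa_over_M=0.01, tau=0.28)
print(f"refinance once the rate has fallen by {x_star:.4f} (rate units)")
```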

  3. Aerodynamic Design on Unstructured Grids for Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Anderson, W. Kyle; Bonhaus, Daryl L.

    1997-01-01

    An aerodynamic design algorithm for turbulent flows using unstructured grids is described. The current approach uses adjoint (costate) variables for obtaining derivatives of the cost function. The solution of the adjoint equations is obtained using an implicit formulation in which the turbulence model is fully coupled with the flow equations when solving for the costate variables. The accuracy of the derivatives is demonstrated by comparison with finite-difference gradients and a few example computations are shown. In addition, a user interface is described which significantly reduces the time required for setting up the design problems. Recommendations on directions of further research into the Navier-Stokes design process are made.

  4. A PC program to optimize system configuration for desired reliability at minimum cost

    NASA Technical Reports Server (NTRS)

    Hills, Steven W.; Siahpush, Ali S.

    1994-01-01

    High reliability is desired in all engineered systems. One way to improve system reliability is to use redundant components. When redundant components are used, the problem becomes one of allocating them to achieve the best reliability without exceeding other design constraints such as cost, weight, or volume. Systems with few components can be optimized by simply examining every possible combination but the number of combinations for most systems is prohibitive. A computerized iteration of the process is possible but anything short of a super computer requires too much time to be practical. Many researchers have derived mathematical formulations for calculating the optimum configuration directly. However, most of the derivations are based on continuous functions whereas the real system is composed of discrete entities. Therefore, these techniques are approximations of the true optimum solution. This paper describes a computer program that will determine the optimum configuration of a system of multiple redundancy of both standard and optional components. The algorithm is a pair-wise comparative progression technique which can derive the true optimum by calculating only a small fraction of the total number of combinations. A designer can quickly analyze a system with this program on a personal computer.
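    For contrast with the exhaustive search the abstract mentions, a standard greedy marginal-benefit heuristic (a generic textbook approach, not the paper's pair-wise comparative progression algorithm) can be sketched for a series system with parallel redundancy; all numbers are illustrative:

```python
import math

# Greedy redundancy allocation: repeatedly add the component giving the
# largest log-reliability gain per unit cost until the budget is exhausted.
r = [0.90, 0.80, 0.95]   # single-component reliability per stage
c = [2.0, 3.0, 1.0]      # component cost per stage
budget = 20.0

n = [1, 1, 1]            # start with one component per stage
spent = sum(ci * ni for ci, ni in zip(c, n))

def stage_rel(ri, ni):
    # reliability of a stage with ni identical components in parallel
    return 1.0 - (1.0 - ri) ** ni

while True:
    best, best_gain = None, 0.0
    for i in range(len(r)):
        if spent + c[i] > budget:
            continue
        gain = (math.log(stage_rel(r[i], n[i] + 1))
                - math.log(stage_rel(r[i], n[i]))) / c[i]
        if gain > best_gain:
            best, best_gain = i, gain
    if best is None:
        break
    n[best] += 1
    spent += c[best]

system_rel = math.prod(stage_rel(r[i], n[i]) for i in range(len(r)))
print(n, spent, round(system_rel, 4))
```

    Greedy heuristics of this kind evaluate only a tiny fraction of the combinations, though unlike the paper's method they are not guaranteed to find the true optimum in general.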

  5. Removal of hexavalent Cr by coconut coir and derived chars--the effect of surface functionality.

    PubMed

    Shen, Ying-Shuian; Wang, Shan-Li; Tzou, Yu-Min; Yan, Ya-Yi; Kuan, Wen-Hui

    2012-01-01

    The removal of Cr(VI) by coconut coir (CC) and by chars obtained at various pyrolysis temperatures was evaluated. Increasing the pyrolysis temperature resulted in an increased surface area of the chars, while the corresponding content of oxygen-containing functional groups of the chars decreased. The Cr(VI) removal by CC and CC-derived chars was primarily attributed to the reduction of Cr(VI) to Cr(III) by the materials, and the extent and rate of the Cr(VI) reduction were determined by the oxygen-containing functional groups in the materials. The contribution of pure Cr(VI) adsorption to the overall Cr(VI) removal became relatively significant for the chars obtained at higher temperatures. Accordingly, to develop a cost-effective method for removing Cr(VI) from water, the original CC is more advantageous than its carbonaceous counterparts, because no pyrolysis is required for the application and CC has a higher content of functional groups for reducing Cr(VI) to the less toxic Cr(III). Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Cost-utility of a specific collaborative group intervention for patients with functional somatic syndromes.

    PubMed

    Konnopka, Alexander; König, Hans-Helmut; Kaufmann, Claudia; Egger, Nina; Wild, Beate; Szecsenyi, Joachim; Herzog, Wolfgang; Schellberg, Dieter; Schaefert, Rainer

    2016-11-01

    Collaborative group intervention (CGI) in patients with functional somatic syndromes (FSS) has been shown to improve mental quality of life. To analyse the incremental cost-utility of CGI compared to enhanced medical care in patients with FSS, an economic evaluation alongside a cluster-randomised controlled trial was performed. 35 general practitioners (GPs) recruited 300 FSS patients. Patients in the CGI arm were offered 10 group sessions within 3 months and 2 booster sessions 6 and 12 months after baseline. Costs were assessed via questionnaire. Quality-adjusted life years (QALYs) were calculated using the SF-6D index, derived from the 36-item short-form health survey (SF-36). We calculated patients' net monetary benefit (NMB), estimated the treatment effect via regression, and generated cost-effectiveness acceptability curves. Using intention-to-treat analysis, total costs during the 12-month study period were 5777 EUR in the intervention group and 6858 EUR in the control group. Controlling for possible confounders, we found a small but significant positive intervention effect on QALYs (+0.017; p=0.019) and a non-significant cost saving resulting from a cost increase in the control group (-10.5%; p=0.278). NMB regression showed that the probability of CGI being cost-effective was 69% for a willingness to pay (WTP) of 0 EUR/QALY, increased to 92% for a WTP of 50,000 EUR/QALY, and reached 95% at a WTP of 70,375 EUR/QALY. Subgroup analyses showed that CGI was cost-effective only in patients with severe somatic symptom severity (PHQ-15≥15). CGI has a high probability of being a cost-effective treatment for FSS, in particular for patients with severe somatic symptom severity. Copyright © 2016 Elsevier Inc. All rights reserved.
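The net-monetary-benefit rule used in this analysis can be illustrated with the point estimates quoted above (incremental QALYs +0.017; total costs 5777 EUR vs 6858 EUR). This sketch omits the regression and uncertainty analysis behind the acceptability curves; a positive NMB at a given willingness to pay favours the intervention.

```python
# Point estimates from the trial arm totals quoted above (EUR, 12-month horizon).
delta_qaly = 0.017            # incremental QALYs of CGI vs enhanced medical care
delta_cost = 5777 - 6858      # incremental cost; negative means CGI saved money

def net_monetary_benefit(wtp):
    # NMB = willingness-to-pay * incremental QALYs - incremental cost.
    return wtp * delta_qaly - delta_cost

for wtp in (0, 50_000):
    print(wtp, net_monetary_benefit(wtp))
```

Because the point estimate of incremental cost is negative, the NMB is already positive at a WTP of 0 EUR/QALY; the reported 69% probability at that threshold reflects the sampling uncertainty that this deterministic sketch leaves out.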

  7. Abatement Cost of GHG Emissions for Wood-Based Electricity and Ethanol at Production and Consumption Levels

    PubMed Central

    Dwivedi, Puneet; Khanna, Madhu

    2014-01-01

    Woody feedstocks will play a critical role in meeting the demand for biomass-based energy products in the US. We developed an integrated model using comparable system boundaries and a common set of assumptions to ascertain the unit cost and greenhouse gas (GHG) intensity of electricity and ethanol derived from slash pine (Pinus elliottii) at the production and consumption levels, considering existing automobile technologies. We also calculated the abatement cost of GHG emissions with respect to comparable energy products derived from fossil fuels. The production cost of electricity derived from wood chips was at least 1 ¢ MJ−1 lower than that of electricity derived from wood pellets. The production cost of ethanol without any income from cogenerated electricity was about 0.7 ¢ MJ−1 higher than that of ethanol with income from cogenerated electricity. The production cost of electricity derived from wood chips was at least 0.7 ¢ MJ−1 lower than the energy-equivalent cost of ethanol produced in the presence of cogenerated electricity. The cost of using ethanol as a fuel in a flex-fuel vehicle was at least 6 ¢ km−1 higher than that of a comparable electric vehicle. The GHG intensity per km travelled in a flex-fuel vehicle was greater or lower than that of an electric vehicle running on electricity derived from wood chips, depending on the presence or absence of GHG credits associated with cogenerated electricity. A carbon tax of at least $7 Mg CO2e−1 and $30 Mg CO2e−1 is needed to promote wood-based electricity and ethanol production in the US, respectively. The range of abatement costs of GHG emissions depends significantly on the harvest age and the selected baseline, especially for electricity generation. PMID:24937461

  9. The principles of quality-associated costing: derivation from clinical transfusion practice.

    PubMed

    Trenchard, P M; Dixon, R

    1997-01-01

    As clinical transfusion practice works towards achieving cost-effectiveness, prescribers of blood and its derivatives must be certain that the prices of such products are based on real manufacturing costs and not market forces. Using clinical cost-benefit analysis as the context for the costing and pricing of blood products, this article identifies the following two principles: (1) the product price must equal the product cost (the "price = cost" rule) and (2) the product cost must equal the real cost of product manufacture. In addition, the article describes a new method of blood product costing, quality-associated costing (QAC), that will enable valid cost-benefit analysis of blood products.

  10. Using the cost-effectiveness of allogeneic islet transplantation to inform induced pluripotent stem cell-derived β-cell therapy reimbursement.

    PubMed

    Archibald, Peter R T; Williams, David J

    2015-11-01

    In the present study a cost-effectiveness analysis of allogeneic islet transplantation was performed and the financial feasibility of a human induced pluripotent stem cell-derived β-cell therapy was explored. Previously published cost and health benefit data for islet transplantation were utilized to perform the cost-effectiveness and sensitivity analyses. It was determined that, over a 9-year time horizon, islet transplantation would become cost saving and 'dominate' the comparator. Over a 20-year time horizon, islet transplantation would incur significant cost savings over the comparator (GB£59,000). Finally, assuming a similar cost of goods to islet transplantation and a lack of requirement for immunosuppression, a human induced pluripotent stem cell-derived β-cell therapy would dominate the comparator over an 8-year time horizon.

  11. Maximum Principle in the Optimal Design of Plates with Stratified Thickness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roubicek, Tomas

    2005-03-15

    An optimal design problem for a plate governed by a linear, elliptic equation with bounded thickness varying only in a single prescribed direction and with unilateral isoperimetrical-type constraints is considered. Using Murat-Tartar's homogenization theory for stratified plates and Young-measure relaxation theory, smoothness of the extended cost and constraint functionals is proved, and then the maximum principle necessary for an optimal relaxed design is derived.

  12. Pluripotent stem cell derived hepatocyte like cells and their potential in toxicity screening.

    PubMed

    Greenhough, Sebastian; Medine, Claire N; Hay, David C

    2010-12-30

    Despite considerable progress in modelling human liver toxicity, the requirement still exists for efficient, predictive and cost-effective in vitro models to reduce attrition during drug development. Thousands of compounds fail in this process, with hepatotoxicity being one of the significant causes of failure. The cost of clinical studies is substantial; it is therefore essential that toxicological screening is performed early in the drug development process. Human hepatocytes represent the gold-standard model for evaluating drug toxicity, but are a limited resource. Current alternative models are based on immortalised cell lines and animal tissue, but these are limited by poor function, exhibit species variability and show instability in culture. Pluripotent stem cells are an attractive alternative as they are capable of self-renewal and differentiation into all three germ layers, and thereby represent a potentially inexhaustible source of somatic cells. The differentiation of human embryonic stem cells and induced pluripotent stem cells to functional hepatocyte-like cells has recently been reported. Further development of this technology could lead to the scalable production of hepatocyte-like cells for liver toxicity screening and clinical therapies. Additionally, induced pluripotent stem cell-derived hepatocyte-like cells may permit in vitro modelling of gene polymorphisms and genetic diseases. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  13. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.

  14. Combining controlled-source seismology and receiver function information to derive 3-D Moho topography for Italy

    NASA Astrophysics Data System (ADS)

    Spada, M.; Bianchi, I.; Kissling, E.; Agostinetti, N. Piana; Wiemer, S.

    2013-08-01

    The accurate definition of 3-D crustal structures and, in primis, the Moho depth is the most important requirement for seismological, geophysical and geodynamic modelling in complex tectonic regions. In such areas, like the Mediterranean region, various active and passive seismic experiments have been performed that locally reveal information on Moho depth, average and gradient crustal Vp velocity, and average Vp/Vs velocity ratios. Until now, the most reliable information on crustal structure stems from controlled-source seismology experiments. In most parts of the Alpine region, a relatively large amount of controlled-source seismology information is available, though the overall coverage in the central Mediterranean area is still sparse due to the high costs of such experiments. Thus, results from other seismic methodologies, such as local earthquake tomography, receiver functions and ambient noise tomography, can be used to complement the controlled-source seismology information to increase coverage and thus the quality of 3-D crustal models. In this paper, we introduce a methodology to directly combine controlled-source seismology and receiver function information, relying on the strengths of each method and on quantitative uncertainty estimates for all data, to derive a well-resolved Moho map for Italy. To obtain a homogeneous elaboration of controlled-source seismology and receiver function results, we introduce a new classification/weighting scheme based on uncertainty assessment for receiver function data. To tune the receiver function information quality, we compare local receiver function Moho depths and uncertainties with a recently derived, well-resolved local earthquake tomography Moho map and with controlled-source seismology information. We find an excellent correlation in the Moho information obtained by these three methodologies in Italy. 
In the final step, we interpolate the controlled-source seismology and receiver functions information to derive the map of Moho topography in Italy and surrounding regions. Our results show high-frequency undulation in the Moho topography of three different Moho interfaces, the European, the Adriatic-Ionian, and the Liguria-Corsica-Sardinia-Tyrrhenia, reflecting the complexity of geodynamical evolution.

  15. Spray-drying process preserves the protective capacity of a breast milk-derived Bifidobacterium lactis strain on acute and chronic colitis in mice

    PubMed Central

    Burns, Patricia; Alard, Jeanne; Hrdỳ, Jiri; Boutillier, Denise; Páez, Roxana; Reinheimer, Jorge; Pot, Bruno; Vinderola, Gabriel; Grangette, Corinne

    2017-01-01

    Gut microbiota dysbiosis plays a central role in the development and perpetuation of chronic inflammation in inflammatory bowel disease (IBD) and is therefore a key target for interventions with high-quality, functional probiotics. The local production of stable probiotic formulations at limited cost is considered an advantage, as it reduces transportation cost and time, thereby increasing the effective period on the consumer side. In the present study, we compared the anti-inflammatory capacities of Bifidobacterium animalis subsp. lactis (B. lactis) INL1, a probiotic strain isolated in Argentina from human breast milk, with the commercial strain B. animalis subsp. lactis BB12. The impact of spray-drying, a low-cost alternative for bacterial dehydration, on the functionality of both bifidobacteria was also investigated. We showed for both bacteria that the spray-drying process affected neither bacterial survival nor their protective capacities against acute and chronic colitis in mice, opening future perspectives for the use of strain INL1 in populations with IBD. PMID:28233848

  16. Aircraft parameter estimation

    NASA Technical Reports Server (NTRS)

    Iliff, Kenneth W.

    1987-01-01

    The aircraft parameter estimation problem is used to illustrate the utility of parameter estimation, which applies to many engineering and scientific fields. Maximum likelihood estimation has been used to extract stability and control derivatives from flight data for many years. This paper presents some of the basic concepts of aircraft parameter estimation and briefly surveys the literature in the field. The maximum likelihood estimator is discussed, and the basic concepts of minimization and estimation are examined for a simple simulated aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Some of the major conclusions for the simulated example are also developed for the analysis of flight data from the F-14, highly maneuverable aircraft technology (HiMAT), and space shuttle vehicles.
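The core idea of the abstract, minimizing an output-error cost function over model parameters, which under Gaussian measurement noise yields the maximum likelihood estimate, can be sketched with a hypothetical scalar first-order model standing in for the aircraft dynamics. The model, input sequence, and grid search below are illustrative only.

```python
# Hypothetical first-order model x[k+1] = a * x[k] + u[k], measured as y[k] = x[k].
# The parameter a stands in for a stability derivative to be estimated.
a_true = 0.8
u = [1.0, 0.0, 0.5, 0.0, 1.0, 0.0]                # known input sequence

def simulate(a):
    x, xs = 0.0, []
    for uk in u:
        x = a * x + uk
        xs.append(x)
    return xs

y = simulate(a_true)                              # noise-free "flight data" for the sketch

def cost(a):
    # Sum of squared output errors; with Gaussian measurement noise, minimizing
    # this is (up to constants) minimizing the negative log-likelihood.
    return sum((yk - yh) ** 2 for yk, yh in zip(y, simulate(a)))

# Coarse grid search over the parameter; in practice a gradient-based
# (e.g. modified Newton-Raphson) minimization of the cost function is used.
a_hat = min((round(0.01 * i, 2) for i in range(100)), key=cost)
print(a_hat)
```

With noise-free data the cost has its minimum exactly at the true parameter; with real flight data the cost surface is what the graphic representations in the paper illustrate.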

  17. A systematic review of quality and cost-effectiveness derived from Markov models evaluating smoking cessation interventions in patients with chronic obstructive pulmonary disease.

    PubMed

    Kirsch, Florian

    2015-04-01

    Smoking cessation is the only strategy that has shown a lasting reduction in the decline of lung function in patients with chronic obstructive pulmonary disease. This study aims to evaluate the cost-effectiveness of smoking cessation interventions in patients with chronic obstructive pulmonary disease, to assess the quality of the Markov models and to estimate the consequences of model structure and input data on cost-effectiveness. A systematic literature search was conducted in PubMed, Embase, BusinessSourceComplete and Econlit on June 11, 2014. Data were extracted, and costs were inflated. Model quality was evaluated by a quality appraisal, and results were interpreted. Ten studies met the inclusion criteria. The results varied widely from cost savings to additional costs of €17,004 per quality adjusted life year. The models scored best in the category structure, followed by data and consistency. The quality of the models seems to rise over time, and regarding the results there is no economic reason to refuse the reimbursement of any smoking cessation intervention.

  18. Feasibility and Supply Analysis of U.S. Geothermal District Heating and Cooling System

    NASA Astrophysics Data System (ADS)

    He, Xiaoning

    Geothermal energy is a globally distributed sustainable energy source with the advantages of stable base-load energy production with a high capacity factor and zero SOx, CO, and particulate emissions. It can provide a potential solution to the depletion of fossil fuels and air pollution problems. The geothermal district heating and cooling system is one of the most common applications of geothermal energy, and consists of geothermal wells to provide hot water from a fractured geothermal reservoir, a surface energy distribution system for hot water transmission, and heating/cooling facilities to provide water and space heating as well as air conditioning for residential and commercial buildings. To gain wider recognition for the geothermal district heating and cooling (GDHC) system, the potential to develop such a system was evaluated in the western United States and in the state of West Virginia. The geothermal resources were categorized into identified hydrothermal resources, undiscovered hydrothermal resources, near-hydrothermal enhanced geothermal systems (EGS), and deep EGS. Reservoir characteristics of the first three categories were estimated individually, and their thermal potential calculated. A cost model for such a system was developed for technical performance and economic analysis at each geothermally active location. A supply curve for the system was then developed, establishing the quantity and the cost of potential geothermal energy which can be used for the GDHC system. A West Virginia University (WVU) case study was performed to compare the competitiveness of a geothermal energy system to the current steam-based system. An Aspen Plus model was created to simulate the year-round campus heating and cooling scenario. Five cases of varying water flow rates and temperatures were simulated to find the lowest levelized cost of heat (LCOH) for the WVU case study. 
The model was then used to derive the levelized cost of heat as a function of the population density at a constant geothermal gradient. By using such functions in West Virginia at the census tract level, the most promising census tracts in WV for the development of geothermal district heating and cooling systems were mapped. This study is unique in that its purpose was to utilize supply analyses for GDHC systems and determine an appropriate economic assessment of the viability and sustainability of the systems. Sensitivity analysis found that the market energy demand, production temperature, and project lifetime have negative effects on the levelized cost, while the drilling cost, discount rate, and capital cost have positive effects. Moreover, increasing the energy demand is the most effective way to decrease the levelized cost. The derived levelized cost function shows that for EGS-based systems, the population density has a strong negative effect on the LCOH at any geothermal gradient, while the gradient only has a negative effect on the LCOH at low population density.
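The levelized-cost-of-heat calculation underlying such supply curves can be sketched as a ratio of discounted lifetime cost to discounted lifetime heat delivered. All numbers below are hypothetical, chosen only to illustrate the population-density (demand) effect described above, and are not from the study.

```python
# Levelized cost of heat: discounted lifetime cost over discounted heat delivered.
# All input values below are hypothetical, for illustration only.

def lcoh(capex, annual_opex, annual_heat_mwh, rate, years):
    disc = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return (capex + annual_opex * disc) / (annual_heat_mwh * disc)

# A denser service area spreads the same capital cost over more delivered heat,
# lowering the LCOH -- the population-density effect described above.
low_density = lcoh(20e6, 4e5, 2e4, 0.07, 30)
high_density = lcoh(20e6, 4e5, 8e4, 0.07, 30)
print(round(low_density, 1), round(high_density, 1))
```

Because capital cost is fixed while delivered energy scales with demand, the LCOH falls as population density rises, consistent with the study's finding that increasing energy demand is the most effective lever on the levelized cost.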

  19. High-risk population health management--achieving improved patient outcomes and near-term financial results.

    PubMed

    Lynch, J P; Forman, S A; Graff, S; Gunby, M C

    2000-07-01

    A managed care organization sought to achieve efficiencies in care delivery and cost savings by anticipating and better caring for its frail and least stable members. Time-sequence case study of program intervention across an entire managed care population in its first year, compared with the prior baseline year. Key attributes of the intervention included predictive registries of at-risk members based on existing data, relentless focus on the high-risk group, an integrated clinical and psychosocial approach to assessments and care planning, a reengineered care management process, secured Internet applications enabling rapid implementation and broad connectivity, and population-based outcome metrics derived from widely used measures of resource utilization and functional status. Concentrating on the highest-risk group, which averaged just 1.1% prevalence in the total membership, yielded bottom-line results. When the year before program implementation (July 1997 through June 1998) was compared with the subsequent year, the total population's annualized commercial admission rate was reduced 5.3%, and seniors' was reduced 3.0%. A claims-paid analysis exclusively of the highest-risk group revealed that their efficiencies and savings overwhelmingly contributed to the membership-wide effect. This subgroup's costs dropped 35.7% from preprogram levels of $2590 per member per month (excluding pharmaceuticals). During the same time, patient-derived cross-sectional functional status rose 12.5%. A sharply focused, Internet-deployed case management strategy achieved economic and functional-status results on a population basis and produced systemwide savings in its first year of implementation.

  20. Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.

    PubMed

    Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko

    2010-06-28

    We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed instead, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions, as is done in the GFC. To minimize the error in the gradient, not just in the energy, the derivatives of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of these derived functions significantly improves the accuracy of the approach. Although the additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the method maintains the near-linear scaling of the GFC and does not affect the overall efficiency of the GFC approach.

  1. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    PubMed

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function involved in this convolution integral is performed analytically using a Bessel function expansion. Compared with numerically Fourier transforming the kernel function via the fast Fourier transform, the analytical solution drastically reduces the calculation time and memory usage at no extra cost. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.
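The value of replacing a numerical transform by a Bessel-series expansion can be illustrated with the classical Jacobi-Anger expansion (not the paper's cylindrical-hologram kernel, which is more involved); here J_n is evaluated from its power series and the truncated series is checked against the closed form.

```python
import cmath
import math

# Bessel function of the first kind J_n(z) from its power series (n >= 0);
# adequate for moderate z as used here.
def bessel_j(n, z, terms=30):
    return sum((-1) ** m * (z / 2) ** (2 * m + n)
               / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def jacobi_anger(z, theta, nmax=20):
    # exp(i z sin(theta)) = sum over all integers n of J_n(z) exp(i n theta),
    # folding in the negative orders via J_{-n}(z) = (-1)^n J_n(z).
    total = complex(bessel_j(0, z))
    for n in range(1, nmax + 1):
        jn = bessel_j(n, z)
        total += jn * (cmath.exp(1j * n * theta)
                       + (-1) ** n * cmath.exp(-1j * n * theta))
    return total

z, theta = 2.0, 0.7
exact = cmath.exp(1j * z * math.sin(theta))
print(abs(jacobi_anger(z, theta) - exact))        # truncation error is negligible
```

Because J_n(z) decays rapidly with n for fixed z, a short truncated series reproduces the function to machine precision, which is the same property that lets an analytic Bessel expansion replace an FFT of the kernel.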

  2. Cost effectiveness of lung-volume-reduction surgery for patients with severe emphysema.

    PubMed

    Ramsey, Scott D; Berry, Kristin; Etzioni, Ruth; Kaplan, Robert M; Sullivan, Sean D; Wood, Douglas E

    2003-05-22

    The National Emphysema Treatment Trial, a randomized clinical trial comparing lung-volume-reduction surgery with medical therapy for severe emphysema, included a prospective economic analysis. After pulmonary rehabilitation, 1218 patients at 17 medical centers were randomly assigned to lung-volume-reduction surgery or continued medical treatment. Costs for the use of medical care, medications, transportation, and time spent receiving treatment were derived from Medicare claims and data from the trial. Cost effectiveness was calculated over the duration of the trial and was estimated for 10 years of follow-up with the use of modeling based on observed trends in survival, cost, and quality of life. Interim analyses identified a group of patients with excess mortality and little chance of improved functional status after surgery. When these patients were excluded, the cost-effectiveness ratio for lung-volume-reduction surgery as compared with medical therapy was 190,000 dollars per quality-adjusted life-year gained at 3 years and 53,000 dollars per quality-adjusted life-year gained at 10 years. Subgroup analyses identified patients with predominantly upper-lobe emphysema and low exercise capacity after pulmonary rehabilitation who had lower mortality and better functional status than patients who received medical therapy. The cost-effectiveness ratio in this subgroup was 98,000 dollars per quality-adjusted life-year gained at 3 years and 21,000 dollars at 10 years. Bootstrap analysis revealed substantial uncertainty for the subgroup and 10-year estimates. Given its cost and benefits over three years of follow-up, lung-volume-reduction surgery is costly relative to medical therapy. Although the predictions are subject to substantial uncertainty, the procedure may be cost effective if benefits can be maintained over time. Copyright 2003 Massachusetts Medical Society

  3. Sludge digestion instead of aerobic stabilisation - a cost benefit analysis based on experiences in Germany.

    PubMed

    Gretzschel, Oliver; Schmitt, Theo G; Hansen, Joachim; Siekmann, Klaus; Jakob, Jürgen

    2014-01-01

    As a consequence of a worldwide increase in energy costs, the efficient use of sewage sludge as a renewable energy resource must be considered, even for smaller wastewater treatment plants (WWTPs) with design capacities between 10,000 and 50,000 population equivalents (PE). To find the lower limit for an economical conversion of an aerobic stabilisation plant into an anaerobic stabilisation plant, we derived cost functions for specific capital costs and operating cost savings. With these tools, it is possible to evaluate whether it would be promising to further investigate refitting aerobic plants into plants that produce biogas. By comparing capital costs with operating cost savings, a break-even point for process conversion could be determined. The break-even point varies depending on project-specific constraints and assumptions related to future energy and operating costs and variable interest rates. A 5% increase of energy and operating costs leads to a cost-efficient conversion for plants above 7,500 PE. A conversion of WWTPs results in several positive effects on energy generation and plant operations: increased efficiency, energy savings, and on-site renewable power generation from digester gas which can be used in the plant. Also, the optimisation of energy efficiency results in a reduction of primary energy consumption.
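The break-even comparison of capital costs against discounted operating-cost savings can be sketched as follows. The cost functions and constants below are hypothetical stand-ins, loosely shaped like the economies-of-scale functions such studies use, and are not the paper's derived functions.

```python
# Hypothetical cost functions for converting an aerobically stabilising WWTP
# to sludge digestion; all constants are illustrative, not from the study.

def capital_cost(pe):
    # One-off conversion cost in EUR, with economies of scale in plant size.
    return 4000.0 * pe ** 0.65

def npv_savings(pe, escalation, rate=0.03, horizon=30):
    # Discounted operating-cost savings: base saving of 4.5 EUR per PE and
    # year, escalating with assumed future energy/operating cost increases.
    return sum(4.5 * pe * (1.0 + escalation) ** (t - 1) / (1.0 + rate) ** t
               for t in range(1, horizon + 1))

def break_even_pe(escalation):
    # Smallest design capacity (stepping by 500 PE) where the discounted
    # savings repay the conversion's capital cost.
    pe = 500
    while npv_savings(pe, escalation) < capital_cost(pe):
        pe += 500
    return pe

# A higher assumed cost escalation pushes the break-even capacity down.
print(break_even_pe(0.00), break_even_pe(0.05))
```

Because savings scale roughly linearly with plant size while capital cost grows sublinearly, a break-even size always exists in this sketch, and assuming faster energy-cost escalation moves it toward smaller plants, mirroring the paper's qualitative finding.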

  4. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics

    NASA Astrophysics Data System (ADS)

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.

    2018-02-01

    Time-dependent coupled-cluster method with time-varying orbital functions, called time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method of intense-laser driven correlated electron dynamics in Ar atom is reported.

  5. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics.

    PubMed

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L

    2018-02-07

    The time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have derived the equations of motion for the CC amplitudes and the orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, and offers a polynomial cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  6. Extension of suboptimal control theory for flow around a square cylinder

    NASA Astrophysics Data System (ADS)

    Fujita, Yosuke; Fukagata, Koji

    2017-11-01

    We extend the suboptimal control theory to the control of flow around a square cylinder, which, in contrast to the circular cylinders and spheres studied previously, has no point symmetry in the impulse response from the wall. The cost functions examined are the pressure drag (J1), the friction drag (J2), the squared difference between target pressure and wall pressure (J3), and the time-averaged dissipation (J4). The control input is assumed to be continuous blowing and suction on the cylinder wall, and feedback sensors are assumed on the entire wall surface. The control law is derived so as to minimize the cost function under the constraint of the linearized Navier-Stokes equations, and the impulse response fields to be convolved with the instantaneous flow quantities are obtained numerically. The amplitude of the control input is fixed so that the maximum blowing/suction velocity is 40% of the freestream velocity. When J2 is used as the cost function, the friction drag is reduced as expected, but the mean drag is found to increase. In contrast, when J1, J3, and J4 are used, the mean drag decreases by 21%, 12%, and 22%, respectively; in addition, vortex shedding is suppressed, which leads to a reduction of lift fluctuations.

  7. Systems and methods for energy cost optimization in a building system

    DOEpatents

    Turney, Robert D.; Wenzel, Michael J.

    2016-09-06

    Methods and systems to minimize energy cost in response to time-varying energy prices are presented for a variety of different pricing scenarios. A cascaded model predictive control system is disclosed comprising an inner controller and an outer controller. The inner controller controls power use using a derivative of a temperature setpoint and the outer controller controls temperature via a power setpoint or power deferral. An optimization procedure is used to minimize a cost function within a time horizon subject to temperature constraints, equality constraints, and demand charge constraints. Equality constraints are formulated using system model information and system state information whereas demand charge constraints are formulated using system state information and pricing information. A masking procedure is used to invalidate demand charge constraints for inactive pricing periods including peak, partial-peak, off-peak, critical-peak, and real-time.

  8. The next generation of low-cost personal air quality sensors for quantitative exposure monitoring

    NASA Astrophysics Data System (ADS)

    Piedrahita, R.; Xiang, Y.; Masson, N.; Ortega, J.; Collier, A.; Jiang, Y.; Li, K.; Dick, R.; Lv, Q.; Hannigan, M.; Shang, L.

    2014-03-01

    Advances in embedded systems and low-cost gas sensors are enabling a new wave of low-cost air quality monitoring tools. Our team has been engaged in the development of low-cost wearable air quality monitors (M-Pods) using the Arduino platform. The M-Pods use commercially available metal oxide semiconductor (MOx) sensors to measure CO, O3, NO2, and total VOCs, and NDIR sensors to measure CO2. MOx sensors are low in cost and show high sensitivity near ambient levels; however, they display non-linear output signals and cross-sensitivity effects. Thus, a quantification system was developed to convert the MOx sensor signals into concentrations. Two deployments were conducted at a regulatory monitoring station in Denver, Colorado. M-Pod concentrations were determined using laboratory calibration techniques and co-location calibrations, in which the M-Pods are placed near regulatory monitors and calibration function coefficients are derived using the regulatory monitors as the standard. The form of the calibration function was derived from laboratory experiments. We discuss various techniques used to estimate measurement uncertainties. A separate user study was also conducted to assess personal exposure and M-Pod reliability. In this study, 10 M-Pods were calibrated via co-location multiple times over 4 weeks, and sensor drift was analyzed, resulting in a calibration function that included drift. We found that co-location calibrations perform better than laboratory calibrations; lab calibrations suffer from bias and from difficulty in covering the necessary parameter space. During co-location calibrations, median standard errors ranged between 4.0-6.1 ppb for O3, 6.4-8.4 ppb for NO2, 0.28-0.44 ppm for CO, and 16.8 ppm for CO2.
Median signal-to-noise (S/N) ratios were lower for the M-Pod sensors than for the regulatory instruments: for NO2, 3.6 compared to 23.4; for O3, 1.4 compared to 1.6; for CO, 1.1 compared to 10.0; and for CO2, 42.2 compared to 300-500. The user study provided trends and location-specific information on pollutants, and effected a change in user behavior. The study demonstrated the utility of the M-Pod as a tool to assess personal exposure.
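As a rough illustration of the co-location calibration step described above, the sketch below fits a calibration function against reference-monitor concentrations by least squares. The linear form and the covariates used here (raw signal, temperature, relative humidity) are assumptions for illustration only; the study derived its calibration-function form from laboratory experiments.

```python
import numpy as np

def fit_colocation(raw_signal, temp, rh, reference_conc):
    """Fit calibration coefficients against a co-located reference monitor."""
    # Design matrix: intercept, raw MOx signal, temperature, relative humidity
    X = np.column_stack([np.ones_like(raw_signal), raw_signal, temp, rh])
    coef, *_ = np.linalg.lstsq(X, reference_conc, rcond=None)
    return coef

def apply_calibration(coef, raw_signal, temp, rh):
    """Convert raw field signals into concentration estimates."""
    X = np.column_stack([np.ones_like(raw_signal), raw_signal, temp, rh])
    return X @ coef

# Synthetic check: recover a known (hypothetical) linear relationship
rng = np.random.default_rng(0)
raw = rng.uniform(0.2, 1.0, 200)          # raw sensor output (arbitrary units)
temp = rng.uniform(10, 30, 200)           # deg C
rh = rng.uniform(20, 80, 200)             # percent
ref = 5.0 + 40.0 * raw - 0.3 * temp + 0.05 * rh   # synthetic "reference" ppb
coef = fit_colocation(raw, temp, rh, ref)
pred = apply_calibration(coef, raw, temp, rh)
print(np.allclose(pred, ref, atol=1e-8))
```

In practice the fitted coefficients would then convert raw field signals into concentrations, and drift could be handled by repeating the fit across successive co-location periods, as the study does.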

  9. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-01-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  10. Energy Savings Lifetimes and Persistence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Ian M.; Schiller, Steven R.; Todd, Annika

    2016-02-01

    This technical brief explains the concepts of energy savings lifetimes and savings persistence and discusses how program administrators use these factors to calculate savings for efficiency measures, programs and portfolios. Savings lifetime is the length of time that one or more energy efficiency measures or activities save energy, and savings persistence is the change in savings throughout the functional life of a given efficiency measure or activity. Savings lifetimes are essential for assessing the lifecycle benefits and cost effectiveness of efficiency activities and for forecasting loads in resource planning. The brief also provides estimates of savings lifetimes derived from a national collection of costs and savings for electric efficiency programs and portfolios.

  11. Solar-cell interconnect design for terrestrial photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Mon, G. R.; Moore, D. M.; Ross, R. G., Jr.

    1984-11-01

    Useful solar cell interconnect reliability design and life prediction algorithms are presented, together with experimental data indicating that the classical strain cycle (fatigue) curve for the interconnect material does not account for the statistical scatter that is required in reliability predictions. This shortcoming is presently addressed by fitting a functional form to experimental cumulative interconnect failure rate data, which thereby yields statistical fatigue curves enabling not only the prediction of cumulative interconnect failures during the design life of an array field, but also the quantitative interpretation of data from accelerated thermal cycling tests. Optimal interconnect cost reliability design algorithms are also derived which may allow the minimization of energy cost over the design life of the array field.

  12. Space station data system analysis/architecture study. Task 4: System definition report

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Functional/performance requirements for the Space Station Data System (SSDS) are analyzed and architectural design concepts are derived and evaluated in terms of their performance and growth potential, technical feasibility and risk, and cost effectiveness. The design concepts discussed are grouped under five major areas: SSDS top-level architecture overview, end-to-end SSDS design and operations perspective, communications assumptions and traffic analysis, onboard SSDS definition, and ground SSDS definition.

  13. Kalman filters for fractional discrete-time stochastic systems along with time-delay in the observation signal

    NASA Astrophysics Data System (ADS)

    Torabi, H.; Pariz, N.; Karimpour, A.

    2016-02-01

    This paper investigates fractional Kalman filters when a time delay enters the observation signal in the discrete-time stochastic fractional-order state-space representation. After reviewing the common fractional Kalman filter, we derive a fractional Kalman filter for time-delay fractional systems; a detailed derivation is given. The fractional Kalman filters are used to recursively estimate the states of fractional-order state-space systems by minimizing a cost function when there is a constant time delay (d) in the observation signal. The problem is solved by converting the filtering problem into a usual d-step prediction problem for delay-free fractional systems.
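The delay-handling idea (filter on the delayed observation, then predict d steps ahead) can be sketched for an ordinary, non-fractional linear Kalman filter; the fractional-order machinery of the paper is omitted, and the model matrices below are hypothetical examples.

```python
import numpy as np

# Hypothetical constant-velocity model: state = [position, velocity]
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition
H = np.array([[1.0, 0.0]])               # observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[1e-2]])                   # measurement noise covariance

def kf_step(x, P, z):
    """One predict/update cycle of a standard linear Kalman filter."""
    # Predict
    x = A @ x
    P = A @ P @ A.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_d_steps(x, d):
    """d-step prediction: recover the current state from the delayed estimate."""
    for _ in range(d):
        x = A @ x
    return x
```

Given a measurement z of the state at step k-d, `kf_step` updates the delayed estimate as usual, and `predict_d_steps(x, d)` propagates it forward through the (noise-free) dynamics to estimate the state at step k.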

  14. Study of the convergence behavior of the complex kernel least mean square algorithm.

    PubMed

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning on complex data. Kernel adaptive methods can be used to find solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used to verify the analysis results.
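For context, a minimal real-valued kernel LMS (the non-complex ancestor of CKLMS) can be sketched as follows; the Gaussian kernel, step size, and growing-dictionary rule are illustrative choices, not details from the paper.

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Gaussian kernel between two input vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class KLMS:
    def __init__(self, step=0.5, sigma=1.0):
        self.step, self.sigma = step, sigma
        self.centers, self.weights = [], []

    def predict(self, x):
        # f(x) = sum_i w_i * k(c_i, x) over the stored dictionary
        return sum(w * gauss_kernel(c, x, self.sigma)
                   for c, w in zip(self.centers, self.weights))

    def update(self, x, d):
        # Stochastic gradient step on the squared error in feature space:
        # each sample becomes a new center with weight step * error
        e = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.weights.append(self.step * e)
        return e

# Online learning of a nonlinear map: errors should shrink as samples arrive
rng = np.random.default_rng(0)
model = KLMS()
errs = []
for _ in range(200):
    x = rng.uniform(-3, 3, size=1)
    errs.append(abs(model.update(x, np.sin(x[0]))))
```

The complex-valued algorithm analyzed in the paper replaces this real gradient with one obtained via the modified Wirtinger calculus, which is what makes the circularity of the input signals matter for convergence.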

  15. Sorption of heavy metal ions onto carboxylate chitosan derivatives--a mini-review.

    PubMed

    Boamah, Peter Osei; Huang, Yan; Hua, Mingqing; Zhang, Qi; Wu, Jingbo; Onumah, Jacqueline; Sam-Amoah, Livingstone K; Boamah, Paul Osei

    2015-06-01

    Chitosan is important for the elimination of heavy metals due to its outstanding characteristics, such as the presence of -NH2 and -OH functional groups, non-toxicity, low cost, and large available quantities. Modifying the chitosan structure with a -COOH group improves its solubility at pH ≤7 without affecting the aforementioned characteristics. Chitosan modified with a carboxylic group possesses carboxyl, amino, and hydroxyl multifunctional groups, which are good for the elimination of metal ions. The focal point of this mini-review is the preparation and characterization of some carboxylate chitosan derivatives as sorbents for heavy metal sorption. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Principal Component Geostatistical Approach for large-dimensional inverse problems

    PubMed Central

    Kitanidis, P K; Lee, J

    2014-01-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that uses a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not only much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems with very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which the method performs best. PMID:25558113
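A sketch of the matrix-free idea under stated assumptions: the Jacobian is accessed only through forward (J·v) and adjoint (Jᵀ·w) products inside a conjugate-gradient solve of the Gauss-Newton normal equations, and is never formed. The toy linear forward model below makes both products explicit; in a PDE setting they would come from tangent-linear and adjoint model runs. All names are illustrative, not from the paper.

```python
import numpy as np

def gauss_newton_cg(g, jvp, vjp, d_obs, m0, outer=5, cg_iter=100, tol=1e-20):
    """Gauss-Newton with a matrix-free CG inner solve of (J^T J) dm = J^T r."""
    m = m0.copy()
    for _ in range(outer):
        r = d_obs - g(m)                 # data residual
        b = vjp(m, r)                    # J^T r, via the adjoint product
        dm = np.zeros_like(m)
        res = b - vjp(m, jvp(m, dm))     # CG residual for the normal equations
        p = res.copy()
        rs = res @ res
        for _ in range(cg_iter):
            if rs < tol:                 # converged (or b == 0): stop
                break
            Ap = vjp(m, jvp(m, p))       # (J^T J) p, using only J.v and J^T.w
            alpha = rs / (p @ Ap)
            dm += alpha * p
            res -= alpha * Ap
            rs_new = res @ res
            p = res + (rs_new / rs) * p
            rs = rs_new
        m = m + dm
    return m

# Toy linear problem g(m) = G m, so Gauss-Newton recovers m exactly
rng = np.random.default_rng(1)
G = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
d = G @ m_true
m_est = gauss_newton_cg(
    g=lambda m: G @ m,
    jvp=lambda m, v: G @ v,              # forward product J.v
    vjp=lambda m, w: G.T @ w,            # adjoint product J^T.w
    d_obs=d, m0=np.zeros(10))
print(np.allclose(m_est, m_true, atol=1e-6))
```

Each inner iteration costs one forward and one adjoint product, which is what replaces the order-m²n cost of assembling the Jacobian explicitly.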

  17. Principal Component Geostatistical Approach for large-dimensional inverse problems.

    PubMed

    Kitanidis, P K; Lee, J

    2014-07-01

    The quasi-linear geostatistical approach is intended for weakly nonlinear underdetermined inverse problems, such as Hydraulic Tomography and Electrical Resistivity Tomography. It provides best estimates as well as measures for uncertainty quantification. However, in its textbook implementation, the approach involves iterations to reach an optimum and requires the determination of the Jacobian matrix, i.e., the derivative of the observation function with respect to the unknown. Although there are elegant methods for determining the Jacobian, the cost is high when the number of unknowns, m, and the number of observations, n, are large. It is also wasteful to compute the Jacobian for points away from the optimum. Irrespective of the issue of computing derivatives, the computational cost of implementing the method is generally of the order of m²n, though there are methods to reduce it. In this work, we present an implementation that uses a Gauss-Newton method that is matrix-free with respect to the Jacobian and improves the scalability of the geostatistical inverse problem. Each iteration requires K runs of the forward problem, where K is not only much smaller than m but can be smaller than n. The computational and storage cost of the inverse procedure scales roughly linearly with m instead of m² as in the textbook approach. For problems with very large m, this implementation constitutes a dramatic reduction in computational cost compared to the textbook approach. Results illustrate the validity of the approach and provide insight into the conditions under which the method performs best.

  18. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE PAGES

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu; ...

    2017-01-20

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  19. Upon accounting for the impact of isoenzyme loss, gene deletion costs anticorrelate with their evolutionary rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacobs, Christopher; Lambourne, Luke; Xia, Yu

    Here, system-level metabolic network models enable the computation of growth and metabolic phenotypes from an organism's genome. In particular, flux balance approaches have been used to estimate the contribution of individual metabolic genes to organismal fitness, offering the opportunity to test whether such contributions carry information about the evolutionary pressure on the corresponding genes. Previous failure to identify the expected negative correlation between such computed gene-loss cost and sequence-derived evolutionary rates in Saccharomyces cerevisiae has been ascribed to a real biological gap between a gene's fitness contribution to an organism "here and now" and the same gene's historical importance as evidenced by its accumulated mutations over millions of years of evolution. Here we show that this negative correlation does exist, and can be exposed by revisiting a broadly employed assumption of flux balance models. In particular, we introduce a new metric that we call "function-loss cost", which estimates the cost of a gene loss event as the total potential functional impairment caused by that loss. This new metric displays significant negative correlation with evolutionary rate, across several thousand minimal environments. We demonstrate that the improvement gained using function-loss cost over gene-loss cost is explained by replacing the base assumption that isoenzymes provide unlimited capacity for backup with the assumption that isoenzymes are completely non-redundant. We further show that this change of the assumption regarding isoenzymes increases the recall of epistatic interactions predicted by the flux balance model at the cost of a reduction in the precision of the predictions. In addition to suggesting that the gene-to-reaction mapping in genome-scale flux balance models should be used with caution, our analysis provides new evidence that evolutionary gene importance captures much more than strict essentiality.

  20. Critical Zone services as environmental assessment criteria in intensively managed landscapes

    NASA Astrophysics Data System (ADS)

    Richardson, Meredith; Kumar, Praveen

    2017-06-01

    The Critical Zone (CZ) includes the biophysical processes occurring from the top of the vegetation canopy to the weathering zone below the groundwater table. CZ services provide a measure of the goods and benefits derived from CZ processes. In intensively managed landscapes, cropland is altered through anthropogenic energy inputs to derive more productivity, in the form of agricultural products, than would be possible under natural conditions. However, the actual costs of alterations to CZ functions within landscape profiles are unknown. Through comparisons of corn feed and corn-based ethanol, we show that valuation of these CZ services in monetary terms provides a more concrete tool for characterizing seemingly abstract environmental damages from agricultural production systems. Multiple models are combined to simulate the movement of nutrients through the soil system, enabling the measurement of anthropogenic agricultural impacts on the CZ's regulating services. Results indicate that water quality and atmospheric stabilizing services, measured by soil carbon storage, carbon respiration, and nitrate leaching, among others, can cost more than double the emissions costs estimated in previous studies. Energy efficiency is assessed in addition to environmental impact to demonstrate why the inclusion of CZ services is necessary for accounting for the entire life cycle of agricultural production systems. These results indicate that feed production systems are more energy efficient and less environmentally costly than corn-based ethanol.

  1. Approximate analytical relationships for linear optimal aeroelastic flight control laws

    NASA Astrophysics Data System (ADS)

    Kassem, Ayman Hamdy

    1998-09-01

    This dissertation introduces new methods to uncover functional relationships between design parameters of a contemporary control design technique and the resulting closed-loop properties. Three new methods are developed for generating such relationships through analytical expressions: the Direct Eigen-Based Technique, the Order of Magnitude Technique, and the Cost Function Imbedding Technique. Efforts concentrated on the linear-quadratic state-feedback control-design technique applied to an aeroelastic flight control task. For this specific application, simple and accurate analytical expressions for the closed-loop eigenvalues and zeros in terms of basic parameters such as stability and control derivatives, structural vibration damping and natural frequency, and cost function weights are generated. These expressions explicitly indicate how the weights augment the short period and aeroelastic modes, as well as the closed-loop zeros, and by what physical mechanism. The analytical expressions are used to address topics such as damping, nonminimum phase behavior, stability, and performance with robustness considerations, and design modifications. This type of knowledge is invaluable to the flight control designer and would be more difficult to formulate when obtained from numerical-based sensitivity analysis.

  2. Integrated Analysis and Visualization of Group Differences in Structural and Functional Brain Connectivity: Applications in Typical Ageing and Schizophrenia.

    PubMed

    Langen, Carolyn D; White, Tonya; Ikram, M Arfan; Vernooij, Meike W; Niessen, Wiro J

    2015-01-01

    Structural and functional brain connectivity are increasingly used to identify and analyze group differences in studies of brain disease. This study presents methods to analyze uni- and bi-modal brain connectivity and evaluate their ability to identify differences. Novel visualizations of significantly different connections comparing multiple metrics are presented. On the global level, "bi-modal comparison plots" show the distribution of uni- and bi-modal group differences and the relationship between structure and function. Differences between brain lobes are visualized using "worm plots". Group differences in connections are examined with an existing visualization, the "connectogram". These visualizations were evaluated in two proof-of-concept studies: (1) middle-aged versus elderly subjects; and (2) patients with schizophrenia versus controls. Each included two measures derived from diffusion weighted images and two from functional magnetic resonance images. The structural measures were minimum cost path between two anatomical regions according to the "Statistical Analysis of Minimum cost path based Structural Connectivity" method and the average fractional anisotropy along the fiber. The functional measures were Pearson's correlation and partial correlation of mean regional time series. The relationship between structure and function was similar in both studies. Uni-modal group differences varied greatly between connectivity types. Group differences were identified in both studies globally, within brain lobes and between regions. In the aging study, minimum cost path was highly effective in identifying group differences on all levels; fractional anisotropy and mean correlation showed smaller differences on the brain lobe and regional levels. In the schizophrenia study, minimum cost path and fractional anisotropy showed differences on the global level and within brain lobes; mean correlation showed small differences on the lobe level. 
Only fractional anisotropy and mean correlation showed regional differences. The presented visualizations were helpful in comparing and evaluating connectivity measures on multiple levels in both studies.

  3. Relationship between profitability and type traits and derivation of economic values for reproduction and survival traits in Chianina beef cows.

    PubMed

    Forabosco, F; Bozzi, R; Boettcher, P; Filippini, F; Bijma, P; Van Arendonk, J A M

    2005-09-01

    The objectives of this study were 1) to propose a profit function for Italian Chianina beef cattle; 2) to derive economic values for some biological variables in beef cows, specifically, production expressed as the number of calves born alive per year (NACY), age at the insemination that resulted in the birth of the first calf (FI), and length of productive life (LPL); and 3) to investigate the relationship between the phenotypic profit function and type traits as early predictors of profitability in the Chianina beef cattle population. The average profit was 196 Euros/(cow.yr) over the length of productive life (LPL) and was obtained as the difference between the average income of 1,375 Euros/(cow.yr) of LPL and costs of 1,178 Euros/(cow.yr) of LPL. The mean LPL was equal to 5.97 yr, so the average total phenotypic profit per cow on a lifetime basis was 1,175 Euros. A normative approach was used to derive the economic weights for the biological variables. The most important trait was the number of calves born alive (+4.03 Euros/(cow.yr) and +24.06 Euros/cow). An increase of 1 d in LPL was associated with an increase of +0.19 Euros/(cow.yr) and +1.65 Euros/cow on a lifetime basis. Increasing FI by 1 d decreased profit by 0.42 Euros/(cow.yr) and 2.51 Euros/cow. Phenotypic profit per cow had a heritability of 0.29. Heritabilities for the eight muscularity traits ranged from 0.16 to 0.23, and those for the seven body size traits ranged from 0.21 to 0.30. The conformation trait final score can be used as an early predictor of profitability. The sale price of the animal and differences in the revenue and costs of offspring due to muscularity should be included in a future profit function.

  4. Multimodal Diffuse Optical Imaging

    NASA Astrophysics Data System (ADS)

    Intes, Xavier; Venugopal, Vivek; Chen, Jin; Azar, Fred S.

    Diffuse optical imaging, particularly diffuse optical tomography (DOT), is an emerging clinical modality capable of providing unique functional information, at a relatively low cost, and with nonionizing radiation. Multimodal diffuse optical imaging has enabled a synergistic combination of functional and anatomical information: the quality of DOT reconstructions has been significantly improved by incorporating the structural information derived by the combined anatomical modality. In this chapter, we will review the basic principles of diffuse optical imaging, including instrumentation and reconstruction algorithm design. We will also discuss the approaches for multimodal imaging strategies that integrate DOI with clinically established modalities. The merit of the multimodal imaging approaches is demonstrated in the context of optical mammography, but the techniques described herein can be translated to other clinical scenarios such as brain functional imaging or muscle functional imaging.

  5. Cell-based cytotoxicity assays for engineered nanomaterials safety screening: exposure of adipose derived stromal cells to titanium dioxide nanoparticles.

    PubMed

    Xu, Yan; Hadjiargyrou, M; Rafailovich, Miriam; Mironava, Tatsiana

    2017-07-11

    Increasing production of nanomaterials requires fast and proper assessment of their potential toxicity. Therefore, there is a need to develop new assays that can be performed in vitro, are cost effective, and allow faster screening of engineered nanomaterials (ENMs). Herein, we report that titanium dioxide (TiO2) nanoparticles (NPs) can induce damage to adipose-derived stromal cells (ADSCs) at concentrations that are rated as safe by standard assays such as those measuring proliferation, reactive oxygen species (ROS), and lactate dehydrogenase (LDH) levels. Specifically, we demonstrated that low concentrations of TiO2 NPs, at which cellular LDH, ROS, and proliferation profiles were not affected, induced changes in the ADSCs' secretory function and differentiation capability. These two functions are essential for ADSCs in wound healing, energy expenditure, and metabolism, with serious health implications in vivo. We demonstrated that cytotoxicity assays based on specialized cell functions exhibit greater sensitivity and reveal damage induced by ENMs that is not otherwise detected by traditional ROS, LDH, and proliferation assays. For proper toxicological assessment of ENMs, standard ROS, LDH, and proliferation assays should be combined with assays that investigate cellular functions relevant to the specific cell type.

  6. Organizational Cost of Quality Improvement for Depression Care

    PubMed Central

    Liu, Chuan-Fen; Rubenstein, Lisa V; Kirchner, JoAnn E; Fortney, John C; Perkins, Mark W; Ober, Scott K; Pyne, Jeffrey M; Chaney, Edmund F

    2009-01-01

    Objective We documented organizational costs for depression care quality improvement (QI) to develop an evidence-based, Veterans Health Administration (VA) adapted depression care model for primary care practices that performed well for patients, was sustained over time, and could be spread nationally in VA. Data Sources and Study Setting Project records and surveys from three multistate VA administrative regions and seven of their primary care practices. Study Design Descriptive analysis. Data Collection We documented project time commitments and expenses for 86 clinical QI and 42 technical expert support team participants for 4 years from initial contact through care model design, Plan–Do–Study–Act cycles, and achievement of stable workloads in which models functioned as routine care. We assessed time, salary costs, and costs for conference calls, meetings, e-mails, and other activities. Principal Findings Over an average of 27 months, all clinics began referring patients to care managers. Clinical participants spent 1,086 hours at a cost of $84,438. Technical experts spent 2,147 hours costing $197,787. Eighty-five percent of costs derived from initial regional engagement activities and care model design. Conclusions Organizational costs of the QI process for depression care in a large health care system were significant, and should be accounted for when planning for implementation of evidence-based depression care. PMID:19146566

  7. Optimal investment strategies and hedging of derivatives in the presence of transaction costs (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Muratore-Ginanneschi, Paolo

    2005-05-01

    Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth-optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contribution were previously derived in collaboration with E. Aurell.
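    The growth-optimal criterion underlying this strategy reduces, in the simplest frictionless setting, to the classical Kelly fraction; the following sketch shows the idea (the parameter values and the fixed-width holding interval are illustrative assumptions, not the paper's derivation):

```python
def kelly_fraction(mu, r, sigma):
    """Growth-optimal (Kelly) fraction of wealth to hold in the risky asset
    for a log-utility investor in a frictionless geometric Brownian market."""
    return (mu - r) / sigma ** 2

def holding_interval(f_star, width):
    """With transaction costs, one trades only when the invested fraction
    drifts outside an interval around the ideal fraction (the width is an
    assumed parameter; the paper derives it from cost intensity/horizon)."""
    return (f_star - width, f_star + width)

f_star = kelly_fraction(mu=0.08, r=0.02, sigma=0.20)  # ~ 1.5
lo, hi = holding_interval(f_star, width=0.1)
```

In the transaction-cost setting described in the abstract, the portfolio is rebalanced back toward `f_star` only when the invested fraction exits `(lo, hi)`.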

  8. Adjoint-based optimization of PDEs in moving domains

    NASA Astrophysics Data System (ADS)

    Protas, Bartosz; Liao, Wenyuan

    2008-02-01

    In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
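    The gradient computation via an adjoint system discussed above follows the standard Lagrangian pattern, sketched generically below (the symbols q, φ, ψ are ours, not the paper's notation, and the moving-domain subtleties are omitted):

```latex
% Generic constrained setting: state \phi, control q,
% governing equations R(q,\phi) = 0, cost functional J(q,\phi).
\mathcal{L}(q,\phi,\psi) \;=\; J(q,\phi) \;+\; \psi^{T} R(q,\phi)
% Choosing the adjoint \psi so that first variations in \phi vanish,
\Bigl(\frac{\partial R}{\partial \phi}\Bigr)^{\!T}\psi
  \;=\; -\Bigl(\frac{\partial J}{\partial \phi}\Bigr)^{\!T},
% leaves a gradient free of state sensitivities:
\frac{dJ}{dq} \;=\; \frac{\partial J}{\partial q}
  \;+\; \psi^{T}\,\frac{\partial R}{\partial q}.
```

The paper's observation is that, in a moving domain, forming this adjoint before or after mapping to a fixed domain yields structurally different (though gradient-equivalent) adjoint problems.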

  9. Civil Uses of Remotely Piloted Aircraft

    NASA Technical Reports Server (NTRS)

    Aderhold, J. R.; Gordon, G.; Scott, G. W.

    1976-01-01

    The technology effort required to bring the civil uses of RPVs to fruition is identified and assessed, to determine whether or not the potential market is real and economically practical, the technologies are within reach, the operational problems are manageable, and the benefits are worth the cost. To do so, the economic, technical, and environmental implications are examined. The time frame is 1980-85. Representative uses are selected; detailed functional and performance requirements are derived for RPV systems; and conceptual system designs are devised. Total system cost comparisons are made with non-RPV alternatives. The potential market demand for RPV systems is estimated. Environmental and safety requirements are examined, and legal and regulatory concerns are identified. A potential demand for 2,000-11,000 RPV systems is estimated. Typical cost savings of 25-35% compared to non-RPV alternatives are determined. There appear to be no environmental problems, and the safety issue appears manageable.

  10. Preparation, characterization and environmental/electrochemical energy storage testing of low-cost biochar from natural chitin obtained via pyrolysis at mild conditions

    NASA Astrophysics Data System (ADS)

    Magnacca, Giuliana; Guerretta, Federico; Vizintin, Alen; Benzi, Paola; Valsania, Maria C.; Nisticò, Roberto

    2018-01-01

    Chitin (a biopolymer obtained from the shellfish industry) was used as a precursor for the production of biochars obtained via pyrolysis treatments performed at mild conditions (in the 290-540 °C range). Biochars were physicochemically characterized in order to evaluate the pyrolysis-induced effects in terms of both functional groups and material structure. Moreover, such carbonaceous materials were tested as adsorbent substrates for the removal of target molecules from the aqueous environment as well as in solid-gas experiments, to measure the adsorption capacities and selectivity toward CO2. Lastly, biochars were also investigated as possible cathode materials in sustainable and low-cost electrochemical energy storage devices, such as lithium-sulphur (Li-S) batteries. Interestingly, experimental results evidenced that such chitin-derived biochars obtained via pyrolysis at mild conditions are sustainable, low-cost, and easily scalable alternative materials suitable for both environmental and energy applications.

  11. Reconstruction of a piecewise constant conductivity on a polygonal partition via shape optimization in EIT

    NASA Astrophysics Data System (ADS)

    Beretta, Elena; Micheletti, Stefano; Perotto, Simona; Santacesaria, Matteo

    2018-01-01

    In this paper, we develop a shape optimization-based algorithm for the electrical impedance tomography (EIT) problem of determining a piecewise constant conductivity on a polygonal partition from boundary measurements. The key tool is to use a distributed shape derivative of a suitable cost functional with respect to movements of the partition. Numerical simulations showing the robustness and accuracy of the method are presented for simulated test cases in two dimensions.

  12. Finite difference schemes for long-time integration

    NASA Technical Reports Server (NTRS)

    Haras, Zigo; Taasan, Shlomo

    1993-01-01

    Finite difference schemes for the evaluation of first and second derivatives are presented. These second-order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times four times longer, or more, than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
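    The truncation-error behaviour that such schemes optimize can be seen in miniature with a plain second-order central difference, whose error shrinks quadratically with the step (a generic illustration, not the authors' optimized compact schemes):

```python
import math

def central_diff(f, x, h):
    # Second-order central difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h)
    return (f(x + h) - f(x - h)) / (2.0 * h)

# The truncation error is O(h^2): halving h cuts the error roughly fourfold.
x = 1.0
err_h = abs(central_diff(math.sin, x, 1.0e-2) - math.cos(x))
err_h2 = abs(central_diff(math.sin, x, 0.5e-2) - math.cos(x))
ratio = err_h / err_h2  # ~ 4
```

The compact schemes in the abstract minimize an accumulated (global) version of this error over long integration times, rather than the pointwise leading term shown here.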

  13. Prioritization Methodology for Chemical Replacement

    NASA Technical Reports Server (NTRS)

    Cruit, W.; Schutzenhofer, S.; Goldberg, B.; Everhart, K.

    1993-01-01

    This project serves to define an appropriate methodology for effective prioritization of efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weigh the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results are being implemented as a guideline for consideration for current NASA propulsion systems.

  14. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes the current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happen to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is that computational requirements increase with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with a multirate prediction step. One result is a reduced influence of the prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
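    A drastically simplified, one-step receding-horizon controller for a scalar model illustrates the receding-horizon idea (the paper's controller is multi-step, multirate, and data-based; this sketch assumes a known model):

```python
def receding_horizon_control(a, b, q, r, x0, steps):
    """One-step receding-horizon control of the scalar model
    x[k+1] = a*x[k] + b*u[k], minimising q*x[k+1]**2 + r*u[k]**2
    at every step. The minimiser has a closed form, so no numerical
    optimisation is needed in this toy case."""
    xs, x = [x0], x0
    for _ in range(steps):
        # d/du [ q*(a*x + b*u)**2 + r*u**2 ] = 0  =>  u as below
        u = -q * a * b * x / (r + q * b * b)
        x = a * x + b * u
        xs.append(x)
    return xs

# Even for an open-loop-unstable plant (a = 1.2), the receding-horizon
# feedback drives the state toward zero.
traj = receding_horizon_control(a=1.2, b=1.0, q=1.0, r=0.1, x0=1.0, steps=10)
```

In the paper's setting, the one-step quadratic cost above is replaced by a multi-step cost with a multirate prediction step, and the model itself is identified from input-output data.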

  15. Electrochemical Study of Hydrocarbon-Derived Electrolytes for Supercapacitors

    NASA Astrophysics Data System (ADS)

    Noorden, Zulkarnain A.; Matsumoto, Satoshi

    2013-10-01

    In this paper, we evaluate the essential electrochemical properties - capacitive and resistive behaviors - of hydrocarbon-derived electrolytes for supercapacitor application using cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS). The electrolytes were systematically prepared from three hydrocarbon-derived compounds, which have different molecular structures and functional groups, by treatment with high-concentration sulfuric acid (H2SO4) at room temperature. Two-electrode cells were assembled by sandwiching an electrolyte-containing glass wool separator between two active electrodes of activated carbon sheets. The dc electrical properties of the tested cells in terms of their capacitive behavior were investigated by CV, and EIS was carried out in order to observe the frequency characteristics of the constructed cells. Compared with the tested cell with only high-concentration H2SO4 as the electrolyte, the cells with the derived electrolytes exhibit a capacitance as high as 135 F/g with an improved overall internal resistance of 2.5 Ω. Through the use of a simple preparation method and low-cost precursors, hydrocarbon-derived electrolytes could potentially find large-scale and higher-rating supercapacitor applications.

  16. Synthesis of δ- and α-Carbolines via Nickel-Catalyzed [2 + 2 + 2] Cycloaddition of Functionalized Alkyne-Nitriles with Alkynes.

    PubMed

    Wang, Gaonan; You, Xu; Gan, Yi; Liu, Yuanhong

    2017-01-06

    A new method for the synthesis of δ- and α-carbolines through Ni-catalyzed [2 + 2 + 2] cycloaddition of ynamide-nitriles or alkyne-cyanamides with alkynes has been developed. The catalytic system of NiCl2(DME)/dppp/Zn with a low-cost Ni(II) precursor was utilized for the first time in Ni-catalyzed [2 + 2 + 2] cycloaddition reactions, and the in situ generated Lewis acid may play an important role in the successful transformation. Not only internal alkynes but also terminal alkynes undergo the desired cycloaddition reactions efficiently to furnish the carboline derivatives with wide diversity and functional group tolerance.

  17. Optical properties from time-dependent current-density-functional theory: the case of the alkali metals Na, K, Rb, and Cs

    NASA Astrophysics Data System (ADS)

    Ferradás, R.; Berger, J. A.; Romaniello, Pina

    2018-06-01

    We present the optical conductivity as well as the electron-energy loss spectra of the alkali metals Na, K, Rb, and Cs calculated within time-dependent current-density functional theory. Our ab initio formulation describes from first principles both the Drude-tail and the interband absorption of these metals as well as the most dominant relativistic effects. We show that by using a recently derived current functional [Berger, Phys. Rev. Lett. 115, 137402 (2015)] we obtain an overall good agreement with experiment at a computational cost that is equivalent to the random-phase approximation. We also highlight the importance of the choice of the exchange-correlation potential of the ground state.

  18. Practical aspects of photovoltaic technology, applications and cost (revised)

    NASA Technical Reports Server (NTRS)

    Rosenblum, L.

    1985-01-01

    The purpose of this text is to provide the reader with the background, understanding, and computational tools needed to master the practical aspects of photovoltaic (PV) technology, application, and cost. The focus is on stand-alone, silicon solar cell, flat-plate systems in the range of 1 to 25 kWh/day output. Technology topics covered include operation and performance of each of the major system components (e.g., modules, array, battery, regulators, controls, and instrumentation), safety, installation, operation and maintenance, and electrical loads. Application experience and trends are presented. Indices of electrical service performance - reliability, availability, and voltage control - are discussed, and the known service performance of central station electric grid, diesel-generator, and PV stand-alone systems is compared. PV system sizing methods are reviewed and compared, and a procedure for rapid sizing is described and illustrated by the use of several sample cases. The rapid sizing procedure yields an array and battery size that corresponds to a minimum-cost system for a given load requirement, insolation condition, and desired level of service performance. PV system capital cost and levelized energy cost are derived as functions of service performance and insolation. Estimates of future trends in PV system costs are made.
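    A first-cut version of such a rapid sizing calculation might look as follows (textbook-style formulas with assumed parameter names, not the text's exact procedure):

```python
def size_pv_system(load_kwh_day, peak_sun_hours, derate,
                   autonomy_days, depth_of_discharge):
    """Illustrative first-cut sizing of a stand-alone PV system.

    The array is sized so that its average daily output meets the load
    (peak_sun_hours is the daily insolation in equivalent full-sun hours,
    derate lumps wiring/battery/soiling losses); the battery is sized to
    carry the load through `autonomy_days` of no sun without exceeding
    the allowed depth of discharge. All parameter names are assumptions."""
    array_kw = load_kwh_day / (peak_sun_hours * derate)
    battery_kwh = load_kwh_day * autonomy_days / depth_of_discharge
    return array_kw, battery_kwh

# Example: 10 kWh/day load, 5 sun-hours, 0.8 derate, 3 days autonomy, 80% DOD.
array_kw, battery_kwh = size_pv_system(10.0, 5.0, 0.8, 3, 0.8)  # 2.5 kW, 37.5 kWh
```

In the text's procedure, the array/battery pair is additionally tuned so that the resulting system is minimum-cost for the desired level of service performance.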

  19. A staggered-grid convolutional differentiator for elastic wave modelling

    NASA Astrophysics Data System (ADS)

    Sun, Weijia; Zhou, Binzhong; Fu, Li-Yun

    2015-11-01

    The computation of derivatives in governing partial differential equations is one of the most investigated subjects in the numerical simulation of physical wave propagation. An analytical staggered-grid convolutional differentiator (CD) for first-order velocity-stress elastic wave equations is derived in this paper by inverse Fourier transformation of the band-limited spectrum of a first derivative operator. A taper window function is used to truncate the infinite staggered-grid CD stencil. The truncated CD operator is almost as accurate as the analytical solution, and as efficient as the finite-difference (FD) method. The selection of window functions influences the accuracy of the CD operator in wave simulation. We search for the optimal Gaussian windows for different-order CDs by minimizing the spectral error of the derivative, and compare them with the normal Hanning window function for tapering the CD operators. It is found that the optimal Gaussian window appears to be similar to the Hanning window function for tapering the same CD operator. We investigate the accuracy of the windowed CD operator and the staggered-grid FD method with different orders. Compared to the conventional staggered-grid FD method, a short staggered-grid CD operator achieves an accuracy equivalent to that of a long FD operator, with lower computational costs. For example, an 8th-order staggered-grid CD operator can achieve the same accuracy as a 16th-order staggered-grid FD algorithm but with half of the computational resources and time required. Numerical examples from a homogeneous model and a crustal waveguide model are used to illustrate the superiority of the CD operators over the conventional staggered-grid FD operators for the simulation of wave propagation.
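    The idea of tapering a truncated band-limited differentiator with a window can be sketched on a collocated (non-staggered) grid as follows (a Hanning taper plus a linear-exactness rescaling of our own; the paper's staggered-grid operator and optimal Gaussian windows are not reproduced here):

```python
import math

def windowed_cd_derivative(f, x, h, N):
    """First derivative via the truncated band-limited (sinc-derived)
    differentiator c_n = (-1)**(n+1)/(n*h), tapered with a Hanning window.
    The stencil is rescaled so it stays exact for linear functions."""
    w = [0.5 * (1.0 + math.cos(math.pi * n / (N + 1))) for n in range(1, N + 1)]
    c = [w[n - 1] * (-1) ** (n + 1) / (n * h) for n in range(1, N + 1)]
    # For f(x) = x the antisymmetric sum gives 2 * sum(c_n * n * h);
    # rescale so that this equals 1 (exactness on linears).
    scale = 1.0 / (2.0 * sum(c[n - 1] * n * h for n in range(1, N + 1)))
    return scale * sum(c[n - 1] * (f(x + n * h) - f(x - n * h))
                       for n in range(1, N + 1))

err = abs(windowed_cd_derivative(math.sin, 1.0, 0.1, 8) - math.cos(1.0))
```

Without the taper, the abruptly truncated 1/n coefficients oscillate badly in the wavenumber domain; the window trades a little band-edge accuracy for much smaller ripple, which is the trade-off the paper optimizes over Gaussian windows.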

  20. Separate valuation subsystems for delay and effort decision costs.

    PubMed

    Prévost, Charlotte; Pessiglione, Mathias; Météreau, Elise; Cléry-Melin, Marie-Laure; Dreher, Jean-Claude

    2010-10-20

    Decision making consists of choosing among available options on the basis of a valuation of their potential costs and benefits. Most theoretical models of decision making in behavioral economics, psychology, and computer science propose that the desirability of outcomes expected from alternative options can be quantified by utility functions. These utility functions allow a decision maker to assign subjective values to each option under consideration by weighting the likely benefits and costs resulting from an action, and to select the one with the highest subjective value. Here, we used model-based neuroimaging to test whether the human brain uses separate valuation systems for rewards (erotic stimuli) associated with different types of costs, namely delay and effort. We show that humans devalue rewards associated with physical effort in a strikingly similar fashion to those associated with delays, and that a single computational model derived from economic theory can account for the behavior observed in both delay discounting and effort discounting. However, our neuroimaging data reveal that the human brain uses distinct valuation subsystems for different types of costs, reflecting delayed rewards and future energetic expenses in opposite fashion. The ventral striatum and the ventromedial prefrontal cortex represent the increasing subjective value of delayed rewards, whereas a distinct network, composed of the anterior cingulate cortex and the anterior insula, represents the decreasing value of the effortful option, coding the expected expense of energy. Together, these data demonstrate that the valuation processes underlying different types of costs can be fractionated at the cerebral level.
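    Discounting models of the kind referred to above are commonly of hyperbolic form; a minimal sketch (the discount parameter k and the cost units are illustrative, not fitted values from the study):

```python
def subjective_value(reward, cost, k):
    """Hyperbolic discounting: SV = R / (1 + k * cost), where `cost` may be
    a delay or an effort level on some common scale. A single functional
    family of this kind can fit both delay and effort discounting."""
    return reward / (1.0 + k * cost)

# A smaller immediate/easy reward can beat a larger delayed/effortful one.
sv_easy = subjective_value(reward=10.0, cost=0.0, k=0.1)   # 10.0
sv_hard = subjective_value(reward=20.0, cost=15.0, k=0.1)  # 8.0
choose_easy = sv_easy > sv_hard
```

A decision maker in such models simply selects the option with the larger subjective value, which is what the behavioral fits in the abstract capture.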

  1. Inverse optimal self-tuning PID control design for an autonomous underwater vehicle

    NASA Astrophysics Data System (ADS)

    Rout, Raja; Subudhi, Bidyadhar

    2017-01-01

    This paper presents a new approach to path-following control design for an autonomous underwater vehicle (AUV). A NARMAX model of the AUV is derived first, and its parameters are then adapted online using the recursive extended least squares algorithm. An adaptive Proportional-Integral-Derivative (PID) controller is developed using the derived parameters to accomplish the path-following task of an AUV. The gain parameters of the PID controller are tuned using an inverse optimal control technique, which alleviates the problem of solving the Hamilton-Jacobi equation and also satisfies an error cost function. Simulation studies were pursued to verify the efficacy of the proposed control algorithm. From the obtained results, it is envisaged that the proposed NARMAX model-based self-tuning adaptive PID control provides good path-following performance even in the presence of uncertainty arising from ocean currents or hydrodynamic parameters.
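    The online parameter adaptation step can be illustrated with ordinary recursive least squares on a linear-in-parameters model (a simplified stand-in for the paper's recursive extended least squares on a NARMAX model; the plant and input below are invented):

```python
def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step for y = phi . theta with
    forgetting factor lam. theta and phi are 2-element lists and
    P is a 2x2 covariance, kept explicit for readability."""
    Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
            P[1][0] * phi[0] + P[1][1] * phi[1]]
    denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
    K = [Pphi[0] / denom, Pphi[1] / denom]          # gain vector
    err = y - (phi[0] * theta[0] + phi[1] * theta[1])
    theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
    P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
         [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta, P

# Identify x[k+1] = a*x[k] + b*u[k] with true a = 0.8, b = 0.5.
a_true, b_true = 0.8, 0.5
theta, P = [0.0, 0.0], [[1000.0, 0.0], [0.0, 1000.0]]
x = 1.0
for k in range(50):
    u = 1.0 if k % 2 == 0 else -1.0      # persistently exciting input
    x_next = a_true * x + b_true * u
    theta, P = rls_update(theta, P, [x, u], x_next)
    x = x_next
# theta now approximates [a_true, b_true]
```

The controller in the paper uses the parameters estimated this way to retune the PID gains at every step, which is what makes the scheme self-tuning.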

  2. Sustainable Life Cycles of Natural-Precursor-Derived Nanocarbons.

    PubMed

    Bazaka, Kateryna; Jacob, Mohan V; Ostrikov, Kostya Ken

    2016-01-13

    Sustainable societal and economic development relies on novel nanotechnologies that offer maximum efficiency at minimal environmental cost. Yet, it is very challenging to apply green chemistry approaches across the entire life cycle of nanotech products, from design and nanomaterial synthesis to utilization and disposal. Recently, novel, efficient methods based on nonequilibrium reactive plasma chemistries that minimize the process steps and dramatically reduce the use of expensive and hazardous reagents have been applied to low-cost natural and waste sources to produce value-added nanomaterials with a wide range of applications. This review discusses the distinctive effects of nonequilibrium reactive chemistries and how these effects can aid and advance the integration of sustainable chemistry into each stage of nanotech product life. Examples of the use of enabling plasma-based technologies in sustainable production and degradation of nanotech products are discussed-from selection of precursors derived from natural resources and their conversion into functional building units, to methods for green synthesis of useful naturally degradable carbon-based nanomaterials, to device operation and eventual disintegration into naturally degradable yet potentially reusable byproducts.

  3. Avionics upgrade strategies for the Space Shuttle and derivatives

    NASA Astrophysics Data System (ADS)

    Swaim, Richard A.; Wingert, William B.

    Some approaches aimed at providing a low-cost, low-risk strategy to upgrade the shuttle onboard avionics are described. These approaches allow migration to a shuttle-derived vehicle and provide commonality with Space Station Freedom avionics to the extent practical. Some goals of the Shuttle cockpit upgrade include: offloading of the main computers by distributing avionics display functions, reducing crew workload, reducing maintenance cost, and providing display reconfigurability and context sensitivity. These goals are being met by using a combination of off-the-shelf and newly developed software and hardware. The software will be developed using Ada. Advanced active matrix liquid crystal displays are being used to meet the tight space, weight, and power consumption requirements. Eventually, it is desirable to upgrade the current shuttle data processing system with a system that has more in common with the Space Station data management system. This will involve not only changes in Space Shuttle onboard hardware, but changes in the software. Possible approaches to maximizing the use of the existing software base while taking advantage of new language capabilities are discussed.

  4. Estimating the global costs of vitamin A capsule supplementation: a review of the literature.

    PubMed

    Neidecker-Gonzales, Oscar; Nestel, Penelope; Bouis, Howarth

    2007-09-01

    Vitamin A supplementation reduces child mortality. It is estimated that 500 million vitamin A capsules are distributed annually. Policy recommendations have assumed that the supplementation programs offer a proven technology at a relatively low cost of around US$0.10 per capsule. To review data on costs of vitamin A supplementation to analyze the key factors that determine program costs, and to attempt to model these costs as a function of per capita income figures. Using data from detailed cost studies in seven countries, this study generated comparable cost categories for analysis, and then used the correlation between national incomes and wage rates to postulate a simple model where costs of vitamin A supplementation are regressed on per capita incomes. Costs vary substantially by country and depend principally on the cost of labor, which is highly correlated with per capita income. Two other factors driving costs are whether the program is implemented in conjunction with other health programs, such as National Immunization Days (which lowers costs), and coverage in rural areas (which increases costs). Labor accounts for 70% of total costs, both for paid staff and for volunteers, while the capsules account for less than 5%. Marketing, training, and administration account for the remaining 25%. Total costs are lowest (roughly US$0.50 per capsule) in Africa, where wages and incomes are lowest, US$1 in developing countries in Asia, and US$1.50 in Latin America. Overall, this study derives a much higher global estimate of costs of around US$1 per capsule.
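    The income-based cost model described above amounts to a simple regression of per-capsule cost on per capita income; a sketch with invented data points that merely echo the reported regional pattern (NOT the study's data):

```python
def ols(xs, ys):
    """Ordinary least squares slope and intercept, pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical (per capita income US$, cost per capsule US$) pairs, loosely
# echoing the ~US$0.50 (low-income) to ~US$1.50 (Latin America) range above.
income = [600, 800, 2000, 2500, 4000, 5000, 6500]
cost = [0.45, 0.55, 0.95, 1.05, 1.40, 1.55, 1.80]
slope, intercept = ols(income, cost)
predicted_cost = intercept + slope * 3000  # predicted cost at US$3,000 income
```

Because labor dominates total costs and wages track income, a positive slope of this kind is exactly what the study's model postulates.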

  5. Functional outcome and cost-effectiveness of pulsed electromagnetic fields in the treatment of acute scaphoid fractures: a cost-utility analysis.

    PubMed

    Hannemann, Pascal F W; Essers, Brigitte A B; Schots, Judith P M; Dullaert, Koen; Poeze, Martijn; Brink, Peter R G

    2015-04-11

    Physical forces have been widely used to stimulate bone growth in fracture repair. Adding bone growth stimulation to the conservative treatment regime is more costly than standard health care; however, it might lead to cost savings due to a reduction in the total number of working days lost. This economic evaluation was performed to assess the cost-effectiveness of Pulsed Electromagnetic Fields (PEMF) compared to standard health care in the treatment of acute scaphoid fractures. An economic evaluation was carried out from a societal perspective, alongside a double-blind, randomized, placebo-controlled, multicenter trial involving five centres in The Netherlands. One hundred and two patients with a clinically and radiographically proven fracture of the scaphoid were included in the study and randomly allocated to either active bone growth stimulation or standard health care, using a placebo. All costs (medical costs and costs due to productivity loss) were measured during one year of follow-up. Functional outcome and general health-related quality of life were assessed by the EuroQol-5D and PRWHE (patient-rated wrist and hand evaluation) questionnaires. Utility scores were derived from the EuroQol-5D. The average total number of working days lost was lower in the active PEMF group (9.82 days) than in the placebo group (12.91 days) (p = 0.651). Total medical costs of the intervention group (€1594) were significantly higher than those of standard health care (€875). The mean total QALYs (quality-adjusted life years) were 0.84 for the active PEMF group and 0.85 for the control group. The cost-effectiveness plane shows that the majority of cost-effectiveness ratios fall into the quadrant where PEMF is not only less effective in terms of QALYs but also more costly. This study demonstrates that the desired effects in terms of cost-effectiveness are not met. When comparing the effects of PEMF to standard health care in terms of QALYs, PEMF cannot be considered a cost-effective treatment for acute fractures of the scaphoid bone. Netherlands Trial Register (NTR): NTR2064.
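    The cost-utility comparison reduces to an incremental cost-effectiveness ratio (ICER) with a dominance check; a sketch using the figures quoted in the abstract:

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY.
    Returns None when the new treatment is dominated (costlier and no
    more effective), in which case no ratio is meaningful."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost > 0 and d_qaly <= 0:
        return None  # dominated: more costly, not more effective
    return d_cost / d_qaly

# Figures from the abstract: EUR 1594 vs EUR 875; 0.84 vs 0.85 QALYs.
result = icer(1594, 875, 0.84, 0.85)  # -> None (PEMF is dominated)
```

A dominated intervention, as here, lies in the "more costly, less effective" quadrant of the cost-effectiveness plane regardless of any willingness-to-pay threshold.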

  6. The 25 kW power module evolution study. Part 3: Conceptual designs for power module evolutions. Volume 3: Cost estimates

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Cost data generated for the evolutionary power module concepts selected are reported. The initial acquisition costs (design, development, and protoflight unit test costs) were defined and modeled for the baseline 25 kW power module configurations. By building a parametric model of this initial building block, the cost of the 50 kW and the 100 kW power modules were derived by defining only their configuration and programmatic differences from the 25 kW baseline module. Variations in cost for the quantities needed to fulfill the mission scenarios were derived by applying appropriate learning curves.

  7. A learning curve for solar thermal power

    NASA Astrophysics Data System (ADS)

    Platzer, Werner J.; Dinter, Frank

    2016-05-01

    Photovoltaics started its success story by predicting cost degression as a function of cumulative installed capacity. This so-called learning curve was first published and used for predictions for PV modules; predictions of system cost decrease were developed subsequently. This approach is less sensitive to political decisions and changing market situations than predictions along the time axis. Cost degression due to innovation, scaling effects, improved project management, standardised procedures including the search for better sites, and optimisation of project size are learning effects which can only be realised as projects are developed. Therefore a presentation of CAPEX versus cumulative installed capacity is proposed in order to show politics and the market the possible future advancement of the technology. From the wide range of publications on CSP cost, however, it is difficult to derive a learning curve. A logical cost structure for direct and indirect capital expenditure is needed as the basis for further analysis. Using derived reference costs for typical power plant configurations, predictions of future cost have been made. Levelised cost of electricity for solar thermal power plants should be calculated for individual projects, with different capacity factors in various locations, only on the basis of that cost structure and the learning curve.
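    A learning curve of this kind is conventionally parameterized by a learning rate per doubling of cumulative capacity; a sketch with illustrative numbers (not the paper's derived reference costs):

```python
import math

def learning_curve_cost(c_ref, capacity, capacity_ref, learning_rate):
    """Unit CAPEX after cumulative installed capacity grows from
    capacity_ref to capacity: each doubling of capacity cuts unit cost
    by the learning rate (e.g. 0.10 for a 10% learning rate)."""
    b = math.log2(1.0 - learning_rate)          # learning exponent (b < 0)
    return c_ref * (capacity / capacity_ref) ** b

# One doubling at a 10% learning rate leaves 90% of the reference cost.
c = learning_curve_cost(c_ref=5000.0, capacity=10.0,
                        capacity_ref=5.0, learning_rate=0.10)  # ~ 4500
```

This is why plotting CAPEX against cumulative capacity, as the abstract proposes, produces a straight line on log-log axes whose slope is the learning exponent.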

  8. Polyester fabric sheet layers functionalized with graphene oxide for sensitive isolation of circulating tumor cells.

    PubMed

    Bu, Jiyoon; Kim, Young Jun; Kang, Yoon-Tae; Lee, Tae Hee; Kim, Jeongsuk; Cho, Young-Ho; Han, Sae-Won

    2017-05-01

    The metastasis of cancer is strongly associated with the spread of circulating tumor cells (CTCs). Based on microfluidic devices, which offer rapid recovery of CTCs, a number of studies have demonstrated the potential of CTCs as a diagnostic tool. However, not only the insufficient specificity and sensitivity caused by the rarity and heterogeneity of CTCs, but also high-cost fabrication processes limit the commercial use of CTC-based medical devices. Here, we present low-cost fabric sheet layers for CTC isolation, composed of polyester monofilament yarns. The fabric sheet layers are easily functionalized with graphene oxide (GO), which is beneficial for improving both sensitivity and specificity. The GO modification of the low-cost fabrics enhances the binding of anti-EpCAM antibodies, resulting in a 10-25% increase in capture efficiency compared to the surface without GO (anti-EpCAM antibodies attached directly to the fabric sheets), while achieving high purity by isolating only 50-300 leukocytes per 1 mL of human blood. We investigated CTCs in ten human blood samples and successfully isolated 4-42 CTCs/mL from cancer patients, while no cancerous cells were found among healthy donors. These remarkable results show the feasibility of GO-functionalized fabric sheet layers for use in various CTC-based clinical applications, with high sensitivity and selectivity.

  9. Construction and Operation Costs of Wastewater Treatment and Implications for the Paper Industry in China.

    PubMed

    Niu, Kunyu; Wu, Jian; Yu, Fang; Guo, Jingli

    2016-11-15

    This paper develops a construction and operation cost model of wastewater treatment for the paper industry in China and explores the main factors that determine these costs. Previous models mainly involved factors relating to the treatment scale and efficiency of treatment facilities when deriving the cost function. We considered the factors more comprehensively by adding a regional variable to represent the economic development level, a corporate ownership factor to represent plant characteristics, a subsector variable to capture pollutant characteristics, and a detailed-classification technology variable. We applied a unique data set from a national pollution source census for the model simulation. The major findings include the following: (1) Wastewater treatment costs in the paper industry are determined by scale, technology, degree of treatment, ownership, and regional factors; (2) Wastewater treatment costs exhibit strong economies of scale; (3) The current level of pollutant discharge fees is far lower than the marginal treatment costs for meeting the wastewater discharge standard. Key implications are as follows: (1) Cost characteristics and impact factors should be fully recognized when planning or making policies relating to wastewater treatment projects or technology development; (2) There is potential to reduce treatment costs by centralizing wastewater treatment via industrial parks; (3) Wastewater discharge fee rates should be increased; (4) Energy-efficient technology should become the future focus of wastewater treatment.

  10. Some new results on stability and synchronization for delayed inertial neural networks based on non-reduced order method.

    PubMed

    Li, Xuanying; Li, Xiaotong; Hu, Cheng

    2017-12-01

    In this paper, without transforming the second-order inertial neural networks into first-order differential systems via variable substitutions, asymptotic stability and synchronization for a class of delayed inertial neural networks are investigated. Firstly, a new Lyapunov functional is constructed to directly establish the asymptotic stability of the inertial neural networks, and some new stability criteria are derived by means of the Barbalat Lemma. Additionally, by designing a new feedback control strategy, the asymptotic synchronization of the addressed inertial networks is studied and some effective conditions are obtained. To reduce the control cost, an adaptive control scheme is designed to realize the asymptotic synchronization. It is noteworthy that the dynamical behaviors of inertial neural networks are analyzed directly in this paper by constructing some new Lyapunov functionals; this is entirely different from the traditional reduced-order variable substitution method. Finally, some numerical simulations are given to demonstrate the effectiveness of the derived theoretical results.

  11. Near-Optimal Re-Entry Trajectories for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Chou, H.-C.; Ardema, M. D.; Bowles, J. V.

    1997-01-01

    A near-optimal guidance law for the descent trajectory for earth orbit re-entry of a fully reusable single-stage-to-orbit pure rocket launch vehicle is derived. A methodology is developed to investigate using both bank angle and altitude as control variables and selecting parameters that maximize various performance functions. The method is based on the energy-state model of the aircraft equations of motion. The major task of this paper is to obtain optimal re-entry trajectories under a variety of performance goals: minimum time, minimum surface temperature, minimum heating, and maximum heading change; four classes of trajectories were investigated: no banking, optimal left turn banking, optimal right turn banking, and optimal bank chattering. The cost function is in general a weighted sum of all performance goals. In particular, the trade-off between minimizing heat load into the vehicle and maximizing cross range distance is investigated. The results show that the optimization methodology can be used to derive a wide variety of near-optimal trajectories.

  12. Probabilistic dual heuristic programming-based adaptive critic

    NASA Astrophysics Data System (ADS)

    Herzallah, Randa

    2010-02-01

    Adaptive critic (AC) methods have common roots as generalisations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, non-linear and non-stationary environments. In this study, a novel probabilistic dual heuristic programming (DHP)-based AC controller is proposed. Distinct from current approaches, the proposed probabilistic DHP AC method takes uncertainties of the forward model and inverse controller into consideration. Therefore, it is suitable for deterministic and stochastic control problems characterised by functional uncertainty. Theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the probabilistic critic network is then calculated and shown to be equal to the analytically derived correct value. A full derivation of the Riccati solution for this non-standard stochastic linear quadratic control problem is also provided. Moreover, the performance of the proposed probabilistic controller is demonstrated on linear and non-linear control examples.

  13. A method for determining optimum phasing of a multiphase propulsion system for a single-stage vehicle with linearized inert weight

    NASA Technical Reports Server (NTRS)

    Martin, J. A.

    1974-01-01

    A general analytical treatment is presented of a single-stage vehicle with multiple propulsion phases. A closed-form solution for the cost and for the performance and a derivation of the optimal phasing of the propulsion are included. Linearized variations in the inert weight elements are included, and the function to be minimized can be selected. The derivation of optimal phasing results in a set of nonlinear algebraic equations for optimal fuel volumes, for which a solution method is outlined. Three specific example cases are analyzed: minimum gross lift-off weight, minimum inert weight, and a minimized general function for a two-phase vehicle. The results for the two-phase vehicle are applied to the dual-fuel rocket. Comparisons with single-fuel vehicles indicate that dual-fuel vehicles can have lower inert weight either by development of a dual-fuel engine or by parallel burning of separate engines from lift-off.

  14. Isolation and expansion of human pluripotent stem cell-derived hepatic progenitor cells by growth factor defined serum-free culture conditions.

    PubMed

    Fukuda, Takayuki; Takayama, Kazuo; Hirata, Mitsuhi; Liu, Yu-Jung; Yanagihara, Kana; Suga, Mika; Mizuguchi, Hiroyuki; Furue, Miho K

    2017-03-15

    The limited growth potential, narrow range of sources, and batch-to-batch differences in variability and function of primary hepatocytes pose a problem for predicting drug-induced hepatotoxicity during drug development. Human pluripotent stem cell (hPSC)-derived hepatocyte-like cells in vitro are expected to serve as a tool for predicting drug-induced hepatotoxicity. Several studies have already reported efficient methods for differentiating hPSCs into hepatocyte-like cells; however, the differentiation process is time-consuming, labor-intensive, cost-intensive, and unstable. To solve this problem, expansion culture of hPSC-derived hepatic progenitor cells, including hepatic stem cells and hepatoblasts, which can self-renew and differentiate into hepatocytes, should be valuable as a source of hepatocytes. However, the mechanisms of the expansion of hPSC-derived hepatic progenitor cells are not yet fully understood. In this study, to isolate hPSC-derived hepatic progenitor cells, we sought to develop serum-free, growth factor defined culture conditions using defined components. Our culture conditions were able to isolate and grow hPSC-derived hepatic progenitor cells which could differentiate into hepatocyte-like cells through hepatoblast-like cells. We confirmed that the hepatocyte-like cells prepared by our methods were able to increase gene expression of cytochrome P450 enzymes upon exposure to rifampicin, phenobarbital, or omeprazole. The isolation and expansion of hPSC-derived hepatic progenitor cells in defined culture conditions should have advantages in terms of detecting accurate effects of exogenous factors on hepatic lineage differentiation, understanding mechanisms underlying the self-renewal ability of hepatic progenitor cells, and stably supplying functional hepatic cells. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  15. PID Tuning Using Extremum Seeking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Killingsworth, N; Krstic, M

    2005-11-15

    Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT). 
IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system, see [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
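The discrete extremum-seeking loop described here can be sketched as follows. The quadratic cost below merely stands in for the experimentally measured step-response cost, and all gains, probing frequencies, and the cost itself are illustrative assumptions rather than the authors' settings.

```python
import math

def cost(kp, ki):
    # Stand-in for the closed-loop step-response cost J(theta); in the
    # article this value would come from an experiment, not a formula.
    return (kp - 2.0) ** 2 + (ki - 0.5) ** 2

# Discrete extremum seeking: perturb each parameter with its own
# probing frequency, high-pass the measured cost, demodulate to
# estimate the gradient, and integrate downhill.
kp, ki = 0.0, 0.0
a, gamma, alpha = 0.2, 0.1, 0.2
w1, w2 = 1.3, 2.1                    # rad/iteration, incommensurate
j_avg = cost(kp, ki)                 # running average acts as a low-pass
for k in range(5000):
    p1, p2 = a * math.cos(w1 * k), a * math.cos(w2 * k)
    j = cost(kp + p1, ki + p2)
    j_avg += alpha * (j - j_avg)
    hp = j - j_avg                   # high-passed cost
    kp -= gamma * hp * math.cos(w1 * k)   # demodulate and integrate
    ki -= gamma * hp * math.cos(w2 * k)

print(round(kp, 1), round(ki, 1))    # settles near the minimizer (2.0, 0.5)
```

Demodulating the high-passed cost with each probing signal isolates that parameter's component of the gradient, which is why distinct, incommensurate frequencies are needed.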

  16. Cost-effectiveness of surgical decompression for space-occupying hemispheric infarction.

    PubMed

    Hofmeijer, Jeannette; van der Worp, H Bart; Kappelle, L Jaap; Eshuis, Sara; Algra, Ale; Greving, Jacoba P

    2013-10-01

    Surgical decompression reduces mortality and increases the probability of a favorable functional outcome after space-occupying hemispheric infarction. Its cost-effectiveness is uncertain. We assessed clinical outcomes, costs, and cost-effectiveness for the first 3 years in patients who were randomized to surgical decompression or best medical treatment within 48 hours after symptom onset in the Hemicraniectomy After Middle Cerebral Artery Infarction With Life-Threatening Edema Trial (HAMLET). Data on medical consumption were derived from case record files, hospital charts, and general practitioners. We calculated costs per quality-adjusted life year (QALY). Uncertainty was assessed with bootstrapping. A Markov model was constructed to estimate costs and health outcomes after 3 years. Of 39 patients enrolled within 48 hours, 21 were randomized to surgical decompression. After 3 years, 5 surgical (24%) and 14 medical patients (78%) had died. In the first 3 years after enrollment, operated patients had more QALYs than medically treated patients (mean difference, 1.0 QALY [95% confidence interval, 0.6-1.4]), but at higher costs (mean difference, €127,000 [95% confidence interval, 73,100-181,000]), indicating incremental costs of €127,000 per QALY gained. Ninety-eight percent of incremental cost-effectiveness ratios replicated by bootstrapping were >€80,000 per QALY gained. Markov modeling suggested costs of ≈€60,000 per QALY gained for a patient's lifetime. Surgical decompression for space-occupying infarction results in an increase in QALYs, but at very high costs. http://www.controlled-trials.com. Unique identifier: ISRCTN94237756.
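The ICER and bootstrap computations reported above follow a simple recipe, sketched here on invented per-patient figures (these numbers are illustrative, not the HAMLET trial data):

```python
import random

random.seed(1)

# Illustrative per-patient (cost in euro, QALYs over 3 years) outcomes.
surgical = [(260_000, 1.6), (240_000, 1.2), (280_000, 1.5),
            (230_000, 1.1), (270_000, 1.4)] * 4
medical = [(120_000, 0.3), (110_000, 0.2), (140_000, 0.5),
           (100_000, 0.1), (130_000, 0.4)] * 4

def icer(a, b):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained.
    d_cost = sum(c for c, _ in a) / len(a) - sum(c for c, _ in b) / len(b)
    d_qaly = sum(q for _, q in a) / len(a) - sum(q for _, q in b) / len(b)
    return d_cost / d_qaly

point = icer(surgical, medical)

# Bootstrap resampling expresses the uncertainty around the ICER,
# analogous to the abstract's ">€80,000 per QALY" statement.
reps = [icer(random.choices(surgical, k=len(surgical)),
             random.choices(medical, k=len(medical)))
        for _ in range(1000)]
share_above_80k = sum(r > 80_000 for r in reps) / len(reps)
print(round(point), share_above_80k)
```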

  17. Prioritization methodology for chemical replacement

    NASA Technical Reports Server (NTRS)

    Goldberg, Ben; Cruit, Wendy; Schutzenhofer, Scott

    1995-01-01

    This methodology serves to define a system for effective prioritization of efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semi-quantitative approach derived from quality function deployment techniques (QFD Matrix). QFD is a conceptual map that provides a method of transforming customer wants and needs into quantitative engineering terms. This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives.
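A QFD-style prioritization of this kind reduces to a weighted scoring matrix; the criteria weights, candidate names, and scores below are invented purely for illustration:

```python
# Hypothetical QFD-style prioritization: criteria weights times
# candidate scores give an overall priority for each candidate
# replacement technology.
criteria = {"environmental": 0.30, "cost": 0.20, "safety": 0.25,
            "reliability": 0.15, "programmatic": 0.10}

candidates = {
    "solvent A": {"environmental": 9, "cost": 3, "safety": 7,
                  "reliability": 5, "programmatic": 4},
    "solvent B": {"environmental": 5, "cost": 8, "safety": 6,
                  "reliability": 7, "programmatic": 6},
}

priority = {name: sum(criteria[c] * score[c] for c in criteria)
            for name, score in candidates.items()}
ranked = sorted(priority, key=priority.get, reverse=True)
print(ranked, priority)
```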

  18. Optimizing Economic Indicators in the Case of Using Two Types of State-Subsidized Chemical Fertilizers for Agricultural Production

    NASA Astrophysics Data System (ADS)

    Boldea, M.; Sala, F.

    2010-09-01

    We assume that the mathematical relation between agricultural production f(x, y) and the two types of fertilizers x and y is given by function (1). The coefficients that appear are determined using the least squares method against the experimental data. We took into consideration the following economic indicators: absolute benefit, relative benefit, profitability, and cost price. These are maximized or minimized, and the optimal solutions are obtained by setting the partial derivatives to zero.
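A sketch of the two-step procedure, assuming (since function (1) is not reproduced here) a concave quadratic response surface, with synthetic data standing in for the field experiments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed concave quadratic yield response to fertilizer doses x, y:
#   f(x, y) = a0 + a1*x + a2*y + a3*x^2 + a4*y^2 + a5*x*y
true = np.array([10.0, 0.8, 0.6, -0.004, -0.003, -0.001])
x = rng.uniform(0, 120, 80)
y = rng.uniform(0, 120, 80)
X = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
f = X @ true + rng.normal(0, 0.2, 80)          # "experimental" data

# Step 1: least squares fit of the coefficients.
a = np.linalg.lstsq(X, f, rcond=None)[0]

# Step 2: set the partial derivatives df/dx = df/dy = 0,
# a 2x2 linear system for the production-maximizing doses.
A = np.array([[2 * a[3], a[5]], [a[5], 2 * a[4]]])
b = -np.array([a[1], a[2]])
x_opt, y_opt = np.linalg.solve(A, b)
print(round(x_opt, 1), round(y_opt, 1))
```

With the assumed coefficients the true optimum is near (89.4, 85.1), and the fitted one lands close to it.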

  19. A new approach to approximating the linear quadratic optimal control law for hereditary systems with control delays

    NASA Technical Reports Server (NTRS)

    Milman, M. H.

    1985-01-01

    A factorization approach is presented for deriving approximations to the optimal feedback gain for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the feedback kernels.

  20. 'Emerging technologies for the changing global market' - Prioritization methodology for chemical replacement

    NASA Technical Reports Server (NTRS)

    Cruit, Wendy; Schutzenhofer, Scott; Goldberg, Ben; Everhart, Kurt

    1993-01-01

    This project served to define an appropriate methodology for effective prioritization of technology efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semiquantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results will be implemented as a guideline for consideration for current NASA propulsion systems.

  1. Emerging technologies for the changing global market

    NASA Technical Reports Server (NTRS)

    Cruit, Wendy; Schutzenhofer, Scott; Goldberg, Ben; Everhart, Kurt

    1993-01-01

    This project served to define an appropriate methodology for effective prioritization of technology efforts required to develop replacement technologies mandated by imposed and forecast legislation. The methodology used is a semi-quantitative approach derived from quality function deployment techniques (QFD Matrix). This methodology aims to weight the full environmental, cost, safety, reliability, and programmatic implications of replacement technology development to allow appropriate identification of viable candidates and programmatic alternatives. The results will be implemented as a guideline for consideration for current NASA propulsion systems.

  2. Climate targets and cost-effective climate stabilization pathways

    NASA Astrophysics Data System (ADS)

    Held, H.

    2015-08-01

    Climate economics has developed two main tools to derive an economically adequate response to the climate problem. Cost-benefit analysis weighs any available information on mitigation costs and benefits and thereby derives an "optimal" global mean temperature. By contrast, cost-effectiveness analysis derives the costs of potential policy targets and the corresponding cost-minimizing investment paths. The article highlights the pros and cons of both approaches and then focuses on the implications of a policy that strives to limit global warming to 2 °C compared to pre-industrial values. The related mitigation costs and changes in the energy sector are summarized according to the IPCC report of 2014. The article then points to conceptual difficulties when internalizing uncertainty in these types of analyses and suggests pragmatic solutions. Key statements on mitigation economics remain valid under uncertainty when given the adequate interpretation. Furthermore, the expected economic value of perfect climate information is found to be on the order of hundreds of billions of euros per year if a 2 °C policy were requested. Finally, the prospects of climate policy are sketched.

  3. Systematic and Automated Development of Quantum Mechanically Derived Force Fields: The Challenging Case of Halogenated Hydrocarbons.

    PubMed

    Prampolini, Giacomo; Campetella, Marco; De Mitri, Nicola; Livotto, Paolo Roberto; Cacelli, Ivo

    2016-11-08

    A robust and automated protocol for the derivation of sound force field parameters, suitable for condensed-phase classical simulations, is here tested and validated on several halogenated hydrocarbons, a class of compounds for which standard force fields have often been reported to deliver rather inaccurate performances. The major strength of the proposed protocol is that all of the parameters are derived only from first principles because all of the information required is retrieved from quantum mechanical data, purposely computed for the investigated molecule. This a priori parametrization is carried out separately for the intra- and intermolecular contributions to the force fields, respectively exploiting the Joyce and Picky programs, previously developed in our group. To avoid high computational costs, all quantum mechanical calculations were performed exploiting the density functional theory. Because the choice of the functional is known to be crucial for the description of the intermolecular interactions, a specific procedure is proposed, which allows for a reliable benchmark of different functionals against higher-level data. The intramolecular and intermolecular contributions are eventually joined together, and the resulting quantum mechanically derived force field is thereafter employed in lengthy molecular dynamics simulations to compute several thermodynamic properties that characterize the resulting bulk phase. The accuracy of the proposed parametrization protocol is finally validated by comparing the computed macroscopic observables with the available experimental counterparts. It is found that, on average, the proposed approach is capable of yielding a consistent description of the investigated set, often outperforming the literature standard force fields, or at least delivering results of similar accuracy.

  4. The Economics of NASA Mission Cost Reserves

    NASA Technical Reports Server (NTRS)

    Whitley, Sally; Shinn, Stephen

    2012-01-01

    Increases in NASA mission costs are well-noted but not well-understood, and there is little evidence that they are decreasing in frequency or amount over time. The need to control spending has led to analysis of the causes and magnitude of historical mission overruns, and many program control efforts are being implemented to attempt to prevent or mitigate the problem (NPR 7120). However, cost overruns have not abated, and while some direct causes of increased spending may be obvious (requirements creep, launch delays, directed changes, etc.), the underlying impetus to spend past the original budget may be more subtle. Gaining better insight into the causes of cost overruns will help NASA and its contracting organizations to avoid them. This paper hypothesizes that one cause of NASA mission cost overruns is that the availability of reserves gives project team members an incentive to make decisions and behave in ways that increase costs. We theorize that the presence of reserves is a contributing factor to cost overruns because it causes organizations to use their funds less efficiently or to control spending less effectively. We draw a comparison to the insurance industry concept of moral hazard, the phenomenon that the presence of insurance causes insureds to have more frequent and higher insurance losses, and we attempt to apply actuarial techniques to quantify the increase in the expected cost of a mission due to the availability of reserves. We create a theoretical model of reserve spending motivation by defining a variable ReserveSpending as a function of total reserves. This function has a positive slope; for every dollar of reserves available, there is a positive probability of spending it. Finally, the function should be concave down; the probability of spending each incremental dollar of reserves decreases progressively. 
We test the model against available NASA CADRe data by examining missions with reserve dollars initially available and testing whether they are more likely to spend those dollars, and whether larger levels of reserves lead to higher cost overruns. Finally, we address the question of how to prevent reserves from increasing mission spending without increasing cost risk to projects budgeted without any reserves. Is there a "sweet spot"? How can we derive the maximum benefit associated with risk reduction from reserves while minimizing the effects of reserve spending motivation?
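One concave-down candidate for the ReserveSpending function, chosen purely for illustration (the abstract states the qualitative properties but not a functional form), is an exponentially decaying marginal spending probability:

```python
import math

P0, K = 0.9, 0.02  # assumed: 90% chance of spending the first reserve
                   # dollar, decaying 2% per $M of reserves (illustrative)

def marginal_spend_probability(r):
    # Positive everywhere (positive slope of ReserveSpending) and
    # strictly decreasing (concave-down ReserveSpending).
    return P0 * math.exp(-K * r)

def expected_reserve_spending(reserves):
    # Integral of the marginal probability from 0 to `reserves` ($M).
    return (P0 / K) * (1 - math.exp(-K * reserves))

s50, s100 = expected_reserve_spending(50), expected_reserve_spending(100)
print(round(s50, 1), round(s100, 1))  # the second $50M adds less than the first
```

The "sweet spot" question then becomes a trade-off between this expected extra spending and the risk-reduction value of holding reserves.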

  5. Joint brain connectivity estimation from diffusion and functional MRI data

    NASA Astrophysics Data System (ADS)

    Chu, Shu-Hsien; Lenglet, Christophe; Parhi, Keshab K.

    2015-03-01

    Estimating brain wiring patterns is critical to better understand the brain organization and function. Anatomical brain connectivity models axonal pathways, while the functional brain connectivity characterizes the statistical dependencies and correlation between the activities of various brain regions. The synchronization of brain activity can be inferred through the variation of blood-oxygen-level dependent (BOLD) signal from functional MRI (fMRI) and the neural connections can be estimated using tractography from diffusion MRI (dMRI). Functional connections between brain regions are supported by anatomical connections, and the synchronization of brain activities arises through sharing of information in the form of electro-chemical signals on axon pathways. Jointly modeling fMRI and dMRI data may improve the accuracy in constructing anatomical connectivity as well as functional connectivity. Such an approach may lead to novel multimodal biomarkers potentially able to better capture functional and anatomical connectivity variations. We present a novel brain network model which jointly models the dMRI and fMRI data to improve the anatomical connectivity estimation and extract the anatomical subnetworks associated with specific functional modes by constraining the anatomical connections as structural supports to the functional connections. The key idea is similar to a multi-commodity flow optimization problem that minimizes the cost or maximizes the efficiency for flow configuration and simultaneously fulfills the supply-demand constraint for each commodity. In the proposed network, the nodes represent the grey matter (GM) regions providing brain functionality, and the links represent white matter (WM) fiber bundles connecting those regions and delivering information. The commodities can be thought of as the information corresponding to brain activity patterns as obtained for instance by independent component analysis (ICA) of fMRI data. 
The concept of information flow is introduced and used to model the propagation of information between GM areas through WM fiber bundles. The link capacity, i.e., ability to transfer information, is characterized by the relative strength of fiber bundles, e.g., fiber count gathered from the tractography of dMRI data. The node information demand is considered to be proportional to the correlation between neural activity at various cortical areas involved in a particular functional mode (e.g. visual, motor, etc.). These two properties lead to the link capacity and node demand constraints in the proposed model. Moreover, the information flow of a link cannot exceed the demand from either end node. This is captured by the feasibility constraints. Two different cost functions are considered in the optimization formulation in this paper. The first cost function, the reciprocal of fiber strength represents the unit cost for information passing through the link. In the second cost function, a min-max (minimizing the maximal link load) approach is used to balance the usage of each link. Optimizing the first cost function selects the pathway with strongest fiber strength for information propagation. In the second case, the optimization procedure finds all the possible propagation pathways and allocates the flow proportionally to their strength. Additionally, a penalty term is incorporated with both the cost functions to capture the possible missing and weak anatomical connections. With this set of constraints and the proposed cost functions, solving the network optimization problem recovers missing and weak anatomical connections supported by the functional information and provides the functional-associated anatomical subnetworks. Feasibility is demonstrated using realistic diffusion and functional MRI phantom data. 
It is shown that the proposed model recovers the maximum number of true connections, with the fewest false connections, when compared with the connectivity derived from a joint probabilistic model using the expectation-maximization (EM) algorithm presented in a prior work. We also apply the proposed method to data provided by the Human Connectome Project (HCP).
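Under the first cost function, selecting the strongest pathway amounts to a shortest-path search with link cost equal to the reciprocal fiber strength. A minimal sketch on a toy network (region names and fiber counts invented for illustration):

```python
import heapq

def strongest_pathway(fiber_count, source, target):
    """Dijkstra over link costs 1/strength: minimizing the summed
    reciprocal fiber count selects the pathway with the strongest
    fiber bundles (the first cost function described above)."""
    graph = {}
    for (u, v), strength in fiber_count.items():
        graph.setdefault(u, []).append((v, 1.0 / strength))
        graph.setdefault(v, []).append((u, 1.0 / strength))
    dist, prev = {source: 0.0}, {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u]:
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                prev[v] = u
                heapq.heappush(pq, (d + c, v))
    path, node = [target], target
    while node != source:          # walk predecessors back to the source
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy GM regions with fiber counts from tractography (illustrative).
fibers = {("V1", "V2"): 800, ("V1", "PCC"): 50,
          ("V2", "PCC"): 600, ("V1", "M1"): 300, ("M1", "PCC"): 200}
print(strongest_pathway(fibers, "V1", "PCC"))  # detour via the strong links
```

The direct V1-PCC link has cost 1/50 = 0.02, while the V1-V2-PCC detour costs 1/800 + 1/600 ≈ 0.003, so the strong indirect pathway wins.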

  6. Flexible polyurethane foam modelling and identification of viscoelastic parameters for automotive seating applications

    NASA Astrophysics Data System (ADS)

    Deng, R.; Davies, P.; Bajaj, A. K.

    2003-05-01

    A hereditary model and a fractional derivative model for the dynamic properties of flexible polyurethane foams used in automotive seat cushions are presented. Non-linear elastic and linear viscoelastic properties are incorporated into these two models. A polynomial function of compression is used to represent the non-linear elastic behavior. The viscoelastic property is modelled by a hereditary integral with a relaxation kernel consisting of two exponential terms in the hereditary model and by a fractional derivative term in the fractional derivative model. The foam is used as the only viscoelastic component in a foam-mass system undergoing uniaxial compression. One-term harmonic balance solutions are developed to approximate the steady state response of the foam-mass system to the harmonic base excitation. System identification procedures based on the direct non-linear optimization and a sub-optimal method are formulated to estimate the material parameters. The effects of the choice of the cost function, frequency resolution of data and imperfections in experiments are discussed. The system identification procedures are also applied to experimental data from a foam-mass system. The performances of the two models for data at different compression and input excitation levels are compared, and modifications to the structure of the fractional derivative model are briefly explored. The role of the viscous damping term in both types of model is discussed.

  7. Healthcare tariffs for specialist inpatient neurorehabilitation services: rationale and development of a UK casemix and costing methodology.

    PubMed

    Turner-Stokes, Lynne; Sutch, Stephen; Dredge, Robert

    2012-03-01

    To describe the rationale and development of a casemix model and costing methodology for tariff development for specialist neurorehabilitation services in the UK. Patients with complex needs incur higher treatment costs. Fair payment should be weighted in proportion to the costs of providing treatment, and should allow for variation over time. Casemix model and band-weighting: Case complexity is measured by the Rehabilitation Complexity Scale (RCS). Cases are divided into five bands of complexity, based on the total RCS score. The principal determinant of costs in rehabilitation is staff time. Total staff hours/week (estimated from the Northwick Park Nursing and Therapy Dependency Scales) are analysed within each complexity band, through cross-sectional analysis of parallel ratings. A 'band-weighting' factor is derived from the relative proportions of staff time within each of the five bands. Total unit treatment costs are obtained from retrospective analysis of provider hospitals' budget and accounting statements. Mean bed-day costs (total unit cost/occupied bed days) are divided broadly into 'variable' and 'non-variable' components. In the weighted costing model, the band-weighting factor is applied to the variable portion of the bed-day cost to derive a banded cost, and thence a set of cost-multipliers. Preliminary data from one unit are presented to illustrate how this weighted costing model will be applied to derive a multilevel banded payment model, based on serial complexity ratings, to allow for change over time.
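The weighted costing model can be sketched arithmetically; the bed-day cost, variable share, and band weights below are illustrative placeholders, not the published figures:

```python
# Band-weighted costing sketch: the band-weighting factor scales only
# the variable (staff-driven) share of the mean bed-day cost.
bed_day_cost = 600.0        # total unit cost / occupied bed days (currency units)
variable_share = 0.6        # complexity-sensitive portion of the cost
band_weights = {1: 0.5, 2: 0.8, 3: 1.0, 4: 1.3, 5: 1.7}  # from staff hours

fixed = bed_day_cost * (1 - variable_share)
banded_cost = {band: fixed + bed_day_cost * variable_share * w
               for band, w in band_weights.items()}
multipliers = {band: cost / bed_day_cost
               for band, cost in banded_cost.items()}
print(banded_cost[5], round(multipliers[5], 2))
```

A band with weight 1.0 reproduces the mean bed-day cost exactly, while the highest-complexity band pays a multiple of it.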

  8. Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data

    PubMed Central

    CHEN, SHUAI; ZHAO, HONGWEI

    2013-01-01

    Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
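A minimal sketch of the inverse-probability-weighted mean-cost estimator that the paper relates its algorithm to, simplified for illustration (distinct follow-up times, no partitioning of the time axis):

```python
def ipw_mean_cost(times, died, costs):
    """Inverse-probability-weighted mean cost under right censoring:
    each observed (uncensored) cost is weighted by 1/K(T_i), where K
    is the Kaplan-Meier survival function of the *censoring* time.
    Simplified sketch: assumes distinct follow-up times."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    K, K_at = 1.0, {}
    for i in order:
        K_at[i] = K                      # K just before subject i's time
        if not died[i]:                  # censoring counts as the "event"
            K *= (at_risk - 1) / at_risk
        at_risk -= 1
    n = len(times)
    return sum(costs[i] / K_at[i] for i in range(n) if died[i]) / n

# With no censoring, every weight is 1 and the estimator reduces to
# the plain sample mean of the costs.
print(ipw_mean_cost([1, 2, 3], [1, 1, 1], [10, 20, 30]))  # → 20.0
```

Intuitively, a death observed despite a censoring hazard stands in for similar patients who were censored earlier, hence the upweighting by 1/K.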

  9. Which factors affect software projects maintenance cost more?

    PubMed

    Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nafiseh

    2013-03-01

    The software industry has made significant progress in recent years. The entire life of software includes two phases: production and maintenance. Software maintenance cost is growing steadily, and estimates show that about 90% of software life cost is related to the maintenance phase. Extracting and considering the factors affecting software maintenance cost helps to estimate the cost and to reduce it by controlling those factors. In this study, the factors affecting software maintenance cost were determined and then ranked by priority, after which effective ways to reduce maintenance costs were presented. This is an applied research study. Fifteen software systems related to health care center information systems at Isfahan University of Medical Sciences and its hospitals were studied in the years 2010 to 2011. Among medical software maintenance team members, 40 were selected as a sample. After interviews with experts in this field, factors affecting maintenance cost were determined. To prioritize the derived factors by the analytic hierarchy process (AHP), measurement criteria (the factors found) were first appointed by members of the maintenance team and eventually prioritized with the help of EC software. Based on the results of this study, 32 factors were obtained and classified into six groups. "Project" was ranked as the feature with the greatest effect on maintenance cost. By taking into account major elements such as careful feasibility studies of IT projects, full documentation, and involving the designers in the maintenance phase, good results can be achieved in reducing maintenance costs and increasing the longevity of the software.
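The AHP ranking step can be sketched with the geometric-mean approximation to the priority vector; the factor names and pairwise judgments below are invented for illustration, not taken from the study:

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the geometric-mean
    (row products) method on a reciprocal pairwise-comparison matrix."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative 3-factor comparison (e.g. project, personnel, tools):
# entry [i][j] says how much more factor i matters than factor j.
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_priorities(M)
print([round(x, 3) for x in w])  # the first factor gets the highest weight
```

For a perfectly consistent matrix the geometric-mean method coincides with the principal-eigenvector solution; for mildly inconsistent judgments it is a standard, cheap approximation.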

  10. Functional Status, Quality of Life, and Costs Associated With Fibromyalgia Subgroups: A Latent Profile Analysis.

    PubMed

    Luciano, Juan V; Forero, Carlos G; Cerdà-Lafont, Marta; Peñarrubia-María, María Teresa; Fernández-Vergel, Rita; Cuesta-Vargas, Antonio I; Ruíz, José M; Rozadilla-Sacanell, Antoni; Sirvent-Alierta, Elena; Santo-Panero, Pilar; García-Campayo, Javier; Serrano-Blanco, Antoni; Pérez-Aranda, Adrián; Rubio-Valera, María

    2016-10-01

    Although fibromyalgia syndrome (FM) is considered a heterogeneous condition, there is no generally accepted subgroup typology. We used hierarchical cluster analysis and latent profile analysis to replicate Giesecke's classification in Spanish FM patients. The second aim was to examine whether the subgroups differed in sociodemographic characteristics, functional status, quality of life, and in direct and indirect costs. A total of 160 FM patients completed the following measures for cluster derivation: the Center for Epidemiological Studies-Depression Scale, the Trait Anxiety Inventory, the Pain Catastrophizing Scale, and the Control over Pain subscale. Pain threshold was measured with a sphygmomanometer. In addition, the Fibromyalgia Impact Questionnaire-Revised, the EuroQoL-5D-3L, and the Client Service Receipt Inventory were administered for cluster validation. Two distinct clusters were identified using hierarchical cluster analysis ("hypersensitive" group, 69.8% and "functional" group, 30.2%). In contrast, the latent profile analysis goodness-of-fit indices supported the existence of 3 FM patient profiles: (1) a "functional" profile (28.1%) defined as moderate tenderness, distress, and pain catastrophizing; (2) a "dysfunctional" profile (45.6%) defined by elevated tenderness, distress, and pain catastrophizing; and (3) a "highly dysfunctional and distressed" profile (26.3%) characterized by elevated tenderness and extremely high distress and catastrophizing. We did not find significant differences in sociodemographic characteristics between the 2 clusters or among the 3 profiles. The functional profile was associated with less impairment, greater quality of life, and lower health care costs. We identified 3 distinct profiles which accounted for the heterogeneity of FM patients. Our findings might help to design tailored interventions for FM patients.

  11. Multiple Interactive Pollutants in Water Quality Trading

    NASA Astrophysics Data System (ADS)

    Sarang, Amin; Lence, Barbara J.; Shamsai, Abolfazl

    2008-10-01

    Efficient environmental management calls for the consideration of multiple pollutants, for which two main types of transferable discharge permit (TDP) program have been described: separate permits that manage each pollutant individually in separate markets, with each permit based on the quantity of the pollutant or its environmental effects, and weighted-sum permits that aggregate several pollutants as a single commodity to be traded in a single market. In this paper, we perform a mathematical analysis of TDP programs for multiple pollutants that jointly affect the environment (i.e., interactive pollutants) and demonstrate the practicality of this approach for cost-efficient maintenance of river water quality. For interactive pollutants, the relative weighting factors are functions of the water quality impacts, marginal damage function, and marginal treatment costs at optimality. We derive the optimal set of weighting factors required by this approach for important scenarios for multiple interactive pollutants and propose using an analytical elasticity of substitution function to estimate damage functions for these scenarios. We evaluate the applicability of this approach using a hypothetical example that considers two interactive pollutants. We compare the weighted-sum permit approach for interactive pollutants with individual permit systems and TDP programs for multiple additive pollutants. We conclude by discussing practical considerations and implementation issues that result from the application of weighted-sum permit programs.
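    As a hedged illustration of the weighted-sum idea (not the paper's model), the weighting factors for two interactive pollutants can be taken proportional to the marginal damages of an elasticity-of-substitution-type damage function at a given discharge point. All parameter values below are invented for illustration.

```python
def ces_damage(e1, e2, a1=0.6, a2=0.4, r=0.5):
    """CES-type damage function D(e1, e2) = (a1*e1^r + a2*e2^r)^(1/r)."""
    return (a1 * e1**r + a2 * e2**r) ** (1.0 / r)

def marginal_damages(e1, e2, h=1e-6):
    """Central-difference partial derivatives of the damage function."""
    d1 = (ces_damage(e1 + h, e2) - ces_damage(e1 - h, e2)) / (2 * h)
    d2 = (ces_damage(e1, e2 + h) - ces_damage(e1, e2 - h)) / (2 * h)
    return d1, d2

def permit_weights(e1, e2):
    """Weighting factors for a single weighted-sum permit commodity,
    normalized so the two weights sum to 1."""
    d1, d2 = marginal_damages(e1, e2)
    return d1 / (d1 + d2), d2 / (d1 + d2)
```

    At equal discharges the weights reduce to the share parameters a1 and a2, so the more damaging pollutant carries the larger weight in the traded commodity.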

  12. Optimal cost-effective designs of Phase II proof of concept trials and associated go-no go decisions.

    PubMed

    Chen, Cong; Beckman, Robert A

    2009-01-01

    This manuscript discusses optimal cost-effective designs for Phase II proof of concept (PoC) trials. Unlike a confirmatory registration trial, a PoC trial is exploratory in nature, and sponsors of such trials have the liberty to choose the type I error rate and the power. The decision is largely driven by the perceived probability of having a truly active treatment per patient exposure (a surrogate measure to development cost), which is naturally captured in an efficiency score to be defined in this manuscript. Optimization of the score function leads to type I error rate and power (and therefore sample size) for the trial that is most cost-effective. This in turn leads to cost-effective go-no go criteria for development decisions. The idea is applied to derive optimal trial-level, program-level, and franchise-level design strategies. The study is not meant to provide any general conclusion because the settings used are largely simplified for illustrative purposes. However, through the examples provided herein, a reader should be able to gain useful insight into these design problems and apply them to the design of their own PoC trials.
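    The manuscript's exact efficiency score is not reproduced here; the sketch below is only a hedged stand-in. It searches a grid of (type I error, power) pairs, prices each design by a normal-approximation sample size, and scores it by correct go decisions per patient exposed under an assumed prior probability that the treatment is truly active. The score formula and all parameter values are assumptions for illustration.

```python
from statistics import NormalDist

def sample_size(alpha, power, delta=0.4):
    """Per-arm normal-approximation sample size for a one-sided test
    detecting a standardized effect of size delta."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha) + z(power)) / delta) ** 2
    return max(1, round(n))

def efficiency_score(alpha, power, p_active=0.2, delta=0.4):
    """Illustrative score: net correct 'go' probability per patient.
    NOT the score defined in the manuscript, just a plausible analogue."""
    n = sample_size(alpha, power, delta)
    correct_go = p_active * power        # true positive
    false_go = (1 - p_active) * alpha    # false positive
    return (correct_go - false_go) / (2 * n)  # two-arm trial

def best_design(p_active=0.2):
    """Grid search over type I error and power."""
    grid = [(a / 100, b / 100) for a in range(5, 31, 5)
                               for b in range(70, 96, 5)]
    return max(grid, key=lambda ab: efficiency_score(*ab, p_active=p_active))
```

    Maximizing such a score over (alpha, power) simultaneously fixes the sample size and the go-no go boundary for the trial.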

  13. Evolution of phenotypic plasticity and environmental tolerance of a labile quantitative character in a fluctuating environment.

    PubMed

    Lande, R

    2014-05-01

    Quantitative genetic models of evolution of phenotypic plasticity are used to derive environmental tolerance curves for a population in a changing environment, providing a theoretical foundation for integrating physiological and community ecology with evolutionary genetics of plasticity and norms of reaction. Plasticity is modelled for a labile quantitative character undergoing continuous reversible development and selection in a fluctuating environment. If there is no cost of plasticity, a labile character evolves expected plasticity equalling the slope of the optimal phenotype as a function of the environment. This contrasts with previous theory for plasticity influenced by the environment at a critical stage of early development determining a constant adult phenotype on which selection acts, for which the expected plasticity is reduced by the environmental predictability over the discrete time lag between development and selection. With a cost of plasticity in a labile character, the expected plasticity depends on the cost and on the environmental variance and predictability averaged over the continuous developmental time lag. Environmental tolerance curves derived from this model confirm traditional assumptions in physiological ecology and provide new insights. Tolerance curve width increases with larger environmental variance, but can only evolve within a limited range. The strength of the trade-off between tolerance curve height and width depends on the cost of plasticity. Asymmetric tolerance curves caused by male sterility at high temperature are illustrated. A simple condition is given for a large transient increase in plasticity and tolerance curve width following a sudden change in average environment. © 2014 The Author. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.

  14. ANOTHER LOOK AT THE FAST ITERATIVE SHRINKAGE/THRESHOLDING ALGORITHM (FISTA)*

    PubMed Central

    Kim, Donghwan; Fessler, Jeffrey A.

    2017-01-01

    This paper provides a new way of developing the “Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)” [3] that is widely used for minimizing composite convex functions with a nonsmooth term such as the ℓ1 regularizer. In particular, this paper shows that FISTA corresponds to an optimized approach to accelerating the proximal gradient method with respect to a worst-case bound of the cost function. This paper then proposes a new algorithm that is derived by instead optimizing the step coefficients of the proximal gradient method with respect to a worst-case bound of the composite gradient mapping. The proof is based on the worst-case analysis called Performance Estimation Problem in [11]. PMID:29805242
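    For readers unfamiliar with it, standard FISTA for the L1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 can be sketched as follows. This is the generic textbook version discussed in the abstract, not the new optimized variant the paper proposes.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum step
        x, t = x_new, t_new
    return x
```

    The momentum coefficients t_k are exactly the step coefficients whose worst-case optimality the paper re-examines.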

  15. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optically controlled field-effect transistor (JL-OCFET) and a comprehensive theoretical model for it are proposed to achieve high optical performance and a low-cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion-mode structure (IM-OCFET) at similar dimensions are performed. Our investigation reveals that the proposed design is an outstanding alternative to the IM-OCFET owing to its high performance and its ability to detect weak signals. Moreover, the developed analytical expressions are used to formulate objective functions for optimizing the device performance with a Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal-to-noise ratio, low power consumption, high sensitivity, high ION/IOFF ratio, and high detectivity compared with the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low-cost, ultrasensitive photodetectors for high-performance, low-cost inter-chip data communication applications.

  16. Costing interventions in primary care.

    PubMed

    Kernick, D

    2000-02-01

    Against a background of increasing demands on limited resources, studies that relate benefits of health interventions to the resources they consume will be an important part of any decision-making process in primary care, and an accurate assessment of costs will be an important part of any economic evaluation. Although there is no such thing as a gold standard cost estimate, there are a number of basic costing concepts that underlie any costing study. How costs are derived and combined will depend on the assumptions that have been made in their derivation. It is important to be clear what assumptions have been made and why in order to maintain consistency across comparative studies and prevent inappropriate conclusions being drawn. This paper outlines some costing concepts and principles to enable primary care practitioners and researchers to have a basic understanding of costing exercises and their pitfalls.

  17. Blind separation of positive sources by globally convergent gradient search.

    PubMed

    Oja, Erkki; Plumbley, Mark

    2004-09-01

    The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
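    A minimal sketch of the idea (whitened mixtures, orthogonal separating matrix, cost equal to the energy of the negative output parts) might look like the following. The projected-gradient update with SVD re-orthogonalization is a simplified stand-in for the Stiefel-manifold gradient flow analyzed in the letter.

```python
import numpy as np

def negativity_cost(W, Z):
    """J(W) = mean squared negative part of the outputs Y = W Z."""
    Y = W @ Z
    return np.mean(np.minimum(Y, 0.0) ** 2)

def nonneg_ica(Z, n_iter=1000, lr=0.2, seed=0):
    """Rotate whitened data Z (sources x samples) toward nonnegative
    outputs, keeping W orthogonal by projecting back after each step."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    W, _ = np.linalg.qr(rng.standard_normal((n, n)))
    for _ in range(n_iter):
        Y = W @ Z
        Yneg = np.minimum(Y, 0.0)
        grad = (Yneg @ Z.T) / Z.shape[1]   # gradient of the cost (up to a factor)
        W = W - lr * grad
        U, _, Vt = np.linalg.svd(W)        # project back to the orthogonal group
        W = U @ Vt
    return W
```

    On well-grounded nonnegative sources mixed by a rotation, driving this cost to zero recovers a positive permutation of the sources, which is the global convergence result the letter proves for the continuous flow.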

  18. Limited Awareness of the Essences of Certification or Compliance Markings on Medical Devices.

    PubMed

    Foo, Jong Yong Abdiel; Tan, Xin Ji Alan

    2017-06-01

    Medical devices have long been used for diagnostic, therapeutic, or rehabilitation purposes. Currently, they range from low-cost portable devices that are often used for personal health monitoring to high-end sophisticated equipment that can only be operated by trained professionals. Depending on the functional purpose, there are different certification or compliance markings on a device when it is sold. One common certification marking is the Conformité Européenne affixation, but this has a range of certification mark numbering for a variety of functional purposes. While regulators and medical device manufacturers understand the associated significance and clinical implications, these may not be apparent to the professionals using or maintaining the device or to the general public. With portable healthcare devices and mobile applications gaining popularity, better awareness of certification markings will be needed. In particular, there are differences in the allowed functional purposes and the associated cost derivations of devices of a seemingly similar nature. A preferred approach, such as an easy-to-understand notation next to any certification marking on a device, can aid in differentiation without the need to digest mountainous regulatory details.

  19. Numerical study of the shape parameter dependence of the local radial point interpolation method in linear elasticity.

    PubMed

    Moussaoui, Ahmed; Bouziane, Touria

    2016-01-01

    The local radial point interpolation method (LRPIM) is a meshless method that allows simple implementation of the essential boundary conditions and is less costly than moving least squares (MLS) methods. The method overcomes the singularity associated with a polynomial basis by using radial basis functions. In this paper, we present a study of a 2D problem of an elastic homogeneous rectangular plate using the LRPIM. Our numerical investigation concerns the influence of different shape parameters on the convergence domain and accuracy, using the thin plate spline radial basis function. We also compare numerical results for different materials and characterize the convergence domain by specifying maximum and minimum values as a function of the number of distributed nodes. The analytical solution of the deflection confirms the numerical results. The essential points in the method are: •The LRPIM is derived from the local weak form of the equilibrium equations for solving a thin elastic plate.•The convergence of the LRPIM method depends on a number of parameters derived from the local weak form and the sub-domains.•The effect of the number of distributed nodes is studied by varying the nature of the material and the radial basis function (TPS).
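    As background, thin plate spline (TPS) radial basis interpolation in 2D uses the kernel phi(r) = r^2 log(r) with an affine polynomial augmentation. The sketch below is a minimal global TPS interpolant, not the LRPIM solver itself, which instead works on local weak-form sub-domains.

```python
import numpy as np

def tps_kernel(r):
    """phi(r) = r^2 log(r), with the convention phi(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def tps_fit(points, values):
    """Solve the augmented system [[K, P], [P.T, 0]] [w; c] = [f; 0],
    where P = [1, x, y] enforces polynomial reproduction."""
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    K = tps_kernel(r)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    sol = np.linalg.solve(A, rhs)
    return sol[:n], sol[n:]           # RBF weights w, affine coefficients c

def tps_eval(points, w, c, query):
    """Evaluate the fitted interpolant at query points."""
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    return tps_kernel(r) @ w + np.hstack([np.ones((len(query), 1)), query]) @ c
```

    The affine term makes the interpolant reproduce linear fields exactly, which is why TPS avoids the singularity issues of a pure polynomial basis.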

  20. Constructing polyatomic potential energy surfaces by interpolating diabatic Hamiltonian matrices with demonstration on green fluorescent protein chromophore

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jae Woo; Rhee, Young Min, E-mail: ymrhee@postech.ac.kr; Department of Chemistry, Pohang University of Science and Technology

    2014-04-28

    Simulating molecular dynamics directly on quantum chemically obtained potential energy surfaces is generally time consuming. The cost becomes overwhelming especially when excited state dynamics is aimed with multiple electronic states. The interpolated potential has been suggested as a remedy for the cost issue in various simulation settings ranging from fast gas phase reactions of small molecules to relatively slow condensed phase dynamics with complex surrounding. Here, we present a scheme for interpolating multiple electronic surfaces of a relatively large molecule, with an intention of applying it to studying nonadiabatic behaviors. The scheme starts with adiabatic potential information and its diabatic transformation, both of which can be readily obtained, in principle, with quantum chemical calculations. The adiabatic energies and their derivatives on each interpolation center are combined with the derivative coupling vectors to generate the corresponding diabatic Hamiltonian and its derivatives, and they are subsequently adopted in producing a globally defined diabatic Hamiltonian function. As a demonstration, we employ the scheme to build an interpolated Hamiltonian of a relatively large chromophore, para-hydroxybenzylidene imidazolinone, in reference to its all-atom analytical surface model. We show that the interpolation is indeed reliable enough to reproduce important features of the reference surface model, such as its adiabatic energies and derivative couplings. In addition, nonadiabatic surface hopping simulations with interpolation yield population transfer dynamics that is well in accord with the result generated with the reference analytic surface. With these, we conclude by suggesting that the interpolation of diabatic Hamiltonians will be applicable for studying nonadiabatic behaviors of sizeable molecules.
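    The global diabatic Hamiltonian described above is built from locally expanded matrices; a modified-Shepard-style construction (inverse-distance weights over first-order Taylor expansions at each interpolation center) is sketched below. This is an illustrative simplification of that family of schemes, not the authors' implementation, and the weight exponent is an assumption.

```python
import numpy as np

def shepard_weights(q, centers, p=4):
    """Normalized inverse-distance weights w_i(q) ~ |q - q_i|^(-p)."""
    d = np.linalg.norm(centers - q, axis=1)
    if np.any(d < 1e-12):                # exactly at a center
        w = (d < 1e-12).astype(float)
    else:
        w = d ** (-p)
    return w / w.sum()

def interp_diabatic(q, centers, H_list, dH_list):
    """Weighted sum of first-order Taylor expansions of the diabatic
    Hamiltonian about each center: H(q) ~ sum_i w_i [H_i + dH_i . (q - q_i)].
    dH_list[i] has shape (ndim, nstates, nstates)."""
    w = shepard_weights(q, centers)
    H = np.zeros_like(H_list[0])
    for wi, qi, Hi, dHi in zip(w, centers, H_list, dH_list):
        dq = q - qi
        H += wi * (Hi + np.tensordot(dHi, dq, axes=([0], [0])))
    return H
```

    Diagonalizing the interpolated H(q) at each step then yields the adiabatic energies and couplings needed for surface hopping.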

  1. IUS/TUG orbital operations and mission support study. Volume 3: Space tug operations

    NASA Technical Reports Server (NTRS)

    1975-01-01

    A study was conducted to develop space tug operational concepts and a baseline operations plan, and to provide cost estimates for space tug operations. Background data and study results are presented along with a transition phase analysis (the transition from interim upper stage to tug operations). A summary is given of the tug operational and interface requirements with emphasis on the on-orbit checkout requirements, external interface operational requirements, safety requirements, and system operational interface requirements. Other topics discussed include reference missions baselined for the tug and details for the mission functional flows and timelines derived for the tug mission, tug subsystems, tug on-orbit operations prior to the tug first burn, spacecraft deployment and retrieval by the tug, operations centers, mission planning, potential problem areas, and cost data.

  2. Recycling of inorganic waste in monolithic and cellular glass‐based materials for structural and functional applications

    PubMed Central

    Rincón, Acacio; Marangoni, Mauro; Cetin, Suna

    2016-01-01

    Abstract The stabilization of inorganic waste of various nature and origin, in glasses, has been a key strategy for environmental protection for the last decades. When properly formulated, glasses may retain many inorganic contaminants permanently, but it must be acknowledged that some criticism remains, mainly concerning costs and energy use. As a consequence, the sustainability of vitrification largely relies on the conversion of waste glasses into new, usable and marketable glass‐based materials, in the form of monolithic and cellular glass‐ceramics. The effective conversion in turn depends on the simultaneous control of both starting materials and manufacturing processes. While silica‐rich waste favours the obtainment of glass, iron‐rich wastes affect the functionalities, influencing the porosity in cellular glass‐based materials as well as catalytic, magnetic, optical and electrical properties. Engineered formulations may lead to important reductions of processing times and temperatures, in the transformation of waste‐derived glasses into glass‐ceramics, or even bring interesting shortcuts. Direct sintering of wastes, combined with recycled glasses, as an example, has been proven as a valid low‐cost alternative for glass‐ceramic manufacturing, for wastes with limited hazardousness. The present paper is aimed at providing an up‐to‐date overview of the correlation between formulations, manufacturing technologies and properties of most recent waste‐derived, glass‐based materials. © 2016 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry. PMID:27818564

  3. Recycling of inorganic waste in monolithic and cellular glass-based materials for structural and functional applications.

    PubMed

    Rincón, Acacio; Marangoni, Mauro; Cetin, Suna; Bernardo, Enrico

    2016-07-01

    The stabilization of inorganic waste of various nature and origin, in glasses, has been a key strategy for environmental protection for the last decades. When properly formulated, glasses may retain many inorganic contaminants permanently, but it must be acknowledged that some criticism remains, mainly concerning costs and energy use. As a consequence, the sustainability of vitrification largely relies on the conversion of waste glasses into new, usable and marketable glass-based materials, in the form of monolithic and cellular glass-ceramics. The effective conversion in turn depends on the simultaneous control of both starting materials and manufacturing processes. While silica-rich waste favours the obtainment of glass, iron-rich wastes affect the functionalities, influencing the porosity in cellular glass-based materials as well as catalytic, magnetic, optical and electrical properties. Engineered formulations may lead to important reductions of processing times and temperatures, in the transformation of waste-derived glasses into glass-ceramics, or even bring interesting shortcuts. Direct sintering of wastes, combined with recycled glasses, as an example, has been proven as a valid low-cost alternative for glass-ceramic manufacturing, for wastes with limited hazardousness. The present paper is aimed at providing an up-to-date overview of the correlation between formulations, manufacturing technologies and properties of most recent waste-derived, glass-based materials. © 2016 The Authors. Journal of Chemical Technology & Biotechnology published by John Wiley & Sons Ltd on behalf of Society of Chemical Industry.

  4. Experimental Determination of the Dynamic Hydraulic Transfer Function for the J-2X Oxidizer Turbopump. Part One; Methodology

    NASA Technical Reports Server (NTRS)

    Zoladz, Tom; Patel, Sandeep; Lee, Erik; Karon, Dave

    2011-01-01

    An advanced methodology for extracting the hydraulic dynamic pump transfer matrix (Yp) of a cavitating liquid rocket engine turbopump inducer+impeller has been developed. The transfer function is required for integrated vehicle pogo stability analysis as well as for optimization of local inducer pumping stability. A laboratory pulsed subscale waterflow test of the J-2X oxygen turbopump is introduced, and our new extraction method is applied to the data collected. From accurate measurements of pump inlet and discharge perturbational mass flows and pressures, and one-dimensional flow models that represent the complete waterflow loop physics, we are able to derive Yp and hence extract the characteristic pump parameters: compliance, pump gain, impedance, and mass flow gain. Detailed modeling is necessary to accurately translate instrument-plane measurements to the pump inlet and discharge and extract Yp. We present the MSFC Dynamic Lump Parameter Fluid Model Framework and describe critical dynamic component details. We report on fit minimization techniques and cost (fitness) function derivation, and present the resulting model fits to our experimental data. Comparisons are made to alternate techniques for spatially translating measurement stations to the actual pump inlet and discharge.

  5. Optimal Mortgage Refinancing: A Closed Form Solution

    PubMed Central

    Agarwal, Sumit; Driscoll, John C.; Laibson, David I.

    2013-01-01

    We derive the first closed-form optimal refinancing rule: refinance when the current mortgage interest rate falls below the original rate by at least (1/ψ)[φ + W(−exp(−φ))]. In this formula W(.) is the Lambert W-function, ψ = √(2(ρ+λ))/σ, φ = 1 + ψ(ρ+λ)κ/(M(1−τ)), ρ is the real discount rate, λ is the expected real rate of exogenous mortgage repayment, σ is the standard deviation of the mortgage rate, κ/M is the ratio of the tax-adjusted refinancing cost to the remaining mortgage value, and τ is the marginal tax rate. This expression is derived by solving a tractable class of refinancing problems. Our quantitative results closely match those reported by researchers using numerical methods. PMID:25843977
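    The rule can be evaluated numerically in a few lines. The sketch below implements the threshold with a bisection-based Lambert W on its principal branch; the parameter values in the test are illustrative, not taken from the paper.

```python
from math import exp, sqrt

def lambert_w(x):
    """Principal branch W(x) for x in (-1/e, 0], by bisection:
    w*exp(w) is strictly increasing on (-1, 0]."""
    lo, hi = -1.0, 0.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * exp(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def refi_threshold(rho, lam, sigma, kappa_over_M, tau):
    """Interest-rate drop (absolute) that triggers refinancing:
    (1/psi) * [phi + W(-exp(-phi))]."""
    psi = sqrt(2 * (rho + lam)) / sigma
    phi = 1 + psi * (rho + lam) * kappa_over_M / (1 - tau)
    return (phi + lambert_w(-exp(-phi))) / psi
```

    For typical parameter magnitudes the threshold comes out on the order of one percentage point, consistent with the numerical results the paper compares against.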

  6. Eliminating time dispersion from seismic wave modeling

    NASA Astrophysics Data System (ADS)

    Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik

    2018-04-01

    We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.

  7. Delaunay-based derivative-free optimization for efficient minimization of time-averaged statistics of turbulent flows

    NASA Astrophysics Data System (ADS)

    Beyhaghi, Pooriya

    2016-11-01

    This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.

  8. Non-woody weed control in pine plantations

    Treesearch

    Phillip M. Dougherty; Bob Lowery

    1986-01-01

    The costs and benefits derived from controlling non-woody competitors in pine plantations were reviewed. Cost considerations included both the capital cost and the biological cost that may be incurred when weed control treatments are applied. Several methods for reducing the cost of herbicide treatments were explored. Cost reduction considerations included adjustments in...

  9. 25 CFR Appendix D to Subpart C - Cost To Construct

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... would meet the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of... Costs, Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed and...

  10. 25 CFR Appendix D to Subpart C - Cost To Construct

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... would meet the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of... Costs, Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed and...

  11. 25 CFR Appendix D to Subpart C - Cost To Construct

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... would meet the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of... Costs, Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed and...

  12. 25 CFR Appendix D to Subpart C - Cost To Construct

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... would meet the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of... Costs, Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed and...

  13. Dietary Flavanols: A Review of Select Effects on Vascular Function, Blood Pressure, and Exercise Performance.

    PubMed

    Al-Dashti, Yousef A; Holt, Roberta R; Stebbins, Charles L; Keen, Carl L; Hackman, Robert M

    2018-05-02

    An individual's diet affects numerous physiological functions and can play an important role in reducing the risk of cardiovascular disease. Epidemiological and clinical studies suggest that dietary flavanols can be an important modulator of vascular risk. Diets and plant extracts rich in flavanols have been reported to lower blood pressure, especially in prehypertensive and hypertensive individuals. Flavanols may act in part through signaling pathways that affect vascular function, nitric oxide availability, and the release of endothelial-derived relaxing and constricting factors. During exercise, flavanols have been reported to modulate metabolism and respiration (e.g., maximal oxygen uptake, O2 cost of exercise, and energy expenditure), and to reduce oxidative stress and inflammation, resulting in increased skeletal muscle efficiency and endurance capacity. Flavanol-induced reductions in blood pressure during exercise may decrease the work of the heart. Collectively, these effects suggest that flavanols can act as an ergogenic aid to help delay the onset of fatigue. More research is needed to better clarify the effects of flavanols on vascular function, blood pressure regulation, and exercise performance, and to establish safe and effective levels of intake. Flavanol-rich foods and food products can be useful components of a healthy diet and lifestyle program for those seeking to better control their blood pressure or to enhance their physical activity. Key teaching points: • Epidemiological and clinical studies indicate that dietary flavanols can reduce the risk of vascular disease. • Diets and plant extracts rich in flavanols have been reported to lower blood pressure and improve exercise performance in humans. • Mechanisms by which flavanols may reduce blood pressure include alterations in signaling pathways that affect vascular function, nitric oxide availability, and the release of endothelial-derived relaxation and constriction factors. • Mechanisms by which flavanols may enhance exercise performance include modulation of metabolism and respiration (e.g., maximal oxygen uptake, O2 cost of exercise, and energy expenditure) and reduction of oxidative stress and inflammation. These effects can result in increased skeletal muscle efficiency and endurance capacity. • Further research is needed to clarify the amount, timing, and frequency of flavanol intake for blood pressure regulation and exercise performance.

  14. Health and economic benefits of physical activity for patients with spinal cord injury.

    PubMed

    Miller, Larry E; Herbert, William G

    2016-01-01

    Spinal cord injury (SCI) is a traumatic, life-disrupting event with an annual incidence of 17,000 cases in the US. SCI is characterized by progressive physical deconditioning due to limited mobility and a lack of modalities that allow safe physical activity to partially offset these deleterious physical changes. Approximately 50% of patients with SCI report no leisure-time physical activity, and 15% report leisure-time physical activity below the threshold at which meaningful health benefits could be realized. Collectively, about 363,000 patients with SCI, or 65% of the entire spinal cord injured population in the US, engage in insufficient physical activity and represent a target population that could derive considerable health benefits from even modest physical activity levels. Currently, the annual direct costs related to SCI exceed US$45 billion in the US. Rehabilitation protocols and technologies aimed at improving functional mobility have the potential to significantly reduce the risk of medical complications and the cost associated with SCI. Patients who commence routine physical activity in the first post-injury year and experience typical motor function improvements would realize US$290,000 to US$435,000 in lifetime cost savings, primarily due to fewer hospitalizations and less reliance on assistive care. New assistive technologies that allow patients with SCI to safely engage in routine physical activity are desperately needed.

  15. Using discrete choice experiments within a cost-benefit analysis framework: some considerations.

    PubMed

    McIntosh, Emma

    2006-01-01

A great advantage of the stated preference discrete choice experiment (SPDCE) approach to economic evaluation methodology is its immense flexibility within applied cost-benefit analyses (CBAs). However, while the use of SPDCEs in healthcare has increased markedly in recent years, there has been a distinct lack of equivalent CBAs in healthcare using such SPDCE-derived valuations. This article outlines specific issues and some practical suggestions relevant to the development of CBAs using SPDCE-derived benefits. The article shows that SPDCE-derived CBA can adopt recent developments in cost-effectiveness methodology, including the cost-effectiveness plane, appropriate consideration of uncertainty, the net-benefit framework, and probabilistic sensitivity analysis methods, while maintaining the theoretical advantage of the SPDCE approach. The concept of a cost-benefit plane is no different in principle to the cost-effectiveness plane and can be a useful tool for reporting and presenting the results of CBAs. However, there are many challenging issues to address for the advancement of CBA methodology using SPDCEs within healthcare. Particular areas for development include the importance of accounting for uncertainty in SPDCE-derived willingness-to-pay values, the methodology of SPDCEs in clinical trial settings and economic models, measurement issues pertinent to using SPDCEs specifically in healthcare, and the importance of issues such as consideration of the dynamic nature of healthcare and the resulting impact this has on the validity of attribute definitions and context.
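The net-benefit framework mentioned above places SPDCE-derived willingness-to-pay valuations and costs on a single monetary scale. A minimal sketch of that calculation follows; the function names and all figures are hypothetical, not taken from the article:

```python
# Incremental net benefit: INB = monetised incremental benefit (e.g. from
# SPDCE willingness-to-pay values) minus incremental cost. Adopt the new
# programme when INB > 0. All numbers below are illustrative.

def incremental_net_benefit(delta_benefit_wtp, delta_cost):
    """INB in money units: WTP-valued benefit change minus cost change."""
    return delta_benefit_wtp - delta_cost

def cost_benefit_quadrant(delta_benefit_wtp, delta_cost):
    """Locate a comparison on the cost-benefit plane described in the abstract."""
    if delta_benefit_wtp >= 0 and delta_cost <= 0:
        return "dominant (more benefit, less cost)"
    if delta_benefit_wtp < 0 and delta_cost > 0:
        return "dominated (less benefit, more cost)"
    return "trade-off: adopt if INB > 0"

inb = incremental_net_benefit(delta_benefit_wtp=1500.0, delta_cost=1200.0)
print(inb)  # 300.0 -> positive, so the trade-off favours the new programme
print(cost_benefit_quadrant(1500.0, 1200.0))
```

In a probabilistic sensitivity analysis, this calculation would be repeated over sampled (benefit, cost) pairs to characterise uncertainty in the INB.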

  16. Welfare implications of energy and environmental policies: A general equilibrium approach

    NASA Astrophysics Data System (ADS)

    Iqbal, Mohammad Qamar

Government intervention and implementation of policies can impose a financial and social cost. To achieve a desired goal there may be several alternative policies or routes, and government would like to choose the one which imposes the least social cost and/or generates the greatest social benefit. Therefore, applied welfare economics plays a vital role in public decision making. This paper recasts welfare measures such as equivalent variation in terms of the prices of factors of production rather than product prices. This is made possible by using duality theory within a general equilibrium framework and by deriving alternative forms of indirect utility functions and expenditure functions in factor prices. Not only are we able to recast existing welfare measures in factor prices, but we are also able to perform a true cost-benefit analysis of government policies, using comparative static analysis of different equilibria and breaking up a monetary measure of welfare change, such as equivalent variation, into its components. A further advantage of our research is demonstrated by incorporating externalities and public goods in the utility function. It is interesting that under a general equilibrium framework an optimal income tax tends to reduce inequalities. Results show that the imposition of taxes at socially optimal rates brings a net gain to society. It was also seen that even though a pollution tax may reduce GDP, it leads to an increase in the welfare of society if it is imposed at an optimal rate.
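The equivalent-variation measure recast above is conventionally defined through the expenditure function; a sketch of that definition, written in factor prices as the paper proposes (the notation here is illustrative, not the author's):

```latex
% Equivalent variation via the expenditure function e(w, u), with w the
% vector of factor prices (the paper's recasting) and u^0, u^1 the utility
% levels before and after the policy change.
EV = e(w^0, u^1) - e(w^0, u^0)
```

A positive EV indicates the money-metric gain, at pre-policy factor prices, of moving to the post-policy utility level.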

  17. Xenia Spacecraft Study Addendum: Spacecraft Cost Estimate

    NASA Technical Reports Server (NTRS)

    Hill, Spencer; Hopkins, Randall

    2009-01-01

This slide presentation reviews the Xenia spacecraft cost estimates as an addendum to the Xenia spacecraft study. The NASA/Air Force Cost Model (NAFCOM) was used to derive the cost estimates, which are expressed in 2009 dollars.

  18. PanFP: Pangenome-based functional profiles for microbial communities

    DOE PAGES

    Jun, Se -Ran; Hauser, Loren John; Schadt, Christopher Warren; ...

    2015-09-26

For decades there has been increasing interest in understanding the relationships between microbial communities and ecosystem functions. Current DNA sequencing technologies allow for the exploration of microbial communities in two principal ways: targeted rRNA gene surveys and shotgun metagenomics. For large study designs, it is often still prohibitively expensive to sequence metagenomes at both the breadth and depth necessary to statistically capture the true functional diversity of a community. Although rRNA gene surveys provide no direct evidence of function, they do provide a reasonable estimation of microbial diversity, while being a very cost-effective way to screen samples of interest for later shotgun metagenomic analyses. However, there is a great deal of 16S rRNA gene survey data currently available from diverse environments, and thus a need for tools to infer the functional composition of environmental samples based on 16S rRNA gene survey data. We therefore present a computational method called pangenome-based functional profiles (PanFP), which infers functional profiles of microbial communities from 16S rRNA gene survey data for Bacteria and Archaea. PanFP is based on pangenome reconstruction of a 16S rRNA gene operational taxonomic unit (OTU) from known genes and genomes pooled from the OTU's taxonomic lineage. From this lineage, we derive an OTU functional profile by weighting a pangenome's functional profile with the OTU's abundance observed in a given sample. We validated our method by comparing PanFP to the functional profiles obtained from the direct shotgun metagenomic measurement of 65 diverse communities via Spearman correlation coefficients. These correlations improved with increasing sequencing depth, within the range of 0.8-0.9 for the most deeply sequenced Human Microbiome Project mock community samples. PanFP is very similar in performance to another recently released tool, PICRUSt, for almost all of the survey data analysed here.
However, our method is unique in that any OTU-building method can be used, as opposed to being limited to closed-reference OTU picking strategies against specific reference sequence databases. In conclusion, we developed an automated computational method which derives an inferred functional profile based on the 16S rRNA gene surveys of microbial communities. The inferred functional profile provides a cost-effective way to study complex ecosystems through predicted comparative functional metagenomes and metadata analysis. All PanFP source code and additional documentation are freely available online at GitHub.
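The abundance-weighting step described in the abstract can be sketched as follows; the OTU identifiers, abundances, and KEGG-style function labels are invented for illustration, and PanFP's actual implementation may differ:

```python
# Illustrative sketch of the PanFP weighting step: each OTU's functional
# profile is its lineage-derived pangenome profile weighted by the OTU's
# abundance in the sample; summing over OTUs gives the community profile.
# All identifiers and numbers below are made up for illustration.

from collections import Counter

def community_functional_profile(otu_abundance, pangenome_profiles):
    """Sum abundance-weighted pangenome profiles over all OTUs in a sample."""
    profile = Counter()
    for otu, abundance in otu_abundance.items():
        for function, weight in pangenome_profiles[otu].items():
            profile[function] += abundance * weight
    return dict(profile)

otu_abundance = {"OTU_1": 10, "OTU_2": 5}          # counts in one sample
pangenome_profiles = {
    "OTU_1": {"K00001": 0.2, "K00002": 0.8},       # relative function frequencies
    "OTU_2": {"K00002": 0.5, "K00003": 0.5},
}
print(community_functional_profile(otu_abundance, pangenome_profiles))
# {'K00001': 2.0, 'K00002': 10.5, 'K00003': 2.5}
```

The validation in the abstract would then compare such inferred profiles against shotgun-metagenomic profiles via Spearman rank correlation.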

  19. PanFP: pangenome-based functional profiles for microbial communities.

    PubMed

    Jun, Se-Ran; Robeson, Michael S; Hauser, Loren J; Schadt, Christopher W; Gorin, Andrey A

    2015-09-26

For decades there has been increasing interest in understanding the relationships between microbial communities and ecosystem functions. Current DNA sequencing technologies allow for the exploration of microbial communities in two principal ways: targeted rRNA gene surveys and shotgun metagenomics. For large study designs, it is often still prohibitively expensive to sequence metagenomes at both the breadth and depth necessary to statistically capture the true functional diversity of a community. Although rRNA gene surveys provide no direct evidence of function, they do provide a reasonable estimation of microbial diversity, while being a very cost-effective way to screen samples of interest for later shotgun metagenomic analyses. However, there is a great deal of 16S rRNA gene survey data currently available from diverse environments, and thus a need for tools to infer functional composition of environmental samples based on 16S rRNA gene survey data. We present a computational method called pangenome-based functional profiles (PanFP), which infers functional profiles of microbial communities from 16S rRNA gene survey data for Bacteria and Archaea. PanFP is based on pangenome reconstruction of a 16S rRNA gene operational taxonomic unit (OTU) from known genes and genomes pooled from the OTU's taxonomic lineage. From this lineage, we derive an OTU functional profile by weighting a pangenome's functional profile with the OTU's abundance observed in a given sample. We validated our method by comparing PanFP to the functional profiles obtained from the direct shotgun metagenomic measurement of 65 diverse communities via Spearman correlation coefficients. These correlations improved with increasing sequencing depth, within the range of 0.8-0.9 for the most deeply sequenced Human Microbiome Project mock community samples. PanFP is very similar in performance to another recently released tool, PICRUSt, for almost all of the survey data analysed here.
However, our method is unique in that any OTU-building method can be used, as opposed to being limited to closed-reference OTU picking strategies against specific reference sequence databases. We developed an automated computational method which derives an inferred functional profile based on the 16S rRNA gene surveys of microbial communities. The inferred functional profile provides a cost-effective way to study complex ecosystems through predicted comparative functional metagenomes and metadata analysis. All PanFP source code and additional documentation are freely available online at GitHub ( https://github.com/srjun/PanFP ).

  20. Satellite Power Systems (SPS) space transportation cost analysis and evaluation

    NASA Technical Reports Server (NTRS)

    1980-01-01

An assessment of Satellite Power Systems (SPS) space transportation costs at the present time is given with respect to the accuracy of the stated estimates, the reasonableness of the methods used, the assumptions made, and the uncertainty associated with the estimates. The approach used consists of examining space transportation costs from several perspectives, performing a variety of sensitivity analyses or reviews, and examining the findings in terms of internal consistency and external comparison with analogous systems. These approaches are summarized as a theoretical and historical review, including a review of the stated and unstated assumptions used to derive the costs, and a performance or technical review. These reviews cover the overall transportation program as well as the individual vehicles proposed. The review of overall cost assumptions is the principal means used for estimating the derived cost uncertainty. The cost estimates used as the best current estimate are included.

  1. Valuing productivity costs in a changing macroeconomic environment: the estimation of colorectal cancer productivity costs using the friction cost approach.

    PubMed

    Hanly, Paul; Koopmanschap, Marc; Sharp, Linda

    2016-06-01

The friction cost approach (FCA) has been proposed as an alternative to the human capital approach for productivity cost valuation. However, FCA estimates are context dependent and influenced by extant macroeconomic conditions. We applied the FCA to estimate colorectal cancer labor productivity costs and assessed the impact of a changing macroeconomic environment on these estimates. Data from colorectal cancer survivors (n = 159) derived from a postal survey undertaken in Ireland from March 2010 to January 2011 were combined with national wage data, population-level survival data, and occupation-specific friction periods to calculate temporary and permanent disability, and premature mortality costs using the FCA. The effects of changing labor market conditions between 2006 and 2013 on the friction period were modeled in scenario analyses. Costs were valued in 2008 euros. In the base case, the total FCA per-person productivity cost for incident colorectal cancer patients of working age at diagnosis was €8543. In scenario 1 (a 2.2% increase in unemployment), the fall in the friction period caused total productivity costs to decrease by up to 18% compared to base-case estimates. In scenario 2 (a 9.2% increase in unemployment), the largest decrease in productivity cost was up to 65%. Adjusting for the vacancy rate reduced the effect of unemployment on the cost results. The friction period used in calculating labor productivity costs greatly affects the derived estimates; this friction period requires reassessment following changes in labor market conditions. The influence of changes in macroeconomic conditions on FCA-derived cost estimates may be substantial.
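A minimal sketch of the friction cost calculation described above, for a single worker with an illustrative wage and hypothetical friction periods; real FCA estimates use occupation-specific friction periods and national wage data:

```python
# Friction cost approach (FCA) sketch: production losses are counted only
# until the absent worker is replaced (the "friction period"). A higher
# unemployment rate means vacancies fill faster, shortening the friction
# period and lowering the estimated cost. All figures are illustrative.

def friction_cost(days_absent, friction_period_days, daily_wage, elasticity=0.8):
    """Productivity cost = wage value of lost days, capped at the friction
    period, scaled by an elasticity (lost hours are not fully lost output)."""
    counted_days = min(days_absent, friction_period_days)
    return counted_days * daily_wage * elasticity

# Base case vs. a recession scenario with a shorter friction period.
base = friction_cost(days_absent=200, friction_period_days=90, daily_wage=150.0)
recession = friction_cost(days_absent=200, friction_period_days=60, daily_wage=150.0)
print(base, recession)
```

The recession figure is lower, mirroring the abstract's finding that rising unemployment shrinks the friction period and hence FCA-derived costs.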

  2. Optimizing Performance Parameters of Chemically-Derived Graphene/p-Si Heterojunction Solar Cell.

    PubMed

    Batra, Kamal; Nayak, Sasmita; Behura, Sanjay K; Jani, Omkar

    2015-07-01

Chemically-derived graphene has been synthesized by the modified Hummers method and reduced using sodium borohydride. To explore the potential for photovoltaic applications, graphene/p-silicon (Si) heterojunction devices were fabricated using a simple and cost-effective technique called spin coating. The SEM analysis shows the formation of graphene oxide (GO) flakes which become smooth after reduction. The absence of oxygen-containing functional groups, as observed in FT-IR spectra, reveals the reduction of GO, i.e., reduced graphene oxide (rGO). This was further confirmed by Raman analysis, which shows a slight reduction in G-band intensity with respect to the D-band. Hall effect measurement confirmed the n-type nature of rGO. Therefore, an effort has been made to simulate the rGO/p-Si heterojunction device by using the one-dimensional solar cell capacitance software, considering the experimentally derived parameters. A detailed analysis of the effects of Si thickness, graphene thickness, and temperature on the performance of the device has been presented.

  3. Synthesis of vertical MnO2 wire arrays on hemp-derived carbon for efficient and robust green catalysts

    NASA Astrophysics Data System (ADS)

    Yang, MinHo; Kim, Dong Seok; Sim, Jae-Wook; Jeong, Jae-Min; Kim, Do Hyun; Choi, Jae Hyung; Kim, Jinsoo; Kim, Seung-Soo; Choi, Bong Gill

    2017-06-01

Three-dimensional (3D) carbon materials derived from waste biomass have attracted increasing attention in catalysis and materials science because of their great potential as catalyst supports with respect to multi-functionality, unique structures, high surface area, and low cost. Here, we present a facile and efficient way of preparing 3D heterogeneous catalysts based on vertical MnO2 wires deposited on hemp-derived 3D porous carbon. The 3D porous carbon materials are fabricated by carbonization and activation processes using hemp (Cannabis sativa L.). These 3D porous carbon materials are employed as catalyst supports for the direct deposition of vertical MnO2 wires using a one-step hydrothermal method. The XRD and XPS results reveal the crystalline structure of α-MnO2 wires. The resultant composites are further employed as a catalyst for the glycolysis of poly(ethylene terephthalate) (PET) with a high conversion yield of 98%, which is expected to be highly profitable for the plastics recycling industry.

  4. Joint Symbol Timing and CFO Estimation for OFDM/OQAM Systems in Multipath Channels

    NASA Astrophysics Data System (ADS)

    Fusco, Tilde; Petrella, Angelo; Tanda, Mario

    2009-12-01

The problem of data-aided synchronization for orthogonal frequency division multiplexing (OFDM) systems based on offset quadrature amplitude modulation (OQAM) in multipath channels is considered. In particular, the joint maximum-likelihood (ML) estimator for carrier-frequency offset (CFO), amplitudes, phases, and delays, exploiting a short known preamble, is derived. The ML estimators for phases and amplitudes are in closed form. Moreover, under the assumption that the CFO is sufficiently small, a closed-form approximate ML (AML) CFO estimator is obtained. By exploiting the obtained closed-form solutions, a cost function whose peaks provide an estimate of the delays is derived. In particular, the symbol timing (i.e., the delay of the first multipath component) is obtained by considering the smallest estimated delay. The performance of the proposed joint AML estimator is assessed via computer simulations and compared with that achieved by the joint AML estimator designed for the AWGN channel and that achieved by a previously derived joint estimator for OFDM systems.

  5. Multi-disciplinary optimization of aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1990-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  6. Multidisciplinary optimization of aeroservoelastic systems using reduced-size models

    NASA Technical Reports Server (NTRS)

    Karpel, Mordechay

    1992-01-01

    Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.

  7. Vermicompost as a natural adsorbent: evaluation of simultaneous metals (Pb, Cd) and tetracycline adsorption by sewage sludge-derived vermicompost.

    PubMed

    He, Xin; Zhang, Yaxin; Shen, Maocai; Tian, Ye; Zheng, Kaixuan; Zeng, Guangming

    2017-03-01

The simultaneous adsorption of heavy metals (Pb, Cd) and an organic pollutant (tetracycline (TC)) by a sewage sludge-derived vermicompost was investigated. The maximal adsorption capacity for Pb, Cd, and TC in a single adsorptive system calculated from the Langmuir equation was 12.80, 85.20, and 42.94 mg L-1, while for mixed substances, the adsorption amount was 2.99, 13.46, and 20.89 mg L-1, respectively. The adsorption kinetics fitted well to the pseudo-second-order model, implying chemical interaction between adsorbates and functional groups, such as -COOH, -OH, -NH, and -CO, as well as the formation of organo-metal complexes. Fourier transform infrared (FTIR) spectroscopy, scanning electron microscopy (SEM), and Brunauer-Emmett-Teller (BET) specific surface area measurement were adopted to gain insight into the structural changes and a better understanding of the adsorption mechanism. The sewage sludge-derived vermicompost can be a low-cost and environmentally benign eco-material for highly efficient wastewater remediation.
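The Langmuir isotherm and pseudo-second-order kinetic model referenced in the abstract have standard closed forms; a sketch with illustrative parameter values (not fitted to the study's data):

```python
# Standard adsorption models used in the abstract, with made-up parameters.
# Langmuir isotherm: q_e = q_max * K_L * C_e / (1 + K_L * C_e), where q_max
# is the maximal adsorption capacity extracted by fitting this equation.

def langmuir(c_e, q_max, k_l):
    """Equilibrium adsorbed amount q_e vs. equilibrium concentration C_e."""
    return q_max * k_l * c_e / (1.0 + k_l * c_e)

# Pseudo-second-order kinetics: q_t = k2 * q_e^2 * t / (1 + k2 * q_e * t),
# whose good fit is taken to imply chemisorption on functional groups.
def pseudo_second_order(t, q_e, k2):
    """Adsorbed amount q_t at time t, approaching q_e as t grows."""
    return (k2 * q_e**2 * t) / (1.0 + k2 * q_e * t)

print(round(langmuir(c_e=50.0, q_max=85.2, k_l=0.1), 2))        # approaches q_max
print(round(pseudo_second_order(t=120.0, q_e=42.9, k2=0.01), 2))  # approaches q_e
```

In practice q_max and K_L (and q_e, k2) are obtained by nonlinear regression of measured (C_e, q_e) and (t, q_t) data against these forms.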

  8. IUS/TUG orbital operations and mission support study. Volume 5: Cost estimates

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The costing approach, methodology, and rationale utilized for generating cost data for composite IUS and space tug orbital operations are discussed. Summary cost estimates are given along with cost data initially derived for the IUS program and space tug program individually, and cost estimates for each work breakdown structure element.

  9. 25 CFR Appendix D to Subpart C - Cost To Construct

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... the Adequate Standard Characteristics (see Table 1). For roadways, the recommended design of the..., Pavement Costs, and Incidental Costs. For bridges, costs are derived from costs in the National Bridge...) with inadequate drainage and alignment that generally follows existing ground 100 4 A designed and...

  10. Alginate-Encapsulation for the Improved Hypothermic Preservation of Human Adipose-Derived Stem Cells

    PubMed Central

    Swioklo, Stephen; Constantinescu, Andrei

    2016-01-01

Despite considerable progress within the cell therapy industry, unmet bioprocessing and logistical challenges associated with the storage and distribution of cells between sites of manufacture and the clinic exist. We examined whether hypothermic (4°C–23°C) preservation of human adipose-derived stem cells could be improved through their encapsulation in 1.2% calcium alginate. Alginate encapsulation improved the recovery of viable cells after 72 hours of storage. Viable cell recovery was highly temperature-dependent, with an optimum temperature of 15°C. At this temperature, alginate encapsulation preserved the ability for recovered cells to attach to tissue culture plastic on rewarming, further increasing its effect on total cell recovery. On attachment, the cells were phenotypically normal, displayed normal growth kinetics, and maintained their capacity for trilineage differentiation. The number of cells encapsulated (up to 2 × 10^6 cells per milliliter) did not affect viable cell recovery, nor did storage of encapsulated cells in a xeno-free, serum-free, current Good Manufacturing Practice-grade medium. We present a simple, low-cost system capable of enhancing the preservation of human adipose-derived stem cells stored at hypothermic temperatures, while maintaining their normal function. The storage of cells in this manner has great potential for extending the time windows for quality assurance and efficacy testing, distribution between the sites of manufacture and the clinic, and reducing the wastage associated with the limited shelf life of cells stored in their liquid state. Significance: Despite considerable advancement in the clinical application of cell-based therapies, major logistical challenges exist throughout the cell therapy supply chain associated with the storage and distribution of cells between the sites of manufacture and the clinic.
A simple, low-cost system capable of preserving the viability and functionality of human adipose-derived stem cells (a cell with substantial clinical interest) at hypothermic temperatures (0°C–32°C) is presented. Such a system has considerable potential for extending the shelf life of cell therapy products at multiple stages throughout the cell therapy supply chain. PMID:26826163

  11. DESTINY, The Dark Energy Space Telescope

    NASA Technical Reports Server (NTRS)

    Pasquale, Bert A.; Woodruff, Robert A.; Benford, Dominic J.; Lauer, Tod

    2007-01-01

We have proposed the development of a low-cost space telescope, Destiny, as a concept for the NASA/DOE Joint Dark Energy Mission. Destiny is a 1.65 m space telescope featuring a near-infrared (0.85-1.7 μm) survey camera/spectrometer with a moderate flat-field field of view (FOV). Destiny will probe the properties of dark energy by obtaining a Hubble diagram based on Type Ia supernovae and a large-scale mass power spectrum derived from weak lensing distortions of field galaxies as a function of redshift.

  12. An algorithm for control system design via parameter optimization. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Sinha, P. K.

    1972-01-01

An algorithm for design via parameter optimization has been developed for linear time-invariant control systems based on the model reference adaptive control concept. A cost functional is defined to evaluate the system response relative to the nominal response; in general it involves the error between the system and nominal responses, its derivatives, and the control signals. A program for the practical implementation of this algorithm has been developed, with the computational scheme for the evaluation of the performance index based on Lyapunov's theorem for the stability of linear invariant systems.
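A cost functional of the kind described, penalizing the error relative to the nominal response, its derivative, and the control signal, can be approximated discretely; the weights and sampled signals below are hypothetical, not taken from the thesis:

```python
# Discrete approximation of a quadratic cost functional of the form
# J = integral( q*e(t)^2 + q_d*e_dot(t)^2 + r*u(t)^2 ) dt,
# where e is the error relative to the nominal response, e_dot its
# derivative, and u the control signal. Weights q, q_d, r are illustrative.

def cost_functional(error, error_dot, control, dt, q=1.0, q_d=0.1, r=0.01):
    """Rectangle-rule approximation of the integral cost over sampled signals."""
    return dt * sum(q * e * e + q_d * ed * ed + r * u * u
                    for e, ed, u in zip(error, error_dot, control))

# A decaying error trajectory with its derivative and control effort.
e = [0.5, 0.25, 0.1, 0.0]
e_dot = [-0.5, -0.3, -0.1, 0.0]
u = [2.0, 1.0, 0.5, 0.0]
print(cost_functional(e, e_dot, u, dt=0.1))
```

A parameter-optimization loop would evaluate this index for each candidate design vector and step in the direction that reduces it.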

  13. Compact, Robust Chips Integrate Optical Functions

    NASA Technical Reports Server (NTRS)

    2010-01-01

    Located in Bozeman, Montana, AdvR Inc. has been an active partner in NASA's Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) programs. Langley Research Center engineers partnered with AdvR through the SBIR program to develop new, compact, lightweight electro-optic components for remote sensing systems. While the primary customer for this technology will be NASA, AdvR foresees additional uses for its NASA-derived circuit chip in the fields of academic and industrial research anywhere that compact, low-cost, stabilized single-frequency lasers are needed.

  14. A Nonlinear Fuel Optimal Reaction Jet Control Law

    DTIC Science & Technology

    2002-07-29

derive a nonlinear fuel optimal attitude control system (ACS) that drives the final state to the desired state according to a cost function that...αroll = 0.22 rad/s^2 and αyaw = 0.20 rad/s^2. The rigid-body dynamics and attitude kinematics of Eq. (9) read ω̇ = I⁻¹(τ − ω × Iω), Q̇ = −(1/2)ΩQ, where Q is the attitude quaternion...from Table-1 regarding the relative performance of the nonlinear controller with a conventional PID controller (used in this paper as a benchmark for

  15. Antivenoms for snakebite: design, function, and controversies.

    PubMed

    Lavonas, Eric J

    2012-08-01

Animal-derived antivenoms have been used to treat snake envenomation for more than 100 years. Major technological advances in the past 30 years have produced antivenoms that are highly purified and chemically modified to reduce the risk of acute hypersensitivity reactions. Like all pharmaceutical manufacture, commercial-scale antivenom production requires making trade-offs between cost, purity, pharmacokinetic profile, and production yield. This article reviews the current state of the art for antivenom production and development. Particular attention is paid to controversies and the trade-offs used to achieve a balance between improved safety and pharmacokinetic performance.

  16. Common modular avionics - Partitioning and design philosophy

    NASA Astrophysics Data System (ADS)

    Scott, D. M.; Mulvaney, S. P.

    The design objectives and definition criteria for common modular hardware that will perform digital processing functions in multiple avionic subsystems are examined. In particular, attention is given to weapon system-level objectives, such as increased supportability, reduced life cycle costs, and increased upgradability. These objectives dictate the following overall modular design goals: reduce test equipment requirements; have a large number of subsystem applications; design for architectural growth; and standardize for technology transparent implementations. Finally, specific partitioning criteria are derived on the basis of the weapon system-level objectives and overall design goals.

  17. Simulation and evaluation of latent heat thermal energy storage

    NASA Technical Reports Server (NTRS)

    Sigmon, T. W.

    1980-01-01

The relative value of thermal energy storage (TES) for heat pump storage (heating and cooling) was derived as a function of storage temperature, mode of storage (hot-side or cold-side), geographic location, and utility time-of-use rate structures. Computer models used to simulate the performance of a number of TES/heat pump configurations are described. The models are based on existing performance data of heat pump components, available building thermal load computational procedures, and generalized TES subsystem design. Life cycle costs computed for each site, configuration, and rate structure are discussed.

  18. Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies. Appendix C: EOS program requirements document

    NASA Technical Reports Server (NTRS)

    1974-01-01

    An analysis of the requirements for the Earth Observatory Satellite (EOS) system specifications is presented. The analysis consists of requirements obtained from existing documentation and those derived from functional analysis. The requirements follow the hierarchy of program, mission, system, and subsystem. The code for designating specific requirements is explained. Among the subjects considered are the following: (1) the traffic model, (2) space shuttle related performance, (3) booster related performance, (4) the data collection system, (5) spacecraft structural tests, and (6) the ground support requirements.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun, Se -Ran; Hauser, Loren John; Schadt, Christopher Warren

For decades there has been increasing interest in understanding the relationships between microbial communities and ecosystem functions. Current DNA sequencing technologies allow for the exploration of microbial communities in two principal ways: targeted rRNA gene surveys and shotgun metagenomics. For large study designs, it is often still prohibitively expensive to sequence metagenomes at both the breadth and depth necessary to statistically capture the true functional diversity of a community. Although rRNA gene surveys provide no direct evidence of function, they do provide a reasonable estimation of microbial diversity, while being a very cost-effective way to screen samples of interest for later shotgun metagenomic analyses. However, a great deal of 16S rRNA gene survey data is currently available from diverse environments, and thus there is a need for tools that infer the functional composition of environmental samples from 16S rRNA gene survey data. As a result, we present a computational method called pangenome-based functional profiles (PanFP), which infers functional profiles of microbial communities from 16S rRNA gene survey data for Bacteria and Archaea. PanFP is based on pangenome reconstruction of a 16S rRNA gene operational taxonomic unit (OTU) from known genes and genomes pooled from the OTU's taxonomic lineage. From this pangenome, we derive an OTU functional profile by weighting the pangenome's functional profile with the OTU's abundance observed in a given sample. We validated our method by comparing PanFP to the functional profiles obtained from direct shotgun metagenomic measurement of 65 diverse communities via Spearman correlation coefficients. These correlations improved with increasing sequencing depth, reaching 0.8 to 0.9 for the most deeply sequenced Human Microbiome Project mock community samples. PanFP is very similar in performance to another recently released tool, PICRUSt, for almost all of the survey data analysed here. However, our method is unique in that any OTU-building method can be used, as opposed to being limited to closed-reference OTU-picking strategies against specific reference sequence databases. In conclusion, we developed an automated computational method that derives an inferred functional profile from the 16S rRNA gene surveys of microbial communities. The inferred functional profile provides a cost-effective way to study complex ecosystems through predicted comparative functional metagenomes and metadata analysis. All PanFP source code and additional documentation are freely available online at GitHub.
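The abundance-weighting step can be sketched in a few lines; a minimal illustration with made-up abundances and gene-family frequencies (not PanFP's actual data structures):

```python
import numpy as np

# Hypothetical inputs (illustrative values, not PanFP's data structures):
# an OTU abundance vector for one sample and, per OTU, a pangenome
# functional profile: relative frequencies of three gene families pooled
# from the OTU's taxonomic lineage.
otu_abundance = np.array([120.0, 30.0, 50.0])        # e.g. read counts
pangenome_profiles = np.array([
    [0.50, 0.30, 0.20],                              # OTU 1: families A, B, C
    [0.10, 0.60, 0.30],                              # OTU 2
    [0.25, 0.25, 0.50],                              # OTU 3
])

# Weight each pangenome profile by its OTU's abundance, sum over OTUs,
# and renormalize to get the inferred sample-level functional profile.
sample_profile = (otu_abundance[:, None] * pangenome_profiles).sum(axis=0)
sample_profile /= sample_profile.sum()
```

Comparing such inferred profiles against shotgun-derived profiles with a Spearman correlation, as in the validation above, is then a one-liner with `scipy.stats.spearmanr`.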

  20. The two-sample problem with induced dependent censorship.

    PubMed

    Huang, Y

    1999-12-01

    Induced dependent censorship is a general phenomenon in health service evaluation studies in which a measure such as quality-adjusted survival time or lifetime medical cost is of interest. We investigate the two-sample problem and propose two classes of nonparametric tests. Based on consistent estimation of the survival function for each sample, the two classes of test statistics examine the cumulative weighted difference in hazard functions and in survival functions. We derive a unified asymptotic null distribution theory and inference procedure. The tests are applied to trial V of the International Breast Cancer Study Group and show that long duration chemotherapy significantly improves time without symptoms of disease and toxicity of treatment as compared with the short duration treatment. Simulation studies demonstrate that the proposed tests, with a wide range of weight choices, perform well under moderate sample sizes.
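The shape of the second class of statistics, a cumulative weighted difference of survival functions, can be sketched with standard Kaplan-Meier estimates on simulated data. Note that this sketch uses ordinary random censoring; the paper's contribution is precisely the consistent estimation required under induced dependent censorship, which is not implemented here:

```python
import numpy as np

def km_survival(times, events, grid):
    """Kaplan-Meier survival estimate, evaluated on a common time grid."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    n = len(t)
    surv = 1.0
    step_t, step_s = [0.0], [1.0]
    for i in range(n):
        if d[i]:                           # event (not censored)
            surv *= 1.0 - 1.0 / (n - i)    # n - i subjects still at risk
            step_t.append(float(t[i]))
            step_s.append(surv)
    idx = np.searchsorted(step_t, grid, side="right") - 1
    return np.asarray(step_s)[idx]         # right-continuous step function

rng = np.random.default_rng(0)
t1, t2 = rng.exponential(2.0, 200), rng.exponential(3.0, 200)  # true survival times
c1, c2 = rng.exponential(5.0, 200), rng.exponential(5.0, 200)  # censoring times
x1, e1 = np.minimum(t1, c1), t1 <= c1
x2, e2 = np.minimum(t2, c2), t2 <= c2

grid = np.linspace(0.0, 4.0, 81)
s1, s2 = km_survival(x1, e1, grid), km_survival(x2, e2, grid)
weight = np.ones_like(grid)                # uniform weight; other choices possible
diff = weight * (s2 - s1)
stat = float(np.sum(diff[:-1] * np.diff(grid)))  # cumulative weighted difference
```

A large positive `stat` indicates the second sample survives longer under this weight choice; the paper derives the null distribution needed to turn such statistics into formal tests.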

  1. Global cost of correcting vision impairment from uncorrected refractive error.

    PubMed

    Fricke, T R; Holden, B A; Wilson, D A; Schlenther, G; Naidoo, K S; Resnikoff, S; Frick, K D

    2012-10-01

    To estimate the global cost of establishing and operating the educational and refractive care facilities required to provide care to all individuals who currently have vision impairment resulting from uncorrected refractive error (URE). The global cost of correcting URE was estimated using data on the population, the prevalence of URE and the number of existing refractive care practitioners in individual countries, the cost of establishing and operating educational programmes for practitioners and the cost of establishing and operating refractive care facilities. The assumptions made ensured that costs were not underestimated and an upper limit to the costs was derived using the most expensive extreme for each assumption. There were an estimated 158 million cases of distance vision impairment and 544 million cases of near vision impairment caused by URE worldwide in 2007. Approximately 47 000 additional full-time functional clinical refractionists and 18 000 ophthalmic dispensers would be required to provide refractive care services for these individuals. The global cost of educating the additional personnel and of establishing, maintaining and operating the refractive care facilities needed was estimated to be around 20 000 million United States dollars (US$) and the upper-limit cost was US$ 28 000 million. The estimated loss in global gross domestic product due to distance vision impairment caused by URE was US$ 202 000 million annually. The cost of establishing and operating the educational and refractive care facilities required to deal with vision impairment resulting from URE was a small proportion of the global loss in productivity associated with that vision impairment.
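The upper-limit logic, taking the most expensive extreme of every assumption simultaneously, can be sketched as follows; the component split below is purely illustrative (invented only to echo the reported order-of-magnitude totals), not the study's actual breakdown:

```python
# Illustrative component split (invented; chosen only to echo the reported
# ~US$ 20,000M central and US$ 28,000M upper-limit totals).  Each entry is
# (central estimate, most expensive extreme), in millions of US$.
components = {
    "educate_refractionists": (6_000, 8_500),
    "educate_dispensers":     (2_000, 3_000),
    "establish_facilities":   (7_000, 9_500),
    "operate_facilities":     (5_000, 7_000),
}

central_total = sum(lo for lo, hi in components.values())
# Upper limit: take the expensive extreme of every assumption at once.
upper_limit = sum(hi for lo, hi in components.values())
```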

  2. Connecting source aggregating areas with distributive regions via Optimal Transportation theory.

    NASA Astrophysics Data System (ADS)

    Lanzoni, S.; Putti, M.

    2016-12-01

We study the application of Optimal Transport (OT) theory to the transfer of water and sediments from a distributed aggregating source to a distributing area connected by an erodible hillslope. Starting from the Monge-Kantorovich equations, we derive a global energy functional that nonlinearly combines the cost of constructing the drainage network over the entire domain and the cost of transporting water and sediment through the network. It can be shown that the minimization of this functional is equivalent to the infinite-time solution of a system of diffusion partial differential equations coupled with transient ordinary differential equations, which closely resemble the classical conservation laws for water and sediment mass and momentum. We present several numerical simulations applied to realistic test cases. For example, the solution of the proposed model forms network configurations that share strong similarities with rill channels formed on a hillslope. At a larger scale, we obtain promising results in simulating the network patterns that ensure a progressive and continuous transition from a drainage area to a distributive receiving region.
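A sketch of such a functional, in the form common in the dynamic Monge-Kantorovich literature (the paper's exact functional may differ), combines a transportation term with a construction term subject to a mass-balance constraint:

```latex
\mathcal{E}(\mu) \;=\;
\underbrace{\frac{1}{2}\int_{\Omega} \mu\,\lvert\nabla u(\mu)\rvert^{2}\,dx}_{\text{transportation cost}}
\;+\;
\underbrace{\frac{1}{2}\int_{\Omega} \frac{\mu^{2-\beta}}{2-\beta}\,dx}_{\text{network construction cost}},
\qquad
-\nabla\cdot\bigl(\mu\,\nabla u\bigr) = f ,
```

where $\mu$ is the transport density (the network "conductivity"), $u$ the associated potential, $f$ the source/sink distribution of water and sediment, and $0<\beta<2$ controls the degree of branching; gradient-flow minimization of $\mathcal{E}$ drives $\mu$ toward channelized configurations.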

  3. Automated riverine landscape characterization: GIS-based tools for watershed-scale research, assessment, and management.

    PubMed

    Williams, Bradley S; D'Amico, Ellen; Kastens, Jude H; Thorp, James H; Flotemersch, Joseph E; Thoms, Martin C

    2013-09-01

    River systems consist of hydrogeomorphic patches (HPs) that emerge at multiple spatiotemporal scales. Functional process zones (FPZs) are HPs that exist at the river valley scale and are important strata for framing whole-watershed research questions and management plans. Hierarchical classification procedures aid in HP identification by grouping sections of river based on their hydrogeomorphic character; however, collecting data required for such procedures with field-based methods is often impractical. We developed a set of GIS-based tools that facilitate rapid, low cost riverine landscape characterization and FPZ classification. Our tools, termed RESonate, consist of a custom toolbox designed for ESRI ArcGIS®. RESonate automatically extracts 13 hydrogeomorphic variables from readily available geospatial datasets and datasets derived from modeling procedures. An advanced 2D flood model, FLDPLN, designed for MATLAB® is used to determine valley morphology by systematically flooding river networks. When used in conjunction with other modeling procedures, RESonate and FLDPLN can assess the character of large river networks quickly and at very low costs. Here we describe tool and model functions in addition to their benefits, limitations, and applications.

  4. Optimization and surgical design for applications in pediatric cardiology

    NASA Astrophysics Data System (ADS)

    Marsden, Alison; Bernstein, Adam; Taylor, Charles; Feinstein, Jeffrey

    2007-11-01

    The coupling of shape optimization to cardiovascular blood flow simulations has potential to improve the design of current surgeries and to eventually allow for optimization of surgical designs for individual patients. This is particularly true in pediatric cardiology, where geometries vary dramatically between patients, and unusual geometries can lead to unfavorable hemodynamic conditions. Interfacing shape optimization to three-dimensional, time-dependent fluid mechanics problems is particularly challenging because of the large computational cost and the difficulty in computing objective function gradients. In this work a derivative-free optimization algorithm is coupled to a three-dimensional Navier-Stokes solver that has been tailored for cardiovascular applications. The optimization code employs mesh adaptive direct search in conjunction with a Kriging surrogate. This framework is successfully demonstrated on several geometries representative of cardiovascular surgical applications. We will discuss issues of cost function choice for surgical applications, including energy loss and wall shear stress distribution. In particular, we will discuss the creation of new designs for the Fontan procedure, a surgery done in pediatric cardiology to treat single ventricle heart defects.
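A minimal sketch of the direct-search idea (coordinate polling with mesh shrinking on failure) on a cheap stand-in objective; the study's actual framework uses mesh adaptive direct search ordered by a Kriging surrogate, with a Navier-Stokes solver as the objective:

```python
import numpy as np

def objective(x):
    # Cheap stand-in for an expensive CFD cost function (e.g. energy loss).
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10_000):
    """Poll +/- each coordinate; keep improvements, halve the step on failure.
    Mesh adaptive direct search generalizes the poll set and typically orders
    polls with a Kriging surrogate; that machinery is omitted here."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step <= tol:
            break
        improved = False
        for i in range(x.size):
            for s in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += s * step
                ft = f(trial)
                if ft < fx:                 # opportunistic acceptance
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5
    return x, fx

x_best, f_best = compass_search(objective, [0.0, 0.0])
```

No gradients of `objective` are ever evaluated, which is exactly what makes such methods attractive when objective-function gradients are hard to compute.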

  5. Single-domain antibody bioconjugated near-IR quantum dots for targeted cellular imaging of pancreatic cancer.

    PubMed

    Zaman, Md Badruz; Baral, Toya Nath; Jakubek, Zygmunt J; Zhang, Jianbing; Wu, Xiaohua; Lai, Edward; Whitfield, Dennis; Yu, Kui

    2011-05-01

Successful targeted imaging of BxPC3 human pancreatic cancer cells is feasible with near-IR CdTeSe/CdS quantum dots (QDs) functionalized with the single-domain antibody (sdAb) 2A3. For specific targeting, sdAbs are superior to conventional antibodies, especially in terms of stability, aggregation, and production cost. The bright CdTeSe/CdS QDs were synthesized to emit in the diagnostic window of 650-900 nm with a narrow emission band. 2A3, derived from llama, is small (13 kDa) yet retains fully functional recognition of its target, carcinoembryonic antigen-related cell adhesion molecule 6 (CEACAM6), a possible biomarker and therapeutic target of pancreatic cancer. Among the various imaging modalities, optical imaging may be the most practical in terms of sensitivity and cost. This first report on sdAb-conjugated near-IR QDs with high signal-to-background sensitivity for targeted cellular imaging brings insights into the development of optical molecular imaging for early-stage cancer diagnosis.

  6. Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data

    NASA Technical Reports Server (NTRS)

    Iliff, K. W.; Maine, R. E.

    1985-01-01

    Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
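A toy version of the output-error idea: with Gaussian measurement noise of fixed variance, maximizing the likelihood reduces to minimizing a sum-of-squares cost over the unknown derivative, and scanning the cost makes the minimum visible. The scalar model below is invented for illustration, not an aircraft model:

```python
import numpy as np

# Scalar output-error sketch: estimate the "derivative" a in
# x[k+1] = x[k] + dt * (a * x[k] + u[k]) from noisy measurements z.
# With fixed-variance Gaussian noise, maximum likelihood reduces to
# minimizing the sum of squared output residuals J(a).
rng = np.random.default_rng(1)
dt, n, a_true = 0.05, 400, -1.5
u = np.sin(0.5 * dt * np.arange(n))        # known input history

def simulate(a):
    x = np.zeros(n)
    for k in range(n - 1):
        x[k + 1] = x[k] + dt * (a * x[k] + u[k])
    return x

z = simulate(a_true) + 0.01 * rng.standard_normal(n)   # measurements

def cost(a):
    r = z - simulate(a)
    return 0.5 * float(r @ r)

a_grid = np.linspace(-3.0, 0.0, 301)
J = np.array([cost(a) for a in a_grid])    # bowl-shaped cost surface
a_hat = a_grid[np.argmin(J)]               # minimizing estimate, near a_true
```

Plotting `J` against `a_grid` gives exactly the kind of graphic cost-function representation the report uses to illustrate the minimization process.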

  7. The role of retinal bipolar cell in early vision: an implication with analogue networks and regularization theory.

    PubMed

    Yagi, T; Ohshima, S; Funahashi, Y

    1997-09-01

    A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.
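In the standard regularization framework the abstract refers to, a cost of the following general form is minimized (a sketch; the paper's functional additionally encodes membrane conductances, gap-junctional conductances, and synaptic strengths):

```latex
E(f) \;=\; \iint \bigl(f(x,y) - d(x,y)\bigr)^{2}\,dx\,dy
\;+\; \lambda \iint \lvert \nabla f \rvert^{2}\,dx\,dy ,
```

where $d$ is the input (cone) image, $f$ the reconstructed signal, and $\lambda$ the regularization parameter. The Euler-Lagrange condition $f - \lambda\nabla^{2} f = d$ makes the minimizer a low-pass-filtered version of the input, $\hat{F}(\omega) = \hat{D}(\omega)/(1+\lambda\lvert\omega\rvert^{2})$ in the Fourier domain; center-surround receptive fields such as the bipolar cell's implement band-pass, Laplacian-of-Gaussian-like variants of this filtering.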

  8. Game-theoretic strategies for asymmetric networked systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

We consider an infrastructure consisting of a network of systems, each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function, which specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions, which provide expressions for individual system survival probabilities and also the expected capacity, specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.
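To illustrate the sum-form structure (a survival-probability term minus a cost term), here is a toy two-player game with a contest-style survival probability, solved by discrete best-response iteration; the functional forms and cost values are invented for illustration, not the paper's model:

```python
import numpy as np

# Toy sum-form game (functional forms invented for illustration): a provider
# reinforces x components, an attacker targets y, and the system survives
# with contest-style probability x / (x + y).  Each utility is a survival
# probability term minus a linear cost term; equilibrium via best responses.
c_d, c_a = 0.02, 0.03                 # provider / attacker unit costs
levels = np.arange(1, 201)            # feasible discrete effort levels

def u_provider(x, y):
    return x / (x + y) - c_d * x

def u_attacker(x, y):
    return y / (x + y) - c_a * y

x, y = 10, 10
for _ in range(100):                  # alternating discrete best responses
    x = int(levels[np.argmax(u_provider(levels, y))])
    y = int(levels[np.argmax(u_attacker(x, levels))])
# (x, y) settles at the Nash equilibrium of this toy game
```

For these costs the closed-form Tullock-contest equilibrium is x = c_a/(c_d + c_a)^2 = 12 and y = c_d/(c_d + c_a)^2 = 8, which the iteration reproduces.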

  9. How Can It Cost That Much? A Three-Year Study of Proposal Production Costs.

    ERIC Educational Resources Information Center

    Wiese, W. C.; Bowden, C. Mal

    1997-01-01

    Examines significant new business proposal efforts for United States Department of Defense contracts. Identifies six "pillars" of a contractor's proposal preparation costs. Derives a formula that characterizes proposal preparation costs. Demonstrates that a quick, accurate cost model can be developed for proposal publishing. (RS)

  10. Nursing cost by DRG: nursing intensity weights.

    PubMed

    Knauf, Robert A; Ballard, Karen; Mossman, Philip N; Lichtig, Leo K

    2006-11-01

    Although diagnosis-related group (DRG) reimbursement is used for Medicare and many other payors, nursing--the largest portion of hospital costs--is not specifically identified and quantified in deriving payments in any of the DRG reimbursement systems except that of New York State. In New York, nursing costs are allocated to each DRG in payment rate formulation by means of nursing intensity weights (NIWs)--relative values reflecting the quantity and types of nursing services provided to patients in each DRG. In the absence of charges for nursing services, these NIWs are derived from scores for each DRG provided by a representative panel of nurses through a modified Delphi technique. NIWs have been shown to correlate with hospitals' nursing costs per day. They are used to set cost-based payment weights, thereby avoiding compression caused by using flat cost-to-charge or cost-per-day averages for all acute and intensive care patients.

  11. Electron Correlation from the Adiabatic Connection for Multireference Wave Functions

    NASA Astrophysics Data System (ADS)

    Pernal, Katarzyna

    2018-01-01

An adiabatic connection (AC) formula for the electron correlation energy is derived for a broad class of multireference wave functions. The AC expression recovers dynamic correlation energy and assures a balanced treatment of the correlation energy. Coupling the AC formalism with the extended random phase approximation allows one to find the correlation energy only from reference one- and two-electron reduced density matrices. If the generalized valence bond perfect pairing model is employed, a simple closed-form expression for the approximate AC formula is obtained. This results in an overall M^5 scaling of the computational cost, making the method one of the most efficient multireference approaches accounting for dynamic electron correlation, also for strongly correlated systems.

  12. Development and evaluation of the impulse transfer function technique

    NASA Technical Reports Server (NTRS)

    Mantus, M.

    1972-01-01

    The development of the test/analysis technique known as the impulse transfer function (ITF) method is discussed. This technique, when implemented with proper data processing systems, should become a valuable supplement to conventional dynamic testing and analysis procedures that will be used in the space shuttle development program. The method can relieve many of the problems associated with extensive and costly testing of the shuttle for transient loading conditions. In addition, the time history information derived from impulse testing has the potential for being used to determine modal data for the structure under investigation. The technique could be very useful in determining the time-varying modal characteristics of structures subjected to thermal transients, where conventional mode surveys are difficult to perform.

  13. Advanced Launch Technology Life Cycle Analysis Using the Architectural Comparison Tool (ACT)

    NASA Technical Reports Server (NTRS)

    McCleskey, Carey M.

    2015-01-01

    Life cycle technology impact comparisons for nanolauncher technology concepts were performed using an Affordability Comparison Tool (ACT) prototype. Examined are cost drivers and whether technology investments can dramatically affect the life cycle characteristics. Primary among the selected applications was the prospect of improving nanolauncher systems. As a result, findings and conclusions are documented for ways of creating more productive and affordable nanolauncher systems; e.g., an Express Lane-Flex Lane concept is forwarded, and the beneficial effect of incorporating advanced integrated avionics is explored. Also, a Functional Systems Breakdown Structure (F-SBS) was developed to derive consistent definitions of the flight and ground systems for both system performance and life cycle analysis. Further, a comprehensive catalog of ground segment functions was created.

  14. Optimization and evaluation of a proportional derivative controller for planar arm movement.

    PubMed

    Jagodnik, Kathleen M; van den Bogert, Antonie J

    2010-04-19

    In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. Copyright 2009 Elsevier Ltd. All rights reserved.
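The weighted-cost tuning idea can be sketched on a 1-DOF surrogate for the arm; the plant parameters, candidate gains, and weights below are invented for illustration, not the paper's six-muscle musculoskeletal model:

```python
import numpy as np

def simulate(kp, kd, target=1.0, dt=0.005, t_end=2.0, inertia=0.1, damping=0.05):
    """PD control of a damped 1-DOF joint; returns integrated squared
    position error and integrated squared control effort."""
    theta, omega = 0.0, 0.0
    err_cost, effort_cost = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        u = kp * (target - theta) - kd * omega         # PD feedback torque
        omega += dt * (u - damping * omega) / inertia  # semi-implicit Euler
        theta += dt * omega
        err_cost += dt * (target - theta) ** 2
        effort_cost += dt * u ** 2
    return err_cost, effort_cost

# Weighted sum of position error and control effort, analogous to the
# paper's weighting of position errors against muscle forces.
w_err, w_effort = 1.0, 1e-3                            # weights (invented)
gains = [(kp, kd) for kp in (1.0, 5.0, 20.0) for kd in (0.1, 0.5, 2.0)]

def weighted_cost(g):
    e, f = simulate(*g)
    return w_err * e + w_effort * f

best_gains = min(gains, key=weighted_cost)
```

Setting `w_effort` too low rewards aggressive, oscillation-prone gains, which mirrors the improper-tuning behavior reported above.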

  15. Optimization and evaluation of a proportional derivative controller for planar arm movement

    PubMed Central

    Jagodnik, Kathleen M.; van den Bogert, Antonie J.

    2013-01-01

    In most clinical applications of functional electrical stimulation (FES), the timing and amplitude of electrical stimuli have been controlled by open-loop pattern generators. The control of upper extremity reaching movements, however, will require feedback control to achieve the required precision. Here we present three controllers using proportional derivative (PD) feedback to stimulate six arm muscles, using two joint angle sensors. Controllers were first optimized and then evaluated on a computational arm model that includes musculoskeletal dynamics. Feedback gains were optimized by minimizing a weighted sum of position errors and muscle forces. Generalizability of the controllers was evaluated by performing movements for which the controller was not optimized, and robustness was tested via model simulations with randomly weakened muscles. Robustness was further evaluated by adding joint friction and doubling the arm mass. After optimization with a properly weighted cost function, all PD controllers performed fast, accurate, and robust reaching movements in simulation. Oscillatory behavior was seen after improper tuning. Performance improved slightly as the complexity of the feedback gain matrix increased. PMID:20097345

  16. An Approach to the Derivation of the Cost of UK Vehicle Crash Injuries

    PubMed Central

    Morris, Andrew; Welsh, Ruth; Barnes, Jo; Chambers-Smith, Dawn

    2006-01-01

    An approach to costing of road crash injury has been developed using data from a ‘Willingness-to-pay’ survey mapped to injuries listed in the Abbreviated Injury Scale 1998 Revision. The costs derived have been applied to a database of real-world crash injuries that have been collected as part of the UK Cooperative Crash Injury Study (CCIS). The approach has been developed in order to determine future research priorities in vehicle passive safety. When all injuries in all crash-types are examined, the results highlight the cost of ‘Whiplash’ in the UK. When more serious injuries are considered, specifically those at AIS 2+, the cost of head injuries becomes evident in both frontal and side impacts. PMID:16968643

  17. 45 CFR 1630.12 - Applicability to derivative income.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 4 2011-10-01 2011-10-01 false Applicability to derivative income. 1630.12... CORPORATION COST STANDARDS AND PROCEDURES § 1630.12 Applicability to derivative income. (a) Derivative income... activity. (b) Derivative income which is allocated to the LSC fund in accordance with paragraph (a) of this...

  18. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow and adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, the efficacy of the inverse-design tool for the inviscid adjoint equations is verified, and possible numerical implementation pitfalls are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The results of inverse design using the inviscid adjoint equations are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer.
The relationship between limiting step-size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate for the optimization in viscous problems, at a negligible increase in computational cost, but is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, also a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.

  19. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1987-01-01

    The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  20. Approximating the linear quadratic optimal control law for hereditary systems with delays in the control

    NASA Technical Reports Server (NTRS)

    Milman, Mark H.

    1988-01-01

The fundamental control synthesis issue of establishing a priori convergence rates of approximation schemes for feedback controllers for a class of distributed parameter systems is addressed within the context of hereditary systems. Specifically, a factorization approach is presented for deriving approximations to the optimal feedback gains for the linear regulator-quadratic cost problem associated with time-varying functional differential equations with control delays. The approach is based on a discretization of the state penalty which leads to a simple structure for the feedback control law. General properties of the Volterra factors of Hilbert-Schmidt operators are then used to obtain convergence results for the controls, trajectories and feedback kernels. Two algorithms are derived from the basic approximation scheme, including a fast algorithm, in the time-invariant case. A numerical example is also considered.

  1. Global mortality consequences of climate change accounting for adaptation costs and benefits

    NASA Astrophysics Data System (ADS)

    Rising, J. A.; Jina, A.; Carleton, T.; Hsiang, S. M.; Greenstone, M.

    2017-12-01

Empirically-based and plausibly causal estimates of the damages of climate change are greatly needed to inform rapidly developing global and local climate policies. To accurately reflect the costs of climate change, it is essential to estimate how much populations will adapt to a changing climate, yet adaptation remains one of the least understood aspects of social responses to climate. In this paper, we develop and implement a novel methodology to estimate climate impacts on mortality rates. We assemble comprehensive sub-national panel data in 41 countries that account for 56% of the world's population, and combine them with high resolution daily climate data to flexibly estimate the causal effect of temperature on mortality. We find the impacts of temperature on mortality have a U-shaped response; both hot days and cold days cause excess mortality. However, this average response obscures substantial heterogeneity, as populations are differentially adapted to extreme temperatures. Our empirical model allows us to extrapolate response functions across the entire globe, as well as across time, using a range of economic, population, and climate change scenarios. We also develop a methodology to capture not only the benefits of adaptation, but also its costs. We combine these innovations to produce the first causal, micro-founded, global, empirically-derived climate damage function for human health. We project that by 2100, business-as-usual climate change is likely to incur mortality-only costs that amount to approximately 5% of global GDP for 5°C of warming above pre-industrial levels. On average across model runs, we estimate that the upper bound on adaptation costs amounts to 55% of the total damages.
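The U-shaped response can be illustrated by fitting a quadratic mortality-temperature curve to synthetic daily data (illustrative numbers only, not the study's panel data):

```python
import numpy as np

# Synthetic daily data (illustrative only): mortality rises quadratically
# away from an optimum temperature, plus noise -- a U-shaped response.
rng = np.random.default_rng(2)
temp = rng.uniform(-10.0, 40.0, 2000)                 # daily mean temp, deg C
mort = 5.0 + 0.004 * (temp - 18.0) ** 2 + rng.normal(0.0, 0.3, 2000)

# Ordinary least squares fit of mort ~ b0 + b1*T + b2*T^2.
X = np.column_stack([np.ones_like(temp), temp, temp ** 2])
beta, *_ = np.linalg.lstsq(X, mort, rcond=None)
t_min = -beta[1] / (2.0 * beta[2])    # temperature of minimum mortality
# beta[2] > 0 confirms the U shape: both tails exceed the minimum.
```

In the study itself the response is estimated flexibly rather than quadratically, and the minimum-mortality temperature varies with adaptation; this sketch only shows the basic fitting logic.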

  2. 76 FR 60357 - Federal Regulations; OMB Circulars, OFPP Policy Letters, and CASB Cost Accounting Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-29

    ... derived from 41 U.S.C. 1501. Cost Accounting Standards are rules governing the measurement, assignment... Circulars, OFPP Policy Letters, and CASB Cost Accounting Standards Included in the Semiannual Agenda of..., and Cost Accounting Standards Board (CASB) Cost Accounting Standards. OMB Circulars and OFPP Policy...

  3. Cost effectiveness of meniscal allograft for torn discoid lateral meniscus in young women.

    PubMed

    Ramme, Austin J; Strauss, Eric J; Jazrawi, Laith; Gold, Heather T

    2016-09-01

    A discoid meniscus is more prone to tears than a normal meniscus. Patients with a torn discoid lateral meniscus are at increased risk for early onset osteoarthritis requiring total knee arthroplasty (TKA). Optimal management for this condition is controversial given the up-front cost difference between the two treatment options: the more expensive meniscal allograft transplantation compared with standard partial meniscectomy. We hypothesize that meniscal allograft transplantation following excision of a torn discoid lateral meniscus is more cost-effective compared with partial meniscectomy alone because allografts will extend the time to TKA. A decision analytic Markov model was created to compare the cost effectiveness of two treatments for symptomatic, torn discoid lateral meniscus: meniscal allograft and partial meniscectomy. Probability estimates and event rates were derived from the scientific literature, and costs and benefits were discounted by 3%. One-way sensitivity analyses were performed to test model robustness. Over 25 years, the partial meniscectomy strategy cost $10,430, whereas meniscal allograft cost on average $4040 more, at $14,470. Partial meniscectomy postponed TKA an average of 12.5 years, compared with 17.30 years for meniscal allograft, an increase of 4.8 years. Allograft cost $842 per-year-gained in time to TKA. Meniscal allografts have been shown to reduce pain and improve function in patients with discoid lateral meniscus tears. Though more costly, meniscal allografts may be more effective than partial meniscectomy in delaying TKA in this model. Additional future long term clinical studies will provide more insight into optimal surgical options.
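The headline cost-effectiveness ratio follows directly from the point estimates quoted above:

```python
# Point estimates quoted in the abstract (US$, 25-year horizon; the model
# discounts costs and benefits by 3%).
cost_meniscectomy, cost_allograft = 10_430.0, 14_470.0
years_to_tka_meniscectomy, years_to_tka_allograft = 12.5, 17.3

incremental_cost = cost_allograft - cost_meniscectomy              # 4,040
years_gained = years_to_tka_allograft - years_to_tka_meniscectomy  # 4.8
cost_per_year_gained = incremental_cost / years_gained             # ~842 $/yr
```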

  4. Direct Costs of Very Old Persons with Subsyndromal Depression: A 5-Year Prospective Study.

    PubMed

    Ludvigsson, Mikael; Bernfort, Lars; Marcusson, Jan; Wressle, Ewa; Milberg, Anna

    2018-03-15

This study aimed to compare, over a 5-year period, the prospective direct healthcare costs and service utilization of persons with subsyndromal depression (SSD) and non-depressive persons (ND), in a population of very old persons. A second aim was to develop a model that predicts direct healthcare costs in very old persons with SSD. A prospective population-based study was undertaken on 85-year-old persons in Sweden. Depressiveness was screened with the Geriatric Depression Scale at baseline and at 1-year follow-up, and the results were classified into ND, SSD, and syndromal depression. Data on individual healthcare costs and service use from a 5-year period were derived from national database registers. Direct costs were compared between categories using Mann-Whitney U tests, and a prediction model was identified with linear regression. For persons with SSD, the direct healthcare costs per month of survival exceeded those of persons with ND by a ratio of 1.45 (€634 versus €436), a difference that was significant even after controlling for somatic multimorbidity. The final regression model consisted of five independent variables predicting direct healthcare costs: male sex, activities of daily living functions, loneliness, presence of SSD, and somatic multimorbidity. SSD among very old persons is associated with increased direct healthcare costs independently of somatic multimorbidity. The associations between SSD, somatic multimorbidity, and healthcare costs in the very old need to be analyzed further in order to better guide allocation of resources in health policy. Copyright © 2018 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  5. Cost effectiveness of a smoking cessation program in patients admitted for coronary heart disease.

    PubMed

    Quist-Paulsen, Petter; Lydersen, Stian; Bakke, Per S; Gallefoss, Frode

    2006-04-01

Smoking cessation is probably the most important action to reduce mortality after a coronary event. Smoking cessation programs are not widely implemented in patients with coronary heart disease, however, possibly because they are thought not to be worth their costs. Our objectives were to estimate the cost effectiveness of a smoking cessation program, and to compare it with other treatment modalities in cardiovascular medicine. A cost-effectiveness analysis was performed on the basis of a recently conducted randomized smoking cessation intervention trial in patients admitted for coronary heart disease. The cost per life-year gained by the smoking cessation program was derived from the resources necessary to implement the program, the number needed to treat to get one additional quitter from the program, and the years of life gained by quitting smoking. The cost effectiveness was estimated in a low-risk group (i.e. patients with stable coronary heart disease) and a high-risk group (i.e. patients after myocardial infarction or unstable angina), using survival data from previously published investigations, and with life-time extrapolation of the survival curves by survival function modeling. In a lifetime perspective, the incremental cost per year of life gained by the smoking cessation program was €280 and €110 in the low- and high-risk groups, respectively (2000 prices). These costs compare favorably to other treatment modalities in patients with coronary heart disease, being approximately 1/25 the cost of both statins in the low-risk group and angiotensin-converting enzyme inhibitors in the high-risk group. In a sensitivity analysis, the costs remained low over a wide range of assumptions. A nurse-led smoking cessation program with several months of intervention is very cost-effective compared with other treatment modalities in patients with coronary heart disease.
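The cost-per-life-year calculation described above combines three quantities: the program cost per patient, the number needed to treat (NNT) for one additional quitter, and the life-years gained per quitter. A sketch with hypothetical placeholder numbers (not the trial's actual inputs):

```python
# Hypothetical placeholder inputs -- not the trial's numbers.
program_cost_per_patient = 100.0   # cost of delivering the program (euros)
nnt = 8                            # number needed to treat for one extra quitter
life_years_per_quitter = 2.5       # discounted life-years gained by quitting

cost_per_quitter = program_cost_per_patient * nnt   # cost to gain one quitter
cost_per_life_year = cost_per_quitter / life_years_per_quitter
```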

  6. Nonperturbative renormalization-group approach preserving the momentum dependence of correlation functions

    NASA Astrophysics Data System (ADS)

    Rose, F.; Dupuis, N.

    2018-05-01

We present an approximation scheme of the nonperturbative renormalization group that preserves the momentum dependence of correlation functions. This approximation scheme can be seen as a simple improvement of the local potential approximation (LPA) where the derivative terms in the effective action are promoted to arbitrary momentum-dependent functions. As in the LPA, the only field dependence comes from the effective potential, which allows us to solve the renormalization-group equations at a relatively modest numerical cost (as compared, e.g., to the Blaizot-Méndez-Galain-Wschebor approximation scheme). As an application we consider the two-dimensional quantum O(N) model at zero temperature. We discuss not only the two-point correlation function but also higher-order correlation functions such as the scalar susceptibility (which allows for an investigation of the "Higgs" amplitude mode) and the conductivity. In particular, we show how, using Padé approximants to perform the analytic continuation iωn → ω + i0+ of imaginary-frequency correlation functions χ(iωn) computed numerically from the renormalization-group equations, one can obtain spectral functions in the real-frequency domain.
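The Padé-based analytic continuation mentioned above is commonly implemented as a Thiele continued-fraction interpolant (in the spirit of Vidberg and Serene); a minimal numpy sketch, tested here on a toy susceptibility χ(z) = 1/(1+z) rather than actual renormalization-group output:

```python
import numpy as np

def thiele_coeffs(z, u):
    """Continued-fraction (Thiele/Pade) coefficients interpolating u at z."""
    n = len(z)
    g = np.zeros((n, n), dtype=complex)
    g[0] = u
    for p in range(1, n):
        g[p, p:] = (g[p-1, p-1] - g[p-1, p:]) / ((z[p:] - z[p-1]) * g[p-1, p:])
    return np.diag(g).copy()

def pade_eval(w, z, a):
    """Evaluate the continued fraction at complex frequencies w."""
    val = np.zeros_like(w, dtype=complex)
    for p in range(len(a) - 1, 0, -1):
        val = a[p] * (w - z[p-1]) / (1.0 + val)
    return a[0] / (1.0 + val)

# Toy check: continue chi(z) = 1/(1+z), sampled on the imaginary axis,
# down to just above the real axis (the i*wn -> w + i0+ substitution).
wn = np.array([0.5, 1.5, 2.5])     # imaginary ("Matsubara-like") frequencies
z = 1j * wn
a = thiele_coeffs(z, 1.0 / (1.0 + z))
w = np.array([0.5 + 1e-8j])        # real frequency plus a small imaginary part
chi_real = pade_eval(w, z, a)
```

For an exactly rational input like this toy χ, the interpolant reproduces the function on the real axis; for numerical RG data, accuracy depends on the number and noise level of the imaginary-frequency points.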

  7. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. Through the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal-polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global approximation ability of the polynomial kernel and the local approximation ability of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
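The kernel-mixing idea can be sketched in a few lines. As an illustration only, the snippet below uses kernel ridge regression with an ordinary polynomial kernel standing in for the orthogonal-polynomial one (the paper uses SVR); all names and parameter values are assumptions of this sketch:

```python
import numpy as np

def poly_kernel(X, Y, degree=3):
    # global-behaviour kernel (plain polynomial; an illustrative stand-in
    # for the paper's orthogonal-polynomial kernel)
    return (1.0 + X @ Y.T) ** degree

def rbf_kernel(X, Y, gamma=2.0):
    # local-behaviour kernel
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def mixed_kernel(X, Y, lam=0.5):
    # convex combination of the global and local kernels
    return lam * poly_kernel(X, Y) + (1.0 - lam) * rbf_kernel(X, Y)

# Toy "model output" to learn from samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(150, 2))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2

# Kernel ridge regression with the mixed kernel (mu regularizes).
mu = 1e-5
alpha = np.linalg.solve(mixed_kernel(X, X) + mu * np.eye(len(X)), y)

X_new = rng.uniform(-1.0, 1.0, size=(50, 2))
y_pred = mixed_kernel(X_new, X) @ alpha
y_true = np.sin(np.pi * X_new[:, 0]) + X_new[:, 1] ** 2
rmse = float(np.sqrt(np.mean((y_pred - y_true) ** 2)))
```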

  8. A low-cost video-oculography system for vestibular function testing.

    PubMed

    Jihwan Park; Youngsun Kong; Yunyoung Nam

    2017-07-01

To keep vision in focus during head movements, the vestibulo-ocular reflex drives the eyes in the direction opposite to the head movement. Disorders of the vestibular system degrade vision and cause abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotating chair tests and videonystagmography. However, these tests are unsuitable for home use because of their high cost. Thus, a low-cost video-oculography system is needed to obtain clinical features at home. In this paper, we present a low-cost video-oculography system that uses an infrared camera and a Raspberry Pi board to track the pupils and evaluate the vestibular system. Horizontal eye movement is derived from video captured with the infrared camera and infrared light-emitting diodes, and head rotation velocity is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with the developed device. To evaluate the system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase, and asymmetry were 0.81, 2.74, and 17.35, respectively. We showed that our system is able to measure these clinical features.
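The clinical endpoints named above (gain, phase, asymmetry) are computed from the head- and eye-velocity traces; a sketch on synthetic sinusoidal rotatory-chair data (the signal-processing choices and numbers here are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

# Synthetic rotatory-chair traces (hypothetical values): head velocity from
# the gyroscope, eye velocity from pupil tracking. The VOR is compensatory,
# so the eye moves opposite the head (here: gain 0.8, 5 deg phase lead).
fs, f0 = 100.0, 0.5                        # sampling (Hz), chair frequency (Hz)
t = np.arange(0.0, 10.0, 1.0 / fs)
head = 60.0 * np.sin(2 * np.pi * f0 * t)   # deg/s
eye = -0.8 * 60.0 * np.sin(2 * np.pi * f0 * t + np.deg2rad(5.0))

def vor_metrics(head_v, eye_v, t, f0):
    ref = np.exp(-2j * np.pi * f0 * t)     # Fourier coefficient at stimulus freq
    H = np.sum(head_v * ref)
    E = np.sum(-eye_v * ref)               # invert the compensatory eye trace
    gain = np.abs(E) / np.abs(H)
    phase = np.rad2deg(np.angle(E / H))    # positive = eye leads head
    cw, ccw = np.max(-eye_v), np.max(eye_v)
    asym = 100.0 * (cw - ccw) / (cw + ccw)
    return gain, phase, asym

gain, phase, asym = vor_metrics(head, eye, t, f0)
```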

  9. A study of acoustic-to-articulatory inversion of speech by analysis-by-synthesis using chain matrices and the Maeda articulatory model

    PubMed Central

    Panchapagesan, Sankaran; Alwan, Abeer

    2011-01-01

    In this paper, a quantitative study of acoustic-to-articulatory inversion for vowel speech sounds by analysis-by-synthesis using the Maeda articulatory model is performed. For chain matrix calculation of vocal tract (VT) acoustics, the chain matrix derivatives with respect to area function are calculated and used in a quasi-Newton method for optimizing articulatory trajectories. The cost function includes a distance measure between natural and synthesized first three formants, and parameter regularization and continuity terms. Calibration of the Maeda model to two speakers, one male and one female, from the University of Wisconsin x-ray microbeam (XRMB) database, using a cost function, is discussed. Model adaptation includes scaling the overall VT and the pharyngeal region and modifying the outer VT outline using measured palate and pharyngeal traces. The inversion optimization is initialized by a fast search of an articulatory codebook, which was pruned using XRMB data to improve inversion results. Good agreement between estimated midsagittal VT outlines and measured XRMB tongue pellet positions was achieved for several vowels and diphthongs for the male speaker, with average pellet-VT outline distances around 0.15 cm, smooth articulatory trajectories, and less than 1% average error in the first three formants. PMID:21476670
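The cost function described above can be written schematically as follows (the notation and the weights λ, μ are illustrative, not the paper's exact formulation), with p_t the Maeda articulatory parameter vector at frame t, F_i^syn the synthesized formants, and F_i^nat the natural ones:

    C(p_t) = Σ_{i=1..3} [ (F_i^syn(p_t) − F_i^nat) / F_i^nat ]²  +  λ ‖p_t‖²  +  μ ‖p_t − p_{t−1}‖²

The first term is the formant distance measure, and the second and third are the parameter regularization and continuity terms; the chain-matrix derivatives supply the gradient of F_i^syn with respect to p_t for the quasi-Newton updates.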

  10. Benchmarking DFT and semi-empirical methods for a reliable and cost-efficient computational screening of benzofulvene derivatives as donor materials for small-molecule organic solar cells.

    PubMed

    Tortorella, Sara; Talamo, Maurizio Mastropasqua; Cardone, Antonio; Pastore, Mariachiara; De Angelis, Filippo

    2016-02-24

    A systematic computational investigation on the optical properties of a group of novel benzofulvene derivatives (Martinelli 2014 Org. Lett. 16 3424-7), proposed as possible donor materials in small molecule organic photovoltaic (smOPV) devices, is presented. A benchmark evaluation against experimental results on the accuracy of different exchange and correlation functionals and semi-empirical methods in predicting both reliable ground state equilibrium geometries and electronic absorption spectra is carried out. The benchmark of the geometry optimization level indicated that the best agreement with x-ray data is achieved by using the B3LYP functional. Concerning the optical gap prediction, we found that, among the employed functionals, MPW1K provides the most accurate excitation energies over the entire set of benzofulvenes. Similarly reliable results were also obtained for range-separated hybrid functionals (CAM-B3LYP and wB97XD) and for global hybrid methods incorporating a large amount of non-local exchange (M06-2X and M06-HF). Density functional theory (DFT) hybrids with a moderate (about 20-30%) extent of Hartree-Fock exchange (HFexc) (PBE0, B3LYP and M06) were also found to deliver HOMO-LUMO energy gaps which compare well with the experimental absorption maxima, thus representing a valuable alternative for a prompt and predictive estimation of the optical gap. The possibility of using completely semi-empirical approaches (AM1/ZINDO) is also discussed.

  11. Development of Mobile Mapping System for 3D Road Asset Inventory.

    PubMed

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-03-12

Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating asset information, and data collection is the most time-consuming task in developing an asset management system. To reduce the time and cost of data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit, and GPS (Global Positioning System) antenna are determined. The efficiency of the Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually, using techniques that implement RANSAC plane fitting and edge extraction algorithms. Finally, the scope of such extraction techniques is discussed, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory.
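The RANSAC plane-fitting step mentioned above (e.g., for isolating the road surface from the point cloud) can be sketched in a few lines of numpy; the scene and parameter values below are illustrative assumptions:

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, seed=0):
    """RANSAC fit of a plane n.x + d = 0 to an (N, 3) point cloud."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                       # degenerate (collinear) sample
        n /= norm
        inliers = np.abs(points @ n - n @ p0) < tol
        if inliers.sum() > best.sum():
            best = inliers
    # Least-squares refinement on the winning consensus set.
    pts = points[best]
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]      # normal = smallest singular vector
    return n, -n @ c, best

# Synthetic scene: a flat "road" plane z ~ 0 plus scattered off-plane clutter.
rng = np.random.default_rng(1)
road = np.column_stack([rng.uniform(0, 10, (300, 2)),
                        0.01 * rng.standard_normal(300)])
clutter = rng.uniform(0, 10, (60, 3))
cloud = np.vstack([road, clutter])

n, d, inliers = ransac_plane(cloud)
```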

  12. Development of Mobile Mapping System for 3D Road Asset Inventory

    PubMed Central

    Sairam, Nivedita; Nagarajan, Sudhagar; Ornitz, Scott

    2016-01-01

Asset Management is an important component of an infrastructure project. A significant cost is involved in maintaining and updating asset information, and data collection is the most time-consuming task in developing an asset management system. To reduce the time and cost of data collection, this paper proposes a low-cost Mobile Mapping System equipped with a laser scanner and cameras. First, the feasibility of low-cost sensors for 3D asset inventory is discussed by deriving appropriate sensor models. Then, through calibration procedures, the respective alignments of the laser scanner, cameras, Inertial Measurement Unit, and GPS (Global Positioning System) antenna are determined. The efficiency of the Mobile Mapping System is evaluated by mounting it on a truck and a golf cart. Using the derived sensor models, geo-referenced images and 3D point clouds are produced. After validating the quality of the derived data, the paper provides a framework to extract road assets both automatically and manually, using techniques that implement RANSAC plane fitting and edge extraction algorithms. Finally, the scope of such extraction techniques is discussed, along with a sample GIS (Geographic Information System) database structure for a unified 3D asset inventory. PMID:26985897

  13. Palbociclib in hormone receptor positive advanced breast cancer: A cost-utility analysis.

    PubMed

    Raphael, J; Helou, J; Pritchard, K I; Naimark, D M

    2017-11-01

The addition of palbociclib to letrozole improves progression-free survival in the first-line treatment of hormone receptor positive advanced breast cancer (ABC). This study assesses the cost-utility of palbociclib from the Canadian healthcare payer perspective. A probabilistic discrete event simulation (DES) model was developed and parameterised with data from the PALOMA 1 and 2 trials and other sources. The incremental cost per quality-adjusted life-month (QALM) gained for palbociclib was calculated. A time horizon of 15 years was used in the base case, with costs and effectiveness discounted at 5% annually. Time-to-progression and time-to-death were derived from Weibull and exponential distributions, respectively. Expected costs were based on Ontario fees and other sources. Probabilistic sensitivity analyses were conducted to account for parameter uncertainty. Compared to letrozole alone, the addition of palbociclib provided an additional 14.7 QALM at an incremental cost of $161,508. The resulting incremental cost-effectiveness ratio was $10,999/QALM gained. Assuming a willingness-to-pay (WTP) of $4167/QALM, the probability that palbociclib was cost-effective was 0%. Cost-effectiveness acceptability curves derived from a probabilistic sensitivity analysis showed that at a WTP of $11,000/QALM gained, the probability that palbociclib was cost-effective was 50%. At its current price, the addition of palbociclib to letrozole is unlikely to be cost-effective for the treatment of ABC from a Canadian healthcare perspective. While ABC patients derive a meaningful clinical benefit from palbociclib, consideration should be given to increasing the WTP threshold and reducing the drug price to render this strategy more affordable. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Estimating economic value of agricultural water under changing conditions and the effects of spatial aggregation.

    PubMed

    Medellín-Azuara, Josué; Harou, Julien J; Howitt, Richard E

    2010-11-01

Given the high proportion of water used for agriculture in certain regions, the economic value of agricultural water can be an important tool for water management and policy development. This value is quantified using economic demand curves for irrigation water. Such demand functions show the incremental contribution of water to agricultural production. Water demand curves are estimated using econometric or optimisation techniques. Calibrated agricultural optimisation models allow the derivation of demand curves using smaller datasets than econometric models. This paper introduces these subject areas, then explores the effect of spatial aggregation (upscaling) on the valuation of water for irrigated agriculture. A case study from the Rio Grande-Rio Bravo Basin in northern Mexico investigates differences in valuation at farm and regional aggregated levels under four scenarios: technological change, warm-dry climate change, changes in agricultural commodity prices, and water costs for agriculture. The scenarios consider changes due to external shocks or new policies. Positive mathematical programming (PMP), a calibrated optimisation method, is the deductive valuation method used. An exponential cost function is compared to the quadratic cost functions typically used in PMP. Results indicate that the economic value of water at the farm level and the regionally aggregated level are similar, but that the variability and distributional effects of each scenario are affected by aggregation. Moderately aggregated agricultural production models are effective at capturing average-farm adaptation to policy changes and external shocks. Farm-level models best reveal the distribution of scenario impacts. Copyright © 2009 Elsevier B.V. All rights reserved.

  15. Differentiating functional brain regions using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gil, Daniel A.; Bow, Hansen C.; Shen, Jin-H.; Joos, Karen M.; Skala, Melissa C.

    2017-02-01

The human brain is made up of functional regions governing movement, sensation, language, and cognition. Unintentional injury during neurosurgery can result in significant neurological deficits and morbidity. The current standard for localizing function to brain tissue during surgery, intraoperative electrical stimulation or recording, significantly increases the risk, time, and cost of the procedure. There is a need for a fast, cost-effective, and high-resolution intraoperative technique that can avoid damage to functional brain regions. We propose that optical coherence tomography (OCT) can fill this niche by imaging differences in the cellular composition and organization of functional brain areas. We hypothesized this would manifest as differences in the attenuation coefficient measured using OCT. Five functional regions (prefrontal, somatosensory, auditory, visual, and cerebellum) were imaged in ex vivo porcine brains (n=3), a model chosen for its white/gray matter ratio similar to that of human brains. The attenuation coefficient was calculated using a depth-resolved model and quantitatively validated with Intralipid phantoms across a physiological range of attenuation coefficients (absolute difference < 0.1 cm-1). Image analysis was performed on the attenuation coefficient images to derive quantitative endpoints. We observed a statistically significant difference among the median attenuation coefficients of these five regions (one-way ANOVA, p<0.05). Nissl-stained histology will be used to validate our results and correlate OCT-measured attenuation coefficients to neuronal density. Additional development and validation of OCT algorithms to discriminate brain regions are planned to improve the safety and efficacy of neurosurgical procedures such as biopsy, electrode placement, and tissue resection.
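A common depth-resolved attenuation estimator (in the spirit of Vermeer et al.; the paper's exact model may differ) recovers μ at each pixel from the signal remaining below it; a sketch on a synthetic single-attenuator A-scan:

```python
import numpy as np

def depth_resolved_mu(I, dz):
    """Depth-resolved attenuation estimate: mu_i = I_i / (2*dz*sum_{j>i} I_j).
    Assumes the OCT signal has decayed to ~0 by the bottom of the scan."""
    tail = np.cumsum(I[::-1])[::-1] - I    # remaining signal below each pixel
    with np.errstate(divide='ignore', invalid='ignore'):
        return I / (2.0 * dz * tail)

# Synthetic single-attenuator A-scan with Beer-Lambert decay (arbitrary units).
mu_true = 3.0          # attenuation coefficient, 1/mm (illustrative value)
dz = 0.001             # pixel size, mm
z = np.arange(0.0, 3.0, dz)
I = np.exp(-2.0 * mu_true * z)

mu_est = depth_resolved_mu(I, dz)
```

The estimate is accurate in the interior of the scan and biased near the bottom, where the "remaining signal" assumption breaks down.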

  16. Assessment of some straw-derived materials for reducing the leaching potential of Metribuzin residues in the soil

    NASA Astrophysics Data System (ADS)

    Cara, Irina Gabriela; Trincă, Lucia Carmen; Trofin, Alina Elena; Cazacu, Ana; Ţopa, Denis; Peptu, Cătălina Anişoara; Jităreanu, Gerard

    2015-12-01

    Biomass (straw waste) can be used as raw to obtain materials for herbicide removal from wastewater. These by-products have some important advantages, being environmentally friendly, easily available, presenting low costs, and requiring little processing to increase their adsorptive capacity. In the present study, some materials derived from agricultural waste (wheat, corn and soybean straw) were investigated as potential adsorbents for metribuzin removal from aqueous solutions. The straw wastes were processed by grinding, mineralisation (850 °C) and KOH activation in order to improve their functional surface activity. The materials surface characteristics were investigated by scanning electron microscopy, energy dispersive X-ray spectroscopy and atomic force microscopy. The adsorbents capacity was evaluated using batch sorption tests and liquid chromatography coupled with mass spectrometry for herbicide determination. For adsorption isotherms, the equilibrium time considered was 3 h. The experimental adsorption data were modelled by Freundlich and Langmuir models. The activated straw and ash-derived materials from wheat, corn and soybean increased the adsorption capacity of metribuzin with an asymmetrical behaviour. Overall, our results sustain that activated ash-derived from straw and activated straw materials can be a valuable solution for reducing the leaching potential of metribuzin through soil.
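The Freundlich and Langmuir models named above are often fitted through their linearized forms; a sketch on synthetic equilibrium data, with hypothetical units and parameter values:

```python
import numpy as np

# Synthetic equilibrium data from a known Langmuir isotherm (hypothetical
# values: qmax in mg/g, KL in L/mg, Ce in mg/L).
qmax_true, KL_true = 12.0, 0.8
Ce = np.linspace(0.1, 10.0, 20)                         # equilibrium conc.
qe = qmax_true * KL_true * Ce / (1.0 + KL_true * Ce)    # adsorbed amount

# Langmuir, linearized:  Ce/qe = Ce/qmax + 1/(KL*qmax)
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit = 1.0 / slope
KL_fit = slope / intercept

# Freundlich, linearized:  log qe = log KF + (1/n) log Ce
n_inv, logKF = np.polyfit(np.log10(Ce), np.log10(qe), 1)
```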

  17. Burden of a multiple sclerosis relapse: the patient's perspective.

    PubMed

    Oleen-Burkey, Merrikay; Castelli-Haley, Jane; Lage, Maureen J; Johnson, Kenneth P

    2012-01-01

Relapses are a common feature of relapsing-remitting multiple sclerosis (RRMS) and increasing severity has been shown to be associated with higher healthcare costs, and to result in transient increases in disability. Increasing disability likely impacts work and leisure productivity, and lowers quality of life. The objective of this study was to characterize from the patient's perspective the impact of a multiple sclerosis (MS) relapse in terms of the economic cost, work and leisure productivity, functional ability, and health-related quality of life (HR-QOL), for a sample of patients with RRMS in the US treated with immunomodulatory agents. A cross-sectional, web-based, self-report survey was conducted among members of MSWatch.com, a patient support website now known as Copaxone.com. Qualified respondents in the US had been diagnosed with RRMS and were using an immunomodulatory agent. The survey captured costs of RRMS with questions about healthcare resource utilization, use of community services, and purchased alterations and assistive items related to MS. The Work and Leisure Impairment instrument and the EQ-5D were used to measure productivity losses and HR-QOL (health utility), respectively. The Goodin MS neurological impairment questionnaire was used to measure functional disability; questions were added about relapses in the past year. Of 711 qualified respondents, 67% reported having at least one relapse during the last year, with a mean of 2.2 ± 2.3 relapses/year. Respondents who experienced at least one relapse had significantly higher mean annual direct and indirect costs compared with those who did not experience a relapse ($US38,458 vs $US28,669; p = 0.0004) [year 2009 values]. Direct health-related costs accounted for the majority of the increased cost ($US5201; 53%) and were mainly due to increases in hospitalizations, medications, and ambulatory care. 
Indirect costs, including informal care and productivity loss, accounted for the additional 47% of increased cost ($US4588). Accounting for the mean number of relapses associated with these increased costs, the total economic cost of one relapse episode could be estimated at about $US4449, exclusive of intangible costs. The mean self-reported Expanded Disability Status Scale (EDSS) score, derived from the Goodin MS questionnaire, was significantly higher with relapse than with a clinically stable state (EDSS 4.3 vs 3.7; p < 0.0001), while the mean health utility score was significantly lower with relapse compared with a clinically stable state (0.66 vs 0.75; p = 0.0001). The value of these intangible costs of relapse can be estimated at $US5400. The overall burden (direct, indirect, and intangible costs) of one relapse in patients treated with immunomodulatory agents is therefore estimated conservatively at $US9849. This study shows that from a patient's perspective an MS relapse is associated with a significant increase in the economic costs as well as a decline in HR-QOL and functional ability.
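The per-relapse and total-burden figures above follow directly from the reported group means; the arithmetic, reproduced:

```python
# Group means reported above (US$, year 2009 values).
annual_cost_relapse_group = 38_458   # mean annual cost, >=1 relapse group
annual_cost_stable_group = 28_669    # mean annual cost, no-relapse group
mean_relapses_per_year = 2.2

extra_annual_cost = annual_cost_relapse_group - annual_cost_stable_group  # 9789
# Direct + indirect cost of one relapse episode (truncated as in the text).
cost_per_relapse = int(extra_annual_cost / mean_relapses_per_year)        # 4449
intangible_per_relapse = 5_400       # authors' valuation of the HR-QOL loss
total_burden = cost_per_relapse + intangible_per_relapse                  # 9849
```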

  18. Photovoltaic design optimization for terrestrial applications

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1978-01-01

    As part of the Jet Propulsion Laboratory's Low-Cost Solar Array Project, a comprehensive program of module cost-optimization has been carried out. The objective of these studies has been to define means of reducing the cost and improving the utility and reliability of photovoltaic modules for the broad spectrum of terrestrial applications. This paper describes one of the methods being used for module optimization, including the derivation of specific equations which allow the optimization of various module design features. The method is based on minimizing the life-cycle cost of energy for the complete system. Comparison of the life-cycle energy cost with the marginal cost of energy each year allows the logical plant lifetime to be determined. The equations derived allow the explicit inclusion of design parameters such as tracking, site variability, and module degradation with time. An example problem involving the selection of an optimum module glass substrate is presented.
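The life-cycle optimization described above divides discounted life-cycle costs by discounted energy output; a generic sketch (the function name, inputs, and numbers are illustrative assumptions, not the paper's model):

```python
def life_cycle_energy_cost(capex, annual_om, annual_kwh, rate, years,
                           degradation=0.0):
    """Levelized life-cycle energy cost ($/kWh): discounted costs divided by
    discounted energy; `degradation` models module output decline with time."""
    cost = capex + sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
    energy = sum(annual_kwh * (1 - degradation) ** t / (1 + rate) ** t
                 for t in range(1, years + 1))
    return cost / energy

# Hypothetical round numbers chosen so the result is easy to check by hand:
# cost = 1000 + 20*10 = 1200; energy = 20*500 = 10000; lcoe = 0.12 $/kWh.
lcoe = life_cycle_energy_cost(capex=1000.0, annual_om=10.0, annual_kwh=500.0,
                              rate=0.0, years=20)
```

Comparing this levelized cost with the marginal cost of energy in each year is what determines the logical plant lifetime mentioned above.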

  19. On a cost functional for H2/H(infinity) minimization

    NASA Technical Reports Server (NTRS)

    Macmartin, Douglas G.; Hall, Steven R.; Mustafa, Denis

    1990-01-01

    A cost functional is proposed and investigated which is motivated by minimizing the energy in a structure using only collocated feedback. Defined for an H(infinity)-norm bounded system, this cost functional also overbounds the H2 cost. Some properties of this cost functional are given, and preliminary results on the procedure for minimizing it are presented. The frequency domain cost functional is shown to have a time domain representation in terms of a Stackelberg non-zero sum differential game.
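A cost functional with exactly these properties is the γ-entropy studied by Mustafa and Glover; assuming that is the functional intended (the notation here is mine, not necessarily the paper's), for an H∞-norm-bounded transfer function F with ‖F‖∞ < γ:

    E(F; γ) = −(γ²/2π) ∫_{−∞}^{∞} ln | det( I − γ⁻² F(jω)* F(jω) ) | dω

This E(F; γ) overbounds the squared H2 norm ‖F‖₂² and converges to it as γ → ∞, which is consistent with the overbounding property stated above.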

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.

Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.
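The contrast between the two forms can be sketched directly. The Slater-like function below uses the polynomial prefactor that arises from the overlap of two simple exponentially decaying densities, as an illustration; the full Slater-ISA model fits its parameters to ISA-partitioned densities:

```python
import numpy as np

def born_mayer(R, A, b):
    # conventional exponential repulsion
    return A * np.exp(-b * R)

def slater_overlap(R, A, b):
    # overlap of two exponentially decaying (Slater-type) atomic densities
    # gives a polynomial prefactor multiplying the same exponential
    x = b * R
    return A * (x**2 / 3.0 + x + 1.0) * np.exp(-x)

R = np.linspace(0.0, 5.0, 6)
v_bm = born_mayer(R, 1.0, 2.0)
v_slater = slater_overlap(R, 1.0, 2.0)
```

With matched A and b, the two forms agree at R = 0 but the overlap form decays more slowly, which is one way the single-exponential Born-Mayer shape can misfit across a broad distance range.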

  1. 45 CFR 1630.12 - Applicability to derivative income.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., regulations, guidelines, and instructions, the Accounting Guide for LSC recipients, the terms and conditions... 45 Public Welfare 4 2010-10-01 2010-10-01 false Applicability to derivative income. 1630.12... CORPORATION COST STANDARDS AND PROCEDURES § 1630.12 Applicability to derivative income. (a) Derivative income...

  2. Long-term Cost-Effectiveness of Diagnostic Tests for Assessing Stable Chest Pain: Modeled Analysis of Anatomical and Functional Strategies.

    PubMed

    Bertoldi, Eduardo G; Stella, Steffan F; Rohde, Luis E; Polanczyk, Carisi A

    2016-05-01

Several tests exist for diagnosing coronary artery disease, with varying accuracy and cost. We sought to provide cost-effectiveness information to aid physicians and decision-makers in selecting the most appropriate testing strategy. We used a state-transition (Markov) model, taking the Brazilian public health system perspective with a lifetime horizon. Diagnostic strategies were based on exercise electrocardiography (Ex-ECG), stress echocardiography (ECHO), single-photon emission computed tomography (SPECT), computed tomography coronary angiography (CTA), or stress cardiac magnetic resonance imaging (C-MRI) as the initial test. Systematic review provided input data for test accuracy and long-term prognosis. Cost data were derived from the Brazilian public health system. Diagnostic test strategy had a small but measurable impact on quality-adjusted life-years gained. Switching from Ex-ECG to CTA-based strategies improved outcomes at an incremental cost-effectiveness ratio of 3100 international dollars per quality-adjusted life-year. ECHO-based strategies resulted in costs and effectiveness almost identical to CTA, and SPECT-based strategies were dominated because of their much higher cost. Strategies based on stress C-MRI were the most effective, but their incremental cost-effectiveness ratio vs CTA was higher than the proposed willingness-to-pay threshold. Invasive strategies were dominant in the high pretest probability setting. Sensitivity analysis showed that results were sensitive to the costs of CTA, ECHO, and C-MRI. Coronary CT is cost-effective for the diagnosis of coronary artery disease and should be included in the Brazilian public health system. Stress ECHO has similar performance and is an acceptable alternative for most patients, but invasive strategies should be reserved for patients at high risk. © 2016 Wiley Periodicals, Inc.

  3. Synthesis and biomedical applications of functional poly(α-hydroxy acids) via ring-opening polymerization of O-carboxyanhydrides.

    PubMed

    Yin, Qian; Yin, Lichen; Wang, Hua; Cheng, Jianjun

    2015-07-21

    Poly(α-hydroxy acids) (PAHAs) are a class of biodegradable and biocompatible polymers that are widely used in numerous applications. One drawback of these conventional polymers, however, is their lack of side-chain functionalities, which makes it difficult to conjugate active moieties to PAHA or to fine-tune the physical and chemical properties of PAHA-derived materials through side-chain modifications. Thus, extensive efforts have been devoted to the development of methodology that allows facile preparation of PAHAs with controlled molecular weights and a variety of functionalities for widespread utilities. However, it is highly challenging to introduce functional groups into conventional PAHAs derived from ring-opening polymerization (ROP) of lactides and glycolides to yield functional PAHAs with favorable properties, such as tunable hydrophilicity/hydrophobicity, facile postpolymerization modification, and well-defined physicochemical properties. Amino acids are excellent resources for functional polymers because of their low cost, availability, and structural as well as stereochemical diversity. Nevertheless, the synthesis of functional PAHAs using amino acids as building blocks has been rarely reported because of the difficulty of preparing large-scale monomers and poor yields during the synthesis. The synthesis of functionalized PAHAs from O-carboxyanhydrides (OCAs), a class of five-membered cyclic anhydrides derived from amino acids, has proven to be one of the most promising strategies and has thus attracted tremendous interest recently. In this Account, we highlight the recent progress in our group on the synthesis of functional PAHAs via ROP of OCAs and their self-assembly and biomedical applications. New synthetic methodologies that allow the facile preparation of PAHAs with controlled molecular weights and various functionalities through ROP of OCAs are reviewed and evaluated. 
The in vivo stability, side-chain functionalities, and/or trigger responsiveness of several functional PAHAs are evaluated. Their biomedical applications in drug and gene delivery are also discussed. The ready availability of starting materials from renewable resources and the facile postmodification strategies such as azide-alkyne cycloaddition and the thiol-yne "click" reaction have enabled the production of a multitude of PAHAs with controlled molecular weights, narrow polydispersity, high terminal group fidelities, and structural diversities that are amenable for self-assembly and bioapplications. We anticipate that this new generation of PAHAs and their self-assembled nanosystems as biomaterials will open up exciting new opportunities and have widespread utilities for biological applications.

  4. The economics of a pharmacy-based central intravenous additive service for paediatric patients.

    PubMed

    Armour, D J; Cairns, C J; Costello, I; Riley, S J; Davies, E G

    1996-10-01

This study was designed to compare the costs of a pharmacy-based Central Intravenous Additive Service (CIVAS) with those of traditional ward-based preparation of intravenous doses for a paediatric population. Labour costs were derived from timings of preparation of individual doses in both the pharmacy and ward by an independent observer. The use of disposables and diluents was recorded and their acquisition costs apportioned to the cost of each dose prepared. Data were collected from 20 CIVAS sessions (501 doses) and 26 ward-based sessions (30 doses). In addition, the costs avoided by the use of part vials in CIVAS were calculated. These were derived from a total of 50 CIVAS sessions. Labour, disposable and diluent costs were significantly lower for CIVAS compared with ward-based preparation (p < 0.001). The ratio of costs per dose [in 1994 pounds sterling] between ward and pharmacy was 2.35:1 (2.51 pounds:1.07 pounds). Sensitivity analysis of the best and worst staff mixes in both locations ranged from 2.3:1 to 4.0:1, always in favour of CIVAS. There were considerable costs avoided in CIVAS from the multiple use of vials; the estimated annual sum derived from the study was 44,000 pounds. In addition, CIVAS was less vulnerable to unanticipated interruptions in work flow than ward-based preparation. CIVAS for children was more economical than traditional ward-based preparation, because of a cost-minimisation effect. Sensitivity analysis showed that these advantages were maintained over a full range of skill mixes. Additionally, significant savings accrued from the multiple use of vials in CIVAS.
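The headline ratio above is simple to reproduce; a quick sketch using the published per-dose figures:

```python
# Per-dose cost comparison from the published 1994 GBP figures;
# reproduces the 2.35:1 ward-to-CIVAS ratio quoted in the abstract.
ward_cost_per_dose = 2.51   # traditional ward-based preparation, GBP
civas_cost_per_dose = 1.07  # pharmacy-based CIVAS, GBP

ratio = ward_cost_per_dose / civas_cost_per_dose
print(f"{ratio:.2f}:1")  # -> 2.35:1
```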

  5. Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations

    NASA Astrophysics Data System (ADS)

    Mansfield, Christopher M.; Shoemaker, Christine A.

    1999-05-01

    This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
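Derivative approximations such as QC and HIP are typically validated against finite differences. A generic sketch of such a check, using a made-up scalar objective rather than the paper's finite element equations:

```python
# Generic finite-difference check of an analytic gradient, the kind of
# test used to validate derivative approximations such as QC and HIP.
# The objective here is a made-up scalar stand-in, not the paper's model.
def objective(pumping_rate):
    return pumping_rate**3 - 2.0 * pumping_rate

def analytic_grad(pumping_rate):
    return 3.0 * pumping_rate**2 - 2.0

q, eps = 1.5, 1e-6
fd = (objective(q + eps) - objective(q - eps)) / (2 * eps)
print(abs(fd - analytic_grad(q)) < 1e-6)  # True
```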

  6. Final Report: Development of Renewable Microbial Polyesters for Cost Effective and Energy- Efficient Wood-Plastic Composites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David N.; Emerick, Robert W.; England, Alfred B.

In this project, we proposed to produce wood fiber reinforced thermoplastic composites (WFRTCs) using microbial thermoplastic polyesters in place of petroleum-derived plastic. WFRTCs are a rapidly growing product area, averaging a 38% growth rate since 1997. Their production is dependent on substantial quantities of petroleum-based thermoplastics, increasing their overall energy costs by over 230% when compared to traditional Engineered Wood Products (EWP). Utilizing bio-based thermoplastics for these materials can reduce our dependence on foreign petroleum. We have demonstrated that biopolymers (polyhydroxyalkanoates, PHA) can be successfully produced from wood pulping waste streams and that viable wood fiber reinforced thermoplastic composite products can be produced from these materials. The results show that microbial polyester (PHB in this study) can be extruded together with wastewater-derived cell mass and wood flour into deck products having performance properties comparable to existing commercial HDPE/WF composite products. This study has thus proven the underlying concept that the microbial polyesters produced from waste effluents can be used to make cost-effective and energy-efficient wood-plastic composites. The cost of purified microbial polyesters is about 5-20 times that of HDPE depending on the cost of crude oil, due to high purification (40%), carbon substrate (40%) and sterilized fermentation (20%) costs for the PHB. Hence, the ability to produce competitive and functional composites with unpurified PHA-biomass mixtures from waste carbon sources in unsterile systems (without cell debris removal) is a significant step forward in producing competitive value-added structural composites from forest products residuals using a biorefinery approach. As demonstrated in the energy and waste analysis for the project, significant energy savings and waste reductions can also be realized using this approach. 
We recommend that the next step for development of useful products using this technology is to scale the technology from the 700-L pilot reactor to a small-scale production facility, with dedicated operations staff and engineering controls. In addition, we recommend that a market study be conducted, as well as further product development for construction products that will utilize the unique properties of this bio-based material.

  7. Cost-effectiveness of heat and moisture exchangers compared to usual care for pulmonary rehabilitation after total laryngectomy in Poland.

    PubMed

    Retèl, Valesca P; van den Boer, Cindy; Steuten, Lotte M G; Okła, Sławomir; Hilgers, Frans J; van den Brekel, Michiel W

    2015-09-01

The beneficial physical and psychosocial effects of heat and moisture exchangers (HMEs) for pulmonary rehabilitation of laryngectomy patients are well evidenced. However, cost-effectiveness in terms of costs per additional quality-adjusted life years (QALYs) has not yet been investigated. Therefore, a model-based cost-effectiveness analysis of using HMEs versus usual care (UC) (including stoma covers, suction system and/or external humidifier) for patients after laryngectomy was performed. Primary outcomes were costs, QALYs and incremental cost-effectiveness ratio (ICER). Secondary outcomes were pulmonary infections, and sleeping problems. The analysis was performed from a health care perspective of Poland, using a time horizon of 10 years and cycle length of 1 year. Transition probabilities were derived from various sources, amongst others a Polish randomized clinical trial. Quality of life data was derived from an Italian study on similar patients. Data on frequencies and mortality-related tracheobronchitis and/or pneumonia were derived from a Europe-wide survey amongst head and neck cancer experts. Substantial differences in quality-adjusted survival between the use of HMEs (3.63 QALYs) versus UC (2.95 QALYs) were observed. Total health care costs/patient were 39,553 PLN (9465 Euro) for the HME strategy and 4889 PLN (1168 Euro) for the UC strategy. HME use resulted in fewer pulmonary infections and fewer sleeping problems. We could conclude that given the Polish threshold of 99,000 PLN/QALY, using HMEs is cost-effective compared to UC, resulting in 51,326 PLN/QALY (12,264 Euro/QALY) gained for patients after total laryngectomy. For the hospital period alone (2 weeks), HMEs were cost-saving: less costly and more effective.
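The ICER reported above is the incremental cost divided by the incremental QALYs; with the rounded figures in the abstract, the result is close to the published 51,326 PLN/QALY (which uses unrounded inputs):

```python
# Incremental cost-effectiveness ratio (ICER) from the rounded figures
# in the abstract; the published 51,326 PLN/QALY uses unrounded inputs.
cost_hme, cost_uc = 39_553, 4_889  # PLN per patient
qaly_hme, qaly_uc = 3.63, 2.95     # quality-adjusted life years

icer = (cost_hme - cost_uc) / (qaly_hme - qaly_uc)
print(f"{icer:,.0f} PLN/QALY")  # ~50,976 with rounded inputs
```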

  8. Time-driven Activity-based Costing More Accurately Reflects Costs in Arthroplasty Surgery.

    PubMed

    Akhavan, Sina; Ward, Lorrayne; Bozic, Kevin J

    2016-01-01

    Cost estimates derived from traditional hospital cost accounting systems have inherent limitations that restrict their usefulness for measuring process and quality improvement. Newer approaches such as time-driven activity-based costing (TDABC) may offer more precise estimates of true cost, but to our knowledge, the differences between this TDABC and more traditional approaches have not been explored systematically in arthroplasty surgery. The purposes of this study were to compare the costs associated with (1) primary total hip arthroplasty (THA); (2) primary total knee arthroplasty (TKA); and (3) three surgeons performing these total joint arthroplasties (TJAs) as measured using TDABC versus traditional hospital accounting (TA). Process maps were developed for each phase of care (preoperative, intraoperative, and postoperative) for patients undergoing primary TJA performed by one of three surgeons at a tertiary care medical center. Personnel costs for each phase of care were measured using TDABC based on fully loaded labor rates, including physician compensation. Costs associated with consumables (including implants) were calculated based on direct purchase price. Total costs for 677 primary TJAs were aggregated over 17 months (January 2012 to May 2013) and organized into cost categories (room and board, implant, operating room services, drugs, supplies, other services). Costs derived using TDABC, based on actual time and intensity of resources used, were compared with costs derived using TA techniques based on activity-based costing and indirect costs calculated as a percentage of direct costs from the hospital decision support system. Substantial differences between cost estimates using TDABC and TA were found for primary THA (USD 12,982 TDABC versus USD 23,915 TA), primary TKA (USD 13,661 TDABC versus USD 24,796 TA), and individually across all three surgeons for both (THA: TDABC = 49%-55% of TA total cost; TKA: TDABC = 53%-55% of TA total cost). 
Cost categories with the most variability between TA and TDABC estimates were operating room services and room and board. Traditional hospital cost accounting systems overestimate the costs associated with many surgical procedures, including primary TJA. TDABC provides a more accurate measure of true resource use associated with TJAs and can be used to identify high-cost/high-variability processes that can be targeted for process/quality improvement. Level III, therapeutic study.
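In TDABC, an episode's cost is the sum over activities of the time consumed multiplied by the capacity cost rate of the resource used. A minimal sketch with illustrative numbers (not taken from the study):

```python
# Minimal time-driven activity-based costing (TDABC) sketch:
# cost = sum(activity time x capacity cost rate).
# All figures below are illustrative, not taken from the study.
def tdabc_cost(activities):
    """activities: list of (minutes, cost_rate_per_minute) tuples."""
    return sum(minutes * rate for minutes, rate in activities)

episode = [
    (30, 4.0),    # preoperative: 30 min of staff time at 4.00/min
    (120, 12.5),  # intraoperative: 120 min of OR capacity at 12.50/min
    (45, 3.0),    # postoperative: 45 min of recovery staff at 3.00/min
]
print(tdabc_cost(episode))  # 120 + 1500 + 135 = 1755.0
```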

  9. Space station automation study: Automation requirements derived from space manufacturing concepts, volume 2

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Automation requirements were developed for two manufacturing concepts: (1) Gallium Arsenide Electroepitaxial Crystal Production and Wafer Manufacturing Facility, and (2) Gallium Arsenide VLSI Microelectronics Chip Processing Facility. A functional overview of the ultimate design concept incorporating the two manufacturing facilities on the space station is provided. The concepts were selected to facilitate an in-depth analysis of manufacturing automation requirements in the form of process mechanization, teleoperation and robotics, sensors, and artificial intelligence. While the cost-effectiveness of these facilities was not analyzed, both appear entirely feasible for the year 2000 timeframe.

  10. Large Field Visualization with Demand-Driven Calculation

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Henze, Chris

    1999-01-01

    We present a system designed for the interactive definition and visualization of fields derived from large data sets: the Demand-Driven Visualizer (DDV). The system allows the user to write arbitrary expressions to define new fields, and then apply a variety of visualization techniques to the result. Expressions can include differential operators and numerous other built-in functions, all of which are evaluated at specific field locations completely on demand. The payoff of following a demand-driven design philosophy throughout becomes particularly evident when working with large time-series data, where the costs of eager evaluation alternatives can be prohibitive.

  11. Quantum Approach to Cournot-type Competition

    NASA Astrophysics Data System (ADS)

    Frąckiewicz, Piotr

    2018-02-01

    The aim of this paper is to investigate Cournot-type competition in the quantum domain with the use of the Li-Du-Massar scheme for continuous-variable quantum games. We derive a formula which, in a simple way, determines a unique Nash equilibrium. The result concerns a large class of Cournot duopoly problems, including competition where the demand and cost functions are not necessarily linear. Further, we show that the Nash equilibrium converges to a Pareto-optimal strategy profile as the quantum correlation increases. In addition to illustrating how the formula works, we provide the readers with two examples.
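For orientation, the classical (non-quantum) Cournot duopoly with linear inverse demand p = a - b(q1 + q2) and constant marginal cost c has the textbook Nash equilibrium q* = (a - c)/(3b) per firm. A quick sketch of that benchmark (not the Li-Du-Massar scheme itself):

```python
# Classical Cournot duopoly benchmark (not the quantum scheme):
# inverse demand p = a - b*(q1 + q2), constant marginal cost c.
# Intersecting best responses gives the unique symmetric Nash
# equilibrium q* = (a - c) / (3b) for each firm.
def cournot_nash(a, b, c):
    q = (a - c) / (3 * b)        # each firm's equilibrium quantity
    price = a - b * (2 * q)      # market price at equilibrium
    profit = (price - c) * q     # each firm's equilibrium profit
    return q, price, profit

q, p, pi = cournot_nash(a=12.0, b=1.0, c=3.0)
print(q, p, pi)  # 3.0 6.0 9.0
```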

  12. Optimal trajectories for hypersonic launch vehicles

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.; Bowles, Jeffrey V.; Whittaker, Thomas

    1994-01-01

    In this paper, we derive a near-optimal guidance law for the ascent trajectory from earth surface to earth orbit of a hypersonic, dual-mode propulsion, lifting vehicle. Of interest are both the optimal flight path and the optimal operation of the propulsion system. The guidance law is developed from the energy-state approximation of the equations of motion. Because liquid hydrogen fueled hypersonic aircraft are volume sensitive, as well as weight sensitive, the cost functional is a weighted sum of fuel mass and volume; the weighting factor is chosen to minimize gross take-off weight for a given payload mass and volume in orbit.

  13. Extraterrestrial consumables production and utilization

    NASA Technical Reports Server (NTRS)

    Sanders, A. P.

    1972-01-01

    Potential oxygen requirements for lunar-surface, lunar-orbit, and planetary missions are presented with emphasis on: (1) emergency survival of the crew, (2) provision of energy consumables for vehicles, and (3) nondependency on an earth supply of oxygen. Although many extraterrestrial resource processes are analytically feasible, this study has considered hydrogen and fluorine processing concepts to obtain oxygen or water (or both). The results are quite encouraging and are extrapolatable to other processes. Preliminary mission planning and sequencing analysis has enabled the programmatic evaluation of using lunar-derived oxygen relative to transportation cost as a function of vehicle delivery and operational capability.

  14. Reconciling quality and cost: A case study in interventional radiology.

    PubMed

    Zhang, Li; Domröse, Sascha; Mahnken, Andreas

    2015-10-01

    To provide a method to calculate delay cost and examine the relationship between quality and total cost. The total cost including capacity, supply and delay cost for running an interventional radiology suite was calculated. The capacity cost, consisting of labour, lease and overhead costs, was derived based on expenses per unit time. The supply cost was calculated according to actual procedural material use. The delay cost and marginal delay cost derived from queueing models were calculated based on waiting times of inpatients for their procedures. Quality improvement increased patient safety and maintained the outcome. The average daily delay costs were reduced from 1275 € to 294 €, and marginal delay costs from approximately 2000 € to 500 €, respectively. The one-time annual cost saved from the transfer of surgical to radiological procedures was approximately 130,500 €. The yearly delay cost saved was approximately 150,000 €. With increased revenue of 10,000 € in project phase 2, the yearly total cost saved was approximately 290,000 €. Optimal daily capacity of 4.2 procedures was determined. An approach for calculating delay cost toward optimal capacity allocation was presented. An overall quality improvement was achieved at reduced costs. • Improving quality in terms of safety, outcome, efficiency and timeliness reduces cost. • Mismatch of demand and capacity is detrimental to quality and cost. • Full system utilization with random demand results in long waiting periods and increased cost.
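The delay costs above come from queueing models. A minimal sketch of such a calculation using the M/M/1 mean-wait formula, with an assumed cost rate (the 4.2 procedures/day capacity echoes the abstract's optimum; the other numbers are illustrative):

```python
# Illustrative delay-cost calculation from an M/M/1 queue; the model
# choice and cost rate are assumptions, not the paper's exact figures.
def daily_delay_cost(arrivals_per_day, capacity_per_day, cost_per_patient_day):
    lam, mu = arrivals_per_day, capacity_per_day
    assert lam < mu, "queue must be stable"
    wq = lam / (mu * (mu - lam))            # mean wait in queue (days)
    return lam * wq * cost_per_patient_day  # expected daily delay cost

print(round(daily_delay_cost(3.5, 4.2, 500), 2))  # -> 2083.33
```

Note how sharply the cost grows as utilization approaches 1, which matches the abstract's observation that full system utilization under random demand drives long waits and high cost.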

  15. Development of a figure-of-merit for space missions

    NASA Technical Reports Server (NTRS)

    Preiss, Bruce; Pan, Thomas; Ramohalli, Kumar

    1991-01-01

    The concept of a quantitative figure-of-merit (FOM) to evaluate different and competing options for space missions is further developed. Over six hundred individual factors are considered. These range from mission orbital mechanics to in-situ resource utilization (ISRU/ISMU) plants. The program utilizes a commercial software package for synthesis and visual display; the details are completely developed in-house. Historical FOM's are derived for successful space missions such as the Surveyor, Voyager, Apollo, etc. A cost FOM is also mentioned. The bulk of this work is devoted to one specific example of Mars Sample Return (MSR). The program is flexible enough to accommodate a variety of evolving technologies. Initial results show that the FOM for sample return is a function of the mass returned to LEO, and that missions utilizing ISRU/ISMU are far more cost effective than those that rely on all earth-transported resources.

  16. IPhone or Kindle: Competition of Electronic Books Sales

    NASA Astrophysics Data System (ADS)

    Chen, Li

    With the technical development of the reading equipment, e-books have witnessed a gradual and steady increase in sales in recent years. Last year, smart phones gained the ability to function as e-book reading devices, making it possible for retailers selling e-books for smart phones (SPR) such as iPhone to differentiate from those selling e-books for specific reading equipment (SER) such as Amazon Kindle. We develop a game theory model to examine the competition between SER and SPR retailers. We derive the equilibrium price and analyze the factors that affect equilibrium outcomes under both scenarios of complete and incomplete information. Our results suggest that reduced cost due to the inconvenience of reading e-books on an iPhone lowers equilibrium prices, and reduced cost of specific reading equipment leads to more intense price competition. Under information asymmetry, we show that SER retailers will increase the price at equilibrium.

  17. Optimal ordering and production policy for a recoverable item inventory system with learning effect

    NASA Astrophysics Data System (ADS)

    Tsai, Deng-Maw

    2012-02-01

    This article presents two models for determining an optimal integrated economic order quantity and economic production quantity policy in a recoverable manufacturing environment. The models assume that the unit production time of the recovery process decreases with the increase in total units produced as a result of learning. A fixed proportion of used products are collected from customers and then recovered for reuse. The recovered products are assumed to be in good condition and acceptable to customers. Constant demand can be satisfied by utilising both newly purchased products and recovered products. The aim of this article is to show how to minimise total inventory-related cost. The total cost functions of the two models are derived and two simple search procedures are proposed to determine optimal policy parameters. Numerical examples are provided to illustrate the proposed models. In addition, sensitivity analyses have also been performed and are discussed.
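The models above minimize a total cost function via simple search procedures. As an illustrative stand-in for the paper's richer recovery-with-learning models, a grid search over the classic EOQ cost TC(Q) = DK/Q + hQ/2, checked against the closed form Q* = sqrt(2DK/h); D, K, h are assumed values:

```python
import math

# Grid search over a total inventory cost function, checked against the
# closed form. The classic EOQ cost stands in for the paper's richer
# recovery/learning models; D, K, h are assumed illustrative values.
D, K, h = 1200.0, 50.0, 3.0  # annual demand, order cost, holding cost

def total_cost(q):
    return D * K / q + h * q / 2  # ordering cost + holding cost

best_q = min(range(1, 1000), key=total_cost)
print(best_q, round(math.sqrt(2 * D * K / h)))  # -> 200 200
```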

  18. When Reputation Enforces Evolutionary Cooperation in Unreliable MANETs.

    PubMed

    Tang, Changbing; Li, Ang; Li, Xiang

    2015-10-01

    In self-organized mobile ad hoc networks (MANETs), network functions rely on cooperation of self-interested nodes, where a challenge is to enforce their mutual cooperation. In this paper, we study cooperative packet forwarding in a one-hop unreliable channel which results from loss of packets and noisy observation of transmissions. We propose an indirect reciprocity framework based on evolutionary game theory, and enforce cooperation of packet forwarding strategies in both structured and unstructured MANETs. Furthermore, we analyze the evolutionary dynamics of cooperative strategies and derive the threshold of benefit-to-cost ratio to guarantee the convergence of cooperation. The numerical simulations verify that the proposed evolutionary game theoretic solution enforces cooperation when the benefit-to-cost ratio of altruistic behavior exceeds the critical condition. In addition, the network throughput performance of our proposed strategy in structured MANETs is measured, which is in close agreement with that of the full cooperative strategy.
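The convergence condition takes the form of a benefit-to-cost threshold. As a stand-in for the paper's own derivation, the well-known b/c > k rule for cooperation on graphs of average degree k illustrates such a check:

```python
# Illustrative benefit-to-cost threshold check for cooperation.
# The paper derives its own threshold; the b/c > k rule for graphs of
# average degree k is used here as a well-known stand-in.
def cooperation_converges(benefit, cost, degree):
    return benefit / cost > degree

print(cooperation_converges(benefit=5.0, cost=1.0, degree=4))  # True
print(cooperation_converges(benefit=3.0, cost=1.0, degree=4))  # False
```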

  19. Cost effective simulation-based multiobjective optimization in the performance of an internal combustion engine

    NASA Astrophysics Data System (ADS)

    Aittokoski, Timo; Miettinen, Kaisa

    2008-07-01

    Solving real-life engineering problems can be difficult because they often have multiple conflicting objectives, the objective functions involved are highly nonlinear and they contain multiple local minima. Furthermore, function values are often produced via a time-consuming simulation process. These facts suggest the need for an automated optimization tool that is efficient (in terms of number of objective function evaluations) and capable of solving global and multiobjective optimization problems. In this article, the requirements on a general simulation-based optimization system are discussed and such a system is applied to optimize the performance of a two-stroke combustion engine. In the example of a simulation-based optimization problem, the dimensions and shape of the exhaust pipe of a two-stroke engine are altered, and values of three conflicting objective functions are optimized. These values are derived from power output characteristics of the engine. The optimization approach involves interactive multiobjective optimization and provides a convenient tool to balance between conflicting objectives and to find good solutions.

  20. Optimal control penalty finite elements - Applications to integrodifferential equations

    NASA Astrophysics Data System (ADS)

    Chung, T. J.

    The application of the optimal-control/penalty finite-element method to the solution of integrodifferential equations in radiative-heat-transfer problems (Chung et al.; Chung and Kim, 1982) is discussed and illustrated. The nonself-adjointness of the convective terms in the governing equations is treated by utilizing optimal-control cost functions and employing penalty functions to constrain auxiliary equations which permit the reduction of second-order derivatives to first order. The OCPFE method is applied to combined-mode heat transfer by conduction, convection, and radiation, both without and with scattering and viscous dissipation; the results are presented graphically and compared to those obtained by other methods. The OCPFE method is shown to give good results in cases where standard Galerkin FE fail, and to facilitate the investigation of scattering and dissipation effects.

  1. Projected quasiparticle theory for molecular electronic structure

    NASA Astrophysics Data System (ADS)

    Scuseria, Gustavo E.; Jiménez-Hoyos, Carlos A.; Henderson, Thomas M.; Samanta, Kousik; Ellis, Jason K.

    2011-09-01

    We derive and implement symmetry-projected Hartree-Fock-Bogoliubov (HFB) equations and apply them to the molecular electronic structure problem. All symmetries (particle number, spin, spatial, and complex conjugation) are deliberately broken and restored in a self-consistent variation-after-projection approach. We show that the resulting method yields a comprehensive black-box treatment of static correlations with effective one-electron (mean-field) computational cost. The ensuing wave function is of multireference character and permeates the entire Hilbert space of the problem. The energy expression is different from regular HFB theory but remains a functional of an independent quasiparticle density matrix. All reduced density matrices are expressible as an integration of transition density matrices over a gauge grid. We present several proof-of-principle examples demonstrating the compelling power of projected quasiparticle theory for quantum chemistry.

  2. Bypassing the malfunction junction in warm dense matter simulations

    NASA Astrophysics Data System (ADS)

    Cangi, Attila; Pribram-Jones, Aurora

    2015-03-01

    Simulation of warm dense matter requires computational methods that capture both quantum and classical behavior efficiently under high-temperature and high-density conditions. The state-of-the-art approach to model electrons and ions under those conditions is density functional theory molecular dynamics, but this method's computational cost skyrockets as temperatures and densities increase. We propose finite-temperature potential functional theory as an in-principle-exact alternative that suffers no such drawback. In analogy to the zero-temperature theory developed previously, we derive an orbital-free free energy approximation through a coupling-constant formalism. Our density approximation and its associated free energy approximation demonstrate the method's accuracy and efficiency. A.C. has been partially supported by NSF Grant CHE-1112442. A.P.J. is supported by DOE Grant DE-FG02-97ER25308.

  3. Large-scale generation of human iPSC-derived neural stem cells/early neural progenitor cells and their neuronal differentiation.

    PubMed

    D'Aiuto, Leonardo; Zhi, Yun; Kumar Das, Dhanjit; Wilcox, Madeleine R; Johnson, Jon W; McClain, Lora; MacDonald, Matthew L; Di Maio, Roberto; Schurdak, Mark E; Piazza, Paolo; Viggiano, Luigi; Sweet, Robert; Kinchington, Paul R; Bhattacharjee, Ayantika G; Yolken, Robert; Nimgaonkar, Vishwajit L

    2014-01-01

    Induced pluripotent stem cell (iPSC)-based technologies offer an unprecedented opportunity to perform high-throughput screening of novel drugs for neurological and neurodegenerative diseases. Such screenings require a robust and scalable method for generating large numbers of mature, differentiated neuronal cells. Currently available methods based on differentiation of embryoid bodies (EBs) or directed differentiation of adherent culture systems are either expensive or are not scalable. We developed a protocol for large-scale generation of neuronal stem cells (NSCs)/early neural progenitor cells (eNPCs) and their differentiation into neurons. Our scalable protocol allows robust and cost-effective generation of NSCs/eNPCs from iPSCs. Following culture in neurobasal medium supplemented with B27 and BDNF, NSCs/eNPCs differentiate predominantly into vesicular glutamate transporter 1 (VGLUT1) positive neurons. Targeted mass spectrometry analysis demonstrates that iPSC-derived neurons express ligand-gated channels and other synaptic proteins and whole-cell patch-clamp experiments indicate that these channels are functional. The robust and cost-effective differentiation protocol described here for large-scale generation of NSCs/eNPCs and their differentiation into neurons paves the way for automated high-throughput screening of drugs for neurological and neurodegenerative diseases.

  4. Definition, technology readiness, and development cost of the orbit transfer vehicle engine integrated control and health monitoring system elements

    NASA Technical Reports Server (NTRS)

    Cannon, I.; Balcer, S.; Cochran, M.; Klop, J.; Peterson, S.

    1991-01-01

    An Integrated Control and Health Monitoring (ICHM) system was conceived for use on a 20 Klb thrust baseline Orbit Transfer Vehicle (OTV) engine. Considered for space use, the ICHM was defined for reusability requirements for an OTV engine service-free life of 20 missions, with 100 starts and a total engine operational time of 4 hours. Functions were derived by flowing down requirements from NASA guidelines, previous OTV engine or ICHM documents, and related contracts. The elements of an ICHM were identified and listed, and these elements were described in sufficient detail to allow estimation of their technology readiness levels. These elements were assessed in terms of technology readiness level, and supporting rationale for these assessments presented. The remaining cost for development of a minimal ICHM system to technology readiness level 6 was estimated. The estimates are within an accuracy range of plus or minus 20 percent. The cost estimates cover what is needed to prepare an ICHM system for use on a focussed testbed for an expander cycle engine, excluding support to the actual test firings.

  5. A Simple, Powerful Method for Optimal Guidance of Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2005-01-01

    One of the most interesting and challenging aspects of formation guidance law design is the coupling of the orbit design and the science return. The analyst's role is more complicated than simply to design the formation geometry and evolution. He or she is also involved in designing a significant portion of the science instrument itself. The effectiveness of the formation as a science instrument is intimately coupled with the relative geometry and evolution of the collection of spacecraft. Therefore, the science return can be maximized by optimizing the orbit design according to a performance metric relevant to the science mission goals. In this work, we present a simple method for optimal formation guidance that is applicable to missions whose performance metric, requirements, and constraints can be cast as functions that are explicitly dependent upon the orbit states and spacecraft relative positions and velocities. We present a general form for the cost and constraint functions, and derive their semi-analytic gradients with respect to the formation initial conditions. The gradients are broken down into two types. The first type are gradients of the mission specific performance metric with respect to formation geometry. The second type are derivatives of the formation geometry with respect to the orbit initial conditions. The fact that these two types of derivatives appear separately allows us to derive and implement a general framework that requires minimal modification to be applied to different missions or mission phases. To illustrate the applicability of the approach, we conclude with applications to two missions: the Magnetospheric Multiscale mission (MMS), and the Laser Interferometer Space Antenna (LISA).

  6. A Simple, Powerful Method for Optimal Guidance of Spacecraft Formations

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.

    2006-01-01

    One of the most interesting and challenging aspects of formation guidance law design is the coupling of the orbit design and the science return. The analyst's role is more complicated than simply to design the formation geometry and evolution. He or she is also involved in designing a significant portion of the science instrument itself. The effectiveness of the formation as a science instrument is intimately coupled with the relative geometry and evolution of the collection of spacecraft. Therefore, the science return can be maximized by optimizing the orbit design according to a performance metric relevant to the science mission goals. In this work, we present a simple method for optimal formation guidance that is applicable to missions whose performance metric, requirements, and constraints can be cast as functions that are explicitly dependent upon the orbit states and spacecraft relative positions and velocities. We present a general form for the cost and constraint functions, and derive their semi-analytic gradients with respect to the formation initial conditions. The gradients are broken down into two types. The first type are gradients of the mission specific performance metric with respect to formation geometry. The second type are derivatives of the formation geometry with respect to the orbit initial conditions. The fact that these two types of derivatives appear separately allows us to derive and implement a general framework that requires minimal modification to be applied to different missions or mission phases. To illustrate the applicability of the approach, we conclude with applications to two missions: the Magnetospheric Multiscale mission (MMS), and the Laser Interferometer Space Antenna (LISA).
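
    The two-type gradient decomposition described above — mission-metric derivatives with respect to geometry, chained with geometry derivatives with respect to the orbit initial conditions — can be sketched numerically. All functions and matrices below are toy stand-ins chosen for illustration, not the paper's formation models:

```python
import numpy as np

# Chain rule behind the decomposition:
#   dJ/dx0 = (dJ/dg) @ (dg/dx0)
# where x0 is the orbit initial state and g(x0) is the formation geometry.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                    # stand-in state-transition map

def geometry(x0):
    """Toy geometry map: relative state after one linear propagation step."""
    return A @ x0

def metric(g):
    """Toy mission metric: squared baseline length."""
    return float(g @ g)

def d_metric_dg(g):
    """Type 1: mission-specific gradient w.r.t. geometry."""
    return 2.0 * g

def d_geometry_dx0(x0):
    """Type 2: Jacobian of geometry w.r.t. initial conditions."""
    return A

x0 = np.array([1.0, 2.0])
g = geometry(x0)
grad = d_metric_dg(g) @ d_geometry_dx0(x0)    # semi-analytic gradient

# Finite-difference check of the chained gradient.
eps = 1e-6
fd = np.array([(metric(geometry(x0 + eps * np.eye(2)[i])) - metric(g)) / eps
               for i in range(2)])
```

    Because the two factors are computed separately, swapping in a different mission metric only changes `metric` and `d_metric_dg`; the geometry Jacobian is reused unchanged, which is the framework's claimed portability.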

  7. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.

    PubMed

    Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C

    2012-12-01

    Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
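
    The favoured mapping can be sketched as a least-squares fit of a logarithmic PD-LOS model. The synthetic data and coefficients below are illustrative assumptions, not values from the MA Case Mix Database:

```python
import numpy as np

# Sketch (not the paper's fitted model): per-diem cost falls with LOS,
# modeled as PD = b0 + b1 * log(LOS), fit by ordinary least squares.
rng = np.random.default_rng(0)
los = rng.integers(1, 30, size=500).astype(float)       # lengths of stay
true_pd = 5000.0 - 1200.0 * np.log(los)                 # hypothetical pattern
pd_obs = true_pd + rng.normal(0.0, 100.0, size=500)     # noisy observations

X = np.column_stack([np.ones_like(los), np.log(los)])
beta, *_ = np.linalg.lstsq(X, pd_obs, rcond=None)

def total_cost(length_of_stay):
    """Total hospitalization cost: LOS-specific per-diem times LOS."""
    pd_hat = beta[0] + beta[1] * np.log(length_of_stay)
    return pd_hat * length_of_stay
```

    Under this mapping, short stays carry a high per-diem that levels off as LOS grows, so a flat external per-diem overstates the savings from shortening long stays — the mechanism behind the $5,366 vs. $7,666 contrast reported above.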

  8. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs

    PubMed Central

    2012-01-01

    Background Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in early days of the stay is usually more intense, however, and thus, the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. Methods An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, were considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
Results The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. Conclusions PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs. PMID:23198908

  9. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, Richard D.; Grodsinsky, Carlos M.; Allaire, Paul E.; Lewis, David W.; Knospe, Carl R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platform. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega) exp 4. Low frequency accelerations are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.

  10. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However the optimization problems in two and three dimensions are inherently different. While the two dimensional optimization problems are well-posed the three dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.

  11. Two dimensional radial gas flows in atmospheric pressure plasma-enhanced chemical vapor deposition

    NASA Astrophysics Data System (ADS)

    Kim, Gwihyun; Park, Seran; Shin, Hyunsu; Song, Seungho; Oh, Hoon-Jung; Ko, Dae Hong; Choi, Jung-Il; Baik, Seung Jae

    2017-12-01

    Atmospheric pressure (AP) operation of plasma-enhanced chemical vapor deposition (PECVD) is one of the promising concepts for high quality and low cost processing. Atmospheric plasma discharge requires a narrow gap configuration, which causes an inherent feature of AP PECVD. Two dimensional radial gas flows in AP PECVD induce radial variation of mass transport and of substrate temperature. The opposing trends of these variations are the key consideration in the development of a uniform deposition process. Another inherent feature of AP PECVD is confined plasma discharge, from which the volume power density concept is derived as a key parameter for the control of deposition rate. We investigated deposition rate as a function of volume power density, gas flux, source gas partial pressure, hydrogen partial pressure, plasma source frequency, and substrate temperature, and derived a design guideline for deposition tool and process development in terms of deposition rate and uniformity.

  12. DRS: Derivational Reasoning System

    NASA Technical Reports Server (NTRS)

    Bose, Bhaskar

    1995-01-01

    The high reliability requirements for airborne systems require fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high-level functional specifications. DRS incorporates an executable specification language, a set of correctness-preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving, and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.

  13. Microgravity vibration isolation: An optimal control law for the one-dimensional case

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Grodsinsky, C. M.; Allaire, P. E.; Lewis, D. W.; Knospe, C. R.

    1991-01-01

    Certain experiments contemplated for space platforms must be isolated from the accelerations of the platforms. An optimal active control is developed for microgravity vibration isolation, using constant state feedback gains (identical to those obtained from the Linear Quadratic Regulator (LQR) approach) along with constant feedforward (preview) gains. The quadratic cost function for this control algorithm effectively weights external accelerations of the platform disturbances by a factor proportional to (1/omega)(exp 4). Low frequency accelerations (less than 50 Hz) are attenuated by greater than two orders of magnitude. The control relies on the absolute position and velocity feedback of the experiment and the absolute position and velocity feedforward of the platform, and generally derives the stability robustness characteristics guaranteed by the LQR approach to optimality. The method as derived is extendable to the case in which only the relative positions and velocities and the absolute accelerations of the experiment and space platform are available.
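
    The constant state-feedback portion of such a controller can be sketched for a toy one-dimensional double-integrator plant. The matrices and weights below are assumed purely for illustration and are not the paper's values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 1-D isolation plant: state x = [position, velocity] of the experiment,
# control u = actuator force per unit mass.
m = 1.0
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / m]])
Q = np.diag([1e4, 1.0])          # heavy quadratic penalty on position error
R = np.array([[1.0]])            # control-effort penalty

# Solve the continuous-time algebraic Riccati equation for the LQR gains.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # constant state-feedback gains

# The LQR-optimal closed loop A - B K is guaranteed stable (Hurwitz).
eigs = np.linalg.eigvals(A - B @ K)
```

    The paper's controller adds constant feedforward (preview) gains on the platform's absolute position and velocity on top of these feedback gains; that term shapes the disturbance weighting without altering the closed-loop stability established by the LQR feedback.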

  14. 15 CFR 24.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  15. 45 CFR 92.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  16. 14 CFR 1273.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  17. 29 CFR 1470.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  18. 45 CFR 92.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  19. 29 CFR 1470.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  20. 13 CFR 143.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  1. 29 CFR 1470.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  2. 45 CFR 92.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  3. 14 CFR 1273.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  4. 45 CFR 92.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  5. 45 CFR 92.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  6. 15 CFR 24.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  7. 13 CFR 143.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  8. 13 CFR 143.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  9. 13 CFR 143.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  10. 15 CFR 24.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  11. 14 CFR 1273.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  12. 15 CFR 24.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  13. 14 CFR 1273.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  14. 29 CFR 1470.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  15. 29 CFR 1470.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  16. 13 CFR 143.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) The value of third party in-kind contributions applicable to the period to which the cost sharing or.... Neither costs nor the values of third party in-kind contributions may count towards satisfying a cost... contractors. These records must show how the value placed on third party in-kind contributions was derived. To...

  17. Cost savings of reduced constipation rates attributed to increased dietary fiber intakes: a decision-analytic model

    PubMed Central

    2014-01-01

    Background Nearly five percent of Americans suffer from functional constipation, many of whom may benefit from increasing dietary fiber consumption. The annual constipation-related healthcare cost savings associated with increasing intakes may be considerable but have not been examined previously. The objective of the present study was to estimate the economic impact of increased dietary fiber consumption on direct medical costs associated with constipation. Methods Literature searches were conducted to identify nationally representative input parameters for the U.S. population, which included prevalence of functional constipation; current dietary fiber intakes; proportion of the population meeting recommended intakes; and the percentage that would be expected to respond, in terms of alleviation of constipation, to a change in dietary fiber consumption. A dose-response analysis of published data was conducted to estimate the percent reduction in constipation prevalence per 1 g/day increase in dietary fiber intake. Annual direct medical costs for constipation were derived from the literature and updated to 2012 U.S. dollars. Sensitivity analyses explored the impact on adult vs. pediatric populations and the robustness of the model to each input parameter. Results The base case direct medical cost-savings was $12.7 billion annually among adults. The base case assumed that 3% of men and 6% of women currently met recommended dietary fiber intakes; each 1 g/day increase in dietary fiber intake would lead to a reduction of 1.9% in constipation prevalence; and all adults would increase their dietary fiber intake to recommended levels (mean increase of 9 g/day). Sensitivity analyses, which explored numerous alternatives, found that even if only 50% of the adult population increased dietary fiber intake by 3 g/day, annual medical costs savings exceeded $2 billion. All plausible scenarios resulted in cost savings of at least $1 billion.
Conclusions Increasing dietary fiber consumption is associated with considerable cost savings, potentially exceeding $12 billion, which is a conservative estimate given the exclusion of lost productivity costs in the model. The finding that $12.7 billion in direct medical costs of constipation could be averted through simple, realistic changes in dietary practices is promising and highlights the need for strategies to increase dietary fiber intakes. PMID:24739472
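
    The decision-analytic structure described above reduces to a product of a national cost figure, a dose-dependent prevalence reduction, and an adoption fraction. A minimal sketch follows; the national cost input is an assumed illustrative figure, not a value sourced from the paper:

```python
def fiber_savings(national_cost, per_gram_reduction, grams_increase, adoption):
    """Expected annual savings from reduced constipation prevalence.

    national_cost      : annual direct medical cost of constipation ($)
    per_gram_reduction : fractional prevalence drop per 1 g/day of fiber
    grams_increase     : mean dietary-fiber increase (g/day)
    adoption           : fraction of the population making the change
    """
    reduction = min(1.0, per_gram_reduction * grams_increase)  # cap at 100%
    return national_cost * reduction * adoption

# Illustrative runs with an assumed $74B national cost (hypothetical input):
base = fiber_savings(74e9, 0.019, 9.0, 1.0)   # all adults, +9 g/day
half = fiber_savings(74e9, 0.019, 3.0, 0.5)   # 50% of adults, +3 g/day
```

    With these assumed inputs the base case lands near the reported $12.7 billion and the conservative scenario stays above $2 billion, matching the sensitivity-analysis pattern described in the abstract.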

  18. A low-cost inertial smoothing system for landing approach guidance

    NASA Technical Reports Server (NTRS)

    Niessen, F. R.

    1973-01-01

    Accurate position and velocity information with low noise content for instrument approaches and landings is required for both control and display applications. In a current VTOL automatic instrument approach and landing research program, radar-derived landing guidance position reference signals, which are noisy, have been mixed with acceleration information derived from low-cost onboard sensors to provide high-quality position and velocity information. An in-flight comparison of signal quality and accuracy has shown good agreement between the low-cost inertial smoothing system and an aided inertial navigation system. Furthermore, the low-cost inertial smoothing system has been proven to be satisfactory in control and display system applications for both automatic and pilot-in-the-loop instrument approaches and landings.

  19. An Evaluation of Facility Maintenance and Repair Strategies of Select Companies

    DTIC Science & Technology

    2002-09-01

    challenge for facility maintenance professionals is balancing the cost of facility Maintenance and Repair (M&R) with the benefits derived from those facilities. This thesis...private organizations may also benefit from an analysis of the practices in use by successful corporations. The second group to benefit from this

  20. L{sup {infinity}} Variational Problems with Running Costs and Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aronsson, G., E-mail: gunnar.aronsson@liu.se; Barron, E. N., E-mail: enbarron@math.luc.edu

    2012-02-15

    Various approaches are used to derive the Aronsson-Euler equations for L{sup {infinity}} calculus of variations problems with constraints. The problems considered involve holonomic, nonholonomic, isoperimetric, and isosupremic constraints on the minimizer. In addition, we derive the Aronsson-Euler equation for the basic L{sup {infinity}} problem with a running cost and then consider properties of an absolute minimizer. Many open problems are introduced for further study.

  1. A Total Cost of Ownership Model for Low Temperature PEM Fuel Cells in Combined Heat and Power and Backup Power Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    University of California, Berkeley; Wei, Max; Lipman, Timothy

    2014-06-23

    A total cost of ownership model is described for low temperature proton exchange membrane stationary fuel cell systems for combined heat and power (CHP) applications from 1-250kW and backup power applications from 1-50kW. System designs and functional specifications for these two applications were developed across the range of system power levels. Bottom-up cost estimates were made for balance of plant costs, and detailed direct cost estimates for key fuel cell stack components were derived using design-for-manufacturing-and-assembly techniques. The development of high throughput, automated processes achieving high yield are projected to reduce the cost for fuel cell stacks to the $300/kW level at an annual production volume of 100 MW. Several promising combinations of building types and geographical location in the U.S. were identified for installation of fuel cell CHP systems based on the LBNL modelling tool DER CAM. Life-cycle modelling and externality assessment were done for hotels and hospitals. Reduced electricity demand charges, heating credits and carbon credits can reduce the effective cost of electricity ($/kWhe) by 26-44 percent in locations such as Minneapolis, where high carbon intensity electricity from the grid is displaced by a fuel cell system operating on reformate fuel. This project extends the scope of existing cost studies to include externalities and ancillary financial benefits and thus provides a more comprehensive picture of fuel cell system benefits, consistent with a policy and incentive environment that increasingly values these ancillary benefits. The project provides a critical, new modelling capacity and should aid a broad range of policy makers in assessing the integrated costs and benefits of fuel cell systems versus other distributed generation technologies.

  2. Cost-effectiveness of everolimus for the treatment of advanced neuroendocrine tumours of gastrointestinal or lung origin in Canada.

    PubMed

    Chua, A; Perrin, A; Ricci, J F; Neary, M P; Thabane, M

    2018-02-01

    In 2016, everolimus was approved by Health Canada for the treatment of unresectable, locally advanced or metastatic, well-differentiated, non-functional, neuroendocrine tumours (NET) of gastrointestinal (GI) or lung origin in adult patients with progressive disease. This analysis evaluated the cost-effectiveness of everolimus in this setting from a Canadian societal perspective. A partitioned survival model was developed to compare the cost per life-year (LY) gained and cost per quality-adjusted life-year (QALY) gained of everolimus plus best supportive care (BSC) versus BSC alone in patients with advanced or metastatic NET of GI or lung origin. Model health states included stable disease, disease progression, and death. Efficacy inputs were based on the RADIANT-4 trial and utilities were mapped from quality-of-life data retrieved from RADIANT-4. Resource utilization inputs were derived from a Canadian physician survey, while cost inputs were obtained from official reimbursement lists from Ontario and other published sources. Costs and efficacy outcomes were discounted 5% annually over a 10-year time horizon, and sensitivity analyses were conducted to test the robustness of the base case results. Everolimus had an incremental gain of 0.616 QALYs (0.823 LYs) and CA$89,795 resulting in an incremental cost-effectiveness ratio of CA$145,670 per QALY gained (CA$109,166 per LY gained). The probability of cost-effectiveness was 52.1% at a willingness to pay (WTP) threshold of CA$150,000 per QALY. Results of the probabilistic sensitivity analysis indicate that everolimus has a 52.1% probability of being cost-effective at a WTP threshold of CA$150,000 per QALY gained in Canada.
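
    The reported incremental cost-effectiveness ratios follow directly from the incremental values quoted in the abstract; the small differences from the published figures reflect rounding of the quoted inputs:

```python
# Incremental values as quoted in the abstract (everolimus + BSC vs. BSC).
inc_cost_cad = 89_795.0   # incremental cost, CA$
inc_qaly = 0.616          # incremental quality-adjusted life-years
inc_ly = 0.823            # incremental life-years

# ICER = incremental cost / incremental effectiveness.
icer_per_qaly = inc_cost_cad / inc_qaly   # ~CA$145,770 (abstract: CA$145,670)
icer_per_ly = inc_cost_cad / inc_ly       # ~CA$109,110 (abstract: CA$109,166)
```

    At the stated CA$150,000-per-QALY willingness-to-pay threshold, the point estimate falls below the threshold, consistent with the reported 52.1% probability of cost-effectiveness.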

  3. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. We discuss a solution to a design problem with a single load (L) and two stationary support points (S1 and S2); it consists of four members, E1, E2, E3, and E4, that connect the load to the support points. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems to small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it. 
The overall solution to the problem is then obtained by solving each of the sub-problems in the set in parallel and selecting the one with the minimum cost. In addition to speeding up the optimization process, our use of learning methods also relieves the expert from the burden of identifying rules that exactly pinpoint optimal candidate sub-problems. In real engineering tasks it is usually too costly for engineers to derive such rules. Therefore, this paper also contributes a further step towards the solution of the knowledge-acquisition bottleneck [Feigenbaum, 1977], which has hampered the construction of rule-based expert systems.
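
    The quality/time tradeoff described above can be sketched minimally. The cost form, weights, and candidate data below are invented for illustration; they are not from the paper.

```python
# Minimal sketch of selecting a specialized sub-problem by a cost that
# trades off solution quality (ratio to the global optimum) against the
# time needed to produce the solution. All numbers are invented.

def tradeoff_cost(quality_ratio, solve_time, time_weight=0.1):
    # Lower is better: penalize distance from the global optimum plus a
    # weighted time term.
    return (1.0 - quality_ratio) + time_weight * solve_time

candidates = {
    "subproblem_A": (0.98, 2.0),  # (quality ratio, solve time in seconds)
    "subproblem_B": (0.90, 0.5),
    "subproblem_C": (0.99, 8.0),
}
best = min(candidates, key=lambda k: tradeoff_cost(*candidates[k]))  # "subproblem_B"
```

    With these invented weights, the faster-but-slightly-worse sub-problem wins the tradeoff.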

  4. Economic evaluation of Cardio inCode®, a clinical-genetic function for coronary heart disease risk assessment.

    PubMed

    Ramírez de Arellano, A; Coca, A; de la Figuera, M; Rubio-Terrés, C; Rubio-Rodríguez, D; Gracia, A; Boldeanu, A; Puig-Gilberte, J; Salas, E

    2013-10-01

    A clinical–genetic function (Cardio inCode®) was generated using genetic variants associated with coronary heart disease (CHD), but not with classical CHD risk factors, to achieve a more precise estimation of the CHD risk of individuals by incorporating genetics into risk equations [Framingham and REGICOR (Registre Gironí del Cor)]. The objective of this study was to conduct an economic analysis of the CHD risk assessment with Cardio inCode®, which incorporates the patient’s genetic risk into the functions of REGICOR and Framingham, compared with the standard method (using only the functions). A Markov model was developed with seven states of health (low CHD risk, moderate CHD risk, high CHD risk, CHD event, recurrent CHD, chronic CHD, and death). The reclassification of CHD risk derived from genetic information and transition probabilities between states was obtained from a validation study conducted in cohorts of REGICOR (Spain) and Framingham (USA). It was assumed that patients classified as at moderate risk by the standard method were the best candidates to test the risk reclassification with Cardio inCode®. The utilities and costs (€; year 2011 values) of Markov states were obtained from the literature and Spanish sources. The analysis was performed from the perspective of the Spanish National Health System, for a life expectancy of 82 years in Spain. An annual discount rate of 3.5 % for costs and benefits was applied. For a Cardio inCode® price of €400, the cost per QALY gained compared with the standard method [incremental cost-effectiveness ratio (ICER)] would be €12,969 and €21,385 in REGICOR and Framingham cohorts, respectively. The threshold price of Cardio inCode® to reach the ICER threshold generally accepted in Spain (€30,000/QALY) would range between €668 and €836. 
The greatest benefit occurred in the subgroup of patients with moderate–high risk, with a high-risk reclassification of 22.8 % and 12 % of patients and an ICER of €1,652/QALY and €5,884/QALY in the REGICOR and Framingham cohorts, respectively. Sensitivity analyses confirmed the stability of the study results. Cardio inCode® is a cost-effective risk score option in CHD risk assessment compared with the standard method.
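
    The 3.5% annual discounting applied in the model above works as follows; only the discount rate is taken from the abstract, and the cash-flow stream is invented for illustration.

```python
# Sketch of annual discounting at 3.5%, the rate used in the Markov model
# above. The yearly cost stream is invented for illustration.

def discounted_sum(values, rate=0.035):
    """Present value of a stream of annual values (year 0 undiscounted)."""
    return sum(v / (1.0 + rate) ** t for t, v in enumerate(values))

costs = [400.0, 120.0, 120.0, 120.0]   # hypothetical euros per year
present_value = discounted_sum(costs)  # ~736.2, below the undiscounted 760
```

    At 3.5%, this four-year stream of 760 hypothetical euros is worth about 736 in present-value terms.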

  5. Valuing SF-6D Health States Using a Discrete Choice Experiment.

    PubMed

    Norman, Richard; Viney, Rosalie; Brazier, John; Burgess, Leonie; Cronin, Paula; King, Madeleine; Ratcliffe, Julie; Street, Deborah

    2014-08-01

    SF-6D utility weights are conventionally produced using a standard gamble (SG). SG-derived weights consistently demonstrate a floor effect not observed with other elicitation techniques. Recent advances in discrete choice methods have allowed estimation of utility weights. The objective was to produce Australian utility weights for the SF-6D and to explore the application of discrete choice experiment (DCE) methods in this context. We hypothesized that weights derived using this method would reflect the largely monotonic construction of the SF-6D. We designed an online DCE and administered it to an Australia-representative online panel (n = 1017). A range of specifications investigating nonlinear preferences with respect to additional life expectancy were estimated using a random-effects probit model. The preferred model was then used to estimate a preference index such that full health and death were valued at 1 and 0, respectively, to provide an algorithm for Australian cost-utility analyses. Physical functioning, pain, mental health, and vitality were the largest drivers of utility weights. Combining levels to remove illogical orderings did not lead to a poorer model fit. Relative to international SG-derived weights, the range of utility weights was larger, with 5% of health states valued below zero. DCEs can be used to investigate preferences for health profiles and to estimate utility weights for multi-attribute utility instruments. Australian cost-utility analyses can now use domestic SF-6D weights. The comparability of DCE results to those using other elicitation methods for estimating utility weights for quality-adjusted life-year calculations should be further investigated. © The Author(s) 2013.
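
    The anchoring step described above, rescaling model outputs so full health is valued at 1 and death at 0, can be sketched as follows. The latent-scale values are invented, not estimates from the study.

```python
# Sketch of anchoring latent-scale values from a choice model so that full
# health maps to 1 and death to 0, as required for QALY weights. The raw
# values below are invented, not estimates from the study.

def anchor(value, v_full_health, v_death):
    return (value - v_death) / (v_full_health - v_death)

v_full, v_dead = 2.5, -0.5                   # hypothetical latent anchors
mild_state = anchor(0.4, v_full, v_dead)     # lands between 0 and 1
severe_state = anchor(-0.8, v_full, v_dead)  # below zero: worse than dead
```

    A state scoring below the death anchor maps to a negative weight, which is how the "5% of health states valued below zero" result is expressed on this scale.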

  6. Management of End-Stage Ankle Arthritis: Cost-Utility Analysis Using Direct and Indirect Costs.

    PubMed

    Nwachukwu, Benedict U; McLawhorn, Alexander S; Simon, Matthew S; Hamid, Kamran S; Demetracopoulos, Constantine A; Deland, Jonathan T; Ellis, Scott J

    2015-07-15

    Total ankle replacement and ankle fusion are costly but clinically effective treatments for ankle arthritis. Prior cost-effectiveness analyses for the management of ankle arthritis have been limited by a lack of consideration of indirect costs and nonoperative management. The purpose of this study was to compare the cost-effectiveness of operative and nonoperative treatments for ankle arthritis with inclusion of direct and indirect costs in the analysis. Markov model analysis was conducted from a health-systems perspective with use of direct costs and from a societal perspective with use of direct and indirect costs. Costs were derived from the 2012 Nationwide Inpatient Sample (NIS) and expressed in 2013 U.S. dollars; effectiveness was expressed in quality-adjusted life years (QALYs). Model transition probabilities were derived from the available literature. The principal outcome measure was the incremental cost-effectiveness ratio (ICER). In the direct-cost analysis for the base case, total ankle replacement was associated with an ICER of $14,500/QALY compared with nonoperative management. When indirect costs were included, total ankle replacement was more effective and resulted in $5900 and $800 in lifetime cost savings compared with the lifetime costs following nonoperative management and ankle fusion, respectively. At a $100,000/QALY threshold, surgical management of ankle arthritis was preferred for patients younger than ninety-six years and total ankle replacement was increasingly more cost-effective in younger patients. Total ankle replacement, ankle fusion, and nonoperative management were the preferred strategy in 83%, 12%, and 5% of the analyses, respectively; however, our model was sensitive to patient age, the direct costs of total ankle replacement, the failure rate of total ankle replacement, and the probability of arthritis after ankle fusion. 
Compared with nonoperative treatment for the management of end-stage ankle arthritis, total ankle replacement is preferred over ankle fusion; total ankle replacement is cost-saving when indirect costs are considered and demonstrates increasing cost-effectiveness in younger patients. As indications for and utilization of total ankle replacement increase, continued research is needed to define appropriate subgroups of patients who would likely derive the greatest clinical benefit from that procedure. Economic and decision analysis Level II. See Instructions for Authors for a complete description of levels of evidence. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.

  7. Graph-based surface reconstruction from stereo pairs using image segmentation

    NASA Astrophysics Data System (ADS)

    Bleyer, Michael; Gelautz, Margrit

    2005-01-01

    This paper describes a novel stereo matching algorithm for epipolar rectified images. The method applies colour segmentation on the reference image. The use of segmentation makes the algorithm capable of handling large untextured regions, estimating precise depth boundaries and propagating disparity information to occluded regions, which are challenging tasks for conventional stereo methods. We model disparity inside a segment by a planar equation. Initial disparity segments are clustered to form a set of disparity layers, which are planar surfaces that are likely to occur in the scene. Assignments of segments to disparity layers are then derived by minimization of a global cost function via a robust optimization technique that employs graph cuts. The cost function is defined on the pixel level, as well as on the segment level. While the pixel level measures the data similarity based on the current disparity map and detects occlusions symmetrically in both views, the segment level propagates the segmentation information and incorporates a smoothness term. New planar models are then generated based on the disparity layers' spatial extents. Results obtained for benchmark and self-recorded image pairs indicate that the proposed method is able to compete with the best-performing state-of-the-art algorithms.

  8. Numerical optimization of actuator trajectories for ITER hybrid scenario profile evolution

    NASA Astrophysics Data System (ADS)

    van Dongen, J.; Felici, F.; Hogeweij, G. M. D.; Geelen, P.; Maljaars, E.

    2014-12-01

    Optimal actuator trajectories for an ITER hybrid scenario ramp-up are computed using a numerical optimization method. For both L-mode and H-mode scenarios, the time trajectory of the plasma current and the EC heating and current drive distribution is determined so as to minimize a chosen cost function while satisfying constraints. The cost function is formulated to reflect two desired properties of the plasma q profile at the end of the ramp-up. The first objective is to maximize the ITG turbulence threshold by maximizing the volume-averaged s/q ratio. The second objective is to achieve a stationary q profile by having a flat loop voltage profile. Actuator and physics-derived constraints are included, imposing limits on plasma current, ramp rates, internal inductance and q profile. This numerical method uses the fast control-oriented plasma profile evolution code RAPTOR, which is successfully benchmarked against more complete CRONOS simulations for L-mode and H-mode ITER hybrid scenarios. It is shown that the optimized trajectories computed using RAPTOR also result in an improved ramp-up scenario for CRONOS simulations using the same input trajectories. Furthermore, the optimal trajectories are shown to vary depending on the precise timing of the L-H transition.
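
    A toy cost with the two ingredients named above can be sketched directly: reward a large volume-averaged s/q and penalize a non-flat loop-voltage profile. The profiles, weights, and combination rule below are invented stand-ins for the RAPTOR-based formulation, not the paper's actual cost function.

```python
# Sketch of a ramp-up figure of merit combining the two objectives described
# above: maximize volume-averaged s/q (ITG threshold) and keep the
# loop-voltage profile flat (stationary q). All numbers are invented.

def rampup_cost(s_over_q, v_loop, w_flat=1.0):
    # lower is better
    avg_sq = sum(s_over_q) / len(s_over_q)
    v_mean = sum(v_loop) / len(v_loop)
    flatness_penalty = sum((v - v_mean) ** 2 for v in v_loop)
    return -avg_sq + w_flat * flatness_penalty

flat_case = rampup_cost([1.8, 2.0, 2.2], [0.08, 0.08, 0.08])    # flat V_loop
peaked_case = rampup_cost([1.8, 2.0, 2.2], [0.02, 0.08, 0.14])  # peaked V_loop
```

    With the same s/q profile, the flat loop-voltage candidate scores strictly better, which is the behaviour the second objective encodes.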

  9. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318
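
    The detection principle above, a generalized likelihood ratio test on a statistic derived under the signal model, can be illustrated with a textbook GLRT for a mean shift in Gaussian noise. This is a generic stand-in, not the paper's MLE-cost-function statistic.

```python
# Textbook GLRT sketch: detect an unknown mean shift in Gaussian noise with
# known sigma. H0: zero mean; H1: unknown mean. The GLR statistic reduces to
# n * xbar^2 / sigma^2 and is compared with a threshold (3.84 is the 5%
# point of chi-square with 1 degree of freedom). Sample data are invented;
# this is a stand-in for the paper's MLE-based statistic, not that statistic.

def glrt_mean_shift(samples, sigma=1.0):
    n = len(samples)
    xbar = sum(samples) / n
    return n * xbar * xbar / (sigma * sigma)

THRESHOLD = 3.84
clean = [0.1, -0.2, 0.05, -0.1, 0.2]   # consistent, authentic-like noise
spoofed = [1.1, 0.9, 1.2, 1.0, 0.8]    # shifted, spoofing-like samples
alarm = glrt_mean_shift(spoofed) > THRESHOLD   # True
```

    The shifted samples push the statistic well past the threshold while the consistent ones do not, which is the same estimate-then-test logic the estimation-cancellation approach applies to the positioning cost function.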

  10. Integrated thermal and energy management of plug-in hybrid electric vehicles

    NASA Astrophysics Data System (ADS)

    Shams-Zahraei, Mojtaba; Kouzani, Abbas Z.; Kutter, Steffen; Bäker, Bernard

    2012-10-01

    In plug-in hybrid electric vehicles (PHEVs), the engine temperature declines due to reduced engine load and extended engine off period. It is proven that the engine efficiency and emissions depend on the engine temperature. Also, temperature influences the vehicle air-conditioner and the cabin heater loads. Particularly, while the engine is cold, the power demand of the cabin heater needs to be provided by the batteries instead of the waste heat of engine coolant. The existing energy management strategies (EMS) of PHEVs focus on the improvement of fuel efficiency based on hot engine characteristics neglecting the effect of temperature on the engine performance and the vehicle power demand. This paper presents a new EMS incorporating an engine thermal management method which derives the global optimal battery charge depletion trajectories. A dynamic programming-based algorithm is developed to enforce the charge depletion boundaries, while optimizing a fuel consumption cost function by controlling the engine power. The optimal control problem formulates the cost function based on two state variables: battery charge and engine internal temperature. Simulation results demonstrate that temperature and the cabin heater/air-conditioner power demand can significantly influence the optimal solution for the EMS, and accordingly fuel efficiency and emissions of PHEVs.

  11. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  12. A general framework for regularized, similarity-based image restoration.

    PubMed

    Kheradmand, Amin; Milanfar, Peyman

    2014-12-01

    Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function, which consists of a new data fidelity term and regularization term derived from the specific definition of the normalized graph Laplacian. The normalizing coefficients used in the definition of the Laplacian and associated regularization term are obtained using fast symmetry preserving matrix balancing. This results in some desired spectral properties for the normalized Laplacian such as being symmetric, positive semidefinite, and returning zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations, where in each outer iteration, the similarity weights are recomputed using the previous estimate and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. In addition, the specific form of the cost function allows us to carry out spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
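
    The shape of the objective, a data-fidelity term plus a graph-Laplacian regularizer, can be shown on a toy 1-D signal. This sketch substitutes the unnormalized Laplacian of a fixed path graph for the paper's normalized, similarity-weighted Laplacian, and plain gradient descent for conjugate gradients.

```python
# Toy instance of J(x) = ||y - x||^2 + eta * x^T L x, with L the
# (unnormalized) Laplacian of a path graph standing in for the paper's
# normalized, similarity-weighted Laplacian. Minimized by gradient descent.

def laplacian_quadratic(x):
    # x^T L x for a path graph: sum of squared neighbor differences
    return sum((x[i + 1] - x[i]) ** 2 for i in range(len(x) - 1))

def denoise(y, eta=2.0, steps=500, lr=0.05):
    x = list(y)
    for _ in range(steps):
        grad = [2.0 * (x[i] - y[i]) for i in range(len(x))]   # fidelity term
        for i in range(len(x) - 1):                           # 2*eta*L*x term
            d = 2.0 * (x[i] - x[i + 1])
            grad[i] += eta * d
            grad[i + 1] -= eta * d
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

noisy = [1.0, 0.2, 0.9, 0.1, 1.1]
smooth = denoise(noisy)   # rougher input, smoother output, same mean
```

    Because the Laplacian annihilates constant signals, the regularizer leaves the mean untouched and penalizes only neighbor-to-neighbor variation.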

  13. Analytical optimization of demand management strategies across all urban water use sectors

    NASA Astrophysics Data System (ADS)

    Friedman, Kenneth; Heaney, James P.; Morales, Miguel; Palenchar, John

    2014-07-01

    An effective urban water demand management program can greatly influence both peak and average demand and therefore long-term water supply and infrastructure planning. Although a theoretical framework for evaluating residential indoor demand management has been well established, little has been done to evaluate other water use sectors such as residential irrigation in a compatible manner for integrating these results into an overall solution. This paper presents a systematic procedure to evaluate the optimal blend of single family residential irrigation demand management strategies to achieve a specified goal based on performance functions derived from parcel level tax assessor's data linked to customer level monthly water billing data. This framework is then generalized to apply to any urban water sector, as exponential functions can be fit to all resulting cumulative water savings functions. Two alternative formulations are presented: maximize net benefits, or minimize total costs subject to satisfying a target water savings. Explicit analytical solutions are presented for both formulations based on appropriate exponential best fits of performance functions. A direct result of this solution is the dual variable, which represents the marginal cost of water saved at a specified target water savings goal. A case study of 16,303 single family irrigators in Gainesville Regional Utilities utilizing high quality tax assessor and monthly billing data along with parcel level GIS data provides an illustrative example of these techniques. Spatial clustering of targeted homes can be easily performed in GIS to identify priority demand management areas.
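
    The analytical structure described above can be sketched with two invented strategies whose savings follow f_i(x) = A_i(1 - exp(-k_i x)) for spend x. Minimizing total spend for a target saving equalizes marginal cost across active strategies, and the dual variable lam is exactly the marginal cost of water saved. All parameters are invented for illustration.

```python
# Sketch of the exponential-performance formulation described above:
# minimize total spend subject to a target saving when each strategy i
# saves f_i(x) = A_i * (1 - exp(-k_i * x)) for spend x. The first-order
# conditions equalize marginal cost across active strategies; the dual
# variable lam is the marginal cost of the last unit of water saved.
import math

strategies = [(100.0, 0.02), (60.0, 0.05)]  # (A_i: max savings, k_i)

def spend(lam, A, k):
    # A*k*exp(-k*x) = 1/lam  =>  x = ln(lam*A*k)/k, floored at zero
    return max(0.0, math.log(lam * A * k) / k)

def total_savings(lam):
    return sum(A * (1.0 - math.exp(-k * spend(lam, A, k)))
               for A, k in strategies)

def marginal_cost(target, lo=1e-6, hi=1e6, iters=200):
    # bisection on lam: total savings is increasing in lam
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if total_savings(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

lam = marginal_cost(80.0)   # marginal cost at an 80-unit savings target
```

    At the solution both strategies deliver the same marginal savings per unit spent, 1/lam, which is what makes the dual variable the marginal cost of water saved.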

  14. [Prevention of Occupational Injuries Related to Hands: Calculation of Subsequent Injury Costs for the Austrian Social Occupational Insurance Institution (AUVA)].

    PubMed

    Rauner, M S; Mayer, B; Schaffhauser-Linzatti, M M

    2015-08-01

    Occupational injuries cause short-term, direct costs as well as long-term follow-up costs over the lifetime of the casualties. Due to shrinking budgets, accident insurance companies focus on cost reduction programmes and prevention measures. For this reason, a decision support system for consequential cost calculation of occupational injuries was developed for the main Austrian social occupational insurance institution (AUVA) during three projects. This so-called cost calculation tool combines the traditional instruments of accounting with quantitative methods such as micro-simulation. The cost data are derived from AUVA-internal as well as external economic data sources. Based on direct and indirect costs, the subsequent occupational accident costs from the time of an accident and, if applicable, beyond the death of the individual casualty are predicted for the AUVA, the companies in which the casualties are working, and the other economic sectors. By using this cost calculation tool, the AUVA classifies risk groups and derives related prevention campaigns. In the past, the AUVA concentrated on falling, accidents at construction sites and in agriculture/forestry, as well as commuting accidents. Currently, among others, a focus is given to hand injuries, and first prevention programmes have been initiated. Hand injuries represent about 38% of all casualties, with average costs of about €7,851 per case. Main causes of these accidents are cutting injuries in production, agriculture, and forestry. Beside a low, but costly, number of amputations with average costs of more than €100,000 per case, bone fractures and strains burden the AUVA budget with about €17,500 and €10,500 per case, respectively. 
Decision support systems such as this cost calculation tool are necessary instruments for identifying risk groups and their injured body parts, causes of accidents, and economic activities that place a heavy burden on the insurer's budget, and for deriving countermeasures to avoid injuries. Target-group-specific, suitable prevention measures for hand injuries can reduce accidents in a cost-effective way and lower their consequences. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Comparison between different direct search optimization algorithms in the calibration of a distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca

    2010-05-01

    The modern distributed hydrological models allow the representation of the different surface and subsurface phenomena with great accuracy and high spatial and temporal resolution. Such complexity requires, in general, an equally accurate parametrization. A number of approaches have been followed in this respect, from simple local search methods (like the Nelder-Mead algorithm), which minimize a cost function representing some distance between the model's output and available measurements, to more complex approaches like dynamic filters (such as the Ensemble Kalman Filter) that carry out assimilation of the observations. In this work the first approach was followed in order to compare the performances of three different direct search algorithms on the calibration of a distributed hydrological balance model. The direct search family can be defined as the category of algorithms that make no use of derivatives of the cost function (which is, in general, a black box) and encompasses a large number of possible approaches. The main benefit of this class of methods is that they don't require changes in the implementation of the numerical codes to be calibrated. The first algorithm is the classical Nelder-Mead, often used in many applications and utilized as reference. The second is a GSS (Generating Set Search) algorithm, built to guarantee the conditions of global convergence and suitable for the parallel, multi-start implementation presented here. The third is the EGO algorithm (Efficient Global Optimization), which is particularly suitable for calibrating black-box cost functions that require expensive computational resources (like a hydrological simulation). EGO minimizes the number of evaluations of the cost function by balancing the need to minimize a response surface that approximates the problem against the need to improve the approximation by sampling where the prediction error may be high. 
The hydrological model to be calibrated was MOBIDIC, a complete-balance distributed model developed at the Department of Civil and Environmental Engineering of the University of Florence. A discussion comparing the effectiveness of the different algorithms on case studies of central Italy basins is provided.
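
    A minimal member of the derivative-free family discussed above is compass search, one of the simplest generating-set methods: it only ever evaluates the cost function, which is why such calibrators need no changes to the simulation code. This is a generic sketch on an invented quadratic misfit, not MOBIDIC's calibration.

```python
# Minimal compass search, a basic member of the direct-search (GSS) family:
# probe each parameter up and down by the current step, accept improvements,
# and halve the step when no probe improves. No derivatives are ever used.
# The quadratic "misfit" below is an invented stand-in for a model-vs-data
# distance.

def compass_search(f, x0, step=0.5, tol=1e-6, max_iter=10000):
    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2          # contract the stencil
        it += 1
    return x, fx

cost = lambda p: (p[0] - 1.2) ** 2 + 3.0 * (p[1] + 0.7) ** 2
best, best_val = compass_search(cost, [0.0, 0.0])   # converges near (1.2, -0.7)
```

    The search treats `cost` as a black box, so the same driver could wrap an expensive simulation run instead of an analytic function.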

  16. [The cost of polypharmacy in patients with type 2 diabetes mellitus].

    PubMed

    García A, Luz María; Villarreal R, Enrique; Galicia R, Liliana; Martínez G, Lidia; Vargas D, Emma Rosa

    2015-05-01

    Polypharmacy, the concomitant use of three or more medications, may increase the complexity of health care and its costs. To determine the costs of polypharmacy in patients with Type 2 Diabetes Mellitus in a Mexican population sample. Analysis of health care costs in 257 patients with Type 2 Diabetes Mellitus from two family care facilities, who had at least five consultations during one year. The cost of professional care by family physicians, pharmacological care and medications were considered to calculate the total expenses. The price of medications and the number of units consumed in one year were used to determine pharmacological expenses. Medications were grouped to determine costs derived from complications and concomitant diseases. Costs were calculated in US dollars (USD). The mean cost derived from family physician fees was USD 82.32 and from pharmacy fees USD 29.37. The mean cost of medications for diabetes treatment was USD 33.31, for the management of complications USD 13.90 and for management of concomitant diseases USD 23.70, rendering a total cost of USD 70.92. Thus, the total annual care cost of a diabetic patient was USD 182.61. Medications represent less than 50% of total expenses of diabetic patients with polypharmacy.

  17. Derivation of a Levelized Cost of Coating (LCOC) metric for evaluation of solar selective absorber materials

    DOE PAGES

    Ho, C. K.; Pacheco, J. E.

    2015-06-05

    A new metric, the Levelized Cost of Coating (LCOC), is derived in this paper to evaluate and compare alternative solar selective absorber coatings against a baseline coating (Pyromark 2500). In contrast to previous metrics that focused only on the optical performance of the coating, the LCOC includes costs, durability, and optical performance for more comprehensive comparisons among candidate materials. The LCOC is defined as the annualized marginal cost of the coating to produce a baseline annual thermal energy production. Costs include the cost of materials and labor for initial application and reapplication of the coating, as well as the cost of additional or fewer heliostats to yield the same annual thermal energy production as the baseline coating. Results show that important factors impacting the LCOC include the initial solar absorptance, thermal emittance, reapplication interval, degradation rate, reapplication cost, and downtime during reapplication. The LCOC can also be used to determine the optimal reapplication interval to minimize the levelized cost of energy production. Similar methods can be applied more generally to determine the levelized cost of a component for other applications and systems.
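
    A bare-bones version of the metric can be sketched as annualized coating cost plus the annualized heliostat-field adjustment needed to match baseline output. All numbers and the capital recovery factor are invented; the published derivation also accounts for degradation, downtime during reapplication, and optical performance differences.

```python
# Bare-bones LCOC sketch: annualized marginal cost of a coating = coating
# application cost spread over its reapplication interval, plus annualized
# cost of the extra (or fewer) heliostats needed to match the baseline
# annual thermal energy production. All inputs are invented.

def lcoc(apply_cost, reapply_interval_yr, heliostat_capex_delta, crf=0.08):
    # crf: capital recovery factor annualizing one-off field capex
    annual_coating = apply_cost / reapply_interval_yr
    annual_heliostats = crf * heliostat_capex_delta
    return annual_coating + annual_heliostats

baseline = lcoc(50_000.0, 1.0, 0.0)          # yearly-recoated baseline-like case
candidate = lcoc(80_000.0, 5.0, 120_000.0)   # dearer but durable, extra field capex
```

    In this invented comparison the longer reapplication interval outweighs both the higher application cost and the added heliostat capex, which is the kind of tradeoff the metric is built to expose.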

  18. Reinforcement learning controller design for affine nonlinear discrete-time systems using online approximators.

    PubMed

    Yang, Qinmin; Jagannathan, Sarangapani

    2012-04-01

    In this paper, reinforcement learning state- and output-feedback-based adaptive critic controller designs are proposed by using online approximators (OLAs) for general multi-input, multi-output affine unknown nonlinear discrete-time systems in the presence of bounded disturbances. The proposed controller design has two entities, an action network that is designed to produce an optimal control signal and a critic network that evaluates the performance of the action network. The critic estimates the cost-to-go function, which is tuned online using recursive equations derived from heuristic dynamic programming. Here, neural networks (NNs) are used for both the action and the critic, whereas any OLAs, such as radial basis functions, splines, fuzzy logic, etc., can be utilized. For the output-feedback counterpart, an additional NN is designated as the observer to estimate the unavailable system states, and thus the separation principle is not required. The NN weight tuning laws for the controller schemes are also derived while ensuring uniform ultimate boundedness of the closed-loop system using Lyapunov theory. Finally, the effectiveness of the two controllers is tested in simulation on a pendulum balancing system and a two-link robotic arm system.

  19. Oligomannuronates from Seaweeds as Renewable Sources for the Development of Green Surfactants

    NASA Astrophysics Data System (ADS)

    Benvegnu, Thierry; Sassi, Jean-François

    The development of surfactants based on natural renewable resources is a concept that is gaining recognition in detergents, cosmetics, and green chemistry. This new class of biodegradable and biocompatible products is a response to the increasing consumer demand for products that are "greener", milder, and more efficient. In order to achieve these objectives, it is necessary to use renewable low-cost biomass that is available in large quantities and to design molecular structures through green processes that show improved performance, favorable ecotoxicological properties and reduced environmental impact. Within this context, marine algae represent a rich source of complex polysaccharides and oligosaccharides with innovative structures and functional properties that may find applications as starting materials for the development of green surfactants or cosmetic actives. Thus, we have developed original surfactants based on mannuronate moieties derived from alginates (cell-wall polyuronic acids from brown seaweeds) and fatty hydrocarbon chains derived from vegetable resources. Controlled chemical and/or enzymatic depolymerizations of the algal polysaccharides give saturated and/or unsaturated functional oligomannuronates. Clean chemical processes allow the efficient transformation of the oligomers into neutral or anionic amphiphilic molecules. These materials represent a new class of surface-active agents with promising foaming/emulsifying properties.

  20. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
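    The split between hard constraints (never violated) and soft constraints (violated at the cost of a penalty) can be illustrated with a toy convex problem. The linear model, data, and use of `scipy.optimize.minimize` with an equality constraint are assumptions for illustration, not the paper's variational formulation:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hedged sketch: fit a linear model f(x) = w @ x where supervised examples
    # act as soft pointwise constraints (penalized in the loss) and one
    # coherence requirement is a hard pointwise constraint the optimizer may
    # not violate.
    X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    y = np.array([1.0, -1.0, 0.5])               # soft targets (can be missed)
    x_hard, y_hard = np.array([2.0, 2.0]), 0.0   # hard: f(x_hard) must equal 0

    soft_loss = lambda w: np.sum((X @ w - y) ** 2)
    res = minimize(soft_loss, x0=np.zeros(2), method="SLSQP",
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w @ x_hard - y_hard}])
    w = res.x
    ```

    The solver trades off the soft residuals freely but holds f(x_hard) = 0 exactly, which is the qualitative behavior the mixed hard/soft paradigm formalizes.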

  1. Estimating irrigation water demand in the Moroccan Drâa Valley using contingent valuation.

    PubMed

    Storm, Hugo; Heckelei, Thomas; Heidecke, Claudia

    2011-10-01

    Irrigation water management is crucial for agricultural production and livelihood security in Morocco, as in many other parts of the world. For the implementation of effective water management, knowledge about farmers' demand for irrigation water is essential to assess reactions to water pricing policies, to establish a cost-benefit analysis of water supply investments, or to determine the optimal water allocation among different users. Previously used econometric methods providing this information often have prohibitive data requirements. In this paper, the Contingent Valuation Method (CVM) is adjusted to derive a demand function for irrigation water from farmers' willingness to pay for one additional unit of surface water or groundwater. An application in the Middle Drâa Valley in Morocco shows that the method provides reasonable results in an environment with limited data availability. For analysing the censored survey data, the Least Absolute Deviation estimator was found to be a more suitable alternative to the Tobit model, as the errors are heteroscedastic and non-normally distributed. The adjusted CVM for deriving demand functions is especially attractive for water-scarce countries with limited data availability. Copyright © 2011 Elsevier Ltd. All rights reserved.
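    The Least Absolute Deviation (LAD) estimator fits the conditional median, which makes it less sensitive than least squares to the non-normal, heteroscedastic errors noted above. A small sketch on synthetic data (not the Drâa Valley survey) shows the robustness difference:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Synthetic line y = 2 + 0.5 x with a few gross outliers.
    rng = np.random.default_rng(0)
    x = np.linspace(1, 10, 50)
    y = 2.0 + 0.5 * x + rng.normal(0, 0.1, 50)
    y[::10] += 5.0                       # contaminate every 10th observation

    def lad(beta):
        # Sum of absolute residuals: the LAD (median regression) objective.
        return np.sum(np.abs(y - beta[0] - beta[1] * x))

    beta_lad = minimize(lad, x0=[0.0, 0.0], method="Nelder-Mead").x
    beta_ols = np.polyfit(x, y, 1)[::-1]  # [intercept, slope] for comparison
    ```

    The LAD intercept stays near the true value of 2 while the least-squares intercept is dragged upward by the outliers, mirroring why LAD outperformed the Tobit/OLS route on this survey data.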

  2. Development of efficient time-evolution method based on three-term recurrence relation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akama, Tomoko, E-mail: a.tomo---s-b-l-r@suou.waseda.jp; Kobayashi, Osamu; Nanbu, Shinkoh, E-mail: shinkoh.nanbu@sophia.ac.jp

    The advantage of the real-time (RT) propagation method is that it directly solves the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) was proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was adapted to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
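    Three-term recurrences make polynomial expansions of the propagator cheap: each new term costs only one operator application. The classic Chebyshev expansion of exp(-iHt) is a standard example of this idea (the paper's arcsine-based 3TRR differs in detail; the matrix below is an arbitrary test case):

    ```python
    import numpy as np
    from scipy.special import jv      # Bessel functions of the first kind
    from scipy.linalg import expm

    # Chebyshev expansion of exp(-iHt): scale the spectrum of H into [-1, 1],
    # then build terms via T_{k+1}(Hn)v = 2 Hn T_k(Hn)v - T_{k-1}(Hn)v.
    def cheb_propagate(H, v, t, n_terms=40):
        emin, emax = np.linalg.eigvalsh(H)[[0, -1]]
        a, b = (emax - emin) / 2, (emax + emin) / 2
        Hn = (H - b * np.eye(len(H))) / a          # spectrum in [-1, 1]
        phi_prev, phi = v, Hn @ v                  # T_0 v and T_1 v
        out = jv(0, a * t) * phi_prev + 2 * (-1j) * jv(1, a * t) * phi
        for k in range(2, n_terms):
            phi_prev, phi = phi, 2 * Hn @ phi - phi_prev
            out = out + 2 * (-1j) ** k * jv(k, a * t) * phi
        return np.exp(-1j * b * t) * out

    H = np.array([[1.0, 0.3], [0.3, 2.0]])
    v = np.array([1.0, 0.0], dtype=complex)
    approx = cheb_propagate(H, v, t=1.0)
    exact = expm(-1j * H * 1.0) @ v
    ```

    Because the Bessel coefficients decay faster than exponentially past k ≈ at, a modest number of recurrence steps reproduces the exact propagator, which is the cost advantage recurrence-based propagation exploits.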

  3. Event-Triggered Distributed Control of Nonlinear Interconnected Systems Using Online Reinforcement Learning With Exploration.

    PubMed

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-09-07

    In this paper, a distributed control scheme for an interconnected system composed of uncertain input-affine nonlinear subsystems with event-triggered state feedback is presented by using a novel hybrid-learning-scheme-based approximate dynamic programming with online exploration. First, an approximate solution to the Hamilton-Jacobi-Bellman equation is generated with event-sampled neural network (NN) approximation, and subsequently, a near-optimal control policy for each subsystem is derived. Artificial NNs are utilized as function approximators to develop a suite of identifiers and to learn the dynamics of each subsystem. The NN weight tuning rules for the identifier and the event-triggering condition are derived using Lyapunov stability theory. Taking into account the effects of NN approximation of the system dynamics and bootstrapping, a novel NN weight update is presented to approximate the optimal value function. Finally, a novel strategy to incorporate exploration into the online control framework, using the identifiers, is introduced to reduce the overall cost at the expense of additional computations during the initial online learning phase. The system states and the NN weight estimation errors are regulated, and local uniform ultimate boundedness results are achieved. The analytical results are substantiated using simulation studies.

  4. The next generation of low-cost personal air quality sensors for quantitative exposure monitoring

    NASA Astrophysics Data System (ADS)

    Piedrahita, R.; Xiang, Y.; Masson, N.; Ortega, J.; Collier, A.; Jiang, Y.; Li, K.; Dick, R. P.; Lv, Q.; Hannigan, M.; Shang, L.

    2014-10-01

    Advances in embedded systems and low-cost gas sensors are enabling a new wave of low-cost air quality monitoring tools. Our team has been engaged in the development of low-cost, wearable air quality monitors (M-Pods) using the Arduino platform. These M-Pods house two types of sensors - commercially available metal oxide semiconductor (MOx) sensors used to measure CO, O3, NO2, and total VOCs, and NDIR sensors used to measure CO2. The MOx sensors are low in cost and show high sensitivity near ambient levels; however, they display non-linear output signals and have cross-sensitivity effects. Thus, a quantification system was developed to convert the MOx sensor signals into concentrations. We conducted two types of validation studies - first, deployments at a regulatory monitoring station in Denver, Colorado, and second, a user study. In the two deployments (at the regulatory monitoring station), M-Pod concentrations were determined using collocation calibrations and laboratory calibration techniques. M-Pods were placed near regulatory monitors to derive calibration function coefficients using the regulatory monitors as the standard. The form of the calibration function was derived based on laboratory experiments. We discuss various techniques used to estimate measurement uncertainties. The deployments revealed that collocation calibrations provide more accurate concentration estimates than laboratory calibrations. During collocation calibrations, median standard errors ranged between 4.0-6.1 ppb for O3, 6.4-8.4 ppb for NO2, 0.28-0.44 ppm for CO, and 16.8 ppm for CO2. Median signal-to-noise (S/N) ratios for the M-Pod sensors were lower than those of the regulatory instruments: for NO2, 3.6 compared to 23.4; for O3, 1.4 compared to 1.6; for CO, 1.1 compared to 10.0; and for CO2, 42.2 compared to 300-500. By contrast, lab calibrations added bias and made it difficult to cover the necessary range of environmental conditions to obtain a good calibration.
A separate user study was also conducted to assess uncertainty estimates and sensor variability. In this study, 9 M-Pods were calibrated via collocation multiple times over 4 weeks and sensor drift was analyzed, resulting in a calibration function that included baseline drift. Three pairs of M-Pods were deployed, while users individually carried the other three. The user study suggested that inter-M-Pod variability between paired units was on the same order as the calibration uncertainty; however, it is difficult to draw conclusions about the actual personal exposure levels due to the level of user engagement. The user study provided real-world sensor drift data, showing limited drift for CO (under -0.05 ppm day-1) and higher drift for O3 (-2.6 to 2.0 ppb day-1), NO2 (-1.56 to 0.51 ppb day-1), and CO2 (-4.2 to 3.1 ppm day-1). Overall, the user study confirmed the utility of the M-Pod as a low-cost tool to assess personal exposure.
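    A collocation calibration of this kind reduces to a regression of reference-monitor readings on the raw sensor response plus environmental covariates. The multilinear form, the covariates, and the synthetic data below are assumptions for illustration; the M-Pod work derived its actual functional form from laboratory experiments:

    ```python
    import numpy as np

    # Hedged sketch: regress reference concentrations on sensor signal,
    # temperature, and humidity, then report the residual RMSE.
    rng = np.random.default_rng(1)
    n = 200
    signal = rng.uniform(0.2, 1.0, n)        # normalized sensor response
    temp = rng.uniform(10, 30, n)            # deg C
    humid = rng.uniform(5, 15, n)            # g/m^3
    true_ppb = 120 - 80 * signal + 0.5 * temp - 0.8 * humid  # synthetic truth
    ref = true_ppb + rng.normal(0, 2.0, n)   # noisy reference monitor

    X = np.column_stack([np.ones(n), signal, temp, humid])
    coef, *_ = np.linalg.lstsq(X, ref, rcond=None)
    predicted = X @ coef
    rmse = np.sqrt(np.mean((predicted - ref) ** 2))
    ```

    The fitted coefficients recover the assumed sensitivities, and the RMSE plays the role of the median standard errors reported for the deployments.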

  5. Closed-loop control of boundary layer streaks induced by free-stream turbulence

    NASA Astrophysics Data System (ADS)

    Papadakis, George; Lu, Liang; Ricco, Pierre

    2016-08-01

    The central aim of the paper is to carry out a theoretical and numerical study of active wall transpiration control of streaks generated within an incompressible boundary layer by free-stream turbulence. The disturbance flow model is based on the linearized unsteady boundary-region (LUBR) equations, studied by Leib, Wundrow, and Goldstein [J. Fluid Mech. 380, 169 (1999), 10.1017/S0022112098003504], which are the rigorous asymptotic limit of the Navier-Stokes equations for low-frequency and long-streamwise wavelength. The mathematical formulation of the problem directly incorporates the random forcing into the equations in a consistent way. Due to linearity, this forcing is factored out and appears as a multiplicative factor. It is shown that the cost function (integral of kinetic energy in the domain) is properly defined as the expectation of a random quadratic function only after integration in wave number space. This operation naturally introduces the free-stream turbulence spectral tensor into the cost function. The controller gains for each wave number are independent of the spectral tensor and, in that sense, universal. Asymptotic matching of the LUBR equations with the free-stream conditions results in an additional forcing term in the state-space system whose presence necessitates the reformulation of the control problem and the rederivation of its solution. It is proved that the solution can be obtained analytically using an extension of the sweep method used in control theory to obtain the standard Riccati equation. The control signal consists of two components, a feedback part and a feed-forward part (that depends explicitly on the forcing term). Explicit recursive equations that provide these two components are derived. It is shown that the feed-forward part makes a negligible contribution to the control signal. 
We also derive an explicit expression that a priori (i.e., before solving the control problem) leads to the minimum of the objective cost function (i.e., the fundamental performance limit), based only on the system matrices and the initial and free-stream boundary conditions. The adjoint equations admit a self-similar solution for large spanwise wave numbers with a scaling which is different from that of the LUBR equations. The controlled flow field also has a self-similar solution if the weighting matrices of the objective function are chosen appropriately. The code developed to implement this algorithm is efficient and has modest memory requirements. Computations show the significant reduction of energy for each wave number. The control of the full spectrum streaks, for conditions corresponding to a realistic experimental case, shows that the root-mean-square of the streamwise velocity is strongly suppressed in the whole domain and for all the frequency ranges examined.
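    The "sweep method" mentioned above generalizes the standard backward Riccati recursion of finite-horizon LQR. A generic sketch of that baseline recursion (not the LUBR-specific formulation, and with assumed system matrices) is:

    ```python
    import numpy as np

    # Backward Riccati sweep for finite-horizon LQR:
    #   K_k = (R + B' P B)^{-1} B' P A,   P <- Q + A' P (A - B K_k).
    def lqr_sweep(A, B, Q, R, Qf, N):
        P, gains = Qf, []
        for _ in range(N):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
            gains.append(K)
        return gains[::-1]        # gains[k] applies at time step k

    # Assumed double-integrator discretization (dt = 0.1).
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R, Qf = np.eye(2), np.array([[0.1]]), 10 * np.eye(2)
    gains = lqr_sweep(A, B, Q, R, Qf, N=50)

    # Closed loop x_{k+1} = (A - B K_k) x_k drives the state toward zero.
    x = np.array([1.0, 0.0])
    for K in gains:
        x = (A - B @ K) @ x
    ```

    The paper's contribution is an extension of exactly this kind of sweep to handle the extra free-stream forcing term, which produces the feed-forward component alongside these feedback gains.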

  6. The impact of Pulpwood Rail Freight Costs on the Minnesota-Wisconsin Pulpwood Market

    Treesearch

    David C. Lothner

    1976-01-01

    Transportation costs affect the marketing and utilization of pulpwood. Their impact on the procurement and utilization of pulpwood often proves difficult to measure because an average annual measure of the transportation cost is hard to derive. This note, by means of a simple index method for measuring regional interstate pulpwood rail freight costs, illustrates...

  7. 26 CFR 16A.126-1 - Certain cost-sharing payments-in general.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 26 Internal Revenue 14 2012-04-01 2012-04-01 false Certain cost-sharing payments-in general. 16A... CERTAIN CONSERVATION COST-SHARING PAYMENTS § 16A.126-1 Certain cost-sharing payments—in general. (a... average annual income derived from the affected property prior to receipt of the improvement or an amount...

  8. 26 CFR 16A.126-1 - Certain cost-sharing payments-in general.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 26 Internal Revenue 14 2013-04-01 2013-04-01 false Certain cost-sharing payments-in general. 16A... CERTAIN CONSERVATION COST-SHARING PAYMENTS § 16A.126-1 Certain cost-sharing payments—in general. (a... average annual income derived from the affected property prior to receipt of the improvement or an amount...

  9. 26 CFR 16A.126-1 - Certain cost-sharing payments-in general.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 14 2014-04-01 2013-04-01 true Certain cost-sharing payments-in general. 16A... CERTAIN CONSERVATION COST-SHARING PAYMENTS § 16A.126-1 Certain cost-sharing payments—in general. (a... average annual income derived from the affected property prior to receipt of the improvement or an amount...

  10. Task Values, Cost, and Choice Decisions in College Physical Education

    ERIC Educational Resources Information Center

    Chen, Ang; Liu, Xinlan

    2009-01-01

    The expectancy-value motivation theory postulates that motivation can be achieved when perceived values in an activity override the perceived cost of the activity derived from the effort of achieving it. This study was designed to examine types of perceived cost in physical education and the extent to which the cost might affect motivation. Data about…

  11. Estimating age-based antiretroviral therapy costs for HIV-infected children in resource-limited settings based on World Health Organization weight-based dosing recommendations.

    PubMed

    Doherty, Kathleen; Essajee, Shaffiq; Penazzato, Martina; Holmes, Charles; Resch, Stephen; Ciaranello, Andrea

    2014-05-02

    Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations, which change as children grow. We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children aged 0-13 years. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments.
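    The core of such a costing method is a lookup from weight band to daily dose, multiplied by a unit price and days per month. All bands, doses, and prices below are invented placeholders for illustration, not WHO recommendations or published prices:

    ```python
    # Hypothetical weight-band dosing table: (min_kg, max_kg, tablets_per_day).
    WEIGHT_BANDS = [
        (3.0, 5.9, 1.0),
        (6.0, 9.9, 1.5),
        (10.0, 13.9, 2.0),
        (14.0, 19.9, 2.5),
    ]
    PRICE_PER_TABLET = 0.08   # USD, assumed unit price

    def monthly_cost(weight_kg, days=30.4):
        """Monthly ARV cost: band lookup -> daily tablets -> monthly spend."""
        for lo, hi, tablets in WEIGHT_BANDS:
            if lo <= weight_kg <= hi:
                return round(tablets * PRICE_PER_TABLET * days, 2)
        raise ValueError("weight outside covered bands")

    cost_12kg = monthly_cost(12.0)   # 2.0 tablets/day at the assumed price
    ```

    Repeating this lookup along a growth curve (weight as a function of age) yields the age-based cost trajectory the methodology describes.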

  12. Human neural stem cell-derived cultures in three-dimensional substrates form spontaneously functional neuronal networks.

    PubMed

    Smith, Imogen; Silveirinha, Vasco; Stein, Jason L; de la Torre-Ubieta, Luis; Farrimond, Jonathan A; Williamson, Elizabeth M; Whalley, Benjamin J

    2017-04-01

    Differentiated human neural stem cells were cultured in an inert three-dimensional (3D) scaffold and, unlike two-dimensional (2D) but otherwise comparable monolayer cultures, formed spontaneously active, functional neuronal networks that responded reproducibly and predictably to conventional pharmacological treatments to reveal functional, glutamatergic synapses. Immunocytochemical and electron microscopy analysis revealed a neuronal and glial population, where markers of neuronal maturity were observed in the former. Oligonucleotide microarray analysis revealed substantial differences in gene expression conferred by culturing in a 3D vs a 2D environment. Notable and numerous differences were seen in genes coding for neuronal function, the extracellular matrix and cytoskeleton. In addition to producing functional networks, differentiated human neural stem cells grown in inert scaffolds offer several significant advantages over conventional 2D monolayers. These advantages include cost savings and improved physiological relevance, which make them better suited for use in the pharmacological and toxicological assays required for development of stem cell-based treatments and the reduction of animal use in medical research. Copyright © 2015 John Wiley & Sons, Ltd.

  13. What Do Cost Functions Tell Us about the Cost of an Adequate Education?

    ERIC Educational Resources Information Center

    Costrell, Robert M.; Hanushek, Eric; Loeb, Susanna

    2008-01-01

    Econometric cost functions have begun to appear in education adequacy cases with greater frequency. Cost functions are superficially attractive because they give the impression of objectivity, holding out the promise of scientifically estimating the cost of achieving specified levels of performance from actual data on spending. By contrast, the…

  14. Wake Vortex Systems Cost/Benefits Analysis

    NASA Technical Reports Server (NTRS)

    Crisp, Vicki K.

    1997-01-01

    The goals of cost/benefit assessments are to provide quantitative and qualitative data to aid in the decision-making process. Benefits derived from increased throughput (or decreased delays) are used to balance life-cycle costs. Packaging technologies together may provide greater gains (demonstrating a higher return on investment).

  15. COSTS AND ISSUES RELATED TO REMEDIATION OF PETROLEUM-CONTAMINATED SITES

    EPA Science Inventory

    The remediation costs required at sites contaminated with petroleum-derived compounds remain a relevant issue because of the large number of existing underground storage tanks in the United States and the presence of benzene, MTBE, and TBA in some drinking water supplies. Cost inf...

  16. Automatic monitoring of ecosystem structure and functions using integrated low-cost near surface sensors

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ryu, Y.; Jiang, C.; Hwang, Y.

    2016-12-01

    Near surface sensors are able to acquire more reliable and detailed information with higher temporal resolution than satellite observations. Conventional near surface sensors usually work individually, and thus they require considerable manpower from data collection through information extraction and sharing. Recent advances in the Internet of Things (IoT) provide unprecedented opportunities to integrate various low-cost sensors as an intelligent near surface observation system for monitoring ecosystem structure and functions. In this study, we developed a Smart Surface Sensing System (4S), which can automatically collect, transfer, process, and analyze data, and then publish time-series results on a publicly available website. The system is composed of the micro-computer Raspberry Pi, the micro-controller Arduino, multi-spectral spectrometers made from light-emitting diodes (LEDs), visible and near-infrared cameras, and an Internet module. All components are connected with each other, and the Raspberry Pi intelligently controls the automatic data production chain. We performed intensive tests and calibrations in the lab. Then, we conducted in-situ observations at a rice paddy field and a deciduous broadleaf forest. During the whole growth season, 4S continuously obtained landscape images, spectral reflectance in red, green, blue, and near infrared, normalized difference vegetation index (NDVI), fraction of photosynthetically active radiation (fPAR), and leaf area index (LAI). We also compared 4S data with other independent measurements. NDVI obtained from 4S agreed well with a Jaz hyperspectrometer at both diurnal and seasonal scales (R2 = 0.92, RMSE = 0.059), and 4S-derived fPAR and LAI were comparable to LAI-2200 and destructive measurements in both magnitude and seasonal trajectory. We believe that the integrated low-cost near surface sensor could help the research community monitor ecosystem structure and functions more closely and easily through a network system.
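    The spectral step in such a system is simple arithmetic: band reflectance as the ratio of upward- to downward-looking readings, then NDVI from the red and near-infrared bands. The sensor readings below are made-up values for illustration:

    ```python
    # Minimal sketch of the NDVI computation in a two-channel LED spectrometer.
    def reflectance(upward, downward):
        # Band reflectance: target-leaving radiance over incoming irradiance.
        return upward / downward

    def ndvi(red_refl, nir_refl):
        # Normalized difference vegetation index from band reflectances.
        return (nir_refl - red_refl) / (nir_refl + red_refl)

    red = reflectance(upward=12.0, downward=200.0)   # assumed readings
    nir = reflectance(upward=90.0, downward=180.0)
    value = ndvi(red, nir)
    ```

    Dense vegetation absorbs red and scatters near-infrared light, so healthy canopies push this value toward 1, which is what the seasonal NDVI trajectories track.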

  17. SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS

    NASA Technical Reports Server (NTRS)

    Miles, R. F.

    1994-01-01

    The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility function (or certainty equivalence) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
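    SIMRAND's simulation and evaluation phases can be sketched in miniature: sample uncertain task costs for each alternative network, collect statistics, and rank by a decision rule. The networks, cost distributions, and the expected-cost rule below are invented for illustration, not SIMRAND's actual inputs:

    ```python
    import random
    import statistics

    random.seed(42)

    # Each alternative network is a chain of tasks with uncertain costs,
    # given here as (mean cost, standard deviation) pairs - assumed values.
    NETWORKS = {
        "path_A": [(100, 20), (50, 5)],
        "path_B": [(80, 40), (60, 30)],
    }

    def simulate(tasks, trials=5000):
        """Monte Carlo: sample total network cost and summarize it."""
        totals = [sum(random.gauss(m, s) for m, s in tasks)
                  for _ in range(trials)]
        return statistics.mean(totals), statistics.stdev(totals)

    stats = {name: simulate(tasks) for name, tasks in NETWORKS.items()}
    # Evaluation phase: rank by expected cost; a risk-averse utility or
    # certainty-equivalence rule would penalize the spread as well.
    ranking = sorted(stats, key=lambda name: stats[name][0])
    ```

    Note how the cheaper-on-average path wins under expected cost even though it is riskier; swapping in a concave utility function is exactly where SIMRAND's decision rules differ from this sketch.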

  18. Cost-effectiveness analysis of online hemodiafiltration versus high-flux hemodialysis.

    PubMed

    Ramponi, Francesco; Ronco, Claudio; Mason, Giacomo; Rettore, Enrico; Marcelli, Daniele; Martino, Francesca; Neri, Mauro; Martin-Malo, Alejandro; Canaud, Bernard; Locatelli, Francesco

    2016-01-01

    Clinical studies suggest that hemodiafiltration (HDF) may lead to better clinical outcomes than high-flux hemodialysis (HF-HD), but concerns have been raised about the cost-effectiveness of HDF versus HF-HD. The aim of this study was to investigate whether the clinical benefits, in terms of longer survival and better health-related quality of life, are worth the possibly higher costs of HDF compared to HF-HD. The analysis comprised a simulation based on the combined results of previously published studies, with the following steps: 1) estimation of the survival function of HF-HD patients from a clinical trial and of HDF patients using the risk reduction estimated in a meta-analysis; 2) simulation of the survival of the same sample of patients as if allocated to HF-HD or HDF using three-state Markov models; and 3) application of state-specific health-related quality of life coefficients and differential costs derived from the literature. Several Monte Carlo simulations were performed, including simulations for patients with different risk profiles, for example, by age (patients aged 40, 50, and 60 years), sex, and diabetic status. Scatter plots of simulations in the cost-effectiveness plane were produced, incremental cost-effectiveness ratios were estimated, and cost-effectiveness acceptability curves were computed. An incremental cost-effectiveness ratio of €6,982/quality-adjusted life year (QALY) was estimated for the baseline cohort of 50-year-old male patients. Given the commonly accepted threshold of €40,000/QALY, HDF is cost-effective. The probabilistic sensitivity analysis showed that HDF is cost-effective with a probability of ~81% at a threshold of €40,000/QALY. It is fundamental to also measure the outcome in terms of quality of life. HDF is more cost-effective for younger patients. HDF can be considered cost-effective compared to HF-HD.
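    The incremental cost-effectiveness ratio (ICER) at the center of the analysis is the extra cost per extra QALY of the new treatment; the cost and QALY numbers below are illustrative placeholders, not the paper's per-patient inputs:

    ```python
    # ICER = (cost_new - cost_old) / (QALY_new - QALY_old); a treatment is
    # deemed cost-effective when the ICER falls below a willingness-to-pay
    # threshold (here EUR 40,000/QALY, as in the study).
    def icer(cost_new, qaly_new, cost_old, qaly_old):
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    ratio = icer(cost_new=61_000, qaly_new=6.2,
                 cost_old=54_000, qaly_old=5.2)   # placeholder lifetime values
    cost_effective = ratio <= 40_000
    ```

    The probabilistic sensitivity analysis in the study repeats this computation over Monte Carlo draws of the inputs and reports the fraction of draws falling below the threshold.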

  19. Modeled cost-effectiveness of transforaminal lumbar interbody fusion compared with posterolateral fusion for spondylolisthesis using N(2)QOD data.

    PubMed

    Carreon, Leah Y; Glassman, Steven D; Ghogawala, Zoher; Mummaneni, Praveen V; McGirt, Matthew J; Asher, Anthony L

    2016-06-01

    OBJECTIVE Transforaminal lumbar interbody fusion (TLIF) has become the most commonly used fusion technique for lumbar degenerative disorders. This suggests an expectation of better clinical outcomes with this technique, but this has not been validated consistently. How surgical variables and choice of health utility measures drive the cost-effectiveness of TLIF relative to posterolateral fusion (PSF) has not been established. The authors used health utility values derived from Short Form-6D (SF-6D) and EQ-5D and different cost-effectiveness thresholds to evaluate the relative cost-effectiveness of TLIF compared with PSF. METHODS From the National Neurosurgery Quality and Outcomes Database (N(2)QOD), 101 patients with spondylolisthesis who underwent PSF were propensity matched to patients who underwent TLIF. Health-related quality of life measures and perioperative parameters were compared. Because health utility values derived from the SF-6D and EQ-5D questionnaires have been shown to vary in patients with low-back pain, quality-adjusted life years (QALYs) were derived from both measures. On the basis of these matched cases, a sensitivity analysis for the relative cost per QALY of TLIF versus PSF was performed in a series of cost-assumption models. RESULTS Operative time, blood loss, hospital stay, and 30-day and 90-day readmission rates were similar for the TLIF and PSF groups. Both TLIF and PSF significantly improved back and leg pain, Oswestry Disability Index (ODI) scores, and EQ-5D and SF-6D scores at 3 and 12 months postoperatively. At 12 months postoperatively, patients who had undergone TLIF had greater improvements in mean ODI scores (30.4 vs 21.1, p = 0.001) and mean SF-6D scores (0.16 vs 0.11, p = 0.001) but similar improvements in mean EQ-5D scores (0.25 vs 0.22, p = 0.415) as patients treated with PSF. 
At a cost per QALY threshold of $100,000 and using SF-6D-based QALYs, the authors found that TLIF would be cost-prohibitive compared with PSF at a surgical cost of $4830 above that of PSF. However, with EQ-5D-based QALYs, TLIF would become cost-prohibitive at an increased surgical cost of $2960 relative to that of PSF. With the 2014 US per capita gross domestic product of $53,042 as a more stringent cost-effectiveness threshold, TLIF would become cost-prohibitive at surgical costs $2562 above that of PSF with SF-6D-based QALYs or at a surgical cost exceeding that of PSF by $1570 with EQ-5D-derived QALYs. CONCLUSIONS As with all cost-effectiveness studies, cost per QALY depended on the measure of health utility selected, durability of the intervention, readmission rates, and the accuracy of the cost assumptions.

  20. Global economic potential for reducing carbon dioxide emissions from mangrove loss.

    PubMed

    Siikamäki, Juha; Sanchirico, James N; Jardine, Sunny L

    2012-09-04

    Mangroves are among the most threatened and rapidly disappearing natural environments worldwide. In addition to supporting a wide range of other ecological and economic functions, mangroves store considerable carbon. Here, we consider the global economic potential for protecting mangroves based exclusively on their carbon. We develop unique high-resolution global estimates (5' grid, about 9 × 9 km) of the projected carbon emissions from mangrove loss and the cost of avoiding the emissions. Using these spatial estimates, we derive global and regional supply curves (marginal cost curves) for avoided emissions. Under a broad range of assumptions, we find that the majority of potential emissions from mangroves could be avoided at less than $10 per ton of CO(2). Given the recent range of market prices for carbon offsets and the cost of reducing emissions from other sources, this finding suggests that protecting mangroves for their carbon is an economically viable proposition. Political-economy considerations related to the ability to do business in developing countries, however, can severely limit the supply of offsets and increase their price per ton. We also find that although a carbon-focused conservation strategy does not automatically target areas most valuable for biodiversity, implementing a biodiversity-focused strategy would only slightly increase the costs.
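    A marginal cost ("supply") curve of this kind is built by sorting sites by cost per ton avoided and accumulating the avoidable emissions. The site numbers below are invented for illustration, not the gridded estimates:

    ```python
    # Each site: (avoidable tCO2, cost in USD per tCO2) - assumed values.
    sites = [
        (1_000, 4.0), (500, 12.0), (2_000, 7.0), (800, 2.5),
    ]

    def supply_curve(sites):
        """Sort sites by marginal cost, accumulate avoidable tons."""
        curve, cumulative = [], 0
        for tons, cost in sorted(sites, key=lambda s: s[1]):
            cumulative += tons
            curve.append((cost, cumulative))
        return curve

    curve = supply_curve(sites)
    # Tons avoidable at or below $10/tCO2 (the paper's headline threshold):
    below_10 = max(c for c in curve if c[0] <= 10.0)[1]
    ```

    Reading the curve at a carbon price then gives the quantity of emissions avoidable at or below that price, which is how the "$10 per ton" finding is expressed.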

  1. The effectiveness and costs of comprehensive geriatric evaluation and management.

    PubMed

    Wieland, Darryl

    2003-11-01

    Comprehensive geriatric assessment (CGA) is a multidimensional interdisciplinary diagnostic process focused on determining a frail elderly person's medical, psychological, and functional capabilities in order to develop a coordinated and integrated plan for treatment and long-term follow-up. Geriatrics interventions building on CGA are described from their historical emergence to the present day in a discussion of their complexity, goals, and normative components. Through literature review, questions of the effectiveness and costs of these interventions are addressed. Evidence of effectiveness is derived from individual trials and, particularly, recent systematic reviews. While the trial evidence lends support to the proposition that geriatric interventions can be effective, the results have not been uniform. Review of meta-regression studies suggests that much of this outcome variability is related to identifiable program design parameters. In particular, targeting the frail, an interdisciplinary team structure with clinical control of care, and long-term follow-up tend to be associated with effective programs. Answers to cost-effectiveness questions also vary and are rarer. With some exceptions, such evidence as exists suggests that geriatrics interventions can be effective without raising total costs of care. Despite the attention given to these questions in recent years, there is still much room for clinical and scientific advance as we move to better understand what CGA interventions do well and in whom.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, MeiYue; Lin, Lin; Yang, Chao

    The single particle energies obtained in a Kohn-Sham density functional theory (DFT) calculation are generally known to be poor approximations to electron excitation energies that are measured in transport, tunneling and spectroscopic experiments such as photo-emission spectroscopy. The correction to these energies can be obtained from the poles of a single particle Green’s function derived from a many-body perturbation theory. From a computational perspective, the accuracy and efficiency of such an approach depend on how a self-energy term that properly accounts for dynamic screening of electrons is approximated. The G0W0 approximation is a widely used technique in which the self-energy is expressed as the convolution of a noninteracting Green’s function (G0) and a screened Coulomb interaction (W0) in the frequency domain. The computational cost associated with such a convolution is high due to the high complexity of evaluating W0 at multiple frequencies. In this paper, we discuss how the cost of a G0W0 calculation can be reduced by constructing a low rank approximation to the frequency dependent part of W0. In particular, we examine the effect of such a low rank approximation on the accuracy of the G0W0 approximation. We also discuss how the numerical convolution of G0 and W0 can be evaluated efficiently and accurately by using a contour deformation technique with an appropriate choice of the contour.
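
    The low-rank idea can be illustrated generically with a truncated SVD; this is a sketch only, and the paper's actual construction of the rank-reduced W0 may differ:

```python
import numpy as np

# Illustrative low-rank compression: if the frequency-dependent part of W0
# behaves like a matrix of low numerical rank, a truncated SVD reproduces it
# accurately at much lower storage and contraction cost.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 200))  # exact rank 20

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 20
A_k = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k approximation

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```

Because the matrix above has exact rank 20, the rank-20 truncation reconstructs it to machine precision; in practice k is chosen from the decay of the singular values.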

  3. Differential pricing of drugs: a role for cost-effectiveness analysis?

    PubMed

    Lopert, Ruth; Lang, Danielle L; Hill, Suzanne R; Henry, David A

    2002-06-15

    Internationally, the high costs of pharmaceutical products limit access to treatment. The principle of differential pricing is that drug prices should vary according to some measure of affordability. How differential prices should be determined is, however, unclear. Here we describe a method whereby differential prices for essential drugs could be derived in countries of variable national wealth, and, using angiotensin-converting enzyme inhibitors, provide an example of how the process might work. Indicative prices for drugs can be derived by cost-effectiveness analysis that incorporates a measure of national wealth. Such prices could be used internationally as a basis for differential price negotiations.
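
    One way such an indicative price could be computed is sketched below; the 1x-GDP-per-capita-per-QALY threshold and all inputs are hypothetical assumptions, not the authors' calibration:

```python
# Hedged sketch: set the drug price so that the cost per QALY gained equals
# a wealth-indexed threshold (here a hypothetical 1x GDP per capita per QALY).
def indicative_price(gdp_per_capita, qalys_per_year, other_costs_per_year, doses_per_year):
    threshold = 1.0 * gdp_per_capita                 # $ per QALY deemed affordable
    drug_budget = threshold * qalys_per_year - other_costs_per_year
    return max(drug_budget / doses_per_year, 0.0)    # $ per dose

price_low_income = indicative_price(2_000, 0.05, 20.0, 365)
price_high_income = indicative_price(60_000, 0.05, 20.0, 365)
```

The same health gain then maps to a higher affordable price in wealthier countries, which is the essence of differential pricing.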

  4. Computing an upper bound on contact stress with surrogate duality

    NASA Astrophysics Data System (ADS)

    Xuan, Zhaocheng; Papadopoulos, Panayiotis

    2016-07-01

    We present a method for computing an upper bound on the contact stress of elastic bodies. The continuum model of elastic bodies with contact is first modeled as a constrained optimization problem by using finite elements. An explicit formulation of the total contact force, a fraction function with the numerator as a linear function and the denominator as a quadratic convex function, is derived with only the normalized nodal contact forces as the constrained variables in a standard simplex. Then two bounds are obtained for the sum of the nodal contact forces. The first is an explicit formulation of matrices of the finite element model, derived by maximizing the fraction function under the constraint that the sum of the normalized nodal contact forces is one. The second bound is solved by first maximizing the fraction function subject to the standard simplex and then using Dinkelbach's algorithm for fractional programming to find the maximum—since the fraction function is pseudo concave in a neighborhood of the solution. These two bounds are solved with the problem dimensions being only the number of contact nodes or node pairs, which are much smaller than the dimension for the original problem, namely, the number of degrees of freedom. Next, a scheme for constructing an upper bound on the contact stress is proposed that uses the bounds on the sum of the nodal contact forces obtained on a fine finite element mesh and the nodal contact forces obtained on a coarse finite element mesh, which are problems that can be solved at a lower computational cost. Finally, the proposed method is verified through some examples concerning both frictionless and frictional contact to demonstrate the method's feasibility, efficiency, and robustness.
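
    Dinkelbach's algorithm mentioned above can be sketched generically; the 1-D toy ratio below stands in for the contact-force fraction function:

```python
import numpy as np

# Dinkelbach's parametric method for maximizing a ratio f(x)/g(x) with g > 0:
# repeatedly maximize F(lam) = max_x [f(x) - lam*g(x)] over the feasible set
# and update lam to the ratio at the maximizer; at the optimum F(lam) = 0 and
# lam equals the maximal ratio.
def dinkelbach(f, g, xs, tol=1e-9, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        vals = f(xs) - lam * g(xs)
        i = int(np.argmax(vals))
        if vals[i] < tol:            # F(lam) ~ 0  =>  converged
            break
        lam = f(xs[i]) / g(xs[i])
    return xs[i], lam

# Toy example: maximize (x + 1) / (x^2 + 1) on [0, 2]; the maximizer is
# x = sqrt(2) - 1 with maximal ratio (1 + sqrt(2)) / 2.
xs = np.linspace(0.0, 2.0, 200001)
x_star, ratio = dinkelbach(lambda x: x + 1.0, lambda x: x**2 + 1.0, xs)
```

Each iteration only requires maximizing a linear-minus-quadratic objective, which is the structure exploited in the paper's bound computation.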

  5. Cost characteristics of hospitals.

    PubMed

    Smet, Mike

    2002-09-01

    Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put the results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital, i.e. capacity and equipment) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method to explain differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs are also discussed briefly.

  6. The cost of sustaining a patient-centered medical home: experience from 2 states.

    PubMed

    Magill, Michael K; Ehrenberger, David; Scammon, Debra L; Day, Julie; Allen, Tatiana; Reall, Andreu J; Sides, Rhonda W; Kim, Jaewhan

    2015-09-01

    As medical practices transform to patient-centered medical homes (PCMHs), it is important to identify the ongoing costs of maintaining these "advanced primary care" functions. A key required input is personnel effort. This study's objective was to assess direct personnel costs to practices associated with the staffing necessary to deliver PCMH functions as outlined in the National Committee for Quality Assurance Standards. We developed a PCMH cost dimensions tool to assess costs associated with activities uniquely required to maintain PCMH functions. We interviewed practice managers, nurse supervisors, and medical directors in 20 varied primary care practices in 2 states, guided by the tool. Outcome measures included categories of staff used to perform various PCMH functions, time and personnel costs, and whether practices were delivering PCMH functions. Costs per full-time equivalent primary care clinician associated with PCMH functions varied across practices with an average of $7,691 per month in Utah practices and $9,658 in Colorado practices. PCMH incremental costs per encounter were $32.71 in Utah and $36.68 in Colorado. The average estimated cost per member per month for an assumed panel of 2,000 patients was $3.85 in Utah and $4.83 in Colorado. Identifying costs of maintaining PCMH functions will contribute to effective payment reform and to sustainability of transformation. Maintenance and ongoing support of PCMH functions require additional time and new skills, which may be provided by existing staff, additional staff, or both. Adequate compensation for ongoing and substantial incremental costs is critical for practices to sustain PCMH functions. © 2015 Annals of Family Medicine, Inc.
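
    The reported per-member-per-month figures follow directly from the monthly cost per clinician and the assumed panel size:

```python
# Arithmetic behind the reported per-member-per-month (PMPM) estimates:
# monthly PCMH cost per full-time clinician divided by the assumed panel of
# 2,000 patients per clinician.
panel = 2000
pmpm_utah = 7691 / panel       # ~= $3.85 per member per month
pmpm_colorado = 9658 / panel   # ~= $4.83 per member per month
```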

  7. The Cost of Sustaining a Patient-Centered Medical Home: Experience From 2 States

    PubMed Central

    Magill, Michael K.; Ehrenberger, David; Scammon, Debra L.; Day, Julie; Allen, Tatiana; Reall, Andreu J.; Sides, Rhonda W.; Kim, Jaewhan

    2015-01-01

    PURPOSE As medical practices transform to patient-centered medical homes (PCMHs), it is important to identify the ongoing costs of maintaining these “advanced primary care” functions. A key required input is personnel effort. This study’s objective was to assess direct personnel costs to practices associated with the staffing necessary to deliver PCMH functions as outlined in the National Committee for Quality Assurance Standards. METHODS We developed a PCMH cost dimensions tool to assess costs associated with activities uniquely required to maintain PCMH functions. We interviewed practice managers, nurse supervisors, and medical directors in 20 varied primary care practices in 2 states, guided by the tool. Outcome measures included categories of staff used to perform various PCMH functions, time and personnel costs, and whether practices were delivering PCMH functions. RESULTS Costs per full-time equivalent primary care clinician associated with PCMH functions varied across practices with an average of $7,691 per month in Utah practices and $9,658 in Colorado practices. PCMH incremental costs per encounter were $32.71 in Utah and $36.68 in Colorado. The average estimated cost per member per month for an assumed panel of 2,000 patients was $3.85 in Utah and $4.83 in Colorado. CONCLUSIONS Identifying costs of maintaining PCMH functions will contribute to effective payment reform and to sustainability of transformation. Maintenance and ongoing support of PCMH functions require additional time and new skills, which may be provided by existing staff, additional staff, or both. Adequate compensation for ongoing and substantial incremental costs is critical for practices to sustain PCMH functions. PMID:26371263

  8. Cost-effectiveness of focused ultrasound, radiosurgery, and DBS for essential tremor.

    PubMed

    Ravikumar, Vinod K; Parker, Jonathon J; Hornbeck, Traci S; Santini, Veronica E; Pauly, Kim Butts; Wintermark, Max; Ghanouni, Pejman; Stein, Sherman C; Halpern, Casey H

    2017-08-01

    Essential tremor remains a very common yet medically refractory condition. A recent phase 3 study demonstrated that magnetic resonance-guided focused ultrasound thalamotomy significantly improved upper limb tremor. The objectives of this study were to assess this novel therapy's cost-effectiveness compared with existing procedural options. Literature searches of magnetic resonance-guided focused ultrasound thalamotomy, DBS, and stereotactic radiosurgery for essential tremor were performed. Pre- and postoperative tremor-related disability scores were collected from 32 studies involving 83 magnetic resonance-guided focused ultrasound thalamotomies, 615 DBSs, and 260 stereotactic radiosurgery cases. Utility, defined as quality of life and derived from percent change in functional disability, was calculated; Medicare reimbursement was employed as a proxy for societal cost. Medicare reimbursement rates are not established for magnetic resonance-guided focused ultrasound thalamotomy for essential tremor; therefore, reimbursements were estimated to be approximately equivalent to stereotactic radiosurgery to assess a cost threshold. A decision analysis model was constructed to examine the most cost-effective option for essential tremor, implementing meta-analytic techniques. Magnetic resonance-guided focused ultrasound thalamotomy resulted in significantly higher utility scores compared with DBS (P < 0.001) or stereotactic radiosurgery (P < 0.001). Projected costs of magnetic resonance-guided focused ultrasound thalamotomy were significantly less than DBS (P < 0.001), but not significantly different from radiosurgery. Magnetic resonance-guided focused ultrasound thalamotomy is cost-effective for tremor compared with DBS and stereotactic radiosurgery and more effective than both. Even if longer follow-up finds changes in effectiveness or costs, focused ultrasound thalamotomy will likely remain competitive with both alternatives. 
© 2017 International Parkinson and Movement Disorder Society.

  9. Heterogeneous terrain: a challenge to derive evapotranspiration with remote sensing and scintillometry

    NASA Astrophysics Data System (ADS)

    Thiem, Christina; Sun, Liya; Müller, Benjamin; Bernhardt, Matthias; Schulz, Karsten

    2014-05-01

    Despite the importance of evapotranspiration for meteorology, hydrology and agronomy, obtaining area-averaged evapotranspiration estimates is both cost- and maintenance-intensive: usually such estimates are obtained from distributed sensor networks or with a scintillometer. A low-cost alternative for evapotranspiration estimation is satellite imagery, as many images are freely available. This approach has proven worthwhile above homogeneous terrain, where evapotranspiration data obtained with scintillometry are typically used for validation. We will extend this approach to heterogeneous terrain: evapotranspiration estimates from 2013 ASTER images will be compared to scintillometer-derived evapotranspiration estimates. The goodness of the correlation will be presented, as well as an uncertainty estimate for both the ASTER-derived and the scintillometer-derived evapotranspiration.

  10. Radiation hardened microprocessor for small payloads

    NASA Technical Reports Server (NTRS)

    Shah, Ravi

    1993-01-01

    The RH-3000 program is developing a rad-hard space-qualified 32-bit MIPS R-3000 RISC processor under Naval Research Lab sponsorship. In addition, under IR&D, Harris is developing the RHC-3000 for embedded control applications where low cost and radiation tolerance are primary concerns. The development program leverages heavily from commercial development of the MIPS R-3000. The commercial R-3000 has a large installed user base, and several foundry partners are currently producing a wide variety of R-3000 derivative products. One of the MIPS derivative products, the LR33000 from LSI Logic, was used as the basis for the design of the RH-3000 chipset. The RH-3000 chipset consists of three core chips and two support chips. The core chips include the CPU, which is the R-3000 integer unit, and the FPA/MD chip pair, which performs the R-3010 floating point functions. The two support chips contain all the support functions required for fault tolerance, real-time support, memory management, timers, and other functions. The Harris development effort achieved first-pass silicon success in June 1992 with the first rad-hard 32-bit RH-3000 CPU chip. The CPU device is 30 kgates, has a 508 mil by 503 mil die size, and is fabricated at Harris Semiconductor on the rad-hard CMOS Silicon on Sapphire (SOS) process. The CPU device successfully passed testing against 600,000 test vectors derived directly from the LSI/MIPS test suite and has been operational as a single board computer running C code for the past year. In addition, the RH-3000 program has developed the methodology for converting commercially developed designs utilizing logic synthesis techniques based on a combination of VHDL and schematic databases.

  11. Fast and robust standard-deviation-based method for bulk motion compensation in phase-based functional OCT.

    PubMed

    Wei, Xiang; Camino, Acner; Pi, Shaohua; Cepurna, William; Huang, David; Morrison, John C; Jia, Yali

    2018-05-01

    Phase-based optical coherence tomography (OCT), such as OCT angiography (OCTA) and Doppler OCT, is sensitive to the confounding phase shift introduced by subject bulk motion. Traditional bulk motion compensation methods are limited in accuracy and computational efficiency. In this Letter, to the best of our knowledge, we present a novel bulk motion compensation method for phase-based functional OCT. The bulk motion-associated phase shift can be derived directly by solving its equation using the standard deviation of phase-based OCTA and Doppler OCT flow signals. This method was evaluated on rodent retinal images acquired with a prototype visible-light OCT and on human retinal images acquired with a commercial system. Image quality and computational speed were significantly improved compared to two conventional phase compensation methods.

  12. Objective analysis of pseudostress over the Indian Ocean using a direct-minimization approach

    NASA Technical Reports Server (NTRS)

    Legler, David M.; Navon, I. M.; O'Brien, James J.

    1989-01-01

    A technique not previously used in objective analysis of meteorological data is applied here to produce monthly average surface pseudostress data over the Indian Ocean. An initial guess field is derived and a cost functional is constructed with five terms: approximation to the initial guess, approximation to climatology, a smoothness penalty, and two kinematic terms. The functional is minimized using a conjugate-gradient technique, and the weight on the climatology term controls the overall balance of influence between the climatology and the initial guess. Results from various weight combinations are presented for January and July 1984. Quantitative and qualitative comparisons to the subjective analysis are made to find which weight combination provides the best results. The weight on the approximation to climatology is found to balance the influence of the original field and climatology.
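
    A minimal sketch of such a direct minimization, keeping only the first-guess, climatology, and smoothness terms (the two kinematic terms are omitted, and all fields here are hypothetical):

```python
import numpy as np

# Quadratic cost functional J(u) = w_g|u - u_guess|^2 + w_c|u - u_clim|^2
# + w_s|L u|^2 on a 1-D grid. Because J is quadratic, its minimizer solves a
# linear system; a conjugate-gradient solver would reach the same answer.
n = 50
x = np.linspace(0.0, 1.0, n)
u_guess = np.sin(2.0 * np.pi * x)    # hypothetical first-guess field
u_clim = np.zeros(n)                 # hypothetical climatology
w_g, w_c, w_s = 1.0, 0.25, 1e-3      # w_c sets the pull toward climatology

L = np.diff(np.eye(n), n=2, axis=0)  # second-difference (smoothness) operator

A = (w_g + w_c) * np.eye(n) + w_s * (L.T @ L)
b = w_g * u_guess + w_c * u_clim
u = np.linalg.solve(A, b)            # analyzed field
```

Raising w_c pulls the analysis toward climatology and raising w_g keeps it near the first guess, which is exactly the balance the weight experiments in the paper explore.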

  13. Cost Effectiveness of On-Line Retrieval System.

    ERIC Educational Resources Information Center

    King, Donald W.; Neel, Peggy W.

    A recently developed cost-effectiveness model for on-line retrieval systems is discussed through use of an example utilizing performance results collected from several independent sources and cost data derived for a recently completed study for the American Psychological Association. One of the primary attributes of the model rests in its great…

  14. COSTS AND ISSUES RELATED TO REMEDIATION OF PETROLEUM-CONTAMINATED SITES (NEW ORLEANS, LA)

    EPA Science Inventory

    The remediation costs required at sites contaminated with petroleum-derived compounds remain a relevant issue because of the large number of existing underground storage tanks in the United States and the presence of benzene, MTBE, and TBA in some drinking water supplies. Cost inf...

  15. The cost-effectiveness of semi-rigid ankle brace to facilitate return to work following first-time acute ankle sprains.

    PubMed

    Fatoye, Francis; Haigh, Carol

    2016-05-01

    To examine the cost-effectiveness of a semi-rigid ankle brace to facilitate return to work following first-time acute ankle sprains. Economic evaluation based on cost-utility analysis. Ankle sprains are a source of morbidity and absenteeism from work, accounting for 15-20% of all sports injuries. Semi-rigid ankle bracing and taping are functional treatment interventions used by Musculoskeletal Physiotherapists and Nurses to facilitate return to work following acute ankle sprains. A decision model analysis, based on cost-utility analysis from the perspective of the National Health Service, was used. The primary outcome measure was the incremental cost-effectiveness ratio, based on quality-adjusted life years. Costs and quality-of-life data were derived from published literature, while model clinical probabilities were sourced from Musculoskeletal Physiotherapists. The cost and quality-adjusted life years gained using the semi-rigid ankle brace were £184 and 0.72, respectively, while the cost and quality-adjusted life years gained following taping were £155 and 0.61, respectively. The incremental cost-effectiveness ratio for the semi-rigid brace was £263 per quality-adjusted life year. Probabilistic sensitivity analysis showed that the ankle brace provided the highest net benefit, and hence was the preferred option. Taping is a cheaper intervention than the ankle brace for facilitating return to work following first-time ankle sprains. However, the incremental cost-effectiveness ratio observed for the ankle brace was below the National Institute for Health and Care Excellence threshold and the intervention had a higher net benefit, suggesting that it is a cost-effective intervention. Decision-makers may be willing to pay £263 for each additional quality-adjusted life year gained. 
The findings of this economic evaluation provide justification for the use of semi-rigid ankle brace by Musculoskeletal Physiotherapists and Nurses to facilitate return to work in individuals with first-time ankle sprains. © 2016 John Wiley & Sons Ltd.
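
    The reported ratio can be reproduced directly from the figures above:

```python
# Incremental cost-effectiveness ratio (ICER) from the reported values:
# (cost_brace - cost_tape) / (QALY_brace - QALY_tape).
cost_brace, qaly_brace = 184.0, 0.72
cost_tape, qaly_tape = 155.0, 0.61

icer = (cost_brace - cost_tape) / (qaly_brace - qaly_tape)  # ~= 263.6 GBP/QALY
```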

  16. On the ψ-Hilfer fractional derivative

    NASA Astrophysics Data System (ADS)

    Vanterler da C. Sousa, J.; Capelas de Oliveira, E.

    2018-07-01

    In this paper we introduce a new fractional derivative with respect to another function, the so-called ψ-Hilfer fractional derivative. We discuss some properties and important results of the fractional calculus. In this sense, we present some results involving uniformly convergent sequences of functions, uniformly continuous functions and examples including the Mittag-Leffler function with one parameter. Finally, we present a wide class of integrals and fractional derivatives, by means of the fractional integral with respect to another function and the ψ-Hilfer fractional derivative.
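
    For orientation, the ψ-Hilfer derivative of order n-1 < α < n and type 0 ≤ β ≤ 1 is commonly written as follows (stated here from general fractional-calculus usage; the paper's exact conventions should be checked against the source):

```latex
% psi-Riemann-Liouville fractional integral of f with respect to psi:
I^{\alpha;\psi}_{a+} f(x) = \frac{1}{\Gamma(\alpha)} \int_a^x
  \psi'(t)\,\bigl(\psi(x)-\psi(t)\bigr)^{\alpha-1} f(t)\,dt ,

% psi-Hilfer fractional derivative of order alpha and type beta:
{}^{H}\!D^{\alpha,\beta;\psi}_{a+} f(x) =
  I^{\beta(n-\alpha);\psi}_{a+}
  \left(\frac{1}{\psi'(x)}\,\frac{d}{dx}\right)^{\!n}
  I^{(1-\beta)(n-\alpha);\psi}_{a+} f(x).
```

Taking ψ(x) = x recovers the ordinary Hilfer derivative, while β = 0 and β = 1 recover the Riemann-Liouville-type and Caputo-type cases, respectively.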

  17. ZIF-67 derived porous Co3O4 hollow nanopolyhedron functionalized solution-gated graphene transistors for simultaneous detection of glucose and uric acid in tears.

    PubMed

    Xiong, Can; Zhang, Tengfei; Kong, Weiyu; Zhang, Zhixiang; Qu, Hao; Chen, Wei; Wang, Yanbo; Luo, Linbao; Zheng, Lei

    2018-03-15

    Biomarkers in tears have attracted much attention in daily healthcare sensing and monitoring. Here, highly sensitive sensors for simultaneous detection of glucose and uric acid are successfully constructed based on solution-gated graphene transistors (SGGTs) with two separate Au gate electrodes, modified with GOx-CHIT and BSA-CHIT respectively. The sensitivity of the SGGT is dramatically improved by co-modifying the Au gate with ZIF-67 derived porous Co3O4 hollow nanopolyhedrons. The sensing mechanism of the glucose sensor is attributed to the reaction of H2O2 generated by the oxidation of glucose near the gate, while the sensing mechanism for uric acid is due to the direct electro-oxidation of uric acid molecules on the gate. The optimized glucose and uric acid sensors show detection limits both down to 100 nM, far beyond the sensitivity required for non-invasive detection of glucose and uric acid in tears. The glucose and uric acid in real tear samples were quantitatively detected at 323.2 ± 16.1 μM and 98.5 ± 16.3 μM by using the functionalized SGGT device. Due to the low-cost, high-biocompatibility and easy-fabrication features of the ZIF-67 derived porous Co3O4 hollow nanopolyhedrons, they provide excellent electrocatalytic nanomaterials for enhancing the sensitivity of SGGTs for a broad range of disease-related biomarkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The Therapeutic Potential of Induced Pluripotent Stem Cells After Stroke: Evidence from Rodent Models.

    PubMed

    Zents, Karlijn; Copray, Sjef

    2016-01-01

    Stroke is the second most common cause of death and the leading cause of disability in the world. About 30% of the people who are affected by stroke die within a year; 25% of the patients who survive stroke remain in need of care after a year. Therefore, stroke is a major burden on health care costs. The most common subtype is ischemic stroke. This type is characterized by a reduced and insufficient blood supply to a certain part of the brain. Despite the high prevalence of stroke, the currently available therapeutic interventions are limited. No therapies aiming to restore damaged neuronal tissue or to promote recovery are currently available. Transplantation of stem cell-derived cells has been investigated as a potential regenerative and protective treatment. Embryonic stem cell (ESC)-based cell therapy in rodent models of stroke has been shown to improve functional outcome. However, the clinical use of ESCs still raises ethical questions, and implantation of ESC-derived cells requires continuous immunosuppression. The groundbreaking discovery of induced pluripotent stem cells (iPSCs) has provided a most promising alternative. This mini-review summarizes current literature in which the potential use of iPSC-derived cells has been tested in rodent models of stroke. iPSC-based cell therapy has been demonstrated to improve motor function, decrease stroke volume, promote neurogenesis and angiogenesis, and exert immunomodulatory, anti-inflammatory effects in the brain of stroke-affected rodents.

  19. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    PubMed

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. 
Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on understanding the distributional characteristics of such uncertainty. Our approach provides a tool to improve decision making. © 2013 Society for Conservation Biology.
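
    Under the normal-distribution case, the decision rule described above reduces to a one-line criterion, sketched here with two hypothetical management actions:

```python
import math

# With a normally distributed outcome, maximizing P(outcome > T) is the same
# as maximizing (mean - T) / sd. Two hypothetical independent actions share
# the budget in proportions a and 1 - a; all means and variances below are
# illustrative assumptions, not values from the case studies.
def prob_above(a, T):
    mu = 2.0 * a + 3.0 * (1.0 - a)           # hypothetical expected benefits
    var = 0.5 * a**2 + 2.0 * (1.0 - a)**2    # hypothetical variances
    z = (mu - T) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z), normal CDF

def best_split(T, grid=1001):
    # share of budget given to the safer, lower-mean action
    return max((i / (grid - 1) for i in range(grid)), key=lambda a: prob_above(a, T))

a_low_aspiration = best_split(T=1.0)    # diversifies across both actions
a_high_aspiration = best_split(T=2.9)   # concentrates on the high-mean action
```

This mirrors the paper's finding: a low threshold of acceptability favors diversifying funds, while a high threshold directs the budget toward the action with the highest potential effect.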

  20. Analytic computation of energy derivatives - Relationships among partial derivatives of a variationally determined function

    NASA Technical Reports Server (NTRS)

    King, H. F.; Komornicki, A.

    1986-01-01

    Formulas are presented relating Taylor series expansion coefficients of three functions of several variables, the energy of the trial wave function (W), the energy computed using the optimized variational wave function (E), and the response function (lambda), under certain conditions. Partial derivatives of lambda are obtained through solution of a recursive system of linear equations, and solution through order n yields derivatives of E through order 2n + 1, extending Pulay's application of Wigner's 2n + 1 rule to partial derivatives in coupled perturbation theory. An examination of numerical accuracy shows that the usual two-term second derivative formula is less stable than an alternative four-term formula, and that previous claims that energy derivatives are stationary properties of the wave function are fallacious. The results have application to quantum theoretical methods for the computation of derivative properties such as infrared frequencies and intensities.

  1. Prospective Teachers' Reactions to "Right-or-Wrong" Tasks: The Case of Derivatives of Absolute Value Functions

    ERIC Educational Resources Information Center

    Tsamir, Pessia; Rasslan, Shaker; Dreyfus, Tommy

    2006-01-01

    This paper illustrates the role of a "Thinking-about-Derivatives" task in identifying learners' derivative conceptions and for promoting their critical thinking about derivatives of absolute value functions. The task included three parts: "Define" the derivative of a function f(x) at x = x[subscript 0], "Solve-if-Possible" the derivative of f(x) =…

  2. Adjoint shape optimization for fluid-structure interaction of ducted flows

    NASA Astrophysics Data System (ADS)

    Heners, J. P.; Radtke, L.; Hinze, M.; Düster, A.

    2018-03-01

    Based on the coupled problem of time-dependent fluid-structure interaction, equations for an appropriate adjoint problem are derived by the consequent use of the formal Lagrange calculus. Solutions of both primal and adjoint equations are computed in a partitioned fashion and enable the formulation of a surface sensitivity. This sensitivity is used in the context of a steepest descent algorithm for the computation of the required gradient of an appropriate cost functional. The efficiency of the developed optimization approach is demonstrated by minimization of the pressure drop in a simple two-dimensional channel flow and in a three-dimensional ducted flow surrounded by a thin-walled structure.
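
    The optimization loop can be sketched abstractly; here a toy quadratic stands in for the pressure-drop functional and its adjoint-derived surface sensitivity:

```python
# Steepest-descent loop of the kind driven by an adjoint-based sensitivity:
# each step moves the design variables against the gradient of the cost
# functional. The quadratic cost and analytic gradient below are toy
# stand-ins, not the paper's fluid-structure problem.
def steepest_descent(grad, x0, step=0.1, tol=1e-9, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)                      # in the paper: solve primal + adjoint
        if max(abs(gi) for gi in g) < tol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy cost J(x) = (x0 - 1)^2 + 2*(x1 + 2)^2 with its analytic gradient
x_opt = steepest_descent(lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)],
                         [0.0, 0.0])
```

The practical appeal of the adjoint approach is that the gradient call costs one primal and one adjoint solve regardless of the number of design variables.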

  3. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded in a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
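
    One standard departure-from-normality measure (whether it is the exact index used here is an assumption) is the Frobenius norm of the commutator of a matrix with its conjugate transpose, which vanishes exactly for normal matrices:

```python
import numpy as np

# Departure-from-normality measure ||A A^H - A^H A||_F: zero iff A is normal.
# This specific index is illustrative and not necessarily the paper's.
def normality_index(A):
    C = A @ A.conj().T - A.conj().T @ A
    return float(np.linalg.norm(C, 'fro'))

A_normal = np.array([[2.0, 1.0], [1.0, 3.0]])   # symmetric, hence normal
A_jordan = np.array([[1.0, 1.0], [0.0, 1.0]])   # defective, non-normal
```

A closed-loop matrix closer to normal has better-conditioned eigenvectors, which is why such an index serves as a robustness term alongside the gain norm.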

  4. Affects of Provider Type on Patient Satisfaction, Productivity and Cost Efficiency

    DTIC Science & Technology

    2006-04-25

    plus inflation. With the implementation of the prospective payment system, the MTF Commanders will need to examine ways to demonstrate effectiveness ...practitioners performed well when compared to physicians, the longer time spent with patients can reduce productivity and thereby reduce cost effectiveness ...are most cost effective in use of resources (Vincent, 2002). Cost per visit ratio is derived by dividing the variable cost of Provider Type 22

  5. 18 CFR 301.7 - Average System Cost methodology functionalization.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... SYSTEM COST METHODOLOGY FOR SALES FROM UTILITIES TO BONNEVILLE POWER ADMINISTRATION UNDER NORTHWEST POWER... functionalization under its Direct Analysis assigns costs, revenues, debits or credits based upon the actual and/or...) Functionalization methods. (1) Direct analysis, if allowed or required by Table 1, assigns costs, revenues, debits...

  6. Estimating age-based antiretroviral therapy costs for HIV-infected children in resource-limited settings based on World Health Organization weight-based dosing recommendations

    PubMed Central

    2014-01-01

    Background Pediatric antiretroviral therapy (ART) has been shown to substantially reduce morbidity and mortality in HIV-infected infants and children. To accurately project program costs, analysts need accurate estimations of antiretroviral drug (ARV) costs for children. However, the costing of pediatric antiretroviral therapy is complicated by weight-based dosing recommendations which change as children grow. Methods We developed a step-by-step methodology for estimating the cost of pediatric ARV regimens for children ages 0–13 years old. The costing approach incorporates weight-based dosing recommendations to provide estimated ARV doses throughout childhood development. Published unit drug costs are then used to calculate average monthly drug costs. We compared our derived monthly ARV costs to published estimates to assess the accuracy of our methodology. Results The estimates of monthly ARV costs are provided for six commonly used first-line pediatric ARV regimens, considering three possible care scenarios. The costs derived in our analysis for children were fairly comparable to or slightly higher than available published ARV drug or regimen estimates. Conclusions The methodology described here can be used to provide an accurate estimation of pediatric ARV regimen costs for cost-effectiveness analysts to project the optimum packages of care for HIV-infected children, as well as for program administrators and budget analysts who wish to assess the feasibility of increasing pediatric ART availability in constrained budget environments. PMID:24885453
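
    The costing step described above (a weight-band lookup for the daily dose, multiplied by a published unit drug cost and the days in a month) can be sketched as follows. The weight bands, tablet counts, and unit price below are illustrative placeholders, not values from the study or the WHO tables.

```python
# Hypothetical weight bands mapped to daily tablet counts (illustrative only)
WEIGHT_BAND_TABLETS = [
    (3.0, 5.9, 1.0),     # (min_kg, max_kg, tablets/day)
    (6.0, 9.9, 1.5),
    (10.0, 13.9, 2.0),
    (14.0, 19.9, 2.5),
]

def tablets_per_day(weight_kg):
    """Look up the daily tablet count for a child's weight."""
    for lo, hi, tabs in WEIGHT_BAND_TABLETS:
        if lo <= weight_kg <= hi:
            return tabs
    raise ValueError(f"weight {weight_kg} kg outside defined bands")

def monthly_cost(weight_kg, unit_cost_per_tablet, days=30):
    """Average monthly drug cost = daily dose x unit cost x days in month."""
    return tablets_per_day(weight_kg) * unit_cost_per_tablet * days

cost = monthly_cost(8.0, 0.05)   # 1.5 tablets/day at $0.05/tablet for 30 days
print(f"${cost:.2f}")            # → $2.25
```

    As children grow across bands, re-evaluating this lookup month by month yields the dose trajectory the paper uses to build average regimen costs.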

  7. Years of life lost and morbidity cases attributable to transportation noise and air pollution: A comparative health risk assessment for Switzerland in 2010.

    PubMed

    Vienneau, Danielle; Perez, Laura; Schindler, Christian; Lieb, Christoph; Sommer, Heini; Probst-Hensch, Nicole; Künzli, Nino; Röösli, Martin

    2015-08-01

    There is growing evidence that chronic exposure to transportation-related noise and air pollution affects human health. However, the health burdens these two pollutants impose on a country have rarely been compared. As an input for external cost quantification, we estimated the cardiorespiratory health burden from transportation-related noise and air pollution in Switzerland, incorporating the most recent findings on the health effects of noise. Spatially resolved noise and air pollution models for the year 2010 were derived for road, rail and aircraft sources. Average day-evening-night sound level (Lden) and particulate matter (PM10) were selected as indicators, and population-weighted exposures derived by transportation source. Cause-specific exposure-response functions were derived from a meta-analysis for noise and a literature review for PM10. Years of life lost (YLL) were calculated using life table methods; the population attributable fraction was used for deriving attributable cases for hospitalisations, respiratory illnesses, visits to general practitioners and restricted activity days. The mean population-weighted exposure above a threshold of 48 dB(A) was 8.74 dB(A), 1.89 dB(A) and 0.37 dB(A) for road, rail and aircraft noise. Corresponding mean exposure contributions were 4.4, 0.54 and 0.12 μg/m³ for PM10. We estimated that in 2010 in Switzerland transportation caused 6000 and 14,000 YLL from noise and air pollution exposure, respectively. While a total of 8700 cardiorespiratory hospital days were attributed to air pollution exposure, the estimated burden due to noise alone amounted to 22,500 hospital days. YLL due to transportation-related pollution in Switzerland is dominated by air pollution from road traffic, whereas consequences for morbidity and indicators of quality of life are dominated by noise. In terms of total external costs the burden of noise equals that of air pollution. Copyright © 2015 Elsevier GmbH. All rights reserved.
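
    The attributable-cases step can be illustrated with Levin's population attributable fraction, a standard formula in comparative risk assessment. The prevalence and relative-risk numbers below are invented for illustration and are not the Swiss estimates.

```python
def paf(prevalence, rr):
    """Levin's population attributable fraction for a binary exposure:
    PAF = p(RR - 1) / (p(RR - 1) + 1)."""
    excess = prevalence * (rr - 1.0)
    return excess / (excess + 1.0)

def attributable_cases(total_cases, prevalence, rr):
    """Cases attributable to the exposure = PAF x observed cases."""
    return total_cases * paf(prevalence, rr)

# Illustrative numbers only: 30% of the population exposed, RR = 1.2
print(round(paf(0.30, 1.2), 4))  # → 0.0566
```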

  8. Decorrelation scales for Arctic Ocean hydrography - Part I: Amerasian Basin

    NASA Astrophysics Data System (ADS)

    Sumata, Hiroshi; Kauker, Frank; Karcher, Michael; Rabe, Benjamin; Timmermans, Mary-Louise; Behrendt, Axel; Gerdes, Rüdiger; Schauer, Ursula; Shimada, Koji; Cho, Kyoung-Ho; Kikuchi, Takashi

    2018-03-01

    Any use of observational data for data assimilation requires adequate information of their representativeness in space and time. This is particularly important for sparse, non-synoptic data, which comprise the bulk of oceanic in situ observations in the Arctic. To quantify spatial and temporal scales of temperature and salinity variations, we estimate the autocorrelation function and associated decorrelation scales for the Amerasian Basin of the Arctic Ocean. For this purpose, we compile historical measurements from 1980 to 2015. Assuming spatial and temporal homogeneity of the decorrelation scale in the basin interior (abyssal plain area), we calculate autocorrelations as a function of spatial distance and temporal lag. The examination of the functional form of autocorrelation in each depth range reveals that the autocorrelation is well described by a Gaussian function in space and time. We derive decorrelation scales of 150-200 km in space and 100-300 days in time. These scales are directly applicable to quantify the representation error, which is essential for use of ocean in situ measurements in data assimilation. We also describe how the estimated autocorrelation function and decorrelation scale should be applied for cost function calculation in a data assimilation system.
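
    A minimal sketch of the Gaussian space-time autocorrelation described above. The scale parameters are taken as mid-range values of the quoted 150-200 km and 100-300 day decorrelation scales; the exact functional form and normalisation used by the authors may differ.

```python
import math

def gaussian_autocorr(distance_km, lag_days, L_km=175.0, T_days=200.0):
    """Gaussian space-time autocorrelation r = exp(-(d/L)^2 - (t/T)^2),
    with assumed mid-range decorrelation scales L and T."""
    return math.exp(-(distance_km / L_km) ** 2 - (lag_days / T_days) ** 2)

# Correlation decays to 1/e when the separation equals L at zero lag
print(round(gaussian_autocorr(175.0, 0.0), 4))  # → 0.3679
```

    In a data assimilation cost function, such a correlation model would weight how strongly a sparse observation constrains the state at nearby points in space and time.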

  9. The costs of nurse turnover, part 2: application of the Nursing Turnover Cost Calculation Methodology.

    PubMed

    Jones, Cheryl Bland

    2005-01-01

    This is the second article in a 2-part series focusing on nurse turnover and its costs. Part 1 (December 2004) described nurse turnover costs within the context of human capital theory, and using human resource accounting methods, presented the updated Nursing Turnover Cost Calculation Methodology. Part 2 presents an application of this method in an acute care setting and the estimated costs of nurse turnover that were derived. Administrators and researchers can use these methods and cost information to build a business case for nurse retention.

  10. An Examination of Energy Considerations in the Product Acquisition Process.

    DTIC Science & Technology

    1980-12-01

    benefits derived (Fisher, 1970; Horngren, 1977). This chapter will examine the many areas where energy costs can be accounted for to provide...overhead cost pool (Horngren, 1977). The allocation of energy costs involved in the distribution of the documents depends upon the accounting system...Corporation, December 1970. Horngren, Charles T. Cost Accounting - A Managerial Emphasis. Englewood Cliffs NJ: Prentice-Hall, Inc., 1977. Kinder

  11. Home visiting programmes for the prevention of child maltreatment: cost-effectiveness of 33 programmes.

    PubMed

    Dalziel, Kim; Segal, Leonie

    2012-09-01

    There is a body of published research on the effectiveness of home visiting for the prevention of child maltreatment, but little in the peer reviewed literature on cost-effectiveness or value to society. The authors sought to determine the cost-effectiveness of alternative home visiting programmes to inform policy. All trials reporting child maltreatment outcomes were identified through systematic review. Information on programme effectiveness and components were taken from identified studies, to which 2010 Australian unit costs were applied. Lifetime cost offsets associated with maltreatment were derived from a recent Australian study. Cost-effectiveness results were estimated as programme cost per case of maltreatment prevented and net benefit estimated by incorporating downstream cost savings. Sensitivity analyses were conducted. 33 home visiting programmes were evaluated and cost-effectiveness estimates derived for the 25 programmes not dominated. The incremental cost of home visiting compared to usual care ranged from A$1800 to A$30 000 (US$1800-US$30 000) per family. Cost-effectiveness estimates ranged from A$22 000 per case of maltreatment prevented to several million. Seven of the 22 programmes (32%) of at least adequate quality were cost saving when including lifetime cost offsets. There is great variation in the cost-effectiveness of home visiting programmes for the prevention of maltreatment. The most cost-effective programmes use professional home visitors in a multi-disciplinary team, target high risk populations and include more than just home visiting. Home visiting programmes must be carefully selected and well targeted if net social benefits are to be realised.

  12. Patient-derived tumour xenografts for breast cancer drug discovery.

    PubMed

    Cassidy, John W; Batra, Ankita S; Greenwood, Wendy; Bruna, Alejandra

    2016-12-01

    Despite remarkable advances in our understanding of the drivers of human malignancies, new targeted therapies often fail to show sufficient efficacy in clinical trials. Indeed, the cost of bringing a new agent to market has risen substantially in the last several decades, in part fuelled by extensive reliance on preclinical models that fail to accurately reflect tumour heterogeneity. To halt unsustainable rates of attrition in the drug discovery process, we must develop a new generation of preclinical models capable of reflecting the heterogeneity of varying degrees of complexity found in human cancers. Patient-derived tumour xenograft (PDTX) models prevail as arguably the most powerful in this regard because they capture cancer's heterogeneous nature. Herein, we review current breast cancer models and their use in the drug discovery process, before discussing best practices for developing a highly annotated cohort of PDTX models. We describe the importance of extensive multidimensional molecular and functional characterisation of models and combination drug-drug screens to identify complex biomarkers of drug resistance and response. We reflect on our own experiences and propose the use of a cost-effective intermediate pharmacogenomic platform (the PDTX-PDTC platform) for breast cancer drug and biomarker discovery. We discuss the limitations and unanswered questions of PDTX models; yet, still strongly envision that their use in basic and translational research will dramatically change our understanding of breast cancer biology and how to more effectively treat it. © 2016 The authors.

  13. Beyond Born-Mayer: Improved models for short-range repulsion in ab initio force fields

    DOE PAGES

    Van Vleet, Mary J.; Misquitta, Alston J.; Stone, Anthony J.; ...

    2016-06-23

    Short-range repulsion within inter-molecular force fields is conventionally described by either Lennard-Jones or Born-Mayer forms. Despite their widespread use, these simple functional forms are often unable to describe the interaction energy accurately over a broad range of inter-molecular distances, thus creating challenges in the development of ab initio force fields and potentially leading to decreased accuracy and transferability. Herein, we derive a novel short-range functional form based on a simple Slater-like model of overlapping atomic densities and an iterated stockholder atom (ISA) partitioning of the molecular electron density. We demonstrate that this Slater-ISA methodology yields a more accurate, transferable, and robust description of the short-range interactions at minimal additional computational cost compared to standard Lennard-Jones or Born-Mayer approaches. Lastly, we show how this methodology can be adapted to yield the standard Born-Mayer functional form while still retaining many of the advantages of the Slater-ISA approach.

  14. Fast Computation of Frequency Response of Cavity-Backed Apertures Using MBPE in Conjunction with Hybrid FEM/MoM Technique

    NASA Technical Reports Server (NTRS)

    Reddy, C. J.; Deshpande, M. D.; Cockrell, C. R.; Beck, F. B.

    2004-01-01

    The hybrid Finite Element Method(FEM)/Method of Moments(MoM) technique has become popular over the last few years due to its flexibility to handle arbitrarily shaped objects with complex materials. One of the disadvantages of this technique, however, is the computational cost involved in obtaining solutions over a frequency range as computations are repeated for each frequency. In this paper, the application of Model Based Parameter Estimation (MBPE) method[1] with the hybrid FEM/MoM technique is presented for fast computation of frequency response of cavity-backed apertures[2,3]. In MBPE, the electric field is expanded in a rational function of two polynomials. The coefficients of the rational function are obtained using the frequency-derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. Using the rational function approximation, the electric field is calculated at different frequencies from which the frequency response is obtained.

  15. Multicomponent Time-Dependent Density Functional Theory: Proton and Electron Excitation Energies.

    PubMed

    Yang, Yang; Culpitt, Tanner; Hammes-Schiffer, Sharon

    2018-04-05

    The quantum mechanical treatment of both electrons and protons in the calculation of excited state properties is critical for describing nonadiabatic processes such as photoinduced proton-coupled electron transfer. Multicomponent density functional theory enables the consistent quantum mechanical treatment of more than one type of particle and has been implemented previously for studying ground state molecular properties within the nuclear-electronic orbital (NEO) framework, where all electrons and specified protons are treated quantum mechanically. To enable the study of excited state molecular properties, herein the linear response multicomponent time-dependent density functional theory (TDDFT) is derived and implemented within the NEO framework. Initial applications to FHF⁻ and HCN illustrate that NEO-TDDFT provides accurate proton and electron excitation energies within a single calculation. As its computational cost is similar to that of conventional electronic TDDFT, the NEO-TDDFT approach is promising for diverse applications, particularly nonadiabatic proton transfer reactions, which may exhibit mixed electron-proton vibronic excitations.

  16. Utility of functioning in predicting costs of care for patients with mood and anxiety disorders: a prospective cohort study

    PubMed Central

    Cieza, Alarcos; Baldwin, David S.

    2017-01-01

    Development of payment systems for mental health services has been hindered by limited evidence for the utility of diagnosis or symptoms in predicting costs of care. We investigated the utility of functioning information in predicting costs for patients with mood and anxiety disorders. This was a prospective cohort study involving 102 adult patients attending a tertiary referral specialist clinic for mood and anxiety disorders. The main outcome was total costs, calculated by applying unit costs to healthcare use data. After adjusting for covariates, a significant total costs association was yielded for functioning (e^β = 1.02; 95% confidence interval: 1.01–1.03), but not depressive symptom severity or anxiety symptom severity. When we accounted for the correlations between the main independent variables by constructing an abridged functioning metric, a significant total costs association was again yielded for functioning (e^β = 1.04; 95% confidence interval: 1.01–1.09), but not symptom severity. The utility of functioning in predicting costs for patients with mood and anxiety disorders was supported. Functioning information could be useful within mental health payment systems. PMID:28383309

  17. Critical Zone Services as Environmental Assessment Criteria in Intensively Managed Agricultural Landscapes

    NASA Astrophysics Data System (ADS)

    Richardson, M.; Kumar, P.

    2016-12-01

    The critical zone (CZ) includes the biophysical processes occurring from the top of the vegetation canopy to the weathering zone below the groundwater table. CZ services provide a measure for the goods and benefits derived from CZ processes. In intensively managed landscapes (IML), the provisioning, supporting, and regulating services are altered through anthropogenic energy inputs to derive more productivity, as agricultural products, from these landscapes than would be possible under natural conditions. However, the energy or cost equivalents of alterations to CZ functions within landscape profiles are unknown. The valuation of CZ services in energy or monetary terms provides a more concrete tool for characterizing seemingly abstract environmental damages from agricultural production systems. A multi-layer canopy-root-soil model is combined with nutrient and water flux models to simulate the movement of nutrients throughout the soil system. These data enable the measurement of agricultural anthropogenic impacts on the CZ's nutrient-cycling supporting services and atmosphere-stabilizing regulating services, defined by the flux of carbon and nutrients. Such measurements include soil carbon storage, soil carbon respiration, nitrate leaching, and nitrous oxide flux into the atmosphere. Additionally, the socioeconomic values of corn feed and ethanol define the primary productivity supporting services of each crop use. In the debate between feed production and corn-based ethanol production, measured nutrient CZ services can cost up to four times more than traditionally estimated CO2 equivalences for the entire bioenergy production system. Energy efficiency, in addition to environmental impacts, demonstrates how the inclusion of CZ services is necessary in accounting for the entire life cycle of agricultural production systems. These results indicate that feed production systems are more energy efficient and less environmentally costly than corn-based ethanol systems.

  18. Relationship between functional disability and costs one and two years post stroke

    PubMed Central

    Lekander, Ingrid; Willers, Carl; von Euler, Mia; Lilja, Mikael; Sunnerhagen, Katharina S.; Pessah-Rasmussen, Hélène; Borgström, Fredrik

    2017-01-01

    Background and purpose Stroke affects mortality, functional ability, quality of life and incurs costs. The primary objective of this study was to estimate the costs of stroke care in Sweden by level of disability and stroke type (ischemic (IS) or hemorrhagic stroke (ICH)). Method Resource use during the first and second years following a stroke was estimated based on a research database containing linked data from several registries. Costs were estimated for the acute and post-acute management of stroke, including direct (health care consumption and municipal services) and indirect (productivity losses) costs. Resources and costs were estimated per stroke type and functional disability categorised by the Modified Rankin Scale (mRS). Results The results indicated that the average costs per patient following a stroke were 350,000SEK/€37,000–480,000SEK/€50,000, dependent on stroke type and whether it was the first or second year post stroke. Large variations were identified between different subgroups of functional disability and stroke type, ranging from annual costs of 100,000SEK/€10,000–1,100,000SEK/€120,000 per patient, with higher costs for patients with ICH compared to IS and increasing costs with more severe functional disability. Conclusion Functional outcome is a major determinant of the costs of stroke care. The stroke type associated with worse outcomes (ICH) was also consistently associated with higher costs. Measures to improve function are not only important to individual patients and their families but may also decrease the societal burden of stroke. PMID:28384164

  19. A normative price for energy from an electricity generation system: An Owner-dependent Methodology for Energy Generation (system) Assessment (OMEGA). Volume 2: Derivation of system energy price equations

    NASA Technical Reports Server (NTRS)

    Chamberlain, R. G.; Mcmaster, K. M.

    1981-01-01

    The methodology presented is a derivation of the utility-owned solar electric systems model. The net present value of the system is determined by consideration of all financial benefits and costs, including a specified return on investment. Life cycle costs, life cycle revenues, and residual system values are obtained. Break-even values of system parameters are estimated by setting the net present value to zero.

  20. Application of advanced technologies to derivatives of current small transport aircraft

    NASA Technical Reports Server (NTRS)

    Renze, P. P.; Terry, J. E.

    1981-01-01

    Mission requirements of the derivative design were kept the same as those of the baseline so that the benefits of the advanced technologies could be readily identified. Advanced technologies were investigated in the areas of propulsion, structures and aerodynamics, and a direct operating cost benefit analysis was conducted to identify the most promising. Engine improvements appear most promising and, combined with propeller, airfoil, surface coating and composite advanced technologies, give a 21-25 percent DOC savings. A 17 percent higher acquisition cost is offset by a 34 percent savings in fuel used.

  1. Method for Implementing Subsurface Solid Derived Concentration Guideline Levels (DCGL) - 12331

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lively, J.W.

    2012-07-01

    The U.S. Nuclear Regulatory Commission (NRC) and other federal agencies currently approve the Multi-Agency Radiation Site Survey and Investigation Manual (MARSSIM) as guidance for licensees who are conducting final radiological status surveys in support of decommissioning. MARSSIM provides a method to demonstrate compliance with the applicable regulation by comparing residual radioactivity in surface soils with derived concentration guideline levels (DCGLs), but specifically discounts its applicability to subsurface soils. Many sites and facilities undergoing decommissioning contain subsurface soils that are potentially impacted by radiological constituents. In the absence of specific guidance designed to address the derivation of subsurface soil DCGLs and compliance demonstration, decommissioning facilities have attempted to apply DCGLs and final status survey techniques designed specifically for surface soils to subsurface soils. The decision to apply surface soil limits and surface soil compliance metrics to subsurface soils typically results in significant over-excavation with associated cost escalation. MACTEC, Inc. has developed the overarching concepts and principles found in recent NRC decommissioning guidance in NUREG 1757 to establish a functional method to derive dose-based subsurface soil DCGLs. The subsurface soil method developed by MACTEC also establishes a rigorous set of criterion-based data evaluation metrics (with analogs to the MARSSIM methodology) that can be used to demonstrate compliance with the developed subsurface soil DCGLs. The method establishes a continuum of volume factors that relate the size and depth of a volume of subsurface soil having elevated concentrations of residual radioactivity with its ability to produce dose.
The method integrates the subsurface soil sampling regime with the derivation of the subsurface soil DCGL such that a self-regulating optimization is naturally sought by both the responsible party and regulator. This paper describes the concepts and basis used by MACTEC to develop the dose-based subsurface soil DCGL method. The paper will show how MACTEC's method can be used to demonstrate that higher concentrations of residual radioactivity in subsurface soils (as compared with surface soils) can meet the NRC's dose-based regulations. MACTEC's method has been used successfully to obtain the NRC's radiological release at a site with known radiological impacts to subsurface soils exceeding the surface soil DCGL, saving both time and cost. Having considered the current NRC guidance for consideration of residual radioactivity in subsurface soils during decommissioning, MACTEC has developed a technically based approach to the derivation of and demonstration of compliance with subsurface soil DCGLs for radionuclides. In fact, the process uses the already accepted concepts and metrics approved for surface soils as the foundation for deriving scaling factors used to calculate subsurface soil DCGLs that are at least equally protective of the decommissioning annual dose standard. Each of the elements identified for consideration in the current NRC guidance is addressed in this proposed method. Additionally, there is considerable conservatism built into the assumptions and techniques used to arrive at subsurface soil scaling factors and DCGLs. The degree of conservatism embodied in the approach used is such that risk managers and decision makers approving and using subsurface soil DCGLs derived in accordance with this method can be confident that the future exposures will be well below permissible and safe levels. The technical basis for the method can be applied to a broad variety of sites with residual radioactivity in subsurface soils. 
Given the costly nature of soil surveys, excavation, and disposal of soils as low-level radioactive waste, MACTEC's method for deriving and demonstrating compliance with subsurface soil DCGLs offers the possibility of significant cost savings over the traditional approach of applying surface soil DCGLs to subsurface soils. Furthermore, while yet untested, MACTEC believes that the concepts and methods embodied in this approach could readily be applied to other types of contamination found in subsurface soils. (author)

  2. Assessment of candidate-expendable launch vehicles for large payloads

    NASA Technical Reports Server (NTRS)

    1984-01-01

    In recent years the U.S. Air Force and NASA conducted design studies of 3 expendable launch vehicle configurations that could serve as a backup to the space shuttle--the Titan 34D7/Centaur, the Atlas II/Centaur, and the shuttle-derived SRB-X--as well as studies of advanced shuttle-derived launch vehicles with much larger payload capabilities than the shuttle. The 3 candidate complementary launch vehicles are judged to be roughly equivalent in cost, development time, reliability, and payload-to-orbit performance. Advanced shuttle-derived vehicles are considered viable candidates to meet future heavy lift launch requirements; however, they do not appear likely to result in significant reduction in cost-per-pound to orbit.

  3. Development of Activity-based Cost Functions for Cellulase, Invertase, and Other Enzymes

    NASA Astrophysics Data System (ADS)

    Stowers, Chris C.; Ferguson, Elizabeth M.; Tanner, Robert D.

    As enzyme chemistry plays an increasingly important role in the chemical industry, cost analysis of these enzymes becomes a necessity. In this paper, we examine the aspects that affect the cost of enzymes based upon enzyme activity. The basis for this study stems from a previously developed objective function that quantifies the tradeoffs in enzyme purification via the foam fractionation process (Cherry et al., Braz J Chem Eng 17:233-238, 2000). A generalized cost function is developed from our results that could be used to aid in both industrial and lab scale chemical processing. The generalized cost function shows several nonobvious results that could lead to significant savings. Additionally, the parameters involved in the operation and scaling up of enzyme processing could be optimized to minimize costs. We show that there are typically three regimes in the enzyme cost analysis function: the low activity prelinear region, the moderate activity linear region, and high activity power-law region. The overall form of the cost analysis function appears to robustly fit the power law form.
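
    The three-regime shape described above can be sketched as a piecewise cost-vs-activity function. All parameter values here are illustrative assumptions chosen so the segments join continuously; they are not fitted values from the paper.

```python
def enzyme_cost(activity, a0=1.0, a1=10.0, base=2.0, slope=0.5, k=3.0, p=1.4):
    """Piecewise cost curve with three regimes: a prelinear region at low
    activity, a linear region at moderate activity, and a power-law
    region at high activity. Parameters are illustrative only."""
    if activity < a0:                       # prelinear: slowly rising floor
        return base * (activity / a0) ** 2
    if activity <= a1:                      # linear regime
        return base + slope * (activity - a0)
    # power-law regime, continuous with the linear segment at a1
    linear_end = base + slope * (a1 - a0)
    return linear_end + k * (activity - a1) ** p

print(enzyme_cost(5.0))   # mid-range activity falls in the linear regime
```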

  4. Framework for modelling the cost-effectiveness of systemic interventions aimed to reduce youth delinquency.

    PubMed

    Schawo, Saskia J; van Eeren, Hester; Soeteman, Djira I; van der Veldt, Marie-Christine; Noom, Marc J; Brouwer, Werner; Busschbach, Jan J V; Hakkaart, Leona

    2012-12-01

    Many interventions initiated within and financed from the health care sector are not necessarily primarily aimed at improving health. This poses important questions regarding the operationalisation of economic evaluations in such contexts. We investigated whether assessing cost-effectiveness using state-of-the-art methods commonly applied in health care evaluations is feasible and meaningful when evaluating interventions aimed at reducing youth delinquency. A probabilistic Markov model was constructed to create a framework for the assessment of the cost-effectiveness of systemic interventions in delinquent youth. For illustrative purposes, Functional Family Therapy (FFT), a systemic intervention aimed at improving family functioning and, primarily, reducing delinquent activity in youths, was compared to Treatment as Usual (TAU). "Criminal activity free years" (CAFYs) were introduced as the central outcome measure. Criminal activity may be based, for example, on police contacts or committed crimes. In the absence of extensive data, and for illustrative purposes, the current study based criminal activity on the available literature on recidivism. Furthermore, a literature search was performed to deduce the model's structure and parameters. Common cost-effectiveness methodology could be applied to interventions for youth delinquency. Model characteristics and parameters were derived from literature and ongoing trial data. The model resulted in an estimate of incremental costs/CAFY and included long-term effects. Illustrative model results point towards dominance of FFT compared to TAU. Using a probabilistic model and the CAFY outcome measure to assess the cost-effectiveness of systemic interventions aimed to reduce delinquency is feasible. However, the model structure is limited to three states and the CAFY measure was defined rather crudely. Moreover, as the model parameters are retrieved from the literature, the model results are illustrative in the absence of empirical data. 
The current model provides a framework to assess the cost-effectiveness of systemic interventions, while taking into account parameter uncertainty and long-term effectiveness. The framework of the model could be used to assess the cost-effectiveness of systemic interventions alongside (clinical) trial data. Consequently, it is suitable to inform reimbursement decisions, since the value for money of systemic interventions can be demonstrated using a decision analytic model. Future research could be focussed on testing the current model based on extensive empirical data, improving the outcome measure and finding appropriate values for that outcome.
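
    The CAFY idea can be sketched as a small cohort Markov model. The three states and all transition probabilities below are invented for illustration; they are not the published model's parameters.

```python
STATES = ["no_crime", "crime", "detained"]

# Assumed annual transition probabilities; each row sums to 1 (illustrative)
P = {
    "no_crime": {"no_crime": 0.85, "crime": 0.12, "detained": 0.03},
    "crime":    {"no_crime": 0.30, "crime": 0.50, "detained": 0.20},
    "detained": {"no_crime": 0.40, "crime": 0.30, "detained": 0.30},
}

def simulate(start, years):
    """Propagate a cohort distribution year by year and accumulate the
    expected criminal-activity-free years (time in 'no_crime')."""
    dist = {s: 0.0 for s in STATES}
    dist[start] = 1.0
    cafy = 0.0
    for _ in range(years):
        cafy += dist["no_crime"]
        dist = {s: sum(dist[r] * P[r][s] for r in STATES) for s in STATES}
    return cafy
```

    Running two such models with intervention-specific transition probabilities and attaching per-state costs would give the incremental cost per CAFY the framework describes.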

  5. Model implementation for dynamic computation of system cost

    NASA Astrophysics Data System (ADS)

    Levri, J.; Vaccari, D.

    The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
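
    The ESM idea (converting volume, power, cooling, and crewtime requirements into mass units and summing them with hardware mass) can be sketched as follows. The equivalency factors are placeholders, not ALS Program values.

```python
# Assumed mass-equivalency factors (placeholders, not ALS data)
EQUIV = {
    "volume_m3": 66.7,       # kg per m^3 of pressurized volume
    "power_kw": 237.0,       # kg per kW of power
    "cooling_kw": 60.0,      # kg per kW of cooling
    "crewtime_hr_yr": 0.94,  # kg per crew-hour per year
}

def esm(mass_kg, volume_m3, power_kw, cooling_kw, crewtime_hr_yr):
    """ESM = hardware mass + mass-equivalents of volume, power,
    cooling, and crewtime requirements."""
    return (mass_kg
            + volume_m3 * EQUIV["volume_m3"]
            + power_kw * EQUIV["power_kw"]
            + cooling_kw * EQUIV["cooling_kw"]
            + crewtime_hr_yr * EQUIV["crewtime_hr_yr"])

# The ALS metric is the ratio of ISS-technology ESM to ALS-technology ESM
metric = esm(1000, 10, 5, 5, 500) / esm(800, 8, 4, 4, 400)
```

    A dynamic computation would evaluate the volume, power, and cooling arguments from simulated peak profiles rather than static nominal values, which is the point of the Simulink model described above.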

  6. Highly Efficient Differentiation and Enrichment of Spinal Motor Neurons Derived from Human and Monkey Embryonic Stem Cells

    PubMed Central

    Wada, Tamaki; Honda, Makoto; Minami, Itsunari; Tooi, Norie; Amagai, Yuji; Nakatsuji, Norio; Aiba, Kazuhiro

    2009-01-01

Background There are no cures or efficacious treatments for severe motor neuron diseases. It is extremely difficult to obtain naïve spinal motor neurons (sMNs) from human tissues for research, for both technical and ethical reasons. Human embryonic stem cells (hESCs) are alternative sources. Several methods for MN differentiation have been reported; however, most did not take efficient production of naïve sMNs or culture cost into consideration. Methods/Principal Findings We aimed to establish protocols for efficient production and enrichment of sMNs derived from pluripotent stem cells. Nestin+ neural stem cell (NSC) clusters were induced by Noggin or a small-molecule inhibitor of BMP signaling. After dissociation of NSC clusters, neurospheres were formed in a floating culture containing FGF2. The number of NSCs in neurospheres could be expanded more than 30-fold over several passages. More than 33% of cells were HB9+ sMN progenitor cells after differentiation of dissociated neurospheres with all-trans retinoic acid (ATRA) and a Shh agonist for another week in monolayer culture. HB9+ sMN progenitor cells were enriched by gradient centrifugation to up to 80% purity. These HB9+ cells differentiated into electrophysiologically functional cells and formed synapses with myotubes within a few weeks of ATRA/SAG treatment. Conclusions and Significance The series of procedures we established here, namely neural induction, NSC expansion, sMN differentiation and sMN purification, can provide large quantities of naïve sMNs derived from human and monkey pluripotent stem cells. By using small-molecule reagents, culture costs could be reduced. PMID:19701462

  7. X-38 V201 Avionics Architecture

    NASA Technical Reports Server (NTRS)

    Bedos, Thierry; Anderson, Brian L.

    1999-01-01

The X-38 is an experimental NASA project developing a core human-capable spacecraft at a fraction of the cost of any previous human-rated vehicle. The first operational derivative developed from the X-38 program will be the International Space Station (ISS) Crew Return Vehicle (CRV). Although the current X-38 vehicles are designed as re-entry vehicles only, the option exists to modify the vehicle for use as an upward vehicle launched from an expendable launch vehicle or from the X-33 operational derivative. The operational CRV, which will be derived from the X-38 spaceflight vehicle, will provide an emergency return capability from the ISS. The spacecraft can hold a crew of up to seven inside a pressurized cabin. The CRV is passively delivered to ISS and stays up to three years on-orbit attached to ISS in a passive mode with periodic functional checkout, before separation from ISS, de-orbit, entry and landing. The X-38 Vehicle 201 (V201) is being developed at NASA/JSC to demonstrate key technologies associated with the development of the CRV design. The X-38 flight test will validate the low-cost development concept by demonstrating the entire station departure, re-entry, guidance and landing portions of the CRV mission. All new technologies and subsystems proposed for the CRV will be validated during either the on-orbit checkout or flight phases of the X-38 space flight test. The X-38 subsystems are required to be similar to those required for the CRV to the greatest extent possible. In many cases, the subsystems are identical to those that will be utilized on the operational CRV.

  8. Cost of reactive nitrogen release from human activities to the environment in the United States

    EPA Science Inventory

    The leakage of reactive nitrogen (N) from human activities to the environment can cause human health and ecological problems. Often these harmful effects are not reflected in the costs of food, fuel, and fiber that derive from N use. Spatial analyses of economic costs and benef...

  9. 44 CFR 13.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  10. 29 CFR 97.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  11. 45 CFR 1157.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... records must show how the value placed on third party in-kind contributions was derived. To the extent...

  12. 44 CFR 13.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  13. 24 CFR 85.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  14. 45 CFR 1157.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... records must show how the value placed on third party in-kind contributions was derived. To the extent...

  15. 29 CFR 97.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  16. 29 CFR 97.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  17. 49 CFR 18.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  18. 49 CFR 18.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  19. 29 CFR 97.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  20. 49 CFR 18.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  1. 24 CFR 85.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  2. 29 CFR 97.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  3. 34 CFR 80.24 - Matching or cost sharing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  4. 44 CFR 13.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  5. 24 CFR 85.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  6. 44 CFR 13.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  7. 34 CFR 80.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  8. 44 CFR 13.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  9. 24 CFR 85.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  10. 34 CFR 80.24 - Matching or cost sharing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  11. 34 CFR 80.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  12. 24 CFR 85.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  13. 45 CFR 1157.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... records must show how the value placed on third party in-kind contributions was derived. To the extent...

  14. 49 CFR 18.24 - Matching or cost sharing.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  15. 45 CFR 1157.24 - Matching or cost sharing.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... records must show how the value placed on third party in-kind contributions was derived. To the extent...

  16. 34 CFR 80.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... must show how the value placed on third party in-kind contributions was derived. To the extent feasible...

  17. 45 CFR 1157.24 - Matching or cost sharing.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... grants or by others cash donations from non-Federal third parties. (2) The value of third party in-kind... contributions counted towards other Federal costs-sharing requirements. Neither costs nor the values of third... records must show how the value placed on third party in-kind contributions was derived. To the extent...

  18. Estimation of Unreimbursed Patient Education Costs at a Large Group Practice

    ERIC Educational Resources Information Center

    Williams, Arthur R.; McDougall, John C.; Bruggeman, Sandra K.; Erwin, Patricia J.; Kroshus, Margo E.; Naessens, James M.

    2004-01-01

    Introduction: A search of the literature on the cost of patient education found that provider education time per patient per day was rarely reported and usually not derivable from published reports. Costs of continuing education needed by health professionals to support patient education also were not given. Without this information, it is…

  19. Comparison of several Brassica species in the north central U.S. for potential jet fuel feedstock

    USDA-ARS?s Scientific Manuscript database

    Hydrotreated renewable jet fuel (HRJ) derived from crop oils has been commercially demonstrated but full-scale production has been hindered by feedstock costs that make the product more costly than petroleum-based fuels. Maintaining low feedstock costs while developing crops attractive to farmers to...

  20. Serials vs. the Dollar Dilemma: Currency Swings and Rising Costs Play Havoc with Prices.

    ERIC Educational Resources Information Center

    Ketcham, Lee; Born, Kathleen

    1995-01-01

    This periodical price survey examines pricing trends, currency fluctuation and other predictors of 1996 serials costs. Tables are derived from analysis of three Institute for Scientific Information (ISI) databases and reflect subscription rates of large libraries. Sidebars include cost history information specific to mid-size to smaller academic…
